A method to maximise forest profitability through optimal rotation period selection under various economic, site and silvicultural conditions
Tohru Nakajima1,
Norihiko Shiraishi1,
Hidesato Kanomata2 &
Mitsuo Matsumoto3
New Zealand Journal of Forestry Science volume 47, Article number: 4 (2017)
Maximising forest profitability is important from both economic and ecological perspectives. Managers of forest areas gain utility by optimising profits, and maximising the efficiency of a forest stand is also beneficial to the natural environment. This study presents a method to estimate and visualise forestry profitability based on variables defined in previous studies. The design space included economic and forest stand factors that can affect profitability. A contribution index analysis identified factors that significantly impact profitability, and these factors were then applied to data collected from a forest area in Japan. The effects of the two primary factors, discount rate and rotation period length, on a measure of profitability, the soil expectation value, were visualised in three-dimensional space.
The site used in this study is located near Morotsuka village in Miyazaki Prefecture, Japan. Variables previously found to have significant effects on forestry profitability were used to define a design space for calculating and displaying profitability, after which data from the cited study were used to estimate the variables' contribution indices to the soil expectation value (SEV). The effects of the important factors for forestry profitability were then analysed and visualised. The dimensions of the design space were constructed from previously published forestry inventory data and consisted of two stand condition factors, three site condition factors, one economic condition factor and one silvicultural planning factor. The study used previously published inventory data on stand age, site index and tree species, and a forestry profit simulator was used to estimate the optimal rotation period in terms of SEV. The relationships between SEV and the significant factors were graphically visualised, and the significant factors were then used, together with the inventory data underlying the design space and the optimal rotation periods, to estimate SEV-based profitability distributions for the studied forest.
Changes in rotation period affected forestry profitability, but the effect depended on stand, site and economic conditions. In scenarios characterised by a relatively low site productivity index and small harvesting area, which result in low profitability, changes in rotation period did not have a strong effect on profitability. In high-profitability areas, by contrast, selecting the optimal rotation period was vital, as even a small deviation had a significant impact on profitability. Furthermore, synchronising the harvesting times of small, adjacent stands increased overall profitability by reducing forest management costs.
These results can help local forest management increase profitability through cooperation with individual forest owners. The presented method also has risk management applications, as it could be used to estimate the effects of external uncertainty variables on forest profitability.
There is a very long tradition of humans managing forests to obtain timber and various other products for both personal use and sale (Westoby 1989). Currently, there is a global consensus that sustainable forest management should be assured for the present and future. It is important that forest management is sustainable from both economic and environmental perspectives. Historically, uncontrolled logging has had detrimental effects on valued characteristics of forest environments, such as biodiversity, carbon stocks, aesthetic appeal and amenity value (Pukkala 2002). However, timber production does not necessarily affect all of these characteristics negatively. Instead, the relationships between timber production and some valued environmental characteristics can be synergetic rather than divisive (Cademus et al. 2014), and this synergy can exist between two or more forest products and services, such as biodiversity (Probst and Crow 1991), bioenergy-carbon sinks (Hoel and Sletten 2016) and multiple-use management (Hornbeck and Swank 1992). Moreover, Lu et al. (2014) provided evidence for a synergistic relationship between soil organic carbon and soil total nitrogen during timber production. On the other hand, provisioning services, which include wild-food production and timber harvesting, often cause large trade-offs with ecosystem functions such as water quality, flood control and ecotourism potential (Marianov et al. 2004; Millennium Ecosystem Assessment Board 2005). For example, the reduction of CO2 emissions could be facilitated by both maintaining forests' carbon stocks and sustainably producing timber as a carbon-neutral material (IPCC 2000; IPCC 2007). It has also been shown that appropriate forest management, when compared to the practice of abandoning an area after planting, can result in more varied forest types with greater biodiversity and amenity value (Boyce 1995).
Certain management practices help to maintain both economic and environmental sustainability in planted forests used for timber production. The rotation period is especially important in silvicultural management, as it determines logging intervals, and can dramatically affect stand conditions (Bettinger et al. 2009). For example, changes in the rotation period can alter the age distribution in a stand, potentially skewing it towards a population of younger trees. Previous studies have defined rotation periods in various terms, such as physical, technical and financial parameters, depending on the forest management objective (Bettinger et al. 2009; Hiley 1967). Numerous studies have also shown that the physical rotation age, which is based on the life span of a tree, varies greatly between species. For example, Sequoia sempervirens (D.Don) Endl. and Alnus rubra Bong. have physical rotation values of over 1000 and less than 100 years, respectively (Harrington 1990; Olson et al. 1990). Another definition, the technical rotation age, is based on the size of tree a particular economic market requires.
The Faustmann formula proposes that a rotation period that maximises the soil expectation value (SEV) will result in a final cutting age that is economically sustainable (Faustmann 1968). The Pressler formula (Pressler 1860) can also be used to calculate the optimal rotation period. However, Samuelson (1976), following a discussion of the validity of the Faustmann and Pressler formulae, suggested that the Faustmann formula has better validity than other optimal rotation period formulae. The initial formula for maximising SEV has been modified to include variables that represent environmental characteristics (Hartman 1976). Hyytiäinen et al. (2004) incorporated the optimal rotation formula into their process-based forest growth model. Formulae for the optimal rotation period were originally applied only to even-aged forests but have recently been used to calculate optimal rotation periods for uneven-aged and natural forests (Chang and Gadow 2010). Previous studies have also investigated how economic conditions affect forest profitability; for example, Parajuli and Chang (2012) included timber price in their sensitivity analyses. On the other hand, Halbritter and Deegen (2015) investigated the relationship between silvicultural practices and rotation period by analysing how planting density influences the optimal rotation period. Ultimately, the optimal rotation period is site-specific, and the SEV depends on economic, stand and site conditions, as well as the silvicultural practices used. A previous study showed that a few subjective conditions can change the SEV and optimal rotation period when forest management regimes aiming to maximise the SEV are applied (Davis et al. 2001).
However, there have been few (if any) attempts to determine and visualise forest profitability as a solution space optimised by calculating SEV from characteristics of the local forest environment.
Although previous studies have analysed how discount rate affects SEV (Davis et al. 2001), there have been few attempts to identify long-term forest management strategies through a multifactorial analysis. There is a need for a comprehensive analysis in Japan since the government has proposed that the target for annual timber production should increase from 23 million m3 to approximately 50 million m3 in the future. This is an achievable target as the average annual growth of Japan's forest resources is estimated to be 80 million m3 (Forestry Agency 2014). Thus, it is important to analyse the various factors that affect stand profitability so that timber production can be increased without jeopardising the sustainable management of forest resources. The objective of the presented study was to simulate how stand, site and economic conditions, as well as final cutting silvicultural practices, affect SEV distribution in a Japanese forestry area characterised by high productivity, such as the Miyazaki prefecture.
The present study used a contribution analysis to determine how stand, site and economic conditions, along with silvicultural planning, affect forest profitability. The resulting variables were then used to generate a solution space displaying SEV maxima and optimal rotation periods on both forest and stand levels. Furthermore, the distribution of SEV based on rotation periods was estimated in an actual forest area. In addition to describing the methodology and presenting the results, we also provide forest management recommendations that aim to maintain economic sustainability.
The site used in this study, located near Morotsuka village in Miyazaki Prefecture, Japan, has been studied in previous research that includes a detailed forest inventory (Nakajima et al. 2011a).
The timber productivity of this site is amongst the highest in Japan (Miyazaki Prefecture Government 2015). It has a warm temperate climate, with a mean annual temperature of 14 °C and mean annual rainfall of 2445 mm. Forests cover about 17,785 ha in the area and planted forests about 12,541 ha (71% of the total). The study areas hosted Cryptomeria japonica (Thunb. ex L.f.) D.Don private forests with a bell-shaped tree age distribution, which is typical for Japanese planted forests of this species (Forestry Agency 2007). The C. japonica stands in the target area were mostly between 30 and 50 years old. The most frequent stand age was approximately 30 years.
In 2004, the study site was certified by the Forest Stewardship Council (FSC). Therefore, it must not only be economically sustainable but also maintain environmental functions defined by an independent organisation (Gulbrandsen 2005).
Variables previously found to have significant effects on forestry profitability (Nakajima et al. 2011a) were used to define a design space of variables for calculating and displaying profitability (expressed as SEV), after which data from the cited study were used to estimate the variables' SEV contribution indices. The effects of the important factors for forestry profitability were then analysed and visualised as described below.
Design space modelling
Dimensions of the design space were constructed from previously published forestry inventory data (Nakajima et al. 2011a) and consisted of two stand condition factors (site index class and stand age), three site condition factors (area, and distances from forest road and strip road; referred to as ground condition variables in the previous study), one economic condition factor (discount rate) and one silvicultural planning factor (rotation period) as shown in Table 1. In this study, discount rate reflects the interest rate that is used to estimate the present value of future cash flows (Eatwell et al. 1987; Winton 1951). Discount rate is a measure of the capital cost and expected yield of investments.
Table 1 The design space of factors affecting forestry profitability
Minimum and maximum values were set for all factors, and then, a contribution index analysis was performed to determine each factor's effect on forest profitability, by calculating profitability (SEV) at values spanning from their minima to maxima.
Contribution analysis of selected factors
This study used previously published inventory data (Nakajima et al. 2011a) regarding stand age, site index and tree species. Additionally, the forestry profit simulator described by Nakajima et al. (2011a) was used to estimate the optimal rotation period in terms of soil expectation value. The model used the inventory data to simulate how factors such as rotation period length, site index and discount rate affect SEV.
The stem volume, timber volume and numbers of stems and timber logs were estimated from the local yield table with either the local yield table construction system or the WoodMax algorithm (Nakajima et al. 2011a). The felling, logging, bunching and skidding productivity (m3 person−1 day−1) were then calculated by applying the formulas described in Table 1 to variables that had been estimated from historical harvesting records.
The average stem and timber volumes were calculated by dividing the total stem and timber volume by the number of stems and timber logs, respectively, which had been derived from the local yield table through either the construction system or the WoodMax algorithm. The distance from forest roads was obtained from forest inventory data published by the Morotsuka village government. Other variables are shown in Additional file 1: Table S1 along with the silvicultural costs, which were proposed by the forest association during interviews and based on historical records of silvicultural practices. The interviews were conducted during the 2009 winter season. Staff members from the Mimi River Forest Association and a forest owner in the study site were invited to oral interviews regarding the practical harvesting area, silvicultural costs and subsidies. The same interview pool was used in a previous study (Nakajima et al. 2011a). The response rate of these questionnaire surveys was close to 100%.
Forest inventory data such as stand age, tree species and site index can be used as input data for the stand growth model presented in a previous study (Nakajima et al. 2011a).
Subsidies were not applied directly as a variable in this study; instead, they were treated as a constant, as described below.
The Japanese government subsidises various silvicultural practices in planted forests irrespective of the tree species, providing approximately 70% of the planting, thinning and other silvicultural costs. For this reason, these subsidies were not included as a variable in this study but rather treated as a constant with a value of 70% of silvicultural costs (Additional file 1: Table S1(b)). The reality of these subsidy systems was confirmed by checking active silvicultural areas throughout Japan and comparing the subsidies they received with the implemented silvicultural practices (Hiroshima and Nakajima 2006).
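In cost terms, treating the subsidy as a fixed share simply scales the silvicultural expenditure borne by the owner. The following minimal illustration uses a hypothetical cost value, not a figure from the study.

```python
SUBSIDY_RATE = 0.70                        # constant: ~70 % of silvicultural costs covered by subsidy
gross_silvicultural_cost = 1_000_000       # hypothetical planting/thinning cost (yen ha^-1)

subsidy = SUBSIDY_RATE * gross_silvicultural_cost
net_cost_to_owner = gross_silvicultural_cost - subsidy   # 300,000 yen ha^-1 in this example
```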
First, forestry profitability was estimated as SEV using the following two equations (Davis and Johanson 1987; Davis et al. 2001).
$$ {SEV}_1=\frac{a}{{\left(1+i\right)}^w-1} $$
where SEV1 is the present value of the land with a stand currently at age 1, a is the net income received at rotation age (w in years) every w years starting at the end of year w, and i is the discount rate (in decimals). However, the most frequent stand age in our study is approximately 30 years; thus, the stands represent relatively young forest, and w1 would be almost equal to w. If the optimal rotation period (w) derived from maximising SEV1 is lower than the current stand age (t), it cannot be applied as the first final cutting age; strictly speaking, the first optimal rotation period should not be lower than the current stand age. Therefore, we defined the first rotation period (w1) as that derived from maximising NRw1, after which the subsequent rotation periods (w) were calculated separately. The first optimal rotation periods were therefore selected as the rotation periods in this study.
$$ {SEV}_t=\frac{NR_{w1} + {SEV}_1}{{\left(1+i\right)}^{w1 - t}} $$
where SEVt is the present value of a stand currently at age t, NRw1 is the net return of the stand at age w1 and w1 − t is the number of years before the stand reaches rotation harvest age.
The variables regarding forestry profitability (a and NRw1) in Eqs. (1) and (2) were based on results from a previous study (Nakajima et al. 2011a). The discount rate and rotation age (i and w1) followed the ranges shown in Table 1. The ranges of initial conditions were defined as the maxima and minima for the factors shown in Table 1.
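For readers who wish to reproduce the calculation, the following Python sketch evaluates Eqs. (1) and (2) directly; the input values are purely illustrative and are not taken from the study's inventory or cost data.

```python
def sev_bare_land(a, i, w):
    """Eq. (1): SEV_1, the land value for net income `a` (yen ha^-1)
    received every `w` years at discount rate `i` (decimal)."""
    return a / ((1 + i) ** w - 1)


def sev_existing_stand(nr_w1, sev_1, i, w1, t):
    """Eq. (2): SEV_t for a stand currently aged `t`, harvested at the
    first rotation age `w1` with net return `nr_w1` (yen ha^-1)."""
    return (nr_w1 + sev_1) / (1 + i) ** (w1 - t)


# Illustrative inputs (hypothetical, not the study's data)
a = 2_000_000        # net income per subsequent rotation (yen ha^-1)
i = 0.03             # discount rate
w = 60               # subsequent rotation period (years)
w1, t = 55, 30       # first final cutting age and current stand age (years)
nr_w1 = 2_500_000    # net return at first final cutting (yen ha^-1)

sev_1 = sev_bare_land(a, i, w)
sev_t = sev_existing_stand(nr_w1, sev_1, i, w1, t)
print(f"SEV_1 = {sev_1:,.0f} yen ha^-1, SEV_t = {sev_t:,.0f} yen ha^-1")
```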
A contribution index, which treated forestry profitability as an objective function and was based solely on main effects (Gu 2002), was also calculated:
$$ {C}_i=\frac{V_i}{V\left[{SEV}_t\right]} $$
Here, Ci is the contribution index of the stand age, site index class, distance from strip road, distance from forest road, stand area, discount rate and rotation period, for i = 1 to 7, respectively; V[SEVt] is the variance of forestry profits over the total area; and Vi is the variance of SEV attributable to each of the listed factors (i = 1–7).
To determine the effect of each factor (i = 1, …, 7) on forest profitability, their contributions to its variance were calculated. The relationship between V[SEVt] and Vi was calculated using the following formulae, in conjunction with the smoothing spline ANOVA method (Gu 2002).
$$ V\left[{SEV}_t\right]=\sum_{i=1}^{7}{V}_i $$
$$ {V}_i=\frac{1}{n}\sum_{j=1}^{n}{\left({SEV}_{i,j}-\overline{SEV_i}\right)}^2 $$
where SEVi,j is the SEV of factor i for sample j (the sample ID, j = 1, …, n), and the overbar denotes the mean SEV for factor i.
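As a minimal sketch of the contribution index and variance formulae above, the indices can be computed as follows, assuming the per-factor SEV samples have already been generated by the profit simulator across the design space; the arrays below are random placeholders rather than the study's simulation output.

```python
import numpy as np


def contribution_indices(sev_by_factor):
    """Contribution index C_i = V_i / V[SEV_t], with V_i the main-effect
    variance of SEV attributable to factor i and V[SEV_t] their sum."""
    v_i = {f: np.var(vals) for f, vals in sev_by_factor.items()}  # (1/n) * sum (SEV_ij - mean_i)^2
    v_total = sum(v_i.values())
    return {f: v / v_total for f, v in v_i.items()}


# Placeholder SEV samples (yen ha^-1) obtained by varying one factor at a
# time between its minimum and maximum while the others are held fixed.
rng = np.random.default_rng(0)
spreads = {"discount rate": 8e5, "rotation period": 4e5, "site index class": 2.5e5,
           "harvesting area": 2e5, "stand age": 1e5,
           "distance from forest road": 1e4, "distance from strip road": 1e4}
sev_by_factor = {f: rng.normal(1.0e6, s, 200) for f, s in spreads.items()}

c = contribution_indices(sev_by_factor)
significant = {f: ci for f, ci in c.items() if ci >= 0.001}  # keep contributions >= 0.1 %
```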
Visualisation of stand-level relationships between SEV and significant factors
Only factors with a calculated contribution to SEV of at least 0.1% were included in the study. All others were excluded, and the identification of significant factors ceased when the cumulative contribution of the included factors exceeded 99.7%. The relationships between SEV and these significant factors were then graphically visualised.
In addition to the ANOVAs mentioned above, a multiple regression analysis (Leona et al. 1996) was performed to estimate how stand, site and economic conditions, as well as silvicultural planning, influence forest profitability, measured as SEV. Only the factors that met the requirements set in the contribution index analysis were included in the multiple regression analysis.
Using these factors, the SEV was calculated as
$$ \mathrm{SEV}={a}_0+{a}_1{x}_1+{a}_2{x}_2+{a}_3{x}_3+{a}_4{x}_4+\dots+{a}_n{x}_n $$
where x1, x2, x3, x4, …, xn represent selected variables derived from the analysis mentioned above and a0, a1, a2, a3, a4, …, an are constants. We used stepwise selection to determine the variables that would be included in the final model, with a requirement that all included variables have a partial F statistic with a probability less than, or equal to, 0.05. Statistical analyses were performed in the software package Excel Statistics 2006 (Microsoft, Redmond, WA) according to recommendations from Social Survey Research Information (Social Survey Research Information 2006).
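A sketch of the stepwise procedure is shown below, implemented with ordinary least squares in Python rather than the Excel add-in used in the study; when a single variable is added, its partial F test is equivalent to the squared t test on its coefficient, so the coefficient p-value is used as the inclusion criterion. The column names and data frame are assumptions for illustration.

```python
import statsmodels.api as sm


def forward_stepwise_ols(X, y, alpha=0.05):
    """Forward stepwise selection: repeatedly add the candidate predictor
    whose partial F test (coefficient t test) has the smallest p-value,
    stopping when no remaining candidate reaches p <= alpha."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = fit.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] > alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit()


# Hypothetical usage: X is a pandas DataFrame of candidate predictors
# (e.g. columns "area", "discount_rate", "rotation", "site_index_class"),
# y is the simulated SEV for each design-space sample.
# model = forward_stepwise_ols(X, y)
# print(model.params, model.pvalues)
```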
Estimation of forest-level profitability distributions
The significant factors identified as described above were used to estimate SEV-based profitability distributions, based on the inventory data used to construct the design space and optimal rotation periods, for the studied forest.
To confirm the robustness of the detected effects of increasing the harvesting area on forestry profitability, we calculated two SEV distributions depending on local-scale rotation periods. "Local-scale rotation periods" is used here as a generic term for the optimised rotation periods of the studied forest area. The total stand area was approximately 1700 ha, selected randomly based on a previous study (Nakajima et al. 2011a).
In both cases, we excluded stands with a negative SEV (yen ha−1); the management of these stands would be unprofitable, causing forest owners to abandon them. The first SEV distribution is based on the assumption that the harvesting area is equal to the target stand area. This distribution is equal to the SEV estimated through Eq. 2. In the calculation of the second distribution, it was assumed that stands with areas less than 1 ha would be treated as if their harvesting area was 1 ha. However, the SEV value can change under certain conditions, for example, if the timber price dramatically increases. In this situation (a higher timber price), stands that initially had a negative SEV may actually have a positive SEV. In this way, it may not be rational for forest owners to exclude stands with negative SEV values because timber prices fluctuate, and future SEV values could be positive. However, it is difficult to predict the future timber price. Additionally, the timber price has not dramatically changed during recent years. Under these conditions, excluding a negative SEV may be rational for forest owners. Therefore, this study fixed the timber price based on results from a previous study (Nakajima et al. 2011a).
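The two assumptions can be expressed as a small filtering and adjustment step over the stand records. The sketch below is a simplified stand-in for the forest-level aggregation: `stand_sev` is a placeholder for the profit simulator, and the record fields are assumptions rather than the actual inventory schema.

```python
def total_sev(stands, stand_sev, min_harvest_area=None):
    """Sum SEV (yen) over profitable stands.

    stands           -- iterable of dicts, e.g. {"area_ha": 0.5, "site_index": 1, "age": 30}
    stand_sev        -- callable(stand, effective_area_ha) -> SEV in yen ha^-1
                        (stand-in for the profit simulator)
    min_harvest_area -- if set (assumption 2), stands smaller than this are
                        costed as if harvested over this area (e.g. 1 ha)
    """
    total = 0.0
    for stand in stands:
        effective_area = stand["area_ha"]
        if min_harvest_area is not None:
            effective_area = max(effective_area, min_harvest_area)
        sev_per_ha = stand_sev(stand, effective_area)
        if sev_per_ha <= 0:        # both cases exclude unprofitable stands
            continue
        total += sev_per_ha * stand["area_ha"]
    return total


# distribution_1 = total_sev(stands, stand_sev)                        # assumption 1
# distribution_2 = total_sev(stands, stand_sev, min_harvest_area=1.0)  # assumption 2
```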
Key factors affecting profitability
The contribution indices of discount rate, rotation period, site index class, harvesting area and stand age to forest profitability were 0.71, 0.16, 0.07, 0.05 and 0.02, respectively. Distances from forest and strip roads did not meet the criteria for significant factors and thus were not included in the analysis. The stand age was set to 30 years for the contribution index analysis as this was the most frequent stand age within the study site.
The three-dimensional visualisation presented in Fig. 1 illustrates how changes in discount rate and rotation period affect profitability when site index class, stand age and harvesting area were set to 1, 30 years and 1 ha, respectively. The figure shows the maximum SEV plotted against the discount rate and rotation period, as according to the contribution index analysis, these two factors had the largest impact on profitability. The optimal rotation period ranges from 65 years (discount rate 5%) to 80 years (discount rate 1%).
The three-dimensional SEV (yen) solution space defined by discount rate (%) and rotation period (year), with site index class, stand age and harvesting area set to 1, 30 years and 1 ha, respectively
Differences in current stand condition and other factors could alter the shape of the solution space, but the effects of discount rate and rotation period on forest profitability would remain similar.
The results show not only that SEV decreases as the discount rate increases but also that the rotation period affects SEV. The three-dimensional solution space reveals a point where a specific rotation period and discount rate maximise SEV. To determine the optimal rotation period, the discount rate can be held at a constant level and the solution space can be transformed into a digital elevation model (DEM). From this, the rotation period curve can be treated as a "mountain edge" and measured with the flow accumulation algorithm in ArcGIS 10 (ESRI, Redlands, CA).
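Outside a GIS environment, the same "mountain edge" can be traced by evaluating SEV on a grid of discount rates and rotation periods and reading off the rotation that maximises SEV at each fixed discount rate. The sketch below does this with a placeholder SEV function; it is not the study's simulator, so the numerical ridge differs from the published values.

```python
import numpy as np
import matplotlib.pyplot as plt


def sev_solution_space(sev_fn, discount_rates, rotations):
    """Evaluate SEV on a (discount rate x rotation period) grid and return
    the grid together with the SEV-maximising rotation at each rate."""
    grid = np.array([[sev_fn(i, w) for w in rotations] for i in discount_rates])
    optimal_rotation = rotations[np.argmax(grid, axis=1)]   # the "mountain edge"
    return grid, optimal_rotation


# Placeholder SEV function: rising net income divided by the Faustmann factor
sev_fn = lambda i, w: 6e6 * (1 - np.exp(-0.04 * (w - 20))) / ((1 + i) ** w - 1)

rates = np.linspace(0.01, 0.05, 41)      # discount rates 1-5 %
rotations = np.arange(30, 101)           # rotation periods 30-100 years
grid, optimal_w = sev_solution_space(sev_fn, rates, rotations)

# Three-dimensional surface analogous to Fig. 1
W, R = np.meshgrid(rotations, rates)
ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(R * 100, W, grid)
ax.set_xlabel("Discount rate (%)"); ax.set_ylabel("Rotation period (yr)"); ax.set_zlabel("SEV (yen)")
plt.show()
```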
The multiple regression analysis produced the following profitability model for planted forests in Japan:
$$ \mathrm{S}\mathrm{E}\mathrm{V}=2044500+656380A-417510D+6981.8R-793220SI $$
where A is the area (ha), D is the discount rate (%), R is the rotation period (year) and SI is the site index class.
The fit of the model was significant, and the p values for area (ha), discount rate (%), rotation period (year) and site index class were all less than 0.01. The coefficients for area and rotation period are positive, which means that SEV increases when the rotation period and/or the harvested area increase. On the other hand, the coefficients for discount rate and site index class are negative, which means that SEV decreases when the discount rate and/or the site index class increase. This multiple regression analysis identifies the significant factors that affect profitability and presents a statistical model that can simplify the calculation of forest profitability at other planted forests in Japan.
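Since the fitted coefficients are given above, the model can be applied directly as a quick profitability estimate. The sketch below evaluates it for one illustrative input combination; the values are chosen only for demonstration and are not taken from the study's inventory.

```python
def sev_regression(area_ha, discount_pct, rotation_yr, site_index_class):
    """Fitted multiple regression model for SEV (yen)."""
    return (2_044_500 + 656_380 * area_ha - 417_510 * discount_pct
            + 6_981.8 * rotation_yr - 793_220 * site_index_class)


# Illustrative inputs: 1-ha stand, 3 % discount rate, 60-year rotation, site index class 1
print(sev_regression(area_ha=1.0, discount_pct=3.0, rotation_yr=60, site_index_class=1))
# -> about 1.07 million yen
```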
Contour maps in digital elevation models illustrate the relationship between optimal rotation period and discount rate in various scenarios.
This method for determining the optimal rotation period was then applied to scenarios with the minimum harvesting area set to 1, 1.5 or 3 ha and site index class set to 1, 2 or 3. In the nine resulting possibilities, the optimal rotation period was charted via the mountain edge method over the solution space, as shown in Fig. 2. It was noted that an increase in harvesting area leads to an increase in the SEV and a decrease in the optimal rotation period.
The SEV (yen) solution spaces for nine scenarios with indicated harvesting areas, rotation periods and discount rate with site index classes 1 (top row), 2 (middle row) and 3 (bottom row). The stand age was set to 30 years for each scenario
The SEV ranges from approximately −9.45 million to 46.11 million, −17.40 million to 29.40 million and −26.30 million to 20.70 million yen ha−1 for site index classes 1, 2 and 3, respectively (Fig. 2). Hence, the range of SEV widens as the site index and harvesting area increase. Furthermore, the SEV range shifts when the discount rate changes. For example, the SEV ranges for site index classes 1, 2 and 3 are approximately 1.04 million to 4.61 million, 0.26 million to 1.64 million and 0.14 million to 1.28 million yen ha−1, respectively, when the discount rate is between 1 and 2%, but approximately 47,853 to 2.37 million, −23,000 to 1.24 million and −1.63 million to 1.01 million yen ha−1, respectively, when the discount rate is between 2 and 5%.
Effects of optimal rotation period on SEV distributions
The dependence of SEV on optimal rotation periods (under the two assumptions mentioned above) is illustrated in Fig. 3.
SEV distributions, in hectares, at indicated optimal rotation periods and discount rates, assuming that stands with negative SEV would be excluded (a–c) and that stands smaller than 1 ha would be treated like 1-ha harvesting stands (d–f). Increases in SEV are shown by increasingly dark areas within the bars, as shown in the keys
The total SEV of the distributions under the first assumption (that stands with negative SEV would be excluded) at discount rates of 1, 3 and 5% are approximately 2.14, 6.46 and 3.92 trillion yen, respectively, 50% of which falls within the age ranges of 30–44, 30–44 and 30–64 years, respectively (Fig. 3a–c). Profitability was generally highest with relatively short rotation periods. The total annually harvested areas, under the first assumption, at discount rates of 1, 3 and 5% were approximately 952, 731 and 722 ha, respectively (Fig. 3a–c). The average rotation periods under these discount rates were 70, 59 and 54 years, respectively.
In the SEV distributions obtained with the second assumption, the annually harvested areas at discount rates of 1, 3 and 5% were 1774, 1733 and 1025 ha, respectively (Fig. 3d–f). The average rotation periods in these cases, at the same discount rates, were 80, 70 and 60 years, respectively. These distributions were based on the assumption that stands with areas ≤1 ha would be treated as 1-ha stands during harvesting. This was due to the results of a questionnaire survey, which demonstrated that the minimal viable harvesting area was 1 ha. As the average stand area was approximately 0.5 ha, this assumption was intended to determine how profitability would increase if multiple adjacent small stands were harvested simultaneously in order to lower costs. For example, two 0.5-ha stands could be harvested at the same time, effectively creating a harvesting area of 1 ha. This simulation showed that simultaneous harvesting of small plots could raise profitability by 186, 242 and 138% at discount rates of 1, 3 and 5%, respectively.
It was important to select only factors for the study that had significant effects on forest profitability, defined as having an effect of at least 0.1%. The contribution index analysis showed that the key factors affecting forestry profitability, in decreasing order, were discount rate, rotation period, site index, harvesting area and stand age. The discount rate and rotation period determined approximately 90% of the total profitability. However, a stratification analysis suggested that effects of these two variables could noticeably change with changes to various stand and site factors. Discount rate has been previously shown to significantly affect profitability estimates (Davis et al. 2001; Richard and Puneet 2015), so we expected it to be one of the main factors in our study. Rotation periods determine the harvest intervals for a given stand, so variations in these periods will also affect the profitability of a forest area (Gunalay and Kula 2012). Site index is a measure of the productivity of a stand and thus is likely to reflect harvesting frequency. Increasing the harvesting area can also improve profitability due to the fixed costs of harvesting timber, such as labour and machinery.
When small stands are harvested separately, moving the harvesting machinery between sites incurs additional costs and inefficiency. These costs can be reduced, and harvesting efficiency improved, if the small stands are aggregated into a larger harvesting area. Distributing costs over a large area rather than individual small stands could therefore boost profitability.
Stand age had the smallest effect of the five significant factors, but this variable can influence profitability due to the increase in timber volume as stand age increases. Distances from forest and strip roads did not meet the criteria for inclusion as significant factors, although they influence accessibility of harvesting areas and hence harvesting costs.
An increase in stand area also increased the effects of rotation period, as shown in Fig. 2. The figure illustrates that the optimal rotation period decreases as stand area increases under the conditions defined in this study (Table 1; Additional file 1: Table S1). Although it is relatively difficult to obtain large amounts of high-quality (i.e. large-diameter) timber under shorter rotation periods, large-diameter timber does not command a significant price premium in the target timber market. If we considered the timber market for specific traditional buildings, such as shrines and temples, the optimal rotation period and silvicultural system would most probably shift.
This is consistent with expectations, as a larger stand area provides greater timber volume per harvesting operation because it can be harvested more efficiently per hectare than a small area, and the rotation period can be reduced while maintaining profitability.
Thus, as shown in the simulations in Fig. 3, small stand areas could be harvested simultaneously to increase profitability in situations where average stand area at the site is smaller than the minimal viable harvesting area (Nakajima et al. 2011b). The relationship between site index and optimal rotation period is also shown in Fig. 2. This result was expected since site index is a key determinant of tree growth rate and usually measured in terms of growth.
The SEV is more sensitive to changes in the discount rate when the discount rate is low (1–2%) than when it is high (2–5%), as shown in Figs. 1 and 2. Interestingly, the SEV is more sensitive to changes in rotation period when the discount rate is higher. The optimal rotation period increased when the discount rate increased from 3 to 5% (Fig. 3). Furthermore, Fig. 2 illustrates that optimal rotation period decreases as the harvesting area increases. Thus, the best rotation period for profitability depends on various economic and stand conditions, and a shorter rotation period is not ideal for all situations.
The discount rate is determined by a country's economic conditions. If we regard forestry as a competitive industry, a discount rate of more than 4% may be appropriate. However, the SEV of certain stands was negative at discount rates under 5%. Hence, there are currently a limited number of profitable stands at the study site, even though it is located in one of the most productive areas for the Japanese forestry industry. In this way, the profitability of an average Japanese forestry area would be even lower than what was estimated in this study. However, as expected, when the discount rate decreased to 3%, and to 1%, the number of stands with a positive SEV increased.
It is not straightforward to solely consider the effects of discount rates for the management of natural resources, as there may be strong ecological, ethical and political pressures to conserve the resources (Heal 2000). However, a discount rate of 1% is very low for a competitive industry, such as forestry (if the complicating aspects are neglected). In this context, stands that have a negative SEV at a discount rate of 5% should not be considered as profitable forestry areas. However, large Japanese timber mills require immense amounts of harvested timber for processing to remain profitable. Failure to provide enough raw material to meet these mills' operational demands would severely impair Japanese forestry and associated businesses. Therefore, if the mills are highly profitable, it may be worthwhile, from an economic perspective, to harvest forest stands with slightly negative SEV (via appropriate redistribution of profits) to maintain forestry product supply chains. Another complicating factor is that according to current national forest management plans, more than 30% of the total clear-cut area in Japan should no longer be maintained as planted forests (Forestry Agency 2014) but regenerated as natural stands, such as mixed hardwood forests. However, such stands should also be predominantly in areas with low profitability (and hence strongly negative SEV and poor fertility).
The densities of the contours in Fig. 2 illustrate the stability of SEV over varying rotation periods and discount rates. A higher density represents lower SEV stability. For example, the SEV is unstable between discount rates of 1 to 2%, as small changes in discount rate strongly affect profitability, but changes in the discount rate above 2% have weaker effects on SEV. This pattern has been noted in previous studies (Bettinger et al. 2009; Chladná 2007; Davis et al. 2001). The density of contours increases as harvesting area and/or site index increases, as shown in Fig. 2. This suggests that the selection of an optimal rotation period is more important in large areas with high productivity than in small stands (unless numerous small stands can be harvested simultaneously, or they can be harvested simultaneously with large stands).
In addition to these solution spaces, the multiple regression analysis provides further evidence for how the factors included in this study influence SEV. Generally, a larger harvesting area translates into better harvesting efficiency. In this way, it would be reasonable that as harvesting area increases, SEV improves, as was shown in our analyses. Furthermore, previous studies have shown that a higher discount rate and/or site index will decrease both the SEV and net present value of a stand (Davis et al. 2001; Nakajima et al. 2011b). The results of our analyses are consistent with previous studies (Bettinger et al. 2009; Davis et al. 2001) and demonstrate that the rotation period length, as well as stand, site and socioeconomic conditions, all have the expected effects on SEV.
Over the last 35 years, the price of timber has decreased, which has reduced forestry profitability and subsequently caused almost all of Japan's forest owners to become dependent on government subsidies. Previous studies have indicated that the intensity of silvicultural practices in an area, including planting, weeding, pruning, pre-commercial thinning and thinning, strongly correlates with the amount of national subsidy available (Hiroshima and Nakajima 2006). Based on previous research and the current forestry situation in Japan, subsidies could be considered as another variable to represent a socioeconomic condition in the calculation of forestry profitability (Nakajima et al. 2011a).
In relation to the categories of factors that affect forestry profitability defined in this study (Table 1), previous studies have considered the stumpage price (Penttinen 2006) as an economic condition and planting density as part of the silvicultural system. Planting density is also strongly related to future stand conditions. With additional data collection, these variables could be included in the presented model to determine how they affect forestry profitability. Furthermore, as the simulation system proposed in this study has also been applied to Japanese carbon emission reduction systems (Nakajima et al. 2011c), it could be used to estimate how the carbon price would affect rotation periods. A previous study (Coordes 2014) has shown that this kind of rotation analysis could be applied to adaptive forest management under various uncertainties. Because forest-level adaptive management is related to the aggregation of stand-level simulations within the target forest area, it would be possible to apply these results to adaptive forest management by combining the presented stand-level and forest-level analyses for a selected Japanese forest area.
Various studies have shown how forest profitability is dependent on the discount rate and numerous other variables, such as the carbon tax rate (Chladná 2007). This study employed a solution space to visualise the effects of various factors on forest profitability, measured through SEV. When the results are compared with previous studies (Loisel 2014; Price 2011), our study provides additional information, such as the stability of forest profitability, which is shown by the density of contour lines in the solution spaces (Fig. 2). This stability can provide additional information for forest managers and owners, helping them determine the optimal rotation period.
The multiple regression model used here includes harvesting with a logging vehicle based on relatively high road density. Therefore, the regression analysis should be recalculated if estimates of forest profitability under other harvesting systems, such as skyline harvesting, are desired. However, this result suggested that forest profitability can be expressed through simplified regression models. In this way, the presented regression model has an advantage in that it can estimate forest profitability through simple calculations. Additionally, the Japanese government hopes to increase road construction to encourage harvesting with logging vehicles in the future (Forestry Agency 2014). Therefore, the regression analysis applied in this study, which is based on harvesting with logging vehicles, is logical for estimating forest profitability in Japan.
Certain factors, such as site index, are largely insensitive to human activity. The application of appropriate fertilisers could increase the productivity of a site, but this practice is not common in Japanese silviculture, so factors based on natural resources should remain constant. Other factors, such as the harvesting area and rotation period, can be optimised quite easily. Modifying these factors does not raise costs, so it would be straightforward to increase profitability through, for instance, the synchronisation of harvest times and other silvicultural operations.
Modifications to harvesting area and rotation age are based on the decision-making of forest owners and a harvesting operational schedule, so the modification of these factors does not raise tangible costs, such as additional harvesting equipment expenses or labour costs.
If the total harvesting area in a local forestry area dramatically increases, then supply and demand effects, such as hiring a forestry crew, will increase during periods of high demand. However, even if the total harvesting area remains the same, the synchronisation of harvest times and other silvicultural operations that results from aggregating small stands into a larger area would increase harvesting effectiveness (Hansmann et al. 2016; Kittredge 2005).
Although we did not consider the possibility that the intangible costs (i.e. time spent negotiating the synchronisation of harvest times and other silvicultural operations amongst owners of small stands (Kittredge 2005)) might increase, it would be in the best interest of all forest owners to synchronise harvest times and silvicultural operations by aggregating small stands.
On the other hand, factors such as the establishment of forest roads incur initial physical costs, so changes to these types of factors are not ideal from an economic viewpoint. The most feasible strategy would be to increase the harvesting area with the consensus of forest owners.
The optimal rotation period may have to be recalculated when the harvest times of adjacent stands are synchronised. However, the rotation period should be based on the areas with the highest profitability, as changes in the rotation period of these areas can strongly affect overall SEV, and results of the local forest management analysis confirmed the importance of minimising changes to rotation periods in high profitability stands when the rotation period is optimised. The Japanese government has plans to increase forest road density and encourages the simultaneous harvesting of adjacent stands, as well as certain stands that are connected by a forest road (Forestry Agency 2014). Furthermore, according to the forest association interviews, the simultaneous harvesting of adjacent stands, even if they are owned by various private foresters, would be possible in the studied forest area.
An increase in the number of stands harvested per harvesting operation, as suggested, would increase the areas in which the SEV exceeds 0, and abandoned stands would be subject to active silvicultural management. Abandoned stands usually have high densities, large dead wood contents and poor conditions for sustainable forestry (Nakajima et al. 2011d). Thus, the simultaneous harvesting approach described above could increase the profitable stand area by increasing the area of actively managed stands.
The importance of increasing the total harvesting area by increasing the harvest of stands smaller than 1 ha (following assumption 2) was confirmed through an analysis of how local-scale variations affect rotation period, as illustrated in Fig. 3. Both efficiency and profitability would increase if groups of small adjacent stands were harvested simultaneously. The figure also shows that the rotation period increases when the discount rate rises, due to future returns becoming more attractive.
The national government has suggested that the final cutting area should be expanded to increase national timber production. It is possible that profitable stands could be harvested on a priority basis, as profitable stands typically have shorter rotation optimal periods than less profitable stands. It has also been suggested that the harvested stands should be replanted, with a management objective of maintaining stand and site conditions that most affect forest profitability.
Other studies have analysed uncertainties by including risk variables, such as biomass resources and fire hazards, in the calculations (Shettles et al. 2015; North Carolina Use-Value Advisory Board 2012). Spatial uncertainty in a large forest area has also been addressed (Wei and Murray 2015).
This study provides a starting point for the further analysis of uncertainties related to various factors in forest management. For example, one of the main risks in forest management in Japan is wind. By combining techniques presented in this study and previous research, it would be possible to estimate, and predict, the uncertainty of forestry profitability in Japan associated with wind and other risk factors.
This study presents a method to estimate and visualise forestry profitability based on variables defined in previous studies. Dimensions of the design space were constructed from previously published forestry inventory data and consisted of two stand condition factors, three site condition factors, one economic condition factor and one silvicultural planning factor. This study used previously published inventory data regarding stand age, site index and tree species. Additionally, the forestry profit simulator was used to estimate the optimal rotation period in terms of soil expectation value. The relationships between SEV and these significant factors were then graphically visualised. The significant factors identified as described above were used to estimate SEV-based profitability distributions, based on the inventory data used to construct the design space and optimal rotation periods, for the studied forest.
The design space included economic and forest stand factors that can affect profitability. A contribution index analysis identified factors that significantly impact profitability, and these factors were then applied to data collected from a forest area in Japan. The effects of the two primary factors, discount rate and rotation period length, on a measure of profitability, the soil expectation value, were visualised in three-dimensional space.
Bettinger, P., Boston, K., Siry, J. P., & Grebner, D. L. (2009). Forest management and planning. Burlington: Academic.
Boyce, S. G. (1995). Landscape forestry. New York: John Wiley & Sons.
Cademus, R., Escobedo, F., McLaughlin, D., & Abd-Elrahman, A. (2014). Analyzing trade-offs, synergies, and drivers among timber production, carbon sequestration, and water yield in Pinus elliotii forests in Southeastern USA. Forests, 5, 1409–1431.
Chang, S. J., & Gadow, K. (2010). Application of the generalized Faustmann model to uneven-aged forest management. Journal of Forest Economics, 16(4), 313–325.
Chladná, Z. (2007). Determination of optimal rotation period under stochastic wood and carbon prices. Forest Policy and Economics, 9(8), 1031–1045.
Coordes, R. (2014). Thinnings as unequal harvest ages in even-aged forest stands. Forest Science, 60(4), 677–690.
Davis, L. S., & Johanson, K. N. (1987). Forest management. New York: McGraw-Hill.
Davis, L. S., Johanson, K. N., Bettinger, P., & Howard, T. E. (2001). Forest management to sustain ecological, economic and social values. New York: McGraw-Hill.
Eatwell, J., Milgate, M., & Newman, P. (1987). The new Palgrave: a dictionary of economics. London: Macmillan.
Faustmann, M. (1968). Calculation of the value which forest land and immature stands possess for forestry, in Gane, ed. 1968a: 27–55, translated by W. Linnard from Allgemeine Forst- und Jagdzeitung, 15 December 1849, 27–55
Forestry Agency. (2007). Annual report on trends of forest and forestry—fiscal year 2006 (p. 17). Tokyo: Japan Forestry Association.
Forestry Agency. (2014). Forestry statistics (p. 260). Tokyo: Japan Forestry Association (in Japanese).
Gu, C. (2002). Smoothing spline ANOVA models. Berlin: Springer-Verlag Inc.
Gulbrandsen, L. H. (2005). Mark of sustainability? Challenges for fishery and forestry ecolabeling. Environment, 47, 8–23.
Gunalay, Y., & Kula, E. (2012). Optimum cutting age for timber resources with carbon sequestration. Resources Policy, 37, 90–92.
Halbritter, A., & Deegen, P. (2015). A combined economic analysis of optimal planting density, thinning and rotation for an even-aged forest stand. Forest Policy and Economics, 51, 38–46.
Hansmann, R., Kilchling, P., & Seeland, K. (2016). The effects of regional forest owner organizations on forest management in the Swiss Canton of Lucerne. Small-Scale Forestry, 15, 159.
Harrington, C. A. (1990). Alnus rubra Bong. Red Alder. In R. M. Burns & B. H. Honkala (Eds.), Silvics of North America, volume 2: hardwoods (p. 654). Washington, DC: U.S. Department of Agriculture, Forest Service, Agriculture Handbook.
Hartman, R. (1976). The harvest decision when a standing forest has value. Economic Inquiry, 14(1), 52–58.
Heal, G. (2000). Nature and the marketplace: capturing the value of ecosystem services (p. 203). Washington, DC: Island Press.
Hiley, W. E. (1967). Woodland management (p. 464). London: Faber & Faber.
Hiroshima, T., & Nakajima, T. (2006). Estimation of sequestered carbon in Article-3.4 private planted forests in the first commitment period in Japan. Journal of Forest Research, 11, 427–437.
Hoel, M., & Sletten, T. M. R. (2016). Climate and forests: the tradeoff between forests as a source for producing bioenergy and as a carbon sink. Resource and Energy Economics, 43, 112–129.
Hornbeck, J. W., & Swank, W. T. (1992). Watershed ecosystems analysis as a basis for multiple-use management of eastern forests. Ecological Applications, 2, 238–247.
Hyytiäinen, K., Hari, P., Kokkila, T., Mäkelä, A., Tahvonen, O., & Taipale, J. (2004). Connecting process-based forest growth model to stand-level economic optimization. Canadian Journal of Forest Research, 34, 2060–2073.
IPCC. (2000). Land use, land-use change and forestry. Special report of the intergovernmental panel on climate change (p. 377). Cambridge: Cambridge University Press.
IPCC. (2007). In B. Metz, O. R. Davidson, P. R. Bosch, R. Dave, & L. A. Meyer (Eds.), Climate change mitigation. Contribution of working group III to the fourth assessment report of the intergovernmental panel on climate change (p. 852). Cambridge: Cambridge University Press.
Kittredge, B. D. (2005). The cooperation of private forest owners on scales larger than one individual property: international examples and potential application in the United States. Forest Policy and Economics, 7, 671–688.
Leona, S. A., Stephen, G. W., & Raymond, R. R. (1996). Multiple regression: testing and interpreting interactions. Thousand Oaks: Sage Publications.
Loisel, P. (2014). Impact of storm risk on Faustmann rotation. Forest Policy and Economics, 38, 191–198.
Lu, N., Fu, B., Jin, T., & Chang, R. (2014). Trade-off analyses of multiple ecosystem services by plantations along a precipitation gradient across Loess Plateau landscapes. Landscape Ecology, 29(10), 1697–1708.
Marianov, V., Snyder, S., & ReVelle, C. (2004). Trading off species protection and timber production in forests managed for multiple objectives. Environment and Planning B: Planning and Design, 31, 847–862.
Millennium Ecosystem Assessment Board. (2005). Millennium ecosystem assessment: ecosystems and human well-being: synthesis (p. 155). Washington, DC: Island Press.
Miyazaki Prefecture Government. (2015). Miyazaki forestry statistical data (p. 179). Miyazaki: Miyazaki Prefecture.
Nakajima, T., Matsumoto, M., Sasakawa, H., Ishibashi, S., & Tatsuhara, S. (2010). Estimation of growth parameters using the local yield table construction system for planted forests throughout Japan. Journal of Forest Planning, 15(2), 99–108.
Nakajima, T., Kanomata, H., Matsumoto, M., Tatsuhara, S., & Shiraishi, N. (2011a). Cost-effectiveness analysis of subsidy schemes for industrial timber development and carbon sequestration in Japanese forest plantations. Journal of Forestry Research, 22, 1–12.
Nakajima, T., Kanomata, H., Tatsuhara, S., & Shiraishi, N. (2011b). Simulation of the spatial distribution of thinning area under different silvicultural subsidy systems in Japanese plantation forests. Folia Forestalia Polonica, 53(1), 3–16.
Nakajima, T., Matsumoto, M., Sakata, K., & Tatsuhara, S. (2011c). Effects of the Japanese carbon offset system on optimum rotation periods and forestry profits. International Journal of Ecological Economics & Statistics, 21(11), 1–18.
Nakajima, T., Matsumoto, M., & Shiraishi, N. (2011d). Modeling diameter growth and self-thinning in planted Sugi (Cryptomeria japonica) stands. The Open Forest Science Journal, 4, 49–56.
North Carolina Use-Value Advisory Board. (2012). Use-value manual. For Agricultural, Horticultural and forest land. NC, USA: North Carolina Use-Value Advisory Board.
Oka, M. (2006). The study of analysis and valuation of harvesting operation by mechanisation (doctoral dissertation). Tokyo: University of Tokyo. (in Japanese. Title translated from Japanese to English)
Olson, D. F., Jr., Roy, D. F., & Walters, G. A. (1990). Sequoia sempervirens (D. Don) Endl. Redwood. In R. M. Burns & B. H. Honkala (Eds.), Silvics of North America, volume 1: conifers (p. 654). Washington, DC: U.S. Department of Agriculture, Forest Service, Agriculture Handbook.
Parajuli, R., & Chang, S. J. (2012). Carbon sequestration and uneven-aged management of loblolly pine stands in the Southern USA: a joint optimization approach. Forest Policy and Economics, 22, 65–71.
Penttinen, M. (2006). Impact of stochastic price and growth processes on optimal rotation age. European Journal of Forest Research, 125, 335–343.
Pressler, M. R. (1860). For the comprehension of net revenue silviculture and the management objectives derived thereof [in German]. Allgemeine Forst und Jagd-Zeitung, 36, 173–191.
Price, C. (2011). Optimal rotation with declining discount rate. Journal of Forest Economics, 17, 307–318.
Probst, J. R., & Crow, T. R. (1991). Integrating biological diversity and resource management. Journal of Forestry, 89(1), 12–17.
Pukkala, T. (2002). Multi-objective forest planning (p. 207). Boston: Kluwer.
Richard, B. J., & Puneet, D. (2015). Optimal forest rotation with multiple product classes. Forest Science, 61, 458–465.
Samuelson, P. (1976). Economics of forestry in evolving society. Economic Inquiry, 14, 466–492.
Shettles, M., Temesgen, H. G., Andrew, N., et al. (2015). Spatial uncertainty in harvest scheduling. Forest Ecology and Management, 354, 18–25.
Social Survey Research Information. (2006). Excel statistics 2006. Tokyo: Social Survey Research Information Co., Ltd.
Wei, R., & Murray, A. T. (2015). Spatial uncertainty in harvest scheduling. Annals of Operations Research, 232, 275–289.
Westoby, J. (1989). Introduction to world forestry (p. 240). Hoboken: Wiley-Blackwell.
Winton, J. R. (1951). A dictionary of economic terms: for the use of newspaper readers and students. London: Routledge & K. Paul.
The authors would like to thank the staff at the Forestry Administration of Miyazaki Prefectural Government for their assistance in the data collection. This study was partly funded by Research Fellowships from the Japanese Science Promotion Society.
NS provided the specialistic information and knowledge regarding the previous and historical research about the rotation periods (L87-103 in revised manuscript). HK provided the logical idea for explaining the fact that the distribution of costs over a large area rather than individual small stands could boost profitability (L402-405 in revised manuscript) and contributed to the previous harvesting cost model (Nakajima et al. 2011a) used by this study paper. MM was a research project leader regarding the previous forest profitability simulation models (Nakajima et al. 2011a) used by this study paper and provided the specific idea for future vision and relationships between the presented study and previous studies (L477-483 in revised manuscript). All authors read and approved the final manuscript.
Laboratory of Forest Management, Graduate School of Agricultural and Life Sciences, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-8657, Japan
Tohru Nakajima
& Norihiko Shiraishi
Forestry and Forest Product Research Institute, 1 Matsunosato, Tsukuba, 305-8687, Japan
Hidesato Kanomata
Hokkaido Research Center, Forestry and Forest Products Research Institute, 7 Hitsujigaoka, Toyohira, Sapporo, Hokkaido, 062-8516, Japan
Mitsuo Matsumoto
Correspondence to Tohru Nakajima.
Additional file 1: Table S1.
Timber growth functions by site index class (a), planting costs (b), harvesting cost functions (c) and timber prices (d) derived from previous studies (Nakajima et al. 2010; Nakajima et al. 2011a; Nakajima et al. 2011b; Nakajima et al. 2011d; Oka 2006). (DOCX 79.8 kb)
Nakajima, T., Shiraishi, N., Kanomata, H. et al. A method to maximise forest profitability through optimal rotation period selection under various economic, site and silvicultural conditions. N.Z. j. of For. Sci. 47, 4 (2017) doi:10.1186/s40490-016-0079-6
Forest profitability
Soil expectation value
Silvicultural practices
Stand conditions
Rotation period | CommonCrawl |
Depth-dependent EBIC microscopy of radial-junction Si micropillar arrays
Kaden M. Powell1 &
Heayoung P. Yoon (ORCID: orcid.org/0000-0003-0321-5288)1,2
Applied Microscopy volume 50, Article number: 17 (2020) Cite this article
Recent advances in fabrication have enabled radial-junction architectures for cost-effective and high-performance optoelectronic devices. Unlike a planar PN junction, a radial-junction geometry maximizes the optical interaction in the three-dimensional (3D) structures, while effectively extracting the generated carriers via the conformal PN junction. In this paper, we report characterizations of radial PN junctions that consist of p-type Si micropillars created by deep reactive-ion etching (DRIE) and an n-type layer formed by phosphorus gas diffusion. We use electron-beam induced current (EBIC) microscopy to access the 3D junction profile from the sidewall of the pillars. Our EBIC images reveal uniform PN junctions conformally constructed on the 3D pillar array. Based on Monte-Carlo simulations and EBIC modeling, we estimate local carrier separation/collection efficiency that reflects the quality of the PN junction. We find the EBIC efficiency of the pillar array increases with the incident electron beam energy, consistent with the EBIC behaviors observed in a high-quality planar PN junction. The magnitude of the EBIC efficiency of our pillar array is about 70% at 10 kV, slightly lower than that of the planar device (≈ 81%). We suggest that this reduction could be attributed to the unpassivated pillar surface and the unintended recombination centers in the pillar cores introduced during the DRIE processes. Our results support that the depth-dependent EBIC approach is ideally suitable for evaluating PN junctions formed on micro/nanostructured semiconductors with various geometry.
PN junctions are fundamental device elements that have been extensively used in various applications, including integrated electronic circuits, optical sensors and detectors, and energy harvesting and conversion systems (Chu et al. 2019; Sengupta et al. 1998; Neudeck 1989). Recent advances in micro/nanofabrication have enabled three-dimensional (3D) architectures that offer design flexibility to produce high-performance optoelectronic devices using cost-effective semiconductors (Garnett and Yang 2010; Li 2012; Yoon et al. 2010; Um et al. 2015). These low-quality materials, however, exhibit short minority carrier diffusion lengths (Ln, p < 10 μm) due to high concentrations of impurities and structural defects (e.g., point defect, vacancy, dislocation, grain boundary), limiting device performance designed in a planar geometry. In contrast, a radial-junction configuration maximizes light absorption along the length of the pillars while extracting minority carriers in the radial direction, effectively decoupling the competing processes. The length of the pillars is sufficiently tall to provide adequate light absorption of the indirect bandgap Si absorber, but their diameter allows photo-generated carriers to travel a much shorter distance as compared to traditional structures, increasing the carrier extraction efficiency (Kayes et al. 2005). Additionally, the 3D pillar geometry can be further tailored to reduce reflection and increase light-trapping (Garnett and Yang 2010; Kelzenberg et al. 2010). Our previous work demonstrated over twofold higher power conversion efficiencies with radial junction solar cells compared to their planar junction counterparts (Yoon et al. 2010; Kendrick et al. 2010; Yoon et al. 2011).
Establishing robust junctions is essential for high-performance PN devices, as it controls the flow of excess carriers in one direction, but not the other, providing rectifying characteristics. Various fabrication methods have been proposed and demonstrated to construct the 3D structures with conformal junction formation using top-down (Dowling et al. 2017; Zeniou et al. 2014; Huang et al. 2011; Han et al. 2014; Qian et al. 2020) or bottom-up approaches (Lew and Redwing 2003; Yoo et al. 2013). Plasma-based deep reactive ion etching (DRIE), also known as the "Bosch process" (Laermer and Schilp 2003), has been widely used for 3D patterning owing to the fast and easy processing with reproducible structures. By repeating a cycle of plasma ion etching and conformal polymer coating, DRIE enables the production of high-aspect-ratio structures on various semiconductor substrates. However, the aggressive etching processes often cause unintended surface damage (e.g., porous surface structures, electrically-active defect centers) (Oehrlein 1989; Wu et al. 2010). An inhomogeneous shallow PN junction created on this (sub)surface may introduce poor rectification and inferior diode characteristics. Much of the research activity has been focused on the optimization of fabrication approaches, whereas there are limited studies on the critical PN junction properties of 3D etched Si pillar arrays.
Electron beam induced current (EBIC) microscopy is a powerful analytical technique for studying local electronic states of semiconductor materials and devices (Zhou et al. 2020; Leamy 1982). EBIC uses a focused electron beam to create excess carriers (i.e., electron-hole pairs) near a Schottky or a PN junction. The generated carriers in the quasi-neutral region are diffused in the ambipolar directions. The portion of carriers that reach the junction depends on the recombination rate of their travel path. Subsequently, the local built-in or applied electric field at the junction separates the electron-hole pairs, producing an induced current (i.e., EBIC) in the external circuit. EBIC imaging is frequently used to map excess carrier recombination in semiconductors (Teplin et al. 2015). By fitting an EBIC profile that exponentially decays with distance from the junction, a minority carrier diffusion length can be determined (Yakimov 2015). Moreover, recent studies have proposed advanced EBIC simulations and numerical modeling for quantitative analysis of convoluted EBIC signals (Zhou et al. 2020; Haney et al. 2016).
In this work, we report measurements of radial PN junctions formed on Si micropillar arrays based on depth-dependent EBIC microscopy. By controlling the beam energy that determines the interaction volume between the injected electron beam and the Si structures, we measure the EBIC characteristics of the pillar surface as well as the pillar cores. Our radial PN junctions consist of p-type Si micropillar cores fabricated by DRIE and an n-type Si shell formed by phosphorous diffusion. We perform EBIC from the sidewall of the pillars to visualize the conformal PN junction. The junction quality of the micropillar array is determined by extracting a local carrier separation/collection efficiency using simulations and EBIC modeling. We compare the EBIC efficiency of the radial junction to the baseline planar PN junction, providing qualitative and quantitative assessment.
Fabrication of Si micropillar arrays
The radial PN junctions studied in this work consist of p-type Si micropillar cores fabricated by DRIE and an n-type Si shell formed by phosphorous gas diffusion. Details of full fabrication steps can be found elsewhere (Yoon et al. 2010, 2011). Briefly, the process was begun by conventional lithography to pattern a 200 nm thermal oxide layer on Si (p-type; ρ = 0.01 ~ 0.02 Ω‧cm). The patterned SiO2 layer serves as an etch mask in the subsequent anisotropic Si DRIE etching. In DRIE, a cycle of sulfur hexafluoride (SF6) etching and octafluorocyclobutane (C4F8)/oxygen (O2) polymer coating (Fig. 1a) was repeated. The C4F8/ O2 polymer coating protects the sidewalls, serving to increase the downward anisotropy of the subsequent SF6 etching step and improve verticality of the resulting pillars. Each cycle is composed of (i) 3.5 s SF6, (ii) 1.5 s C4F8 + O2, and (iii) 1 s O2 at an RF power of 1500 W, providing a Si etch rate of about 2.5 μm/min. To remove the sidewall roughness of the pillar arrays, we used several steps of cleaning processes: (i) O2 plasma cleaning (200 sccm O2 flow rate, 30 mTorr chamber pressure, and 1800 W RF power for 5 min), (ii) Piranha cleaning (H2SO4: H2O2 = 1: 1), and (iii) two successive wet oxidation (1000 °C for 25 min, resulting in ≈ 300 nm thick SiO2) and oxide removal processes (10% HF for 1 min). To form radial PN junctions, we used gas phase diffusion of a phosphorus oxychloride (POCl3) source at 1000 °C for 13 min. A 300 nm-thick aluminum (Al) metal film was thermally evaporated on the backside of the p-type Si and annealed at 600 °C for 10 min in the N2 atmosphere. The frontside contacts to the n+ diffused layers were formed with indium dots on the four corners of the pillar arrays after a native oxide removal in buffered oxide etchant (BOE 1:50, 30 s).
(a) Schematics illustrating a single cycle of the deep reactive-ion etching (DRIE) process. (b-d) SEM images of a pillar sidewall after (b) DRIE, (c) thermal oxidation, and (d) SiO2 removal with HF. (e) Top-view SEM image of a portion of the complete Si micropillar array. Inset displays cross-section of the device. (f) Typical I-V curve of pillar array radial junctions, showing rectifying diode behavior. Inset plots the I-V on a log scale
Si planar PN junction controls
A high-quality Si planar PN device served as a baseline control to evaluate our radial junctions formed on the DRIE-fabricated Si micropillar arrays. We purchased a batch of commercially available Si solar cells (Solar Made) fabricated with high-quality, single-crystalline Si materials. The surface of each device was passivated (e.g., SiN) to reduce the surface recombination velocity. Previously, we conducted a cross-sectional EBIC measurement using this sample. The obtained carrier separation/collection efficiency near the planar PN junction was close to 100%, suggesting a robust PN junction of the planar device (Yoon et al. 2014). In this work, we perform EBIC in a depth-dependent configuration.
EBIC characterizations
EBIC measurements were carried out in a scanning electron microscope (SEM) equipped with a nano-manipulator. This probe arm was used for the placement of an electrical probe on a metal contact of the device, whereas the bottom metal contact was earthed to the SEM stub. The electrical wires were fitted with coaxial electrical feedthrough of the SEM, connecting to an EBIC current amplifier. Once the electrical contacts were made, a series of EBIC/SEM images were acquired at different incident beam energies using a software package.
Monte Carlo simulations (e-beam)
We used numerical simulations to estimate the two-dimensional (2D) profile in Si at various incident beam energies (CASINO software package: Monte CArlo SImulation of electroN trajectory in sOlids) (Drouin et al. 2007). In each simulation, we used 200,000 electrons at a fixed accelerating voltage in the range of 1 kV to 30 kV. The injected electrons were propagated in the Si matrix via elastic and inelastic scatterings until their energy becomes 50 eV or less. The radius of the normal incident beam was set to 2 nm. A homogeneous Si substrate in a planar geometry was used for the simulation.
Figure 1 displays the key fabrication steps to create Si micropillar arrays. Deep reactive ion etching (DRIE) is a well-established fabrication technique to produce 3D structures by repeating the cycle of a plasma ion etching and a conformal polymer coating (Fig. 1a). These etching/coating processes, however, introduce unintended porous surface structures and electrically-active defect centers. An example of the scalloping profile of the etched structures is shown in Fig. 1b. To remove this structural damage, we used two successive thermal oxidation (1000 °C for 25 min; ≈ 300 nm thick SiO2; Fig. 1c) and oxide removal processes (10% HF for 1 min). A representative SEM image in Fig. 1d confirms the dramatically enhanced surface smoothness of the Si micropillars after the rigorous cleaning and oxidation/strip processes. We formed the radial PN junction using gas diffusion of n-type phosphorous dopants. An estimated surface doping concentration (Nd) is ≈ 1020 cm− 3 with a junction depth (xj) of ≈ 0.3 μm (Neudeck 1989). The complete Si micropillars measured ≈ 30 μm in height and ≈ 7 μm in diameter with a distance between the pillars of approximately 4 μm (Fig. 1e). The current-voltage (I-V) curves of the pillar array in Fig. 1f showed a good diode behavior. We estimated the turn-on voltage of 0.63 V, the leakage current of 10 nA, and the ideality factor of about 1.7. Comprehensive dark and light I-V characteristics of the radial junctions in various geometrical parameters (e.g., diameter, height, pillar-to-pillar-distance) can be found elsewhere (Yoon et al. 2010, 2011). While extremely informative, I-V curves reflect the overall PN junction performance and do not capture the local junction properties.
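As a side note on how such diode parameters are commonly obtained, the ideality factor can be estimated from the slope of the exponential region of a dark I-V curve. The short R sketch below illustrates that procedure on synthetic data; the numbers are hypothetical and this is not the analysis behind Fig. 1f:

q_e  <- 1.602e-19                 # elementary charge (C)
kB   <- 1.381e-23                 # Boltzmann constant (J/K)
Temp <- 300                       # temperature (K)
Vt   <- kB * Temp / q_e           # thermal voltage, about 25.9 mV
V <- seq(0.30, 0.70, by = 0.02)   # forward bias (V)
I <- 2e-12 * (exp(V / (1.7 * Vt)) - 1)      # synthetic diode with ideality n = 1.7
I <- I * exp(rnorm(length(I), sd = 0.03))   # small multiplicative measurement noise
fit <- lm(log(I) ~ V)             # in the exponential region, log(I) is linear in V
1 / (coef(fit)["V"] * Vt)         # estimated ideality factor, close to 1.7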
EBIC microscopy allows a direct access to the local PN junction in 3D with an adjustable probe size from 10's nm to several μm, in accordance with the interaction volume between the incident electron beam and the semiconductors. To visualize the local junctions at the level of individual pillars, we carefully cleaved the Si pillar array using a fine scriber and exposed the cross-section. The sample was mounted on an EBIC holder, where the metal contacts of the emitter (i.e., indium dots on n+-Si) and the collector (i.e., Al on p-Si) were connected to the external EBIC circuit. The electron beam was injected from the n-Si emitter shell to the PN depletion region and the p-Si pillar core. Figure 2 displays an SEM and the corresponding 5 keV (Fig. 2b) and 10 keV (Fig. 2c) EBIC images. Figure 2d shows the overlaid SEM and 5 keV EBIC, indicating a continuous PN junction formed across the 3D geometry. The overall EBIC intensities of the individual pillars at 5 keV and 10 keV are relatively uniform, suggesting the presence of conformal radial junctions along the individual micropillars.
(a) SEM image of micropillar array and corresponding EBIC (electron beam induced current) maps at (b) 5 keV and (c) 10 keV. The relative signal uniformity suggests the presence of conformal radial junctions of individual pillars. (d) Overlay of SEM with 5 keV EBIC image confirms continuous PN junction across device. (e) Extracted EBIC line scan across pillar diameter. (f) EBIC line scans at 5 keV and 10 keV along pillar lengths demonstrate uniform signal magnitude and higher EHP generation at higher energy. Peak EBIC values occur in the pillar base region, where the cross-sectional PN junction is exposed
For a quantitative analysis, we extract the EBIC line scans along/across the pillars. The line scan plot across the pillar diameter (Fig. 2e) shows the highest EBIC value near the pillar center (≈ 80 nA) that decreases gradually with the electron beam probe moving away to the perimeter of the pillar (≈ 50 nA). Considering the direction of the incident electron beam to the curved pillar surface, as illustrated in the inset of Fig. 2e, we suggest that this EBIC change is mainly attributed to the shape of the pillar rather than due to inhomogeneous junction properties. With the increase of the backscattered electrons (BSE) and the decrease of the effective electron-hole pair (EHP) generation volume at the curved pillar surface, the reduction of the EBIC magnitude is evident near the pillar perimeter.
Figure 2f displays the EBIC line scans along the length of the pillars (i.e., axial-direction), showing a relatively constant EBIC value within the pillars at a fixed accelerating voltage. The mean EBIC value of the pillar increases from 80 nA at 5 keV to 480 nA at 10 keV. We observe the highest EBIC values are present in the area of the pillar base, which is associated with the direct local carrier generation within the depletion region. When the electron beam is directly injected to the cross-sectional PN junction (i.e., mechanically cleaved junction area), the generated EHPs are separated quickly without diffusion owing to the built-in electric field (C. J. Wu and Wittry 1978; Yakimov 2015). In contrast, the electron beam irradiated on the pillar sidewalls generates the EHPs in the depletion region as well as the charge-neutral regions (i.e., n-Si shell, p-Si pillar core). The excess carriers must travel to the junction (i.e., ambipolar diffusion) before they are separated and collected in EBIC. Since the n-Si emitter layer (Nd ≈ 1020 cm− 3, xj ≈ 300 nm) of our pillars is conductive, yet highly defective, the EHPs generated in this region tend to be recombined, decreasing overall EBIC values as compared to the direct electron beam injection at the cross-sectional PN junction. As the EHP generation volume increases with the accelerating voltage (i.e., 5 keV to 10 keV), the portion of the EHPs generated in the n-Si decreases, resulting in a comparable EBIC value of the pillar and near the base.
To assess the local junction quality of the micropillar array, we collected the baseline EBIC characteristics of a commercial planar device (Solar Made). This planar PN junction (n+-p) was built on a high-purity single crystalline Si substrate, and it showed a carrier collection efficiency close to 100% in the depletion region obtained in a normal collector EBIC configuration (Yoon et al. 2014). Figure 3 displays a representative SEM image of the planar PN device and the corresponding EBIC maps collected at 5 keV, 10 keV, and 20 keV. The large dark area of the EBIC images is associated with the metal contact, highlighted in yellow in the SEM image (Fig. 3a). The injected electron beam (1 keV ~ 30 keV) does not penetrate this thick metal layer (a few mm thick Ag paste), producing negligible EBIC signals (Fig. 3b-d). The dark speckles in the 5 keV EBIC image are likely attributed to thin organic residue or dust particles on the sample, of which EBIC contribution becomes insignificant with the higher beam energies (> 10 keV). Qualitatively, the bright contrast increases with a higher keV, showing similar behaviors as those observed with the pillar array radial junction (Fig. 2).
(a) SEM image of planar PN device with metal contact highlighted in yellow. [center top] Schematic of EBIC measurement. [center bottom] Slightly tilted SEM image of the sample, illustrating surface roughness. Corresponding EBIC scans of the planar device at (b) 5 keV, (c) 10 keV, and (d) 20 keV showing higher contrast and reduced spatial resolution with higher energy. (e) Increasing mean EBIC current calculated for Si line scans. (f-h) Plots of EBIC current along line scans shown with green lines in (b-d): (f) 5 keV, (g) 10 keV, (h) 20 keV. The green box highlights the representative sample topography. The two distinct features at 5 keV become broad and indistinguishable with higher beam energies
Figure 3f through h show the representative line scans extracted from the EBIC images (Fig. 3b-d). A relatively constant EBIC was observed in the device area, a stack of p-Si collector, n-Si emitter, and SiN passivation layer. A notable current fluctuation near the metal contact is mainly attributed to the spread of the metal paste. By aligning the line scans, we find that a low keV EBIC is much more sensitive to the surface features than higher keV. For instance, two distinct peaks observed in the 5 keV EBIC line scan (marked with a green box) conform to the sample topography shown in SEM (Fig. 3a). This feature becomes less distinguishable with increasing incident beam energy, as the electron beam penetrates deeper into the sample with a larger EHP generation volume. We used the EBIC images from 5 keV to 30 keV and calculated mean EBIC values for the Si area, shown in Fig. 3e. The increase of EBIC with a higher keV is evident in the line scans. The average EBIC value increases from 127 nA at 5 kV to 3.55 μA at 30 kV, increasing over one order of magnitude. Interestingly, the EBIC values observed in the planar junction are slightly higher than those in the radial junction in Fig. 2: 127 nA (vs. 83 nA of the radial junction) at 5 keV, 574 nA (vs. 492 nA of the radial junction) at 10 keV.
The experimental results qualitatively suggest that EBIC magnitude near the PN junctions is strongly influenced by the EHP generation by the incident electron beam and the local carrier separation/collection properties. To gain a deeper understanding of the local radial junction characteristics, we estimate the carrier generation profile using Monte Carlo simulations and calculate the local carrier collection efficiency for the planar and the radial junctions. Figure 4a (top) displays an example of the simulated electron trajectories, where a ray of 5 keV electron beam is irradiated onto a Si substrate. The blue lines represent the collision events of the primary electrons with Si until they lose their initial energy (i.e., 5000 V in this example) to 50 V or lower. The red lines represent the paths of the backscattered electrons. A corresponding energy contour plot is shown in the bottom image. For instance, the 95% contour represents the sample area where the injected primary electrons have lost 95% of their initial energy. Figure 4b plots the estimated interaction bulb size at different accelerating voltages (1 kV ~ 30 kV). The overall ratio of depth to diameter (depth/diameter) is comparable for higher lost-energy contours (> 75%), yet slightly higher for low energy contours (< 50%), indicating a pear-shape of the interaction bulb. The calculated bulb size at 1 keV is approximately 19 nm, inferring that the spatial resolution of the EBIC image for flat Si devices can be achieved as high as < 20 nm. The inset of Fig. 4b shows the increase of the penetration depth with the beam energy, which was extracted from the 95% energy contour of each simulation. The numerical fit overlaid on the datasets confirms the bulb size is proportional to \( {E}_b^{1.78} \) (Eb is the beam energy), showing an excellent agreement with the analytical prediction of \( \approx {E}_b^{1.7} \) by Wittry et al. (Wittry and Kyser 1967). By controlling the incident beam energy, the size of the EHP generation bulb can be tunable from 10's nm to several μm, offering versatility to study local carrier dynamics in optoelectronic semiconductor materials and devices.
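The energy scaling itself is easy to reproduce. The R sketch below fits a power law to depth-versus-energy values generated from the Kanaya-Okayama range formula for Si; these are rough analytical estimates used only for illustration, not the CASINO results of this work:

E_keV    <- c(1, 2, 5, 10, 20, 30)                           # beam energies (keV)
depth_um <- 0.0276 * 28.09 * E_keV^1.67 / (14^0.889 * 2.33)  # Kanaya-Okayama range for Si (um)
fit <- lm(log(depth_um) ~ log(E_keV))                        # a power law is linear on log-log axes
coef(fit)[2]   # fitted exponent, 1.67 for this formula; the CASINO-based fit above gives about 1.78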
(a) Simulated electron paths for a ray of 5 kV electron beam in Si (200,000 electrons). Blue lines represent collision events of electrons with Si. Red lines represent backscattered electrons. (a, bottom) Energy contour plot from 5 kV simulation with percent energy loss contours. (b) Simulated interaction bulb size at various accelerating voltages: [top] maximum depth and [bottom] maximum diameter. Inset shows the penetration depth with beam energy (95% energy contour) and corresponding curve fit. (c) Estimated EBIC efficiency of the radial junction compared to the planar PN junction control
Based on the Monte-Carlo simulations and EBIC modeling (Leamy 1982; Haney et al. 2016; Yakimov 2015), we estimated the local carrier separation/collection efficiency of the radial junction and compared it to planar PN controls. The EBIC collection efficiency (ηEBIC) is defined as the ratio of the measured current (IEBIC) to the EHP generation rate (β). Here, e is the unit charge (1.6 × 10−19 C).
$$ \eta_{EBIC}=\frac{I_{EBIC}}{e\cdot \beta} $$
The generation rate, which is the total number of EHPs created by the injected electron beam, can be calculated as below.
$$ \beta =\frac{I_b\cdot {E}_b\cdot \alpha }{e\cdot {E}_{EHP}} $$
Eb is the incident electron beam energy, α is the fraction of beam energy absorbed inside the material (i.e., Si in our case), and EEHP is the average energy to create an electron-hole pair. The beam current Ib of our SEM was measured in the range of 250 pA (Eb = 5 kV) to 300 pA (Eb = 20 kV). We calculated the magnitude of α using the backscattered coefficient obtained from the Monte Carlo simulation (e.g., 0.152 at 5 keV, 0.142 at 20 keV). The EEHP was estimated using an empirical relation of EEHP = 2.596 Eg + 0.714 (Kobayashi et al. 1972), giving EEHP ≈ 3.621 eV for Si (Eg = 1.12 eV). The EBIC currents (IEBIC) extracted from the line scans in Figs. 2 and 3 were used for the pillar array and the planar device, respectively. We note that a typical uncertainty in our EBIC measurement and analysis is about 10%, associated with the fluctuations of the baseline e-beam current (Ib) and the signal-to-noise ratio of the EBIC preamplifier. Also, the parameters extracted from the Monte Carlo simulations (e.g., backscattered coefficient, mean EHP generation rate (β), empirical parameter (EEHP) to generate EHPs) contribute to the uncertainty.
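For illustration, the two relations above can be evaluated directly in a few lines of R. The inputs are the representative 5 keV values quoted in the text, and treating α as one minus the backscatter coefficient is a simplifying assumption made here:

e_charge <- 1.602e-19      # elementary charge (C)
E_ehp    <- 3.621          # average energy per electron-hole pair in Si (eV)
I_b      <- 250e-12        # beam current at 5 kV (A)
E_b      <- 5e3            # beam energy (eV)
alpha    <- 1 - 0.152      # fraction of beam energy absorbed in Si (assumed = 1 - backscatter coefficient)
I_ebic   <- 83e-9          # EBIC of the pillar array at 5 keV (A)
beta <- I_b * E_b * alpha / (e_charge * E_ehp)   # EHP generation rate (pairs per second)
I_ebic / (e_charge * beta)                       # collection efficiency, roughly 0.28 here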
Finally, we plot the resulting EBIC efficiency of the devices at different incident beam energies in Fig. 4c. The EBIC efficiency increases with the incident beam energy, reaching close to unity at Eb > 15 keV for the planar PN junction device. A similar trend was observed for the radial junction of the pillar array, yet the overall EBIC efficiency is slightly lower than that of the planar device (about 10%). In both cases, EBIC was measured in the depth-dependent configuration. The injected electrons travel from the highly-doped emitter (a few 100 nm thick) to the depletion region (< 1 μm) and the p-Si collector, generating the EHPs in three different layers. The low EBIC efficiency at 5 keV (50% for the planar device; 30% for the pillar array) is likely attributed to the EHP production in the highly-doped emitter region. Our Monte Carlo simulation shows an interaction bulb size of (300 nm)3 at 5 keV, suggesting that most EHPs were produced in the highly-defective (i.e., high-density of recombination centers) emitter region that promotes excess carrier recombination. At higher keV, most EHPs are generated in the strong built-in electric field region and the collector, increasing the EBIC efficiency. Our observation indicates that the surface damage introduced on the pillars by DRIE could be effectively removed by rigorous cleaning and oxidation/strip processes. The magnitude of the EBIC efficiency of our pillar array is about 70% at 10 kV, slightly lower than that of the planar device (≈ 81%). We speculate that a slightly higher EBIC efficiency for the planar junction is associated with the surface passivation (e.g., SiN) that decreases the surface recombination of EHPs. Detailed EBIC studies of the surface passivation based on 3D continuity equations together with Poisson equations (Yakimov 2015; Haney et al. 2016; Zhou et al. 2020) will provide additional insight on the excess carrier dynamics and general guidance to improve their device performance.
In summary, we have examined the radial junction characteristics of Si micropillar arrays using depth-dependent EBIC microscopy. The EBIC images collected from the sidewall of the pillars confirm the uniform PN junction conformally constructed on the 3D pillar array. We find the EBIC efficiency of the pillar array increases with the injected electron-beam voltage, consistent with the EBIC behaviors observed in a high-quality planar PN junction. The magnitude of the EBIC efficiency of our pillar array is about 70% at 10 kV, slightly lower than that of the planar device (≈ 81%). We suggest that this reduction could be attributed to the unpassivated pillar surface or the low material quality of the pillar core. Our results support that the depth-dependent EBIC approach is ideally suitable for evaluating 3D conformal PN junctions formed on micro/nanostructures with various geometries.
The datasets used in this study are available from the corresponding author on reasonable request.
2D: Two-dimensional
3D: Three-dimensional
BSE: Backscattered electron
CASINO: Monte-CArlo SImulation of electroN trajectory in sOlids
DRIE: Deep reactive-ion etching
EBIC: Electron-beam induced current
EHP: Electron-hole pair
RF: Radio frequency
Y.H. Chu, C.Q. Qian, P. Chahal, C.Y. Cao, Printed diodes: Materials processing, fabrication, and applications. Adv. Sci. 6(6), 1801653 (2019). https://doi.org/10.1002/advs.201801653
K.M. Dowling, E.H. Ransom, D.G. Senesky, Profile evolution of high aspect ratio silicon carbide trenches by inductive coupled plasma etching. J. Microelectromech. Syst. 26(1), 135–142 (2017). https://doi.org/10.1109/Jmems.2016.2621131
D. Drouin, A.R. Couture, D. Joly, X. Tastet, V. Aimez, R. Gauvin, CASINO V2.42 - a fast and easy-to-use modeling tool for scanning electron microscopy and microanalysis users. Scanning 29(3), 92–101 (2007). https://doi.org/10.1002/sca.20000
E. Garnett, P.D. Yang, Light trapping in silicon nanowire solar cells. Nano Lett. 10(3), 1082–1087 (2010). https://doi.org/10.1021/nl100161z
H. Han, Z.P. Huang, W. Lee, Metal-assisted chemical etching of silicon and nanotechnology applications. Nano Today 9(3), 271–304 (2014). https://doi.org/10.1016/j.nantod.2014.04.013
P.M. Haney, H.P. Yoon, B. Gaury, N.B. Zhitenev, Depletion region surface effects in electron beam induced current measurements. J. Appl. Phys. 120(9), 095702 (2016). https://doi.org/10.1063/1.4962016
Z.P. Huang, N. Geyer, P. Werner, J. de Boor, U. Gosele, Metal-assisted chemical etching of silicon: A review. Adv. Mater. 23(2), 285–308 (2011). https://doi.org/10.1002/adma.201001784
B.M. Kayes, H.A. Atwater, N.S. Lewis, Comparison of the device physics principles of planar and radial p-n junction nanorod solar cells. J. Appl. Phys. 97(11), 114302 (2005). https://doi.org/10.1063/1.1901835
M.D. Kelzenberg, S.W. Boettcher, J.A. Petykiewicz, D.B. Turner-Evans, M.C. Putnam, E.L. Warren, et al., Enhanced absorption and carrier collection in Si wire arrays for photovoltaic applications. Nat. Mater. 9(3), 239 (2010)
C.E. Kendrick, H.P. Yoon, Y.A. Yuwen, G.D. Barber, H.T. Shen, T.E. Mallouk, et al., Radial junction silicon wire array solar cells fabricated by gold-catalyzed vapor-liquid-solid growth. Appl. Phys. Lett. 97(14), 143108 (2010). https://doi.org/10.1063/1.3496044
T. Kobayashi, M. Koyama, T. Sugita, S. Takayanagi, Performance of GaAs surface-barrier detectors made from high-purity gallium-arsenide. IEEE Trans. Nucl. Sci. 19(3), 324–32+ (1972). https://doi.org/10.1109/Tns.1972.4326745
Laermer, F., & Schilp, A. (2003). Method of anisotropic etching of silicon. Patent US6531068 (US)
H.J. Leamy, Charge collection scanning electron-microscopy. J. Appl. Phys. 53(6), R51–R80 (1982). https://doi.org/10.1063/1.331667
K.K. Lew, J.M. Redwing, Growth characteristics of silicon nanowires synthesized by vapor-liquid-solid growth in nanoporous alumina templates. J. Cryst. Growth 254(1–2), 14–22 (2003). https://doi.org/10.1016/S0022-0248(03)01146-1
X.L. Li, Metal assisted chemical etching for high aspect ratio nanostructures: A review of characteristics and applications in photovoltaics. Curr. Opin. Solid State Mater. Sci. 16(2), 71–81 (2012). https://doi.org/10.1016/j.cossms.2011.11.002
G.W. Neudeck, The PN Junction Diode (Addison-Wesley, Reading, 1989)
G.S. Oehrlein, Dry etching damage of silicon - a review. Mater. Sci. Eng. B Solid State Mater. Adv. Technol. 4(1-4), 441–450 (1989). https://doi.org/10.1016/0921-5107(89)90284-5
Qian, Y., Magginetti, D. J., Jeon, S., Yoon, Y., Olsen, T. L., Wang, M., et al. (2020). Heterogeneous optoelectronic characteristics of Si micropillar arrays fabricated by metal-assisted chemical etching. https://ui.adsabs.harvard.edu/abs/2020arXiv200616308Q. Accessed 1 June 2020
D.L. Sengupta, T.K. Sarkar, D. Sen, Centennial of the semiconductor diode detector. Proc. IEEE 86(1), 235–243 (1998). https://doi.org/10.1109/5.658775
C.W. Teplin, S. Grover, A. Chitu, A. Limanov, M. Chahal, J. Im, et al., Comparison of thin epitaxial film silicon photovoltaics fabricated on monocrystalline and polycrystalline seed layers on glass. Prog. Photovolt. 23(7), 909–917 (2015). https://doi.org/10.1002/pip.2505
H.D. Um, N. Kim, K. Lee, I. Hwang, J.H. Seo, Y.J. Yu, et al., Versatile control of metal-assisted chemical etching for vertical silicon microwire arrays and their photovoltaic applications. Sci. Rep. 5, 11277 (2015). https://doi.org/10.1038/srep11277
D.B. Wittry, D.F. Kyser, Measurement of diffusion lengths in direct-gap semiconductors by electron-beam excitation. J. Appl. Phys. 38(1), 375 (1967). https://doi.org/10.1063/1.1708984
B.Q. Wu, A. Kumar, S. Pamarthy, High aspect ratio silicon etch: A review. J. Appl. Phys. 108(5), 051101 (2010). https://doi.org/10.1063/1.3474652
C.J. Wu, D.B. Wittry, Investigation of minority-carrier diffusion lengths by electron-bombardment of Schottky barriers. J. Appl. Phys. 49(5), 2827–2836 (1978). https://doi.org/10.1063/1.325163
E.B. Yakimov, What is the real value of diffusion length in GaN? J. Alloys Compd. 627, 344–351 (2015). https://doi.org/10.1016/j.jallcom.2014.11.229
J. Yoo, S.A. Dayeh, W. Tang, S.T. Picraux, Epitaxial growth of radial Si p-i-n junctions for photovoltaic applications. Appl. Phys. Lett. 102(9), 093113 (2013). https://doi.org/10.1063/1.4794541
H.P. Yoon, P.M. Haney, J. Schumacher, K. Siebein, Y. Yoon, N.B. Zhitenev, Effects of focused-ion-beam processing on local electrical measurements of inorganic solar cells. Microsc. Microanal. 20(S3), 544–545 (2014). https://doi.org/10.1017/S1431927614004449
H.P. Yoon, Y.A. Yuwen, C.E. Kendrick, G.D. Barber, N.J. Podraza, J.M. Redwing, et al., Enhanced conversion efficiencies for pillar array solar cells fabricated from crystalline silicon with short minority carrier diffusion lengths. Appl. Phys. Lett. 96(21), 213503 (2010). https://doi.org/10.1063/1.3432449
H.P. Yoon, Y.A. Yuwen, H. Shen, N.J. Podraza, T.E. Mallouk, E.C. Dickey, et al., in 37th IEEE Photovoltaic Specialists Conference. Parametric study of micropillar array solar cells (2011), pp. 000303–000306. https://doi.org/10.1109/PVSC.2011.6185905
A. Zeniou, K. Ellinas, A. Olziersky, E. Gogolides, Ultra-high aspect ratio Si nanowires fabricated with plasma etching: Plasma processing, mechanical stability analysis against adhesion and capillary forces and oleophobicity. Nanotechnology 25(3), 035302 (2014). https://doi.org/10.1088/0957-4484/25/3/035302
R.N. Zhou, M.Z. Yu, D. Tweddle, P. Hamer, D. Chen, B. Hallam, et al., Understanding and optimizing EBIC pn-junction characterization from modeling insights. J. Appl. Phys. 127(2), 024502 (2020). https://doi.org/10.1063/1.5139894
The authors acknowledge the support from B. Baker, D. Magginetti, and S. Pritchett for the development of device fabrication. The radial junction processes and the EBIC data acquisition were performed at Penn State University (University Park, PA, USA) and the National Institute of Standards and Technology (Gaithersburg, MD, USA). We thank Y. Yuwen, T. Mayer, P. Haney, and N. Zhitenev for valuable discussions.
This research was supported by a University of Utah Seed Grant and New Faculty Start-up Funds. We acknowledge support by the USTAR shared facilities at the University of Utah, in part, by the MRSEC Program of NSF under Award No. DMR-1121252.
Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, 84112, USA
Kaden M. Powell & Heayoung P. Yoon
Materials Science and Engineering, University of Utah, Salt Lake City, UT, 84112, USA
Heayoung P. Yoon
Kaden M. Powell
KMP and HPY have contributed to sample preparation, data acquisition, data analysis, and manuscript writing. HPY supervised the overall project. All authors read and approved the final manuscript.
Correspondence to Heayoung P. Yoon.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Powell, K.M., Yoon, H.P. Depth-dependent EBIC microscopy of radial-junction Si micropillar arrays. Appl. Microsc. 50, 17 (2020). https://doi.org/10.1186/s42649-020-00037-4
EBIC
Electron beam induced current
EBIC efficiency
Radial junction
PN junction
Micropillar array
Local characterization
Static equilibrium- when do you use forces and when do you use torques?
I'm working on an answered physics question in the book but I'm having trouble understanding why the problem could be solved by setting net torque equal to 0 and not net force. You are asked to find the magnitude of the tension in the cable (the magnitude of the force from the beam on the cable). With an axis of rotation at the hinge so as to eliminate any forces at that point, the solution is completed by balancing torques: I understand why that works, but I'm not sure why you can't instead balance forces as was done in previous problems. In other words:
(Fh=force of the hinge on the beam; Ft=tension force in the cable; m=mass of the beam; M=mass of the block)
x-horizontal, y-vertical
$F_{h,x}-F_t=0$
$-mg-Mg+F_{h,y}=0$
these equations produce: $F_h\cos(\theta)=F_t$ and $F_h\sin(\theta)=g(m+M)$
If you divide the equations, you can eliminate Fh and solve for Ft... why is this not appropriate?
homework-and-exercises forces torque equilibrium statics
khajiit
$\begingroup$ I can't actually view the problem statement and solution, so I can't provide a concrete answer, but when you are balancing a net force, you are making sure that the CoM of the object does not accelerate. When you are balancing net torque, you are making sure that the object does not rotate. Just because an object's CoM stays stationary does not mean it doesn't rotate and vice versa. In general, you would have to maintain both net force = 0 and net torque = 0 to specify static equilibrium. However, most problems are set up so that one of those equalities is trivially true. $\endgroup$ – enumaris Apr 2 '18 at 16:31
$\begingroup$ Does doing that get you a different solution? It seems like you're still using the fact that net torque = 0 to solve the problem; just you're solving for a different variable. I'm actually not even sure if they are solving for "torque" in this question. It looks a lot like their "$T_C$" is actually a force as well. $\endgroup$ – JMac Apr 2 '18 at 17:47
$\begingroup$ @enumaris I understand the mechanics of using torques and forces, but yes, I'm getting a different solution when I use forces as I laid out in my question. I think I get about 6400N when the answer is 6100N. $\endgroup$ – khajiit Apr 2 '18 at 18:38
$\begingroup$ @JMac I'm getting a different solution as I explain in my comment above. I'm not using torque at all--I'm simply using the fact that Fnet=0. $\endgroup$ – khajiit Apr 2 '18 at 18:39
$\begingroup$ I'm having trouble understanding where you got your equations from. For example, what is Ft? I'd suggest using Mathjax formatting so that it is more clear what each variable is supposed to be. Things like Ft and Fhy are hard to differentiate the variables and subscripts. $\endgroup$ – JMac Apr 2 '18 at 18:47
Balancing forces is useful, but it is not enough. There aren't enough constraints on the problem to find all the forces just by setting the net force to zero. In your diagram, you could add any value to $\vec{F_h}$. As long as you add the same to $\vec{T}_c$, the net force doesn't change. So find some sum of forces that comes to zero, then add 1 Newton to both horizontal forces on the problem, and you have a new zero-net-force solution.
On the other hand, as long as the net force and net torque are both zero on a stationary rigid object, it will stay stationary. We need both because net force only tells us about the acceleration of the center of mass. If a beam starts spinning around its center, it has zero net force on it (because the center of mass isn't moving), but there was some net torque that got it spinning. So setting $F_{net} = 0$ is not enough.
As for what happened in your work, it's hard to tell. First, try to stick with well-defined notation. This will make it much easier for people to understand what you mean. Your question defines the variables $\vec{T}_c$, $\vec{F}_v$, and $\vec{F}_h$, which you didn't use. Your proposed solution uses the undefined variables "Fhx" "Fhy" and "Ft". Then you write about "theta" without defining it, either.
My best guess is that you assumed that the net force from the wall on the beam acts along that beam. That is, in the notation of the problem, you assumed
$$\frac{F_v}{F_h} = \frac{a}{b}$$
but there is no reason to assume this. That sort of assumption works for strings or ropes, which cannot support any shear, but for beams, the wall can exert forces in the horizontal and vertical directions independently. You won't know how big those forces are unless you impose both the condition that the net force on the beam is zero, and the condition that the net torque on the beam is zero.
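To make that concrete, here is a small R sketch that imposes both conditions at once and solves for the tension and the two hinge-force components. The numbers are made up (not your textbook's): a horizontal beam of length b hinged at the wall, a cable from the beam's far end to a point on the wall a height a above the hinge, and a block of mass M hanging from the far end. Dropping the torque row leaves two equations in three unknowns, which is exactly why force balance alone can't pin down the tension:

g <- 9.8
m <- 45        # beam mass (kg), hypothetical
M <- 60        # hanging block mass (kg), hypothetical
b <- 3.0       # beam length (m), hypothetical
a <- 2.0       # height of the cable's wall attachment above the hinge (m), hypothetical
theta <- atan2(a, b)                        # cable angle above the horizontal beam
# unknowns x = (T, Fh_x, Fh_y); rows: sum Fx = 0, sum Fy = 0, net torque about the hinge = 0
A <- rbind(c(-cos(theta), 1, 0),
           c( sin(theta), 0, 1),
           c( b * sin(theta), 0, 0))
rhs <- c(0, (m + M) * g, m * g * b / 2 + M * g * b)
setNames(solve(A, rhs), c("T", "Fh_x", "Fh_y"))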
Mark Eichenlaub
$\begingroup$ Okay, thanks. I think it was the last part that cleared it up for me. I'm just not entirely seeing the difference between the tension force in a rope, which can be broken down into its horizontal and vertical components using the angle of incline, and a beam, which can't... $\endgroup$ – khajiit Apr 2 '18 at 19:29
$\begingroup$ Both can be broken down into components. However, in the beam the force is not guaranteed to be in the same direction as the beam. Imagine a water bottle sitting on your desk as the beam. Put a small sideways force on it with your finger. It doesn't go anywhere, but the net force from the table is not vertical any more; it's partially vertical (from the normal force from the table) and partially horizontal (from the friction force from the table), and we can't assume that the force is along the axis of the water bottle. $\endgroup$ – Mark Eichenlaub Apr 2 '18 at 19:33
$\begingroup$ For a string hanging from the ceiling suspending the water bottle, it's different. If you apply a horizontal force to hanging water bottle, it will move and be at an angle. The net force the string can exert is always long its own length. This is just basically what we mean by a "string" or "rope" in elementary physics problems. $\endgroup$ – Mark Eichenlaub Apr 2 '18 at 19:34
$\begingroup$ @MarkEichenlaub Sometimes in statics problems, they may also define that beam as a two force member. In that case, the force also would have to be applied at the same angle as the beam. That may have been OP's confusion, especially if they have recently learned about two force members. $\endgroup$ – JMac Apr 3 '18 at 13:30
$\begingroup$ @MarkEichenlaub The distinction between a string/rope and a beam definitely cleared it up for me--thank you. $\endgroup$ – khajiit Apr 3 '18 at 17:16
Are there any examples of where the central limit theorem does not hold?
Wikipedia says -
In probability theory, the central limit theorem (CLT) establishes that, in most situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a "bell curve") even if the original variables themselves are not normally distributed...
When it says "in most situations", in which situations does the central limit theorem not work?
probability mathematical-statistics normal-distribution central-limit-theorem
Ryan McCauley
To understand this, you need to first state a version of the Central Limit Theorem. Here's the "typical" statement of the central limit theorem:
Lindeberg–Lévy CLT. Suppose ${X_1, X_2, \dots}$ is a sequence of i.i.d. random variables with $E[X_i] = \mu$ and $Var[X_i] = \sigma^2 < \infty$. Let $S_{n}:={\frac {X_{1}+\cdots +X_{n}}{n}}$. Then as $n$ approaches infinity, the random variables $\sqrt{n}(S_n − \mu)$ converge in distribution to a normal $N(0,\sigma^2)$ i.e.
$${\displaystyle {\sqrt {n}}\left(\left({\frac {1}{n}}\sum _{i=1}^{n}X_{i}\right)-\mu \right)\ {\xrightarrow {d}}\ N\left(0,\sigma ^{2}\right).}$$
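As a quick illustration of this statement, here is a small simulation in R; the distribution and sample sizes are arbitrary choices that satisfy the i.i.d., finite-variance assumptions (exponential draws have $\mu = \sigma^2 = 1$):

set.seed(1)
n <- 2000
z <- replicate(1e4, sqrt(n) * (mean(rexp(n)) - 1))   # rexp() has rate 1, so mu = 1 and sigma^2 = 1
c(mean(z), sd(z))                                    # close to 0 and 1
hist(z, breaks = 50, freq = FALSE)
curve(dnorm(x), add = TRUE)                          # the N(0,1) density tracks the histogram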
So, how does this differ from the informal description, and what are the gaps? There are several differences between your informal description and this description, some of which have been discussed in other answers, but not completely. So, we can turn this into three specific questions:
What happens if the variables are not identically distributed?
What if the variables have infinite variance, or infinite mean?
How important is independence?
Taking these one at a time,
Not identically distributed: The best general results are the Lindeberg and Lyapunov versions of the central limit theorem. Basically, as long as the standard deviations don't grow too wildly, you can get a decent central limit theorem out of it.
Lyapunov CLT. Suppose ${X_1, X_2, \dots}$ is a sequence of independent random variables, each with finite expected value $\mu_i$ and variance $\sigma_i^2$. Define: $s_{n}^{2}=\sum _{i=1}^{n}\sigma _{i}^{2}$
If for some $\delta > 0$, Lyapunov's condition ${\displaystyle \lim _{n\to \infty }{\frac {1}{s_{n}^{2+\delta }}}\sum_{i=1}^{n}\operatorname {E} \left[|X_{i}-\mu _{i}|^{2+\delta }\right]=0}$ is satisfied, then the sum of $(X_{i}-\mu_{i})/s_n$ converges in distribution to a standard normal random variable as $n$ goes to infinity:
$${\frac {1}{s_{n}}}\sum _{i=1}^{n}\left(X_{i}-\mu_{i}\right)\ {\xrightarrow {d}}\ N(0,1).$$
Infinite variance: Theorems similar to the central limit theorem exist for variables with infinite variance, but the conditions are significantly more narrow than for the usual central limit theorem. Essentially the tail of the probability distribution must be asymptotic to $|x|^{-\alpha-1}$ for $0 < \alpha < 2$. In this case, appropriately scaled summands converge to a Lévy alpha-stable distribution.
Importance of independence: There are many different central limit theorems for non-independent sequences of $X_i$. They are all highly contextual. As Batman points out, there's one for martingales. This question is an ongoing area of research, with many, many different variations depending upon the specific context of interest. This question on Mathematics Stack Exchange is another post related to this question.
Sycorax
John
$\begingroup$ I have removed a stray ">" from a formula that I think has crept in because of the quoting system - feel free to reverse my edit if it was intentional! $\endgroup$ – Silverfish May 30 '18 at 18:43
$\begingroup$ A triangular array CLT is probably a more representative CLT than the one stated. As for not independent, martingale CLT's are reasonably commonly used case. $\endgroup$ – Batman May 30 '18 at 23:13
$\begingroup$ @Batman, what's an example of a triangular array CLT? Feel free to edit my response, to add it. I'm not familiar with that one. $\endgroup$ – John May 31 '18 at 20:20
$\begingroup$ Something like sec. 4.2.3 in personal.psu.edu/drh20/asymp/lectures/p93to100.pdf $\endgroup$ – Batman Jun 1 '18 at 1:24
$\begingroup$ "as long as the standard deviations don't grow too wildly" Or shrink (eg: $\sigma_i^2 = \sigma_{i-1}^2/2$) $\endgroup$ – leonbloy Jun 2 '18 at 19:28
Although I'm pretty sure that it has been answered before, here's another one:
There are several versions of the central limit theorem, the most general being that, for arbitrary probability density functions, the sum of a large number of independent variables will be approximately normally distributed, with a mean equal to the sum of the mean values and a variance equal to the sum of the individual variances.
A very important and relevant constraint is that the mean and the variance of the given pdfs have to exist and must be finite.
So, just take any pdf without mean value or variance -- and the central limit theorem will not hold anymore. So take a Lorentzian distribution for example.
cherub
$\begingroup$ +1 Or take a distribution with an infinite variance, like the distribution of a random walk. $\endgroup$ – Alexis May 30 '18 at 14:35
$\begingroup$ @Alexis - assuming you are looking at a random walk at a finite point in time, I would have thought it would have a finite variance, being the sum of $n$ i.i.d steps each with finite variance $\endgroup$ – Henry May 30 '18 at 19:33
$\begingroup$ @Henry: Nope, am not assuming at a point in time, but the variance of the distribution of all possible random walks of infinite lengths. $\endgroup$ – Alexis May 30 '18 at 21:28
$\begingroup$ @Alexis If each step $X_i$ of the random walk is $+1$ or $-1$ i.i.d. with equal probability and the positions are $Y_n =\sum_1^n X_i$ then the Central Limit Theorem implies correctly that as $n \to \infty$ you have the distribution of $\sqrt{n}\left(\frac1n Y_n\right) = \frac{Y_n}{\sqrt{n}}$ converging in distribution to $\mathcal N(0,1)$ $\endgroup$ – Henry May 30 '18 at 23:35
$\begingroup$ @Alexis Doesn't matter for the CLT, because each individual distribution still has a finite variance. $\endgroup$ – Cubic May 31 '18 at 11:08
No, CLT always holds when its assumptions hold. Qualifications such as "in most situations" are informal references to the conditions under which CLT should be applied.
For instance, a normalized sum of independent variables from a Cauchy distribution will not converge to a normally distributed variable. One of the reasons is that the variance is undefined for the Cauchy distribution, while the CLT puts certain conditions on the variance, e.g. that it has to be finite. An interesting implication is that since Monte Carlo simulation is motivated by the CLT, you have to be careful with Monte Carlo simulations when dealing with fat-tailed distributions, such as the Cauchy.
Note that there is a generalized version of the CLT. It works for infinite or undefined variances, such as the Cauchy distribution. Unlike many well-behaved distributions, the properly normalized sum of Cauchy numbers remains Cauchy. It doesn't converge to a Gaussian.
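A small R simulation makes this visible: the mean of $n$ standard Cauchy draws is itself standard Cauchy for every $n$, so its spread never shrinks as $n$ grows:

set.seed(1)
m_small <- replicate(1e4, mean(rcauchy(10)))      # means of 10 draws
m_large <- replicate(1e4, mean(rcauchy(10000)))   # means of 10,000 draws
c(IQR(m_small), IQR(m_large), qcauchy(0.75) - qcauchy(0.25))   # all close to the Cauchy IQR of 2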
By the way, not only Gaussian but many other distributions have bell shaped PDFs, e.g. Student t. That's why the description you quoted is quite liberal and imprecise, perhaps on purpose.
Aksakal
Here is an illustration of cherub's answer, a histogram of 1e5 draws from scaled (by $\sqrt{n}$) sample means of t-distributions with two degrees of freedom, such that the variance does not exist.
If the CLT did apply, the histogram for $n$ as large as $n=1000$ should resemble the density of a standard normal distribution (which, e.g., has density $1/\sqrt{2\pi}\approx0.4$ at its peak), which it evidently does not.
library(MASS)  # for truehist()
n <- 1000  # number of t(2) draws averaged in each sample mean
# 1e5 scaled sample means; the t distribution with 2 df has no finite variance
samples.from.t <- replicate(1e5, sqrt(n)*mean(rt(n, df = 2)))
truehist(samples.from.t, xlim = c(-10,10), col="salmon")  # histogram on a density scale
Christoph Hanck
$\begingroup$ You have to be slightly careful here as if you did this with a $t$-distribution with say $3$ degrees of freedom then the Central Limit theorem would apply but your graph would not have a peak density around $0.4$ but instead around $\frac1{\sqrt{6\pi}}\approx 0.23$ because the original variance would not be $1$ $\endgroup$ – Henry May 30 '18 at 19:43
$\begingroup$ That is a good point, one might standardize the mean by sd(x) to get something which, if the CLT works, converges by Slutzky's theorem, to a N(0,1) variate. I wanted to keep the example simple, but you are of course right. $\endgroup$ – Christoph Hanck May 31 '18 at 5:08
A simple case where the CLT cannot hold for very practical reasons, is when the sequence of random variables approaches its probability limit strictly from the one side. This is encountered for example in estimators that estimate something that lies on a boundary.
The standard example here is perhaps the estimation of $\theta$ in a sample of i.i.d. Uniforms $U(0,\theta)$. The maximum likelihood estimator is the maximum order statistic, and it approaches $\theta$ necessarily only from below: put naively, since its probability limit is $\theta$, the estimator cannot have a distribution "around" $\theta$ - and the CLT is gone.
The estimator properly scaled does have a limiting distribution - but not of the "CLT variety".
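A short R illustration with $\theta = 1$: the scaled error $n(\theta - \hat{\theta}_{MLE})$ settles on an exponential limit rather than a normal one, with all its mass on one side of zero:

set.seed(1)
theta <- 1
n <- 1000
d <- replicate(1e4, n * (theta - max(runif(n, 0, theta))))
hist(d, breaks = 50, freq = FALSE)
curve(dexp(x, rate = 1 / theta), add = TRUE)   # exponential limit, not a bell curve around theta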
Alecos Papadopoulos
You can find a quick solution here.
Exceptions to the central-limit theorem arise
When there are multiple maxima of the same height, and
Where the second derivative vanishes at the maximum.
There are certain other exceptions which are outlined in the answer of @cherub.
The same question has already been asked on math.stackexchange. You can check the answers there.
Ferdi
$\begingroup$ By "maxima", do you mean modes? Being bimodal has nothing to do with failing to satisfy CLT. $\endgroup$ – Acccumulation May 30 '18 at 15:08
$\begingroup$ @Acccumulation: The wording here is confusing because it actually refers to the PGF of a discrete r.v. $M(z)=\sum_{n=-\infty}^\infty P(X=n)z^n$ $\endgroup$ – Alex R. May 30 '18 at 18:17
$\begingroup$ @AlexR. The answer doesn't make sense at all without reading through the link, and is far from clear even with the link. I'm leaning towards downvoting as being even worse than a link-only answer. $\endgroup$ – Acccumulation May 30 '18 at 18:27
Ziliak (2011) opposes the use of p-values and mentions some alternatives; what are they?
In a recent article discussing the demerits of relying on the p-value for statistical inference, called "Matrixx v. Siracusano and Student v. Fisher: Statistical significance on trial" (DOI: 10.1111/j.1740-9713.2011.00511.x), Stephen T. Ziliak opposes the use of p-values. In the concluding paragraphs he says:
The data is the one thing that we already do know, and for certain. What we actually want to know is something quite different: the probability of a hypothesis being true (or at least practically useful), given the data we have. We want to know the probability that the two drugs are different, and by how much, given the available evidence. The significance test – based as it is on the fallacy of the transposed conditional, the trap that Fisher fell into – does not and cannot tell us that probability. The power function, the expected loss function, and many other decision-theoretic and Bayesian methods descending from Student and Jeffreys, now widely available and free on-line, do.
What are the power function, the expected loss function, and the "other decision-theoretic and Bayesian methods"? Are these methods widely used? Are they available in R? How are these newly suggested methods implemented? How, for instance, would I use them to test my hypothesis in a dataset for which I would otherwise use conventional two-sample t-tests and p-values?
r hypothesis-testing statistical-significance bayesian p-value
Ariel
$\begingroup$ There are a lot of papers arguing against the use of $p$-values alone, but it really depends on the context, IMO. Could you add more information on what you're interested in (cf. your last sentence)? $\endgroup$
– chl
$\begingroup$ I don't have access to the article, but this argument indicates a rather flawed understanding of what's going on. Despite a flawed understanding, the conclusion that other statistics are worth consideration is reasonable. Expected loss function is simply an estimate of the expected value of the loss function (e.g. squared error, logistic, etc.). $\endgroup$
– Iterator
$\begingroup$ Due to a similar thread recently being posted, I have raised a query about this thread on Meta CV $\endgroup$
– Silverfish
This sounds like another strident paper by a confused individual. Fisher didn't fall into any such trap, though many students of statistics do.
Hypothesis testing is a decision theoretic problem. Generally, you end up with a test with a given threshold between the two decisions (hypothesis true or hypothesis false). If you have a hypothesis which corresponds to a single point, such as $\theta=0$, then you can calculate the probability of your data resulting when it's true. But what do you do if it's not a single point? You get a function of $\theta$. The hypothesis $\theta\not= 0$ is such a hypothesis, and you get such a function for the probability of producing your observed data given that it's true. That function is the power function. It's very classical. Fisher knew all about it.
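As a small illustration in R (the sample size, standard deviation and significance level below are arbitrary choices, not anything prescribed by the theory), the power function of a two-sample t-test can be traced out as a function of the true mean difference:

```r
# Power of a two-sample t-test as a function of the true difference delta
delta <- seq(0.25, 2, by = 0.25)
power <- sapply(delta, function(d)
  power.t.test(n = 20, delta = d, sd = 1, sig.level = 0.05,
               type = "two.sample")$power)
round(cbind(delta, power), 3)
```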
The expected loss is a part of the basic machinery of decision theory. You have various states of nature, and various possible data resulting from them, and some possible decisions you can make, and you want to find a good function from data to decision. How do you define good? Given a particular state of nature underlying the data you have obtained, and the decision made by that procedure, what is your expected loss? This is most simply understood in business problems (if I do this based on the sales I observed in the past three quarters, what is the expected monetary loss?).
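A toy sketch of an expected-loss calculation (squared-error loss and arbitrary settings): estimate the expected loss of two decision rules for a normal mean by simulation and prefer the one with the smaller value.

```r
# Estimated expected squared-error loss of two rules for a normal mean
set.seed(7)
loss <- replicate(10000, {
  x <- rnorm(25, mean = 1)
  c(mean = (mean(x) - 1)^2, median = (median(x) - 1)^2)
})
rowMeans(loss)   # the sample mean wins under squared error here
```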
Bayesian procedures are a subset of decision theoretic procedures. The expected loss is insufficient to specify uniquely best procedures in all but trivial cases. If one procedure is better than another in both state A and B, obviously you'll prefer it, but if one is better in state A and one is better in state B, which do you choose? This is where ancillary ideas like Bayes procedures, minimaxity, and unbiasedness enter.
The t-test is actually a perfectly good solution to a decision theoretic problem. The question is how you choose the cutoff on the $t$ you calculate. A given value of $t$ corresponds to a given value of $\alpha$, the probability of type I error, and to a given set of powers $\beta$, depending on the size of the underlying parameter you are estimating. Is it an approximation to use a point null hypothesis? Yes. Is it usually a problem in practice? No, just like using Bernoulli's approximate theory for beam deflection is usually just fine in structural engineering. Is having the $p$-value useless? No. Another person looking at your data may want to use a different $\alpha$ than you, and the $p$-value accommodates that use.
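A sketch with simulated data (not any particular real dataset) shows how that works in practice: the test reports one p-value, and each reader applies his own $\alpha$ to it.

```r
set.seed(42)
x <- rnorm(30, mean = 0.0); y <- rnorm(30, mean = 0.5)
out <- t.test(x, y)   # Welch two-sample t-test
out$p.value           # the single number that gets reported
out$p.value < 0.05    # decision for a reader working at alpha = 0.05
out$p.value < 0.01    # decision for a stricter reader at alpha = 0.01
```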
I'm also a little confused on why he names Student and Jeffreys together, considering that Fisher was responsible for the wide dissemination of Student's work.
Basically, the blind use of p-values is a bad idea, and they are a rather subtle concept, but that doesn't make them useless. Should we object to their misuse by researchers with poor mathematical backgrounds? Absolutely, but let's remember what it looked like before Fisher tried to distill something down for the man in the field to use.
whuber♦
$\begingroup$ +1 for actually answering the question, and an additional (but virtual) +1 for challenging the quotation, which is provocative but problematic. I see you are a recent participant here but have already contributed many answers: many thanks and welcome (a bit belatedly) to our site! $\endgroup$
– whuber ♦
$\begingroup$ Thanks very much for your detailed answer. It helps to think about alternative strategies that are suggested in that paper critically. I asked this question because some colleagues used this paper to say that we shouldn't be looking at p-values at all and I realized that I didn't understand what these alternatives actually meant. Thanks for your clarification! $\endgroup$
$\begingroup$ @whuber I don't think this answers the question at all. OP was asking about the alternatives that Ziliak is suggesting, and this answer doesn't address them. For instance, Ziliak's critique of significance touches upon why do people use 5% or 1% significance. There's really no solid reason, and he was able to track these levels back to Fisher's papers. It's just some arbitrary, convenient number. As opposed to the "alternative" approaches based on pecuniary advantages, i.e. dollar values. $\endgroup$
– Aksakal
$\begingroup$ @Aksakal I believe that an important contribution is made to the conversation by relating hypothesis testing to a decision-theoretic problem and explicitly connecting the p-value to an expected risk (based on a 0-1 loss function). $\endgroup$
I recommend focusing on things like confidence intervals and model-checking. Andrew Gelman has done great work on this. I recommend his textbooks but also check out the stuff he's put online, e.g. http://andrewgelman.com/2011/06/the_holes_in_my/
Michael Bishop
The ez package provides likelihood ratios when you use the ezMixed() function to do mixed effects modelling. Likelihood ratios aim to quantify evidence for a phenomenon by comparing the likelihood (given the observed data) of two models: a "restricted" model that restricts the influence of the phenomenon to zero and an "unrestricted" model that permits non-zero influence of the phenomenon. After correcting the observed likelihoods for the models' differential complexity (via Akaike's Information Criterion, which is asymptotically equivalent to cross-validation), the ratio quantifies the evidence for the phenomenon.
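The general idea can be sketched in base R without the ez machinery (this is only an illustration of the evidence-ratio logic, not how ezMixed() is implemented): fit a restricted and an unrestricted model and compare their AIC-corrected likelihoods.

```r
# Evidence ratio from AIC-corrected likelihoods on a built-in dataset
fit0 <- lm(mpg ~ 1,  data = mtcars)   # "restricted": influence of wt set to zero
fit1 <- lm(mpg ~ wt, data = mtcars)   # "unrestricted": wt allowed to matter
exp((AIC(fit0) - AIC(fit1)) / 2)      # evidence in favour of the unrestricted model
```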
Mike Lawrence
All those techniques are available in R in the same sense that all of algebra is available in your pencil. Even p-values are available through many many different functions in R, deciding which function to use to get a p-value or a Bayesian posterior is more complex than a pointer to a single function or package.
Once you learn about those techniques and decide what question you actually want the answer to, then you can see (or we can provide more help) how to do it using R (or other tools). Just saying that you want to minimize your loss function, or to get a posterior distribution, is about as useful as replying "food" when asked what you want to eat for dinner.
Greg Snow
62. Oscillating Functions
Definition. When \(\phi(n)\) does not tend to a limit, nor to \(+\infty\), nor to \(-\infty\), as \(n\) tends to \(\infty\), we say that \(\phi(n)\) oscillates as \(n\) tends to \(\infty\).
A function \(\phi(n)\) certainly oscillates if its values form, as in the case considered in the last example above, a continual repetition of a cycle of values. But of course it may oscillate without possessing this peculiarity. Oscillation is defined in a purely negative manner: a function oscillates when it does not do certain other things.
The simplest example of an oscillatory function is given by \[\phi(n) = (-1)^{n},\] which is equal to \(+1\) when \(n\) is even and to \(-1\) when \(n\) is odd. In this case the values recur cyclically. But consider \[\phi(n) = (-1)^{n} + (1/n),\] the values of which are \[-1 + 1,\quad 1 + (1/2),\quad -1 + (1/3),\quad 1 + (1/4),\quad -1 + (1/5),\ \dots.\] When \(n\) is large every value is nearly equal to \(+1\) or \(-1\), and obviously \(\phi(n)\) does not tend to a limit or to \(+\infty\) or to \(-\infty\), and therefore it oscillates: but the values do not recur. It is to be observed that in this case every value of \(\phi(n)\) is numerically less than or equal to \(3/2\). Similarly \[\phi(n) = (-1)^{n} 100 + (1000/n)\] oscillates. When \(n\) is large, every value is nearly equal to \(100\) or to \(-100\). The numerically greatest value is \(900\) (for \(n = 1\)). But now consider \(\phi(n) = (-1)^{n}n\), the values of which are \(-1\), \(2\), \(-3\), \(4\), \(-5\), …. This function oscillates, for it does not tend to a limit, nor to \(+\infty\), nor to \(-\infty\). And in this case we cannot assign any limit beyond which the numerical value of the terms does not rise. The distinction between these two examples suggests a further definition.
Definition. If \(\phi(n)\) oscillates as \(n\) tends to \(\infty\), then \(\phi(n)\) will be said to oscillate finitely or infinitely according as it is or is not possible to assign a number \(K\) such that all the values of \(\phi(n)\) are numerically less than \(K\), \(|\phi(n)| < K\) for all values of \(n\).
These definitions, as well as those of § 58 and 60, are further illustrated in the following examples.
Example XXIV
Consider the behaviour as \(n\) tends to \(\infty\) of the following functions:
1. \((-1)^{n}\), \(5 + 3(-1)^{n}\), \((1,000,000/n) + (-1)^{n}\), \(1,000,000(-1)^{n} + (1/n)\).
2. \((-1)^{n}n\), \(1,000,000 + (-1)^{n}n\).
3. \(1,000,000 - n\), \((-1)^{n}(1,000,000 - n)\).
4. \(n\{1 + (-1)^{n}\}\). In this case the values of \(\phi(n)\) are \[0,\quad 4,\quad 0,\quad 8,\quad 0,\quad 12,\quad 0,\quad 16,\ \dots.\] The odd terms are all zero and the even terms tend to \(+\infty\): \(\phi(n)\) oscillates infinitely.
5. \(n^{2} + (-1)^{n}2n\). The second term oscillates infinitely, but the first is very much larger than the second when \(n\) is large. In fact \(\phi(n) \geq n^{2} - 2n\) and \(n^{2} - 2n = (n - 1)^{2} - 1\) is greater than any assigned value \(\Delta\) if \(n > 1 + \sqrt{\Delta + 1}\). Thus \(\phi(n) \to +\infty\). It should be observed that in this case \(\phi(2k + 1)\) is always less than \(\phi(2k)\), so that the function progresses to infinity by a continual series of steps forwards and backwards. It does not however 'oscillate' according to our definition of the term.
6. \(n^{2}\{1 + (-1)^{n}\}\), \((-1)^{n}n^{2} + n\), \(n^{3} + (-1)^{n}n^{2}\).
7. \(\sin n\theta\pi\). We have already seen (Exs. XXIII. 9) that \(\phi(n)\) oscillates finitely when \(\theta\) is rational, unless \(\theta\) is an integer, when \(\phi(n)= 0\), \(\phi(n) \to 0\).
The case in which \(\theta\) is irrational is a little more difficult. But it is not difficult to see that \(\phi(n)\) still oscillates finitely. We can without loss of generality suppose \(0 < \theta < 1\). In the first place \(|\phi(n)| < 1\). Hence \(\phi(n)\) must oscillate finitely or tend to a limit. We shall consider whether the second alternative is really possible. Let us suppose that \[\lim \sin n\theta\pi = l.\] Then, however small \(\epsilon\) may be, we can choose \(n_{0}\) so that \(\sin n\theta\pi\) lies between \(l - \epsilon\) and \(l + \epsilon\) for all values of \(n\) greater than or equal to \(n_{0}\). Hence \(\sin(n + 1)\theta\pi - \sin n\theta\pi\) is numerically less than \(2\epsilon\) for all such values of \(n\); and since \(\sin(n + 1)\theta\pi - \sin n\theta\pi = 2\sin\tfrac{1}{2}\theta\pi \cos(n + \tfrac{1}{2})\theta\pi\), it follows that \(|\sin \frac{1}{2}\theta\pi \cos(n + \frac{1}{2})\theta\pi| < \epsilon\).
Hence \[\cos(n + \tfrac{1}{2})\theta\pi = \cos n\theta\pi \cos\tfrac{1}{2}\theta\pi - \sin n\theta\pi \sin\tfrac{1}{2}\theta\pi\] must be numerically less than \(\epsilon/|\sin\frac{1}{2}\theta\pi|\). Similarly \[\cos(n - \tfrac{1}{2})\theta\pi = \cos n\theta\pi \cos\tfrac{1}{2}\theta\pi + \sin n\theta\pi \sin\tfrac{1}{2}\theta\pi\] must be numerically less than \(\epsilon/|\sin\frac{1}{2}\theta\pi|\); and so each of \(\cos n\theta\pi \cos\frac{1}{2}\theta\pi\), \(\sin n\theta\pi \sin\frac{1}{2}\theta\pi\) must be numerically less than \(\epsilon/|\sin\frac{1}{2}\theta\pi|\). That is to say, \(\cos n\theta\pi \cos\frac{1}{2}\theta\pi\) is very small if \(n\) is large, and this can only be the case if \(\cos n\theta\pi\) is very small. Similarly \(\sin n\theta\pi\) must be very small, so that \(l\) must be zero. But it is impossible that \(\cos n\theta\pi\) and \(\sin n\theta\pi\) can both be very small, as the sum of their squares is unity. Thus the hypothesis that \(\sin n\theta\pi\) tends to a limit \(l\) is impossible, and therefore \(\sin n\theta\pi\) oscillates as \(n\) tends to \(\infty\).
The reader should consider with particular care the argument '\(\cos n\theta\pi \cos\frac{1}{2}\theta\pi\) is very small, and this can only be the case if \(\cos n\theta\pi\) is very small'. Why, he may ask, should it not be the other factor \(\cos\frac{1}{2}\theta\pi\) which is 'very small'? The answer is to be found, of course, in the meaning of the phrase 'very small' as used in this connection. When we say '\(\phi(n)\) is very small' for large values of \(n\), we mean that we can choose \(n_{0}\) so that \(\phi(n)\) is numerically smaller than any assigned number, if \(n \geq n_{0}\). Such an assertion is palpably absurd when made of a fixed number such as \(\cos\frac{1}{2}\theta\pi\), which is not zero.
Prove similarly that \(\cos n\theta\pi\) oscillates finitely, unless \(\theta\) is an even integer.
8. \(\sin n\theta\pi + (1/n)\), \(\sin n\theta\pi + 1\), \(\sin n\theta\pi + n\), \((-1)^{n} \sin n\theta\pi\).
9. \(a\cos n\theta\pi + b\sin n\theta\pi\), \(\sin^{2}n\theta\pi\), \(a\cos^{2}n\theta\pi + b\sin^{2}n\theta\pi\).
10. \(a + bn + (-1)^{n} (c + dn) + e\cos n\theta\pi + f\sin n\theta\pi\).
11. \(n\sin n\theta\pi\). If \({\theta}\) is integral, then \(\phi(n) = 0\), \(\phi(n) \to 0\). If \(\theta\) is rational but not integral, or irrational, then \(\phi(n)\) oscillates infinitely.
12. \(n(a\cos^{2} n\theta\pi + b\sin^{2} n\theta\pi)\). In this case \(\phi(n)\) tends to \(+\infty\) if \(a\) and \(b\) are both positive, but to \(-\infty\) if both are negative. Consider the special cases in which \(a = 0\), \(b > 0\), or \(a > 0\), \(b = 0\), or \(a = 0\), \(b = 0\). If \(a\) and \(b\) have opposite signs \(\phi(n)\) generally oscillates infinitely. Consider any exceptional cases.
13. \(\sin(n^{2}\theta\pi)\). If \(\theta\) is integral, then \(\phi(n) \to 0\). Otherwise \(\phi(n)\) oscillates finitely, as may be shown by arguments similar to though more complex than those used in Exs. XXIII. 9 and Exs. XXIV. 7.1
14. \(\sin(n!\, \theta\pi)\). If \(\theta\) has a rational value \(p/q\), then \(n!\, \theta\) is certainly integral for all values of \(n\) greater than or equal to \(q\). Hence \(\phi(n) \to 0\). The case in which \(\theta\) is irrational cannot be dealt with without the aid of considerations of a much more difficult character.
15. \(\cos(n!\, \theta\pi)\), \(a\cos^{2}(n!\, \theta\pi) + b\sin^{2}(n!\, \theta\pi)\), where \(\theta\) is rational.
16. \(an – [bn]\), \((-1)^{n}(an – [bn])\).
17. \([\sqrt{n}]\), \((-1)^{n}[\sqrt{n}]\), \(\sqrt{n} – [\sqrt{n}]\).
18. The smallest prime factor of \(n\). When \(n\) is a prime, \(\phi(n) = n\). When \(n\) is even, \(\phi(n) = 2\). Thus \(\phi(n)\) oscillates infinitely.
19. The largest prime factor of \(n\).
20. The number of days in the year \(n\) A.D.
Example XXV
1. If \(\phi(n) \to +\infty\) and \(\psi(n) \geq \phi(n)\) for all values of \(n\), then \(\psi(n) \to +\infty\).
2. If \(\phi(n) \to 0\), and \(|\psi(n)| \leq |\phi(n)|\) for all values of \(n\), then \(\psi(n) \to 0\).
3. If \(\lim |\phi(n)| = 0\), then \(\lim \phi(n) = 0\).
4. If \(\phi(n)\) tends to a limit or oscillates finitely, and \(|\psi(n)| \leq |\phi(n)|\) when \(n \geq n_{0}\), then \(\psi(n)\) tends to a limit or oscillates finitely.
5. If \(\phi(n)\) tends to \(+\infty\), or to \(-\infty\), or oscillates infinitely, and \[|\psi(n)| \geq |\phi(n)|\] when \(n \geq n_{0}\), then \(\psi(n)\) tends to \(+\infty\) or to \(-\infty\) or oscillates infinitely.
6. 'If \(\phi(n)\) oscillates and, however great be \(n_{0}\), we can find values of \(n\) greater than \(n_{0}\) for which \(\psi(n) > \phi(n)\), and values of \(n\) greater than \(n_{0}\) for which \(\psi(n) < \phi(n)\), then \(\psi(n)\) oscillates'. Is this true? If not give an example to the contrary.
7. If \(\phi(n) \to l\) as \(n \to \infty\), then also \(\phi(n + p) \to l\), \(p\) being any fixed integer. [This follows at once from the definition. Similarly we see that if \(\phi(n)\) tends to \(+\infty\) or \(-\infty\) or oscillates so also does \(\phi(n + p)\).]
8. The same conclusions hold (except in the case of oscillation) if \(p\) varies with \(n\) but is always numerically less than a fixed positive integer \(N\); or if \(p\) varies with \(n\) in any way, so long as it is always positive.
9. Determine the least value of \(n_{0}\) for which it is true that \[(a)\ n^{2} + 2n > 999,999\quad (n \geq n_{0}),\qquad (b)\ n^{2} + 2n > 1,000,000\quad (n \geq n_{0}).\]
10. Determine the least value of \(n_{0}\) for which it is true that \[(a)\ n + (-1)^{n} > 1000\quad (n \geq n_{0}),\qquad (b)\ n + (-1)^{n} > 1,000,000\quad (n \geq n_{0}).\]
11. Determine the least value of \(n_{0}\) for which it is true that \[(a)\ n^{2} + 2n > \Delta\quad (n \geq n_{0}),\qquad (b)\ n + (-1)^{n} > \Delta\quad (n \geq n_{0}),\] \(\Delta\) being any positive number.
[(a) \(n_{0} = [\sqrt{\Delta + 1}]\); (b) \(n_{0} = 1 + [\Delta]\) or \(2 + [\Delta]\), according as \([\Delta]\) is odd or even, i.e. \(n_{0} = 1 + [\Delta] + \frac{1}{2} \{1 + (-1)^{[\Delta]}\}\).]
12. Determine the least value of \(n_{0}\) such that \[(a)\ n/(n^{2} + 1) < .0001,\qquad (b)\ (1/n) + \{(-1)^{n}/n^{2}\} < .000 01,\] when \(n \geq n_{0}\). [Let us take the latter case. In the first place \[(1/n) + \{(-1)^{n}/n^{2}\} \leq (n + 1)/n^{2},\] and it is easy to see that the least value of \(n_{0}\), such that \((n + 1)/n^{2} < .000 001\) when \(n \geq n_{0}\), is \(1,000,002\). But the inequality given is satisfied by \(n = 1,000,001\), and this is the value of \(n_{0}\) required.]
See Bromwich's Infinite Series, p. 485.↩︎
2010 Mathematics Subject Classification: Primary: 03BXX Secondary: 68P05 [MSN][ZBL]
Formulas are syntactically correct expressions in a formalized language defined over a signature, a set of variables, and a logic. In this way, formulas are quite similar to terms. Since predicate and logic symbols are included in their inductive definition, however, formulas represent truth values instead of sort values.
For examples of the exact definition of the concept of a formula in several formalized languages, see the articles Axiomatic set theory; Arithmetic, formal; Predicate calculus; Types, theory of. In mathematical practice, formulas also have a semantic meaning. They can be either names, or forms of statements, definition-abbreviations, etc.
Definition of Formulas
Let $\Sigma =(S,F)$ be a signature and $P$ be a set of predicate symbols for $S$ with range $\mathbb{B}$ representing the set of truth values of the underlying logic. As usual, it holds that $P\cap S=\emptyset$ and $P\cap F= \emptyset$. The notions of arity and type defined for the function symbols $f\in F$ may also be defined for the predicate symbols $p\in P$: every $p\in P$ is assigned an arity ar$\colon P\longrightarrow \mathbb{N}_0$ giving the number of arguments of $p$, and every predicate symbol $p\in P$ is also assigned a predicate type type$\colon s_1\times\cdots\times s_{ar(p)} \longrightarrow \mathbb{B}$.
Let $X_s$ be a set of free variables of sort $s\in S$ with $X_s\cap F=\emptyset$, $X_s\cap P=\emptyset$, and $X_s\cap S=\emptyset$. Furthermore, let the set of free variables be defined as disjoint union $X:= \bigcup_{s\in S} X_s$. Then the set $Q(\Sigma,P,X)$ of atomic formulas consists of all $p(t_1,\ldots,t_n)$ for predicates $p\in P$ with type$(p)= s_1\times\cdots\times s_{ar(p)}$ and $t_i\in T_{s_i}(\Sigma,X)$. Examples for such predicates $p\in P$ are properties, equalities, inequalities etc. Atomic formulas are also called atoms.
The set $L(\Sigma,P,X)$ of (general) formulas depends on the underlying logic. For PL1 (first-order predicate logic), it is the smallest set containing
the atomic formulas $Q(\Sigma,P,X)$
the expressions $p_1\vee p_2$, $p_1\wedge p_2$, $\neg p$ for $p,p_1,p_2\in L(\Sigma,P,X)$
the expressions $\forall x_1\in X_{s_1},\ldots,x_n\in X_{s_n}\colon p(x_1,\ldots,x_n)$ and $\exists x_1\in X_{s_1},\ldots,x_n\in X_{s_n}\colon p(x_1,\ldots,x_n)$ for predicates $p\in P$ with type$(p)= s_1\times\cdots\times s_{ar(p)}$
Identifying and Manipulating Free Variables
As with terms, the existence or nonexistence of free variables in a formula makes a fundamental difference (see, for example, the section Sentences and Atomic Formulas). Thus, procedures for determining and manipulating free variables in formulas exist corresponding to the ones defined for terms. These procedures are somewhat more complicated than in the case of terms, however, since they have to handle the additional logic symbols and the existence of bound variables.
A mapping $V\colon L(\Sigma,P,X) \longrightarrow 2^X$ for identifying the free variables in a formula is inductively defined as follows:
For a predicate $p(t_1,\ldots,t_n)$ with $p\in P$ of type$(p)= s_1\times\cdots\times s_{ar(p)} $ and terms $t_i\in T_{s_i}(\Sigma,X)$, it holds $V(p(t_1,\ldots,t_n)) := V(t_1)\cup\cdots\cup V(t_n)$
For a formula $p\in L(\Sigma,P, X)$, it holds $V(\neg p) = V(p)$
For a formula $p_1,p_2\in L(\Sigma,P, X)$, it holds $V(p_1\vee p_2) = V(p_1)\cup V(p_2)$ and $V(p_1\wedge p_2) = V(p_1)\cup V(p_2)$.
For a formula $p\in L(\Sigma,P, X)$, it holds $V(\forall x\colon p) = V(p)\setminus \{x\}$ and $V(\exists x\colon p) = V(p)\setminus \{x\}$
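A minimal executable sketch of this mapping (an illustration only; the list-based representation and the constructor names Var, Atom, Neg, And, Forall are assumptions, and disjunction and the existential quantifier would be handled analogously):

```r
# Formulas as nested lists; freeVars mirrors the mapping V defined above
Var    <- function(name) list(kind = "var", name = name)
Atom   <- function(pred, args) list(kind = "atom", pred = pred, args = args)
Neg    <- function(p) list(kind = "neg", p = p)
And    <- function(p1, p2) list(kind = "and", p1 = p1, p2 = p2)
Forall <- function(x, p) list(kind = "forall", x = x, p = p)

freeVars <- function(f) {
  switch(f$kind,
    var    = f$name,
    atom   = unique(unlist(lapply(f$args, freeVars))),
    neg    = freeVars(f$p),
    and    = union(freeVars(f$p1), freeVars(f$p2)),
    forall = setdiff(freeVars(f$p), f$x$name)   # bound variable is removed
  )
}

freeVars(Forall(Var("x"), Atom("p", list(Var("x"), Var("y")))))  # "y"
```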
Let $p\in L(\Sigma,P,X)$ be a formula, $w\in T(\Sigma,X)$ be a term, and $x\in X$ be a variable. The substitution $p[x\leftarrow w]$ of $x$ with $w$ is inductively defined as follows:
$x[x\leftarrow w]:= w$
$y[x\leftarrow w]:= y$ for $y\in X$ with $x\neq y$
$p(t_1,\ldots,t_n)[x\leftarrow w] := p(t_1[x\leftarrow w],\ldots,t_n[x\leftarrow w])$ for a predicate $p\in P$ of type$(p)= s_1\times\cdots\times s_{ar(p)} $ and terms $t_i\in T_{s_i}(\Sigma,X)$.
$(\neg p)[x\leftarrow w] = \neg (p[x\leftarrow w])$ for a formula $p\in L(\Sigma,P, X)$
$(p_1\vee p_2)[x\leftarrow w] = p_1[x\leftarrow w]\vee p_2[x\leftarrow w]$ and $(p_1\wedge p_2)[x\leftarrow w] = p_1[x\leftarrow w]\wedge p_2[x\leftarrow w]$ for formulas $p_1,p_2\in L(\Sigma,P, X)$
$(\forall y\colon p)[x\leftarrow w] = \forall y\colon p[x\leftarrow w]$ and $(\exists y\colon p)[x\leftarrow w] = \exists y\colon p[x\leftarrow w]$ for a formula $p\in L(\Sigma,P, X)$ and a variable $y\in X$, $y\neq x$
$(\forall x\colon p)[x\leftarrow w] = \forall x\colon p$ and $(\exists x\colon p)[x\leftarrow w] = \exists x\colon p$ for a formula $p\in L(\Sigma,P, X)$
The last rule of the definition holds because $x$ is already used as a bound variable in the quantified expressions. Multiple substitutions $p[x_1\leftarrow w_1,\ldots, x_n\leftarrow w_n]$ can be defined analogously. One has to note that the substitutions of $x_1,\ldots,x_n$ with $w_1,\ldots,w_n$ are executed simultaneously and not consecutively; otherwise, in the case of $x_j\in V(w_k)$, different results would have to be expected.
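Continuing the sketch above (with the same assumed list representation), substitution is the analogous recursion, with the quantifier case implementing the last rule:

```r
# Substitution p[x <- w] on the toy representation; x is a variable name, w a term
substFormula <- function(f, x, w) {
  switch(f$kind,
    var    = if (identical(f$name, x)) w else f,
    atom   = { f$args <- lapply(f$args, substFormula, x = x, w = w); f },
    neg    = { f$p <- substFormula(f$p, x, w); f },
    and    = { f$p1 <- substFormula(f$p1, x, w)
               f$p2 <- substFormula(f$p2, x, w); f },
    forall = if (identical(f$x$name, x)) f      # x is bound here: leave p unchanged
             else { f$p <- substFormula(f$p, x, w); f }
  )
}

# (forall x: p(x, y))[y <- c] replaces y but leaves the bound x untouched
substFormula(Forall(Var("x"), Atom("p", list(Var("x"), Var("y")))), "y", Var("c"))
```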
Sentences and Atomic Formulas
A formula $p\in L(\Sigma,P,X)$ is called closed if $V(p)= \emptyset$. Closed formulas are also called sentences or, more precisely, $\Sigma$-sentences. They do not necessarily belong to the set $L(\Sigma,P,\emptyset)$, however, because bound variables can exist in $p$ (e.g. quantifier variables). For a $\Sigma$-formula $p(x_1,\ldots,x_n)\in L(\Sigma,P,X)$ with free variables $x_i\in X_{s_i}$ of sorts $s_i\in S$, its universal closure gives a $\Sigma$-sentence $\forall x_1\in X_{s_1}\cdots \forall x_n\in X_{s_n} \colon p(x_1,\ldots,x_n)$ [W90]. Sentences may be assigned a fixed truth value, contrary to general formulas. Since the free variables of a general formula can be instantiated with arbitrary values, its truth value can vary according to the specific instantiation.
An atom $p(t_1,\ldots,t_n)\in Q(\Sigma,P,X)$ is called a ground atom, if the terms $t_i$ do not contain a free variable (i.e. if $V(t_i)=\emptyset$ for $i=1,\ldots,n$). The ground atoms are the closed atomic formulas. They are also called atomic sentences. Sentences are defined inductively based on atomic sentences by application of connective and quantifier symbols.
Morphisms
A signature morphism $m\colon \Sigma_1\longrightarrow \Sigma_2$ can be extended to a morphism between formulas, if some weak additional assumptions are fulfilled. This shows that basically the existence of a signature morphism suffices to relate the formalized languages of formulas (and terms) defined over $\Sigma_1$ and $\Sigma_2$ as well. The extension of $m$ is composed of the following parts:
For the terms contained in a formula, the corresponding extension of $m$ to terms has to be used.
For predicates, some kind of compatibility is required.
Connectives and quantifiers remain unchanged.
Formally, this leads to the following statement: Let $m\colon \Sigma_1\longrightarrow \Sigma_2$ be a signature morphism for signatures $\Sigma_1=(S_1,F_1), \Sigma_2=(S_2,F_2)$. The morphism $m$ may be extended to a mapping, which is defined for sets $X = \bigcup_{s\in S_1} X_s$, $X'= \bigcup_{s\in S_2} X_s'$ of variables as well with $m(x)\in X'_{m(s)}$ for $x\in X_s$, $s\in S_1$. Furthermore, the extension may be defined also for predicates $P_1$ defined over $S_1$ and $P_2$ defined over $S_2$ such that the predicate type is preserved under $m$ according to $\forall p\in P_1\colon \mbox{type}(m(p)) = m_S(\mbox{type}(p)) := m_S(s_1)\times\cdots\times m_S(s_{ar(p)}) \longrightarrow \mathbb{B}$ for predicates $p\in P_1$ with type$(p)= s_1\times\cdots\times s_{ar(p)}$. Then the signature morphism $m$ can also be extended to a morphism $m^\ast\colon L(\Sigma_1,P_1,X)\longrightarrow L(\Sigma_2,P_2,X')$ between formulas. Such an extension is called a translation as in the case of terms.
[EM85] H. Ehrig, B. Mahr: "Fundamentals of Algebraic Specifications", Volume 1, Springer 1985
[M89] B. Möller: "Algorithmische Sprachen und Methodik des Programmierens I", lecture notes, Technical University Munich 1989
[W90] M. Wirsing: "Algebraic Specification", in J. van Leeuwen: "Handbook of Theoretical Computer Science", Elsevier 1990
This article was adapted from an original article by V.N. Grishin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Modulation of the ionosphere by Pc5 waves observed simultaneously by GPS/TEC and EISCAT
V. Belakhovsky1,4,
V. Pilipenko2,4,
D. Murr3,
E. Fedorov4 &
A. Kozlovsky5
Earlier studies demonstrated that the monitoring of the ionospheric total electron content (TEC) by global satellite navigation systems is a powerful method to study the propagation of transient disturbances in the ionosphere, induced by internal gravity waves. This technique has turned out to be sensitive enough to detect ionospheric signatures of magnetohydrodynamic waves as well. However, the effect of TEC modulation by ULF waves is not well examined as a responsible mechanism has not been firmly identified. During periods with intense Pc5 waves distinct pulsations with the same periodicity were found in the TEC data from high-latitude GPS receivers in Scandinavia. We analyze jointly responses in TEC variations and EISCAT ionospheric parameters to global Pc5 pulsations during the recovery phase of the strong magnetic storms on October 31, 2003. Comparison of periodic fluctuations of the electron density at different altitudes from EISCAT data shows that main contribution into TEC pulsations is provided by the lower ionosphere, up to ~150 km, that is the E-layer and lower F-layer. This observational fact favors the TEC modulation mechanism by field-aligned plasma transport induced by Alfven wave. Analytical estimates and numerical modeling support the effectiveness of this mechanism. Though the proposed hypothesis is basically consistent with the analyzed event, the correspondence between magnetic and ionospheric oscillations is not always perfect, so further studies need to be conducted to understand fully the TEC modulations associated with Pc5 pulsations.
The ionosphere represents an inner boundary of the near-Earth environment where the energy exchange occurs between the neutral atmosphere and the plasma of outer space. MHD waves provide an effective channel of the energy transfer from the outer magnetosphere to the bottom of the ionosphere. The interaction between the solar wind and magnetosphere acts as a permanent source of various types of MHD waves in the ultra-low-frequency (ULF) band, which fill the entire magnetosphere and reach its inner boundary, the ionosphere. While ground magnetometers and magnetospheric satellites provided tremendous amount of information about ULF wave properties in the magnetosphere and on the ground, the wave properties in the ionosphere remained unavailable to in situ observations. Low-Earth orbit satellites can detect a high-frequency part only of ULF spectrum (Pc1–3 waves). Ionospheric signatures of long-period ULF waves (Pc4–5, Pi2–3) can be detected by modern HF radio sounding techniques: Doppler sounders (Menk et al. 2007; Waters et al. 2007; Pilipenko et al. 2013) and SuperDARN HF radars (Lester et al. 2000; Ponomarenko et al. 2001; Teramoto et al. 2014). The ever-growing array of global satellite navigation systems (GPS, GLONASS, etc.) provide information on variations of a radiopath-integrated ionospheric parameter—the total electron content (TEC). GPS/TEC observations are becoming a powerful tool to monitor the propagation of acoustic and internal gravity waves along the ionosphere (Afraimovich et al. 2013; Komjathy et al. 2012).
The GPS/TEC technique turned out to be sensitive enough to detect ionospheric signatures of ULF waves as well. Early results, during the era of Faraday technique with geostationary beacons, reported TEC fluctuations related to geomagnetic variations in the Pc3–4 range (30- to 50-s period) (Davies and Hartman 1976; Okuzawa and Davies 1981). The TEC modulation by intense Pc5 pulsations was found by Pilipenko et al. (2014a) and Watson et al. (2015). Thus, the standard TEC/GPS technique is sufficiently sensitive to detect ULF waves in some cases.
However, a physical mechanism of TEC periodic modulation associated with ULF waves has not been established yet. Additional periodic ionization and consequently TEC variations may occur owing to the ULF-modulated precipitation of energetic electrons into the ionosphere (Watson et al. 2015). Other possible mechanisms of TEC modulation by incident MHD waves comprise (a) plasma compression by evanescent fast compressional mode wave arising from the interaction of an Alfven wave with the anisotropic ionosphere (Pilipenko and Fedorov 1995); (b) periodic advection across a lateral gradient of the ionospheric plasma (Waters and Cox 2009); (c) periodic vertical shift and reconfiguration of the plasma vertical profile (Poole and Sutcliffe 1987); (d) frictional heating of ionospheric ions owing to periodic dragging through neutrals, and (e) field-aligned plasma transport by an Alfven wave (Pilipenko et al. 2014a). These possible mechanisms of TEC modulation by magnetospheric ULF waves provide main contribution either to the upper ionosphere (F-layer) or to the lower ionosphere (E-layer and bottom of F-layer). A possibility to reveal a contribution of ionospheric plasma density oscillations at different altitudes into the total TEC fluctuations may help to identify a responsible mechanism.
Here we analyze a unique event in which the same global Pc5 waves were detected in the ionosphere by the GPS/TEC technique (Pilipenko et al. 2014a) and by the EISCAT radar (Pilipenko et al. 2014b). We analyze these observations jointly, which provides additional information on the relationship between geomagnetic and ionospheric variations. The observational results are validated by numerical modeling of Alfven wave interaction with a realistic ionosphere profile.
We use the standard TEC data with 30-s resolution from an array of GPS receivers in Scandinavia (see map in Fig. 1). The dual-frequency GPS method uses the phase information from radio signals transmitted from GPS satellites at the L1 (1575.42 MHz) and L2 (1227.60 MHz) frequencies to estimate the slant TEC. The slant TEC along a radiopath can be converted into the vertical vTEC, denoted here as N_T, by assuming the altitude of the pierce points to be 250 km. As a measure of the columnar density N_T the TEC unit (1 TECu = 10^16 m^−2) is used.
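For reference, a minimal sketch of one common single-layer mapping from slant to vertical TEC (the mapping function and shell height here are illustrative assumptions and may differ from the exact procedure used for the data):

```r
# Single-layer mapping: vTEC = sTEC * cos(z'), with z' the zenith angle at the
# ionospheric pierce point for an assumed shell height h (km)
slant_to_vertical <- function(stec, elev_deg, h = 250, Re = 6371) {
  z  <- (90 - elev_deg) * pi / 180       # zenith angle at the receiver
  zp <- asin(Re * sin(z) / (Re + h))     # zenith angle at the pierce point
  stec * cos(zp)                         # vertical TEC
}
slant_to_vertical(stec = 55, elev_deg = 40)  # ~0.68 of the slant value
```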
Pierce points at 250 km altitude of radio paths from GPS satellites GPS7, GPS9, and GPS28 to ground receivers KIRU (violet line), VARS (orange line), and TROM (green line) in Scandinavia during October 31, 2003, 1100–1130 UT. Magnetometers are denoted with triangles, GPS receivers with squares, the riometer KIL is marked with an asterisk, and EISCAT is shown by a dark circle
Magnetometer 10-s data from the IMAGE array, covering the range of geographic latitudes from ~79° to ~58°, are used (Fig. 1). The magnetometer observations are augmented with the multi-beam IRIS riometer data from Kilpisjarvi (KIL) that monitor a cosmic noise absorption caused by the energetic (>30 keV) electron precipitation into the ionosphere. The magnetometer data have been decimated to a common 30-s step with TEC data.
We use data with a 30-s cadence from the UHF EISCAT radar, comprising the receivers at Sodankyla (SOD) and Kiruna (KIR) and the receiver–transmitter at Tromso (TRO) (Fig. 1). The EISCAT radar beam was directed along the geomagnetic field line. The intersection of the receiving paths from SOD and KIR is located nearly above the magnetic station TRO (geographic coordinates 68.0°N, 19.1°E) at an altitude of ~290 km. This radar system enables one to determine the vector of the ionospheric plasma drift velocity V and the corresponding electric field E. The EISCAT radar system also measures the altitude profiles of electron density N_e(z), ion temperature T_i(z), and electron temperature T_e(z) along the beam up to ~400 km.
October 31, 2003, ULF event
During the recovery phase of the large magnetic storm on October 31, 2003, very intense (up to a few hundred nT) global quasi-monochromatic Pc5 waves were observed (Kleimenova and Kozyreva 2005). Typically, global Pc5 waves are observed during high-speed solar wind streams, and they are about an order of magnitude more intense than common Pc5 pulsations. The reason for such outstanding intensity has not been found yet. Global Pc5 waves are coherent over a wide range of geomagnetic latitudes, during morning and post-noon hours (Potapov et al. 2006). Detailed studies of this event indicated that global Pc5 pulsations are probably caused by oscillations of the magnetospheric MHD waveguide, engulfing the entire magnetosphere, up to equatorial latitudes (Marin et al. 2014). At high latitudes, magnetospheric waveguide oscillations are strongly coupled with field line Alfven oscillations (Pilipenko et al. 2012).
During the periods with elevated Pc5 activity, 1100–1200 and 1200–1300 UT, TEC fluctuations have been compared with ground geomagnetic variations at station KIR (geographic latitude 67.8°) and ionospheric parameters determined by EISCAT radar. The tracks of intersection with the ionosphere of radio paths from satellites GPS7 and GPS9 to ground GPS receivers KIRU, VARS, and TROM (pierce points) are shown in Fig. 1.
The TEC data show gradual variations around 30–40 TECu with superposed small-scale fluctuations. To highlight these fluctuations the TEC data have been detrended with a cutoff frequency of 1 mHz. Quasi-periodic TEC pulsations have been revealed over a wide latitudinal range. The comparison for the period 1100–1330 UT of TEC fluctuations along paths GPS7/KIRU, GPS9/KIRU with magnetic variations at KIR and EISCAT-derived ionospheric density N e in the lower ionosphere, shows the occurrence of persistent periodicity in all these parameters (Fig. 2). The peak-to-peak amplitudes of oscillations of the TEC are ΔN T ~ 0.6 TECU (GPS7/KIRU), and ~1.0 TECU (GPS9/KIRU), and magnetic pulsations ΔB ~ 400 nT (X component) at KIR. According to visual inspection the phase relation between magnetic (X component) and TEC variations is not very stable: It varies from roughly out-of-phase during ~1100–1200 and ~1215–1235 UT to roughly in-phase during ~1200–1215 and ~1305–1330 UT.
Multi-instrument observations of Pc5 waves during October 31, 1100–1330 UT: a X component (in nT) of geomagnetic pulsations at KIR, b detrended (with a 1-mHz cutoff frequency) TEC fluctuations (in TECu) along radio paths GPS7/KIRU (dotted line) and GPS9/KIRU (solid line), c EISCAT N e fluctuations at h = 110 km; and d cosmic noise absorption from KIL riometer
At the same time, the riometer data do not demonstrate the periodicity evident in the magnetometer data (bottom panel in Fig. 2). Just a few peak-to-peak correspondences may be found, representing weak signatures of ULF modulation of energetic electron precipitation.
Spectral analysis confirmed the occurrence of the same periodicity with f ~ 2.4 mHz in variations of the geomagnetic field, TEC (GPS7, GPS9), and EISCAT N_e (Pilipenko et al. 2014a, b). Cross-spectral analysis also showed a good correspondence between TEC and B variations. During the 1130–1300 UT time interval the spectral coherency of TEC fluctuations at GPS9/KIRU and magnetic pulsations at KIR around the frequency 2.5 mHz was high, γ(f) ~ 0.8. The ratio between the spectral densities of TEC and X-component magnetic variations at this frequency was ΔN_T(f)/ΔB(f) ~ 2 × 10^−3 TECu/nT. Magnetic pulsations (X component, KIR) and EISCAT electric field E_x had coherency γ ~ 0.8. The cross-correlation between TEC variations from GPS9/KIRU and EISCAT field E_x had a high coherency γ(f) ~ 0.86. The ratio between spectral amplitudes at this frequency was ΔN_T(f)/E_x(f) ~ 4 × 10^−3 TECu/(mV/m).
An important parameter of ULF wave structure is its scale in the latitudinal (radial) and longitudinal (azimuthal) directions. The longitudinal propagation features are characterized by the azimuthal wave number m, which can be determined from a cross-correlation time shift Δτ between two detrended time series with T-periodic variations at sites separated in longitude by ΔΛ, as follows: m = (Δτ/T)(360°/ΔΛ). The cross-correlation function R(Δτ) for magnetic and TEC variations during the time interval 1100–1130 UT has been estimated using the magnetic stations KIR-LOZ at geographic latitude ~67.8°, longitudinally separated in geographic coordinates by ΔΛ ~ 15.4°, and the longitudinally separated pierce points along receiver/satellite paths TROM/GPS9 and VARS/GPS28 at geographic latitude ~69.7°, separated in longitude by ΔΛ = 27.2° (Fig. 1). The cross-correlation function of both magnetic and TEC variations has an asymmetric form (Fig. 3), indicating westward propagation. The 10-s magnetic data reveal a time delay Δτ = 15 s (Fig. 3, upper panel). The correlation threshold at the 95 % confidence level, estimated by means of a Monte Carlo test (Regi et al. 2015), at this lag is ~0.2. Thus, the cross-correlation coefficient ~0.9 is statistically significant. Though the 30-s time resolution of the TEC data is not sufficient to determine the time shift exactly, R(Δτ) reaches extreme values R_max ~ 0.8 also around Δτ ~ 15 s (Fig. 3, bottom panel). For the wave frequency f ~ 2.5 mHz (T ~ 400 s) this time shift corresponds to the azimuthal wave number (in geographic coordinates) m ≈ 0.9 for magnetic data and m ≈ 0.5 for TEC data. However, assuming an error on the TEC delay time of Δ(Δτ) = 15 s, the estimated azimuthal wave number m has a relative error Δm/m ~ 100 %. Thus, both magnetic and TEC data show Pc5 wave propagation in the same westward direction, and no reliable conclusion about correspondence of the m values from ionospheric TEC data and geomagnetic data can be made.
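A short numerical check of these estimates (recomputing the quoted numbers from the relation m = (Δτ/T)(360°/ΔΛ), using only the values given above):

```r
# Azimuthal wave number from cross-correlation lag and longitudinal separation
m_number <- function(dtau, T, dLambda) (dtau / T) * (360 / dLambda)
m_number(dtau = 15, T = 400, dLambda = 15.4)  # magnetometers KIR-LOZ: ~0.9
m_number(dtau = 15, T = 400, dLambda = 27.2)  # TEC pierce points:     ~0.5
```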
The cross-correlation function R(Δτ) of magnetic variations (upper panel) during time interval 1100–1130 UT at stations KIR-LOZ (Φ ~ 67.8°), longitudinally separated by ΔΛ ~ 15.4° in geographic coordinates, and of TEC variations (bottom panel) determined from the longitudinally separated by ΔΛ = 27.2° in geographic coordinates pierce points (Φ ~ 68.7°–69.8°) along receiver/satellite paths TROM/GPS9 and VARS/GPS28
To find out which altitudes contribute most to the TEC variations, we have integrated ionospheric N_e(z) data from EISCAT over two different altitude ranges: the bottom ionosphere from 103 to 152 km and the F-layer from 152 to 415 km. The height-time diagram of N_e(t) variations and altitude-integrated ionospheric densities \(\left\langle {N_{\text{e}} } \right\rangle\) (in TECu) are compared with actual vTEC variations for two time intervals: 1100–1200 UT for GPS9 (Fig. 4) and 1250–1350 UT for GPS7 (Fig. 5). Though a short-lived Pc5 geomagnetic pulsation burst during 1220–1240 UT is also accompanied by TEC variations, the response at EISCAT is not very clear, so this time interval has not been included in the detailed analysis. Comparison of the EISCAT-derived quasi-TEC fluctuations \(\left\langle {N_{\text{e}} } \right\rangle\) with periodic vTEC variations shows that the closest match is observed for the bottom ionosphere, evidencing that the main contribution is provided by the lower ionosphere, up to ~150 km (that is, the E-layer and lower F-layer). However, the correspondence is not perfect, and the maximal cross-correlation coefficient between vTEC and \(\left\langle {N_{\text{e}} } \right\rangle\) is R ~ 0.75 for the first interval and R ~ 0.72 for the second interval.
Time variations of the EISCAT electron density during 2003, October 31, 1100–1200 UT: a altitude-time plot (color scale); b N e variations (blue line) altitude-integrated over the range 103–152 km (in TECu), and superposed TEC variations (red line) from GPS9/KIRU; c N e variations altitude-integrated over the range 152–415 km (in TECu) and superposed vTEC variations GPS9/KIRU
Time variations of the EISCAT electron density during 2003, October 31, 1250–1350 UT: a altitude-time plot (color scale), b N e variations altitude-integrated over the range 103–152 km (in TECu), and superposed vTEC variations GPS7/KIRU, c N e variations altitude-integrated over the range 152–415 km (in TECu) and superposed vTEC variations GPS7/KIRU
Possible mechanisms of TEC modulation by MHD waves
The temporal variation in the TEC evaluated along the line between a source (S) and a receiver (R), N_T(t), is given by the linearized path-integrated electron continuity equation
$$\partial_{t} N_{\text{T}} = \int_{S}^{R} {\partial_{t} N(z){\text{d}}z} = - \int\limits_{S}^{R} {\left[ {V_{z} \partial_{z} N_{0} + {\mathbf{V}}_{ \bot } \nabla N_{0} + N_{0} \nabla {\mathbf{V}}} \right]} {\text{d}}z + \left\langle Q \right\rangle - \left\langle L \right\rangle$$
where \(\left\langle Q \right\rangle\) and \(\left\langle L \right\rangle\) are the height-integrated electron production and loss rates, respectively, \({\mathbf{V}} = \{V_{z}, {\mathbf{V}}_{\bot}\}\) is the plasma velocity from the ULF perturbation, and \(N_{0}(z)\) is the background ionospheric density. Assuming Q(z) and L(z) are steady and equal, variations in TEC arise from the advection \(\left( { \propto {\mathbf{V}}\nabla N_{0} } \right)\) and divergence \(\left( { \propto \nabla {\mathbf{V}}} \right)\) terms.
Variations in TEC along the signal path introduce time/phase delays for high-frequency (HF) radio wave propagation through the ionosphere. The modulation of the ionospheric plasma density is due to the interaction of an incident MHD wave with the ionosphere–atmosphere–ground system. In this event no noticeable riometer variations with the same periodicity as geomagnetic pulsations were observed, so the mechanism of the periodic precipitation of energetic electrons is not considered, though the lack of soft (<1 keV) electron precipitation cannot be guaranteed.
We use a standard model of the magnetosphere–ionosphere–atmosphere–ground system to describe the properties of MHD waves in a realistic ionosphere. We use the non-rectangular coordinate system {x_1, x_2, x_3}, where coordinate lines x_3 coincide with magnetic field lines, x_1 is measured along the north–south direction, and x_2 is measured eastward. The atmosphere and the ground are assumed to be isotropic conductors with conductivities σ_a and σ_g. The system parameters do not vary in the horizontal direction, i.e., along x_1 and x_2. The ionospheric plasma has anisotropic conductivity: Pedersen σ_P(z) and Hall σ_H(z). The magnetospheric plasma above the ionosphere is characterized by an Alfven velocity V_A and wave number k_A = ω/V_A.
The ULF electromagnetic field inside the ionosphere is composed of Alfven and fast magnetosonic (compressional) waves, consisting of incident, reflected, and mutually converted modes. The coupled MHD equations describing these modes in an anisotropic collisional plasma can be found in Waters et al. (2007) and Fedorov et al. (2016). The vertical profile of the local ionospheric parameters (conductivity tensor, Alfven velocity, electron mobility tensor, etc.) is constructed from an adequate ionospheric model. Further, it will be assumed that the incident ULF wave is the Alfven mode.
The TEC and magnetometer observations at longitudinally widely separated sites have shown a large azimuthal scale of both ionospheric and magnetic Pc5 pulsations, corresponding to m ≤ 1. Therefore, it may be supposed that k_2 ~ m → 0. The scale in the radial (latitudinal) direction is characterized by the wave vector k ≅ k_1. The incident Alfven mode elongated in the azimuthal direction has an azimuthal magnetic component b_2 only, whereas the wave electric field component E_1 lies in the meridional plane and is transverse to B_0. The fast magnetosonic (FMS) mode has non-vanishing azimuthal electric E_2, radial magnetic b_1, and compressional \(b_{3} \equiv b_{\parallel }\) components.
The components of electron velocity induced by the wave electric field E are determined by local plasma mobility tensor \(\hat{\mu }^{{ ( {\text{e)}}}}\)
$$\begin{aligned} V_{1}^{{ ( {\text{e)}}}} & = \mu_{1}^{{ ( {\text{e)}}}} E_{1} + \mu_{2}^{{ ( {\text{e)}}}} E_{2} \\ V_{2}^{{ ( {\text{e)}}}} & = - \mu_{2}^{{ ( {\text{e)}}}} E_{1} + \mu_{1}^{{ ( {\text{e)}}}} E_{2} \\ V_{3}^{{ ( {\text{e)}}}} & = \mu_{3}^{{ ( {\text{e)}}}} E_{3} \\ \end{aligned}$$
The corresponding electron density perturbation \(N^{{ ( {\text{e)}}}} = N_{1}^{{ ( {\text{e)}}}} + N_{2}^{{ ( {\text{e)}}}} + N_{3}^{{ ( {\text{e)}}}}\) is produced by three currents (Pedersen, Hall, and parallel) as follows
$$\frac{{N_{1}^{{ ( {\text{e)}}}} }}{{N_{0} }} = \frac{k}{\omega }\mu_{1}^{{ ( {\text{e)}}}} E_{1} ,\quad \frac{{N_{2}^{{ ( {\text{e)}}}} }}{{N_{0} }} = \frac{k}{\omega }\mu_{2}^{{ ( {\text{e)}}}} E_{2} ,\quad \frac{{N_{3}^{{ ( {\text{e)}}}} }}{{N_{0} }} = \frac{1}{i\omega }\partial_{3} \mu_{3}^{{ ( {\text{e)}}}} E_{3}$$
The ion density perturbation \(N^{{({\text{i}})}} = N_{1}^{{({\text{i}})}} + N_{2}^{{({\text{i}})}} + N_{3}^{{({\text{i}})}}\) keeps the plasma electroneutrality.
Magnetic field and plasma compression can be produced by an incident fast mode wave, so that \(\Delta N_{\text{T}} /N_{\text{T}} \simeq b_{\parallel } /B_{0}\). Such an effect of periodic compression of the ionospheric plasma by Pc5 pulsations, associated with the fast mode, was indeed observed with GPS observations at low latitudes, where the Pc5 wave frequency is much lower than the Alfven field line eigenfrequency (Vorontsova et al. 2016).
However, even an incident Alfven wave can produce a secondary fast compressional mode upon interaction with the anisotropic ionosphere. This evanescent fast mode wave is excited in the ionosphere by the incident Alfven wave owing to the ionospheric Hall conductance. Thus, TEC modulation may be related to the compression of plasma caused by this secondary fast compressional mode [the term N_2 in (3)]. Indeed, if we substitute \(E_{2} = (\omega/k)\, b_{3}\) in \(N_{2} = \frac{k}{\omega }\frac{{N_{0} }}{{B_{0} }}E_{2} ,\) we obtain \(\frac{{N_{2} }}{{N_{0} }} = \frac{{b_{3} }}{{B_{0} }},\) where \(b_{3}\) is the compressional magnetic component. Pilipenko and Fedorov (1995) estimated the modulation of plasma owing to a partial conversion of an incident Alfven wave into an evanescent fast mode wave and showed that it might be significant for small-scale incident waves. Such small-scale waves are screened by the ionosphere from ground magnetometers.
Other possible mechanism of TEC modulation by an incident Alfven wave (Waters and Cox 2009) may comprise a periodic drift (advection) across a lateral gradient of the ionospheric plasma [second right-hand term in (1)]. This mechanism can be expected to be important only for more localized and steep ionospheric inhomogeneities. We have no information about lateral gradients of N e during this event. However, the largest effect produced by this mechanism is expected to be in the F-layer, where plasma concentration is highest.
The finite east–west E_2 component of an incident Alfven wave causes a vertical plasma drift \(V_{z} = E_{2} \cos I / B_{0}\), where I is the geomagnetic field inclination. This vertical shift modifies the plasma through changes of the ionization–recombination balance, owing to a strong dependence of the ionization Q(z) and recombination L(z) rates on altitude (Poole and Sutcliffe 1987). A periodic vertical shift of ionospheric plasma, accompanied by a reconfiguration of the ionization–recombination balance, can provide a noticeable contribution to the TEC modulation by the Pc5 electric field, but only around the maximum of ionization (F-layer). However, the E_2 component of the large-scale Alfven wave (k_2 → 0) is expected to be small.
The periodic heating of ionospheric ions can occur during times when plasma is dragged through neutrals by the Pc5 wave electric field (Lathuillere et al. 1986). This additional plasma heating may shift the ionization–recombination balance due to the dependence of the recombination coefficient β(T) on temperature and cause plasma density variations (Pilipenko et al. 2014a). The periodic ion heating in the bottom ionosphere during October 31, 2003, event by Pc5 wave electric field indeed can be seen in the EISCAT data (Pilipenko et al. 2014b).
In a realistic ionosphere all the above mechanisms may operate simultaneously, so it is hard to distinguish their contribution into the TEC variations and to compare their efficiency, because many specific parameters are not well known for an event under study. Combined EISCAT and TEC observations have indicated that the plasma modulation by Pc5 wave is most significant in the lower ionosphere. This fact contradicts the predictions of TEC modulation theories, based on the F-layer vertical shift and lateral gradient. At the same time, this observational fact favors the mechanism of field-aligned plasma transfer induced by Alfven wave. Further we present a simple theoretical model to examine this mechanism in a greater detail.
Field-aligned plasma transport
Though all of the above-mentioned mechanisms may somewhat contribute to the periodic TEC variations, here we concentrate on another possible mechanism, related to field-aligned plasma transport (Cran-McGreehin et al. 2007). To give an insight into its basic physics, we first provide a simple analytical estimate in a model with a vertical geomagnetic field \({\mathbf{B}}_{0} = B_{0} {\mathbf{e}}_{3}\). The field-aligned current \(j_{3} \equiv j_{Z}\) transported by an Alfven wave, incident onto the ionosphere from the magnetosphere, provides an additional periodic plasma flow into and out of the ionosphere. As a result, the plasma density in the bottom ionosphere periodically increases/decreases. An upper estimate of this effect can be obtained from the height-integrated balance equation
$$\partial_{t} N_{\text{T}} = j_{3} /e$$
where e = 1.6 × 10−19 C is the electron charge. The electron transverse diffusion across B 0 during a wave period is assumed to be small. From Eq. (4) it follows that an oscillating current \(j_{3} \, = j_{3}^{(0)} \exp ( - i\omega t)\) causes periodic fluctuations of TEC with amplitude \(\Delta N_{\text{T}} = ij_{3}^{(0)} /e\omega\). The current j 3 transported by an Alfven wave is related to the wave magnetic field \(b_{2}\) in the ionosphere as follows \(j_{3} = - ikb_{2} /\mu_{0} .\) Combination of these two relationship yields
$$\Delta N_{\text{T}} = \frac{{kb_{2} }}{{e\mu_{0} \omega }} = \frac{T}{{e\mu_{0} \lambda }}b_{2}$$
where T is the wave period and λ is the wave latitudinal scale corresponding to the characteristic wave number k ~ 2π/λ. The relationship (5) gives an upper-limit, order-of-magnitude estimate of the ionospheric effect. Let us suppose that a typical peak-to-peak H-component amplitude of global 400-s Pc5 pulsations on the ground is \(b_{1}^{(\text{g})} \simeq 400\) nT. For simplicity, the radio path between a satellite and a ground receiving station is assumed to be vertical. Because the transverse scale of the global Pc5 waves under study, ~10^3 km (Kleimenova and Kozyreva 2005), is much bigger than the height of the ionospheric conductive layer, ~100 km, the magnetic field in the ionosphere is related to the response on the ground as \(b_{1}^{(\text{g})}/b_{2} \simeq (\Sigma_{\text{H}}/\Sigma_{\text{P}})\sin I\) (Hughes and Southwood 1976). For Σ_H/Σ_P ~ 1.6 and sin I ~ 0.9, typical at high latitudes, the magnetic field disturbance in the ionosphere is b_2 ~ 290 nT. According to (5) an Alfven wave with the given amplitude should cause fluctuations in the bottom ionosphere with ΔN_T ~ 0.2 TECu. This order-of-magnitude estimate indicates that the effect of periodic pumping into and depletion from the lower ionosphere by the field-aligned electron flux transported by an Alfven wave can, in principle, be responsible for the TEC modulation by global Pc5 pulsations. In reality, the TEC modulation rate is determined by a spatial integration along the radio path of the local plasma response to an inhomogeneous wave E-field and thus should be strongly dependent on the satellite elevation angle and the wave transverse scale. These factors can be taken into account with the help of a numerical model only.
Numerical model of Alfven wave interaction with the realistic ionosphere
The above simplified estimate has been validated with numerical modeling. We consider the incidence of an Alfven wave with a large azimuthal scale (k_2 = 0) onto the ionosphere. For such a wave structure the E–W electric field component E_2 → 0, so the mechanism of vertical drift is inoperative. The heating of the ionosphere by the wave has been neglected as well. Thus, this simplified case enables us to isolate and examine the mechanism of field-aligned plasma transport.
The ionospheric medium parameters have been derived from the IRI-2012 model. The modeling procedure comprises the following steps: (1) for the given geophysical conditions the IRI-derived altitude profile (80–2000 km) of ionospheric parameters is constructed; (2) the vertical structure of the wave electric E(z) and magnetic b(z) fields is calculated from the numerical solution of a set of coupled MHD wave equations in the ionosphere, assuming incidence of an Alfven wave with horizontal wave vector k; (3) using the mobility tensor, the vertical and horizontal plasma fluxes are determined via (2) throughout the ionosphere; (4) using the continuity equation, the local disturbance of plasma density N_e(z) is determined; and (5) the vertical structure of the local plasma density disturbance is height-integrated to provide a disturbance of TEC, N_T. The magnetic field geometry (inclination of B_0 is I = 61.9°) and IRI parameters correspond to the observational period 2003.10.31, 11.5 UT near TRO. The IRI model yields the height-integrated Pedersen and Hall conductances \(\Sigma _{\text{P}} = 2.2\) S and \(\Sigma _{\text{H}} = 3.6\) S.
When MHD waves interact with the ionosphere, their wave properties are modified. The calculated vertical structure of the wave electric field for various frequencies of an incident Alfven wave with transverse wave vector k = 10^-3 km^-1 is shown in Fig. 6. For the chosen azimuthally large-scale structure, the N–S electric field component is dominant, \(E_{1} \gg E_{2}\). Along z the E_1(z) component is nearly constant thanks to the field-line equipotentiality. A weaker E_2 component of a secondary fast mode emerges owing to the mode coupling in the anisotropically conductive ionosphere. The fast mode is evanescent and decays with altitude.
Fig. 6: Wave electric field vertical structure produced by an incident Alfven wave with k = 10^-3 km^-1. Solid lines correspond to the E_1 component, dashed lines to the E_2 component. Different colors denote different frequencies (shown in the inset). All wave fields in the ionosphere have been normalized to the magnitude of the total horizontal magnetic field on the ground, b_1 = 1 nT.
Local disturbances of the electron density N_e(z) induced by the wave E-field are shown in Fig. 7. The field-aligned electron fluxes result in enhancements of N_e in the E-layer and at the bottom of the F-layer. The vertical profile of the disturbed plasma density N_e(z) has local maxima at altitudes of ~300 km, ~200 km, and an order-of-magnitude larger one at ~120 km. Calculations show that the contributions to the electron density disturbance from the Pedersen current and from plasma compression are small compared with the contribution from the field-aligned electron flux. Thus, the periodic field-aligned electron flux produces local variations of the electron density in the E-layer and the bottom of the F-layer.
Fig. 7: Wave-induced plasma density perturbation produced by an incident Alfven wave with k = 10^-3 km^-1. Different colors denote different frequencies (shown in the legend).
The resultant height-integrated effect is presented in Fig. 8. This figure shows the calculated dependence of the fractional TEC modulation amplitude, ΔN_T/N_T, on the transverse wave number k induced by an incident Alfven wave with total horizontal magnetic component on the ground b_1 = 1 nT (peak-to-peak amplitude 2 nT) for various frequencies. In the range up to k = 10^-2 km^-1 the TEC modulation rate increases with k owing to the increase of the wave-transported field-aligned current (Fig. 8a). The modulation rate is higher for lower frequencies. For k = 0.006 km^-1 (wave scale 2π/k ~ 10^3 km) the predicted TEC modulation depth is ~0.025 %. Therefore, for the peak-to-peak 400 nT Pc5 pulsations recorded during the October 31, 2003 event, the TEC fluctuations are expected to be ~5 %. This value is close to the observed peak magnitudes during the analyzed event: N_T ~ 40 TECu, ΔN_T ~ 1 TECu, hence ΔN_T/N_T ~ 2.5 %. The theoretical value is probably somewhat overestimated because only a vertical satellite–receiver radio path is considered.
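The percentage estimates above follow from simple scaling, which can be checked directly; the snippet below only restates the numbers quoted in the text (0.025 % per 2 nT peak-to-peak, a 400 nT peak-to-peak event, and the observed 40 TECu background with ~1 TECu fluctuations).

```python
# Back-of-envelope check of the scaling quoted above (all values taken from the text).
depth_per_2nT_pp = 0.025       # predicted TEC modulation depth in %, for 2 nT peak-to-peak
b_pp = 400.0                   # observed peak-to-peak Pc5 amplitude, nT

predicted = depth_per_2nT_pp * (b_pp / 2.0)
print(f"predicted TEC modulation depth ~ {predicted:.1f} %")          # ~5 %

N_T, dN_T = 40.0, 1.0          # observed background TEC and fluctuation amplitude, TECu
print(f"observed TEC modulation depth  ~ {100 * dN_T / N_T:.1f} %")   # ~2.5 %
```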
Fig. 8: Dependence of the amplitude (upper panel) and phase (bottom panel) of the relative TEC modulation ΔN_T/N_T (in %) on the transverse wave number k, induced by an incident Alfven wave with peak-to-peak amplitude of the horizontal magnetic component b_1 = 1 nT, for various frequencies (shown in the inset).
Our idealized model predicts a phase shift between the TEC variations and the ground magnetic pulsations (H component) of about Δφ ~ 100° (Fig. 8b). However, a cross-spectral estimate of Δφ(f) is not very reliable because the phase delay between the TEC and B time series fluctuates considerably during the analyzed events. Therefore, the relative phase information does not provide anything definitive to compare with the model prediction.
Our current knowledge of ULF wave physics has been acquired mainly with the help of ground-based or satellite-borne magnetometers. However, their capabilities are limited, because even low-orbiting satellites cannot detect in situ long-period ULF waves in the ionosphere—the region where energy flows from the magnetosphere into the upper atmosphere. Moreover, the ionosphere screens transversely small-scale structures (<100 km) from ground magnetometers. Ionospheric radars have emerged as a valuable source of additional information for ULF wave studies. GPS/TEC observations are expected to provide similarly new information about MHD wave interaction with the ionosphere.
Long-period pulsations are the most powerful wave process in the near-Earth environment. Radar observations have shown that Pc5 waves can noticeably modulate the ionospheric plasma: the electric field E, the plasma convection velocity V, the E-layer electron density N_e, the ionospheric conductance Σ, and the electron T_e and ion T_i temperatures in both the F- and E-layers (see references in Pilipenko et al. 2014b). Recent observations by Pilipenko et al. (2014a) and Watson et al. (2015) have demonstrated that Pc5 waves are capable of modulating TEC as well.
One may expect that all the Pc5 wave-induced fractional variations of the plasma and magnetic field should be of the same magnitude, as in any linear wave. However, GPS observations have revealed that the depth of the periodic TEC modulation is sometimes even somewhat larger (e.g., in the event of October 31, 2003, ΔN_T/N_T ~ 2.5 %) than the geomagnetic field modulation (ΔB/B_0 ~ 1 %). In principle, ULF modulation of energetic electron precipitation, inducing an additional periodic ionization of the lower ionosphere, can cause periodic TEC variations with a much greater depth than the geomagnetic field variations (Watson et al. 2015). However, during the event under consideration no periodic electron precipitation occurred, as evidenced by simultaneous riometer observations. The mechanism of field-aligned plasma transport by Alfven waves, described in the "Numerical model of Alfven wave interaction with the realistic ionosphere" section, can theoretically produce relative amplitudes of TEC variations larger than those of geomagnetic pulsations. However, we have analyzed GPS/TEC data during an event with very intense Pc5 waves. Whether TEC modulation by less intense ULF waves would be revealed by the standard GPS technique is an open question.
Consideration of possible mechanisms of TEC modulation by a magnetospheric Alfven wave ("Possible mechanisms of TEC modulation by MHD waves" section) has shown that, in principle, plasma heating, vertical drift, steep gradients, and field-aligned transport can all provide a noticeable contribution to the TEC variations. In some cases, a periodically modulated precipitation of magnetospheric electrons can be effective, too. A feature of the field-aligned electron transport mechanism is that it contributes mainly to the plasma density of the bottom ionospheric layers. This feature is basically in accordance with the combined GPS/EISCAT/magnetometer observations. However, as the correlation between \(\left\langle {N_{\text{e}} } \right\rangle\) and vTEC is R ~ 0.75, the field-aligned transport mechanism is responsible for only ~56 % of the TEC variance. Thus, though the proposed hypothesis of field-aligned plasma transport is basically consistent with the analyzed event, the correspondence between the magnetometer, EISCAT, and GPS/TEC oscillations is not always perfect. For example, the short-lived Pc5 geomagnetic pulsations during 1220–1240 UT are accompanied by TEC variations, but the response at EISCAT is not clear. Probably some of the above-mentioned mechanisms also contribute to the periodic TEC modulation by ULF waves, though the limited available information does not enable us to single them out.
The phase relation between the magnetic and TEC variations is not very stable and does not allow it to be used for model validation and discrimination between possible mechanisms. Moreover, the phase information is rather subtle, and its consideration must be done with great caution. The developed model is still too simplified: the Pc5 transverse spatial structure is modeled as a plane wave, and TEC is calculated along a vertical path. Therefore, the phase information can be used as a tool for the discrimination of possible mechanisms only on the basis of a more advanced model. More conclusive judgments can be stated only after detailed studies with the use of other ionospheric instruments that will provide additional information about ionospheric plasma parameters and incident particle fluxes.
Long-period geomagnetic Pc5 pulsations, being the most powerful wave process in the terrestrial environment, can significantly modulate the local densities of the magnetospheric and ionospheric plasma. Even the radiopath-integrated TEC has turned out to be sensitive enough to respond to intense Pc5 waves. So far, the effect of TEC modulation by ULF waves remains a challenge for MHD wave theory, because the mechanisms responsible for such modulation have not been firmly established yet. Analysis of the altitude profile of the electron density fluctuations derived from EISCAT data during the global Pc5 wave event has shown that the main contribution to the periodic TEC variations is provided by the lower ionosphere, up to ~150 km, that is, the E-layer and lower F-layer. This observational fact favors the field-aligned plasma transfer induced by the Alfven wave as the dominant modulation mechanism. The analytical estimate and numerical modeling have shown a high efficiency of this mechanism. However, sometimes the correspondence between the magnetometer, EISCAT, and GPS/TEC oscillations is not perfect, which indicates that other modulation mechanisms could come into play.
Afraimovich EL, Astafyeva EI, Demyanov VV et al (2013) A review of GPS/GLONASS studies of the ionospheric response to natural and anthropogenic processes and phenomena. J Space Weather Space Clim 3:A27
Cran-McGreehin AP, Wright AN, Hood AW (2007) Ionospheric depletion in downward currents. J Geophys Res 112:A10309
Davies K, Hartman GK (1976) Short-period fluctuations in total columnar electron content. J Geophys Res 81:3431–3434
Fedorov E, Mazur N, Pilipenko V, Engebretson M (2016) Interaction of magnetospheric Alfven waves with the ionosphere in the Pc1 frequency band. J Geophys Res 121:321–337
Hughes WJ, Southwood DJ (1976) The screening of micropulsation signals by the atmosphere and ionosphere. J Geophys Res 81:3234–3240
Kleimenova NG, Kozyreva OV (2005) Spatial-temporal dynamics of Pi3 and Pc5 geomagnetic pulsations during the extreme magnetic storms in October 2003. Geomagn Aeron (Engl Transl) 45:71–79
Komjathy A, Galvan DA, Stephens P et al (2012) Detecting ionospheric TEC perturbations caused by natural hazards using a global network of GPS receivers: the Tohoku case study. Earth Planets Space 64:1287–1294
Lathuillere C, Glangeaud F, Zhao ZY (1986) Ionospheric ion heating by ULF Pc5 magnetic pulsations. J Geophys Res 91:1619–1626
Lester M, Davies JA, Yeoman TK (2000) The ionospheric response during an interval of Pc5 ULF wave activity. Ann Geophys 18:257–261
Marin J, Pilipenko V, Kozyreva O, Stepanova M, Engebretson M, Vega P, Zesta E (2014) Global Pc5 pulsations during strong magnetic storms: excitation mechanisms and equatorward expansion. Ann Geophys 32:319–331
Menk FW, Waters CL, Dunlop SI (2007) ULF Doppler oscillations in the low latitude ionosphere. Geophys Res Lett 34:L10104. doi:10.1029/2007GL029300
Okuzawa T, Davies K (1981) Pulsations in the total columnar electron content. J Geophys Res 86:1355–1363
Pilipenko V, Fedorov E (1995) Modulation of total electron content in the ionosphere by geomagnetic pulsations. Geomagn Aeron (Engl Transl) 34:516–519
Pilipenko V, Belakhovsky V, Kozlovsky A, Fedorov E, Kauristie K (2012) Determination of the wave mode contribution into the ULF pulsations from combined radar and magnetometer data: method of apparent impedance. J Atmos Sol-Terr Phys 77:85–95
Pilipenko VA, Fedorov EN, Teramoto M, Yumoto K (2013) The mechanism of mid-latitude Pi2 waves in the upper ionosphere as revealed by combined Doppler and magnetometer observations. Ann Geophys 31:689–695
Pilipenko V, Belakhovsky V, Murr D, Fedorov E, Engebretson M (2014a) Modulation of total electron content by ULF Pc5 waves. J Geophys Res 119:4358–4369
Pilipenko V, Belakhovsky V, Kozlovsky A, Fedorov E, Kauristie K (2014b) ULF wave modulation of the ionospheric parameters: radar and magnetometer observations. J Atmos Sol-Terr Phys 108:68–76
Ponomarenko PV, Waters CL, Sciffer MD, Fraser BJ (2001) Spatial structure of ULF waves: comparison of magnetometer and Super DARN data. J Geophys Res 106:10509–10517
Poole AWV, Sutcliffe PR (1987) Mechanisms for observed total electron content pulsations at mid latitudes. J Atmos Terr Phys 49:231–236
Potapov A, Guglielmi A, Tsegmed B, Kultima J (2006) Global Pc5 event during 29–31 October 2003 magnetic storm. Adv Space Res 38:1582–1586
Regi M, De Lauretis M, Francia P (2015) Pc5 geomagnetic fluctuations in response to solar wind excitation and their relationship with relativistic electron fluxes in the outer radiation belt. Earth Planets Space 67:9. doi:10.1186/s40623-015-0180-8
Teramoto M, Nishitani N, Pilipenko V et al (2014) Pi2 pulsation simultaneously observed in the E and F region ionosphere with the SuperDARN Hokkaido radar. J Geophys Res 119:3444–3462
Vorontsova E, Pilipenko V, Fedorov E, Sinha AK, Vichare G (2016) Modulation of total electron content by global Pc5 waves at low latitudes. Adv Space Res 57:309–319
Waters CL, Cox SP (2009) ULF wave effects on high frequency signal propagation through the ionosphere. Ann Geophys 27:2779–2788
Waters CL, Yeoman TK, Sciffer MD, Ponomarenko P, Wright DM (2007) Modulation of radio frequency signals by ULF waves. Ann Geophys 25:1113–1124
Watson C, Jayachandran PT, Singer HJ, Redmon RJ, Danskin D (2015) Large-amplitude GPS TEC variations associated with Pc5–6 magnetic field variations observed on the ground and at geosynchronous orbit. J Geophys Res. doi:10.1002/2015JA021517
BV performed the GPS and magnetometer data analysis, VP made the analytical estimates and drafted the manuscript, DM participated in the GPS and riometer data analysis, EF performed the numerical modeling, and AK carried out the EISCAT data analysis. All authors read and approved the final manuscript.
This study was funded by the RF President Grant MK-4210.2015.5 (BV), RFBR Grants 14-05-00588 (VP), 15-05-01814 (EF), and NSF Grant ATM-0827903 to Augsburg College (DM). Dual-frequency 30-s rate GPS daily data files in RINEX format are freely available from the IGS (ftp://cddis.gsfc.nasa.gov). We are indebted to the staff of EISCAT for operating the facility and supplying the data. We thank the institutes who maintain the IMAGE magnetometer array (www.ava.fmi.fi/image). The riometer data originated from the IRIS, operated by the Lancaster University in collaboration with the Sodankyla Geophysical Observatory. We appreciate detailed comments of both reviewers.
Author affiliations: Polar Geophysical Institute, Apatity, Russia (V. Belakhovsky); Space Research Institute, Moscow, Russia (V. Pilipenko); Augsburg College, Minneapolis, MN, USA (D. Murr); Institute of Physics of the Earth, Moscow, Russia (V. Belakhovsky, V. Pilipenko, E. Fedorov); Sodankyla Geophysical Observatory, University of Oulu, Oulu, Finland (A. Kozlovsky)
Correspondence to V. Pilipenko.
Belakhovsky, V., Pilipenko, V., Murr, D. et al. Modulation of the ionosphere by Pc5 waves observed simultaneously by GPS/TEC and EISCAT. Earth Planet Sp 68, 102 (2016). https://doi.org/10.1186/s40623-016-0480-7
Keywords: EISCAT, ULF waves, Pc5 pulsations, Alfven waves
Comparing algorithms for tridiagonal linear systems solution
Below are two algorithms for solving tridiagonal linear systems of the form $$ \left[ \begin{array}{ccccc|c} b_1 & c_1 & & & &d_1\\ a_2 & b_2 & c_2 & & & d_2\\ & \ddots & \ddots & \ddots & & \vdots\\ & & a_{n-1} & b_{n-1} & c_{n-1} & d_{n-1}\\ & & & a_n & b_n & d_n \end{array} \right]. $$ I call them Algorithms A and B. Both are equivalent to Gaussian elimination, but with an important difference in the form of the resulting triangular (bidiagonal) matrix.
My main question is: which one of them is preferable?
Algorithm A is the one that is described in Wikipedia and many textbooks; it is called the Thomas algorithm and is implemented, for example, in Numerical Recipes in a somewhat tricky form. Algorithm B is more straightforward and, in my opinion, is more numerically stable in cases when $|b_i|\gg|a_i|+|c_i|$. Though I haven't seen Algorithm B in textbooks, note that exactly this algorithm is implemented in the mentioned Wikipedia article (see "Implementation in Fortran 90"), while "Implementation in Matlab" deals with Algorithm A ("Implementation in C" in its current state is a mess that does not seem to work at all).
$$ \begin{array}{|c|c|}\hline \mathbf{Algorithm\ A} & \mathbf{Algorithm\ B}\\\hline \textit{\% Elimination}&\textit{\% Elimination}\\ \begin{array}{l} \tilde c_1=c_1/b_1\\ \tilde d_1=d_1/b_1\\ \mathbf{for }\quad i=2 \quad \mathbf{to}\quad n-1 \quad \textbf{do}\\ \quad q=b_i-a_i \tilde c_{i-1}\\ \quad \tilde c_i=c_i/q\\ \quad \tilde d_i=(d_i-a_i \tilde d_{i-1})/q\\ \mathbf{end do}\\ \tilde d_n=(d_n-a_n \tilde d_{n-1})/(b_{n}-a_n \tilde c_{n-1})\\ \\ \end{array} & \begin{array}{l} \\ \hat b_1=b_1\\ \hat d_1=d_1\\ \mathbf{for }\quad i=2 \quad \mathbf{to}\quad n \quad \textbf{do}\\ \quad q=a_i/\hat b_{i-1}\\ \quad \hat b_i=b_i-q c_{i-1}\\ \quad \hat d_i=d_i-q \hat d_{i-1}\\ \mathbf{end do}\\ \\ \\ \end{array}\\ \hline \textit{\% Resulting system} & \textit{\% Resulting system}\\ \left[ \begin{array}{ccccc|c} 1 & \tilde c_1 & & & &\tilde d_1\\ & 1 & \tilde c_2 & & & \tilde d_2\\ & & \ddots & \ddots & & \vdots\\ & & & 1 & \tilde c_{n-1} & \tilde d_{n-1}\\ & & & & 1 & \tilde d_n \end{array} \right] & \left[ \begin{array}{ccccc|c} \hat b_1 & c_1 & & & &\hat d_1\\ & \hat b_2 & c_2 & & & \hat d_2\\ & & \ddots & \ddots & & \vdots\\ & & & \hat b_{n-1} & c_{n-1} & \hat d_{n-1}\\ & & & & \hat b_n & \hat d_n \end{array} \right]\\ \hline \textit{\% Backsubstitution} & \textit{\% Backsubstitution}\\ \begin{array}{l} \\ x_n=\tilde d_n\\ \mathbf{for }\quad i=n-1 \quad \mathbf{downto}\quad 1 \quad \textbf{do}\\ \quad x_i=\tilde d_i-\tilde c_i x_{i+1}\\ \\ \end{array} & \begin{array}{l} \\ x_n=\hat d_n/\hat b_n\\ \mathbf{for }\quad i=n-1 \quad \mathbf{downto}\quad 1 \quad \textbf{do}\\ \quad x_i=(\hat d_i-c_i x_{i+1})/\hat b_i\\ \\ \end{array}\\\hline \end{array} $$
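For concreteness, here is a plain NumPy transcription of the two pseudocode columns above, using 0-based indexing with a[0] and c[n-1] unused. It is only a sketch for experimenting with the comparison, not a library-quality solver, and, like the pseudocode, it performs no pivoting, so it should only be trusted for diagonally dominant or otherwise well-conditioned systems.

```python
import numpy as np

def solve_tridiag_A(a, b, c, d):
    """Algorithm A (Thomas): elimination producing a unit upper bidiagonal system."""
    n = len(b)
    ct = np.empty(n - 1)
    dt = np.empty(n)
    ct[0] = c[0] / b[0]
    dt[0] = d[0] / b[0]
    for i in range(1, n - 1):
        q = b[i] - a[i] * ct[i - 1]
        ct[i] = c[i] / q
        dt[i] = (d[i] - a[i] * dt[i - 1]) / q
    dt[n - 1] = (d[n - 1] - a[n - 1] * dt[n - 2]) / (b[n - 1] - a[n - 1] * ct[n - 2])
    x = np.empty(n)
    x[n - 1] = dt[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = dt[i] - ct[i] * x[i + 1]
    return x

def solve_tridiag_B(a, b, c, d):
    """Algorithm B: elimination producing a unit lower bidiagonal L (usual LU convention)."""
    n = len(b)
    bh = np.array(b, dtype=float)
    dh = np.array(d, dtype=float)
    for i in range(1, n):
        q = a[i] / bh[i - 1]
        bh[i] = b[i] - q * c[i - 1]
        dh[i] = d[i] - q * dh[i - 1]
    x = np.empty(n)
    x[n - 1] = dh[n - 1] / bh[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (dh[i] - c[i] * x[i + 1]) / bh[i]
    return x

# Quick check on a small, diagonally dominant system (a[0] and c[-1] are unused).
rng = np.random.default_rng(0)
n = 6
a = rng.uniform(-1, 1, n); a[0] = 0.0
c = rng.uniform(-1, 1, n); c[-1] = 0.0
b = 4.0 + rng.uniform(-1, 1, n)
d = rng.uniform(-1, 1, n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
for solver in (solve_tridiag_A, solve_tridiag_B):
    print(solver.__name__, np.max(np.abs(A @ solver(a, b, c, d) - d)))
```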
linear-algebra
faleichik
I looked around a bit at some of the primary resources online, and I couldn't find any papers in the obvious places (LAPACK Working Notes, etc...) discussing this particular routine, and the LAPACK routine for this, xgtsl, still bears Jack's original copyright. Neither of your approaches employs pivoting, which is probably a more important factor than the other differences between them.
– Aron Ahmadia
@AronAhmadia: That's true, unless the tridiagonal system is diagonally dominant, in which case no partial pivoting is necessary.
– Paul ♦
Both algorithms compute $LU$ decompositions (solving against $L$ while it is being formed) and then solve against the resulting $U$. The difference is that Algorithm A forces $U$ to have a diagonal of all ones (we say that $U$ is unit-diagonal), while Algorithm B forces $L$ to have a unit diagonal (this is the usual convention).
Regardless of whether or not one is more stable than the other, both are a bad idea; as @AronAhmadia mentioned, you should use an algorithm which performs partial pivoting. I would go with the LAPACK routine dgtsv.
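If you happen to be calling this from Python rather than Fortran or C, a pivoted banded solve is available through SciPy; the sketch below uses scipy.linalg.solve_banded, which dispatches to LAPACK's banded/tridiagonal drivers, on a small random diagonally dominant system. The sizes and values are arbitrary and only illustrate the banded storage layout.

```python
import numpy as np
from scipy.linalg import solve_banded

n = 6
rng = np.random.default_rng(1)
a = rng.uniform(-1, 1, n); a[0] = 0.0       # subdiagonal (a[0] unused)
b = 4.0 + rng.uniform(-1, 1, n)             # main diagonal, made dominant
c = rng.uniform(-1, 1, n); c[-1] = 0.0      # superdiagonal (c[-1] unused)
d = rng.uniform(-1, 1, n)

# Banded storage: row 0 = superdiagonal, row 1 = diagonal, row 2 = subdiagonal.
ab = np.zeros((3, n))
ab[0, 1:] = c[:-1]
ab[1, :] = b
ab[2, :-1] = a[1:]

x = solve_banded((1, 1), ab, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.max(np.abs(A @ x - d)))            # residual, should be near machine precision
```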
Jack Poulson
Thank you! Your answer is quite surprising for me, since I've always believed Numerical Recipes (apps.nrbook.com/rollover/index.html, section 2.4) which says that "tridiagonal algorithm is the rare case of the algorithm that, in practice, is more robust than theory says it should be". Please look at two more questions in the updated post.
– faleichik
@faleichik - I'm going to roll back your edit, as it's rather unfair to answerers to modify a question to ask new things (the point of modifying a question is to improve clarity, not get more things).
If you want to ask a new question, that's fine, though you should just look up the definition of pivoting; there are plenty of easily-constructed cases where you create a divide-by-zero or divide-by-near-zero problem. Regarding the article describing the implementation, there may not be much, since it is a fairly trivial extension of LU.
Honestly, I don't think that my updating was dishonest. Firstly I wanted to add the questions you've deleted as comments to Jack's answer, but decided to add them to the main post in order to attract more attention from other people. I don't really think that these additional questions deserve a separate post.
As for the pivoting: you know, I'm aware of how it works for tridiagonal matrices, the comments in 'dgtsv' are pretty clear. I was just wondering if someone considered this problem in a textbook, because all texts I've seen deal with the Thomas algorithm and nothing more.
Why is black hole entropy not an extensive quantity?
The Bekenstein entropy for a black hole is proportional to the surface area $A$ of the black hole $$ S_{BH} = \frac{k_B}{4 l_P^2} A $$ with the Planck length $l_P = \sqrt{\frac{\hbar G}{c^3}}$.
The area is the surface of a sphere with Schwarzschild radius $r_s = \frac{2 M G}{c^2}$, so $$ A = 4 \pi r_s^2 = 16\pi \left(\frac{G}{c^2}\right)^2 M^2 $$ and the black hole entropy is therefore proportional to the square of the black hole mass $M$: $$ S_{BH} = \frac{4 \pi k_B G}{\hbar c} M^2. $$ But this is quite unusual for an entropy. In classical thermodynamics entropy is always supposed to be an extensive quantity, so $S\sim M$. But the black hole entropy $S_{BH} \sim M^2$ is obviously a non-extensive quantity. Isn't a non-extensive entropy inconsistent within the framework of thermodynamics? Why must the entropy of a black hole be a non-extensive quantity? Wouldn't it be better to define an entropy for a black hole from, e.g., the ratio of the Schwarzschild radius to the Planck length, which would give us an extensive entropy like $$ S_{BH, ext} \sim k_B \frac{r_s}{l_P} \sim k_B\sqrt{\frac{4G}{\hbar c}} M $$
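To make the scaling concrete, here is a quick numerical illustration of the formula quoted above, evaluated in SI units for one and two solar masses; the constants are standard values and the solar mass is rounded, so the numbers are only order-of-magnitude illustrations.

```python
import math

# Evaluate S_BH = 4*pi*k_B*G*M^2/(hbar*c) and show the non-extensive scaling.
hbar, G, c, k_B = 1.054571817e-34, 6.67430e-11, 2.99792458e8, 1.380649e-23
M_sun = 1.989e30   # kg (rounded)

def S_BH(M):
    return 4 * math.pi * k_B * G * M**2 / (hbar * c)   # J/K

S1, S2 = S_BH(M_sun), S_BH(2 * M_sun)
print(f"S(1 M_sun) = {S1:.2e} J/K  (~{S1 / k_B:.1e} k_B)")
# One hole of mass 2M carries twice the entropy of two separate holes of mass M:
print(f"S(2 M_sun) / (2 * S(1 M_sun)) = {S2 / (2 * S1):.1f}")   # = 2.0, so S is not extensive in M
```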
thermodynamics entropy black-hole-thermodynamics
asmaier
thermodynamics in the presence of gravity is no longer extensive, even classical gravity, due to the long range nature of the force. That is one of the reasons why some people developed non-extensive thermodynamics, such as Tsallis statistics en.wikipedia.org/wiki/Tsallis_statistics – Wolphram jonny Jul 25 '16 at 22:07
@Wolphramjonny FWIW I think this comment could be developed into a very nice answer, if you can explain a little about Tsallis statistics (which are almost not mentioned before on this site). – Rococo Jul 25 '16 at 22:43
To the main point, a non-extensive entropy is certainly very interesting, and is the jumping-off point for speculation about black holes, quantum gravity, and the holographic principle (en.wikipedia.org/wiki/Holographic_principle)... but it is certainly not inconsistent with thermodynamics. Specifying to entanglement entropy, one also sees this in condensed matter systems (usually near the ground state) - keyword is "area law." – Rococo Jul 25 '16 at 22:59
@Rococo please feel free to answer using my comment; perhaps I commented first, but I suspect you know more than I do about the subject – Wolphram jonny Jul 25 '16 at 23:05
@Wolphramjonny Thank you, but actually I really don't - I was hoping to learn something too :) – Rococo Jul 25 '16 at 23:07
This is an answer adapted from Rococo and Wolphram jonny's comments plus a little googling.
Thermodynamics in the presence of gravity is no longer extensive (even classical gravity) due to the long range nature of gravity. This is one of the reasons why people developed non-extensive thermodynamics, like Tsallis statistics.
Tsallis Statistics
Tsallis statistics originated with Constantino Tsallis, a Brazilian physicist working in Rio de Janeiro (though he was born in Greece in 1943 and grew up in Argentina). He introduced what is now known as Tsallis entropy and Tsallis statistics in his 1988 paper Possible generalization of Boltzmann–Gibbs statistics.
Tsallis statistics is considered to be a good (maybe even the best) candidate for a non-extensive theory of thermodynamics. It is intended to supplement Boltzmann-Gibbs statistics, not replace it. Tsallis statistics is a collection of mathematical functions and associated probability distributions that can be used to derive Tsallis distributions from the optimization of the Tsallis entropic form. They are also useful for characterizing complex, anomalous diffusion.
Tsallis Entropy
Tsallis entropy is a generalization of the standard Boltzmann-Gibbs entropy. Also introduced by Constantino Tsallis in the same 1988 paper, it is identical in form to the Havrda–Charvát structural α-entropy in information theory. From the year 2000 on, a wide variety of evidence has accumulated that confirms the experimental predictions of Tsallis entropy. A short list of the most notable confirmations is given below:
The distribution characterizing the motion of cold atoms in dissipative optical lattices, predicted in 2003 and observed in 2006
The fluctuations of the magnetic field in the solar wind enabled the calculation of the q-triplet (or Tsallis triplet)
The velocity distributions in driven dissipative dusty plasma
Spin glass relaxation
Trapped ion interacting with a classical buffer gas
High energy collisional experiments at LHC/CERN (CMS, ATLAS and ALICE detectors) and RHIC/Brookhaven (STAR and PHENIX detectors)$^1$
While not all of the implications of this theory can be completely known, it refines the Boltzmann-Gibbs definition of entropy, provides a further tool, Tsallis statistics, to explore non-extensive thermodynamics, and is a jumping-off point for much speculation about black holes, quantum gravity, and the holographic principle, to name a few examples.
Bekenstein and Black Hole Thermodynamics
It is unusual that Bekenstein used a non-extensive quantity, namely the mass squared, and Tsallis statistics would not have played a part in this. The reason for the choice, though, was really just a gut feeling on the part of Bekenstein.
It all started (so to speak) with Stephen Hawking's area theorem for black holes, which states that the total event-horizon area can never decrease. Right away (this was November of 1970), he noticed that his law bore an uncanny resemblance to the second law of thermodynamics. However, he thought it was nonsensical that this could be true - it didn't make sense that the two were related, and anyway, black holes were black.
Jacob Bekenstein was not convinced. To him, Hawking's insistence that the two were not really the same implied a violation of the second law of thermodynamics: matter thrown into a black hole would carry its entropy out of the observable universe. Every scientist sided with Hawking in this argument except John Wheeler, Bekenstein's PhD advisor (because, according to him, "your idea is crazy enough that it just might be right"). In his paper (which can be read here) Bekenstein says,
All the analogies we have mentioned are suggestive of a connection between thermodynamics and black-hole physics in general, and between entropy and black-hole area in particular. But so far the analogies have been of a purely formal nature, primarily because entropy and area have different dimensions. We shall remedy this deficiency...by constructing out of black-hole area an expression for black-hole entropy with the correct dimensions.
It should also be noted that the area theorem proposed by Hawking (the event horizon area of a black hole cannot decrease; it increases in most transformations of the black hole) describes increasing behavior that is reminiscent of the thermodynamic entropy of closed systems, and as such it is reasonable that the black-hole entropy should be a monotonically increasing function of the area (and the area itself is the simplest such function).
So matters remained until the next year, when Hawking showed that black holes do indeed emit radiation in the form of virtual particles, and the rest, as they say, is history. The other black-hole laws that were subsequently formulated were basically the laws of thermodynamics applied to black holes, resulting in black hole thermodynamics and the famous (sort of) equation $S_{BH} = \frac{k A}{4 l_p^2}$.
In the equation, $S_{BH}$ is the entropy of a black hole (or Bekenstein-Hawking, whichever you prefer), $k$ is Boltzmann's constant, $A$ is the area of the event horizon of the black hole, and $l_p$ is the Planck length, so $l^2_p$ is the Planck area. Interestingly, looking at your calculations, you use $l_p$ instead of $l^2_p$. I assume in your equation that you use $k_B$ as Boltzmann's constant, instead of $k$.
So, looking at the similarities between the laws of thermodynamics and the laws of black hole thermodynamics, I think it was a pretty reasonable (considering the results, which make sense) assumption, though of course we have the benefit of hindsight. The main consequences of these thoughts were in terms of information - one could ask how it is possible that all the information of the black hole is "coded" on its surface. This idea was formalized by the holographic principle. If this idea is true (and many theoretical calculations point to, at the very least, this making sense) the entropy of a black hole has to be proportional to the area of the black hole (@BobBee goes in-depth into this in his answer, and explains it very well).
The next step in black hole thermodynamics would be to formulate a theory of quantum gravity. Why? Well, black holes sit at the intersection where both gravity and quantum mechanics are important. They have a singularity, and all of our laws of physics break down there. There are still problems to be solved in black hole thermodynamics, but I think that non-extensive entropies are consistent with the theory of thermodynamics.
It should be noted, when talking about consistency or inconsistency here, that entropy has "changed" a decent amount since it was first formulated. From Clausius' definition, through Boltzmann and Gibbs, Claude Shannon (in terms of information theory), Bekenstein and Hawking (in terms of black holes) and Tsallis, entropy has been found to have many connections to many fields. As WetSavannaAnimal aka RodVance says in his answer, we need to broaden what we mean by extensive.
Thanks to Wolfram jonny and Rococo for their great comments. I used the website linked below for the quote and for the section on Tsallis entropy. I used this website for the information on Constantino Tsallis. I used this website for my information on Tsallis statistics. For the very curious, here is a website where if you scroll down a tad you'll see a pdf of Dr. Tsallis' paper.
For the section on Bekenstein, I mainly used Black Holes and Time Warps by Kip Thorne; a copy of it on google books can be found here. The relevant pages are 422 through 427. I also used this website. Bekenstein's paper is cited within the text; that is where the quote is from. Another very informative website is this one.
Finally, both of the other answers here are very good. Thanks to BobBee for explaining how the holographic principle fits in, and recent developments (and, of course, about how we need to generalize our definition of extensive). Thanks to WetSavannaAnimal aka RodVance for expanding upon BobBee's answer, your explanation was also very insightful and helpful.
$^1$Quote from this website
heather
Bekenstein published his ideas about black hole entropy in 1973. I don't think Tsallis statistics from 1988 could have played a role in his reasoning why the black hole entropy must be a non-extensive quantity. – asmaier Jul 26 '16 at 6:41
here is a nice example of how non-extensive statistics works better than the regular one in self-gravitating systems researchgate.net/publication/… – Wolphram jonny Jul 29 '16 at 23:35
@heather: I'm sorry, but I'm still missing the physical argument that made Bekenstein/Hawking think that the entropy of the black hole cannot be proportional to its mass (as is normally the case for entropy), but must be proportional to mass squared. – asmaier Jul 30 '16 at 20:50
Funny thing though, Gibbs' entropy was already defined for general systems (extensive or non-extensive) in 1902. Was Tsallis entropy really necessary, or is the entire motivation based on a misunderstanding of classic statistical mechanics? – Nanite Jul 30 '16 at 22:44
No. But I found a hint somewhere else. Somebody mentioned that mass is not conserved during a black hole merger; part of it is always converted to energy by gravitational radiation. So the mass of a merged black hole would be less than the mass of the two black holes before the merger. If entropy were proportional to mass, this would mean that the entropy of the black hole after the merger would be lower than the entropy of the two black holes before. This would be a violation of the law that entropy should never decrease. – asmaier Jul 31 '16 at 21:23
Actually it is an extensive quantity, but not in the way "extensive" is used in classical thermodynamics. It is proportional to area and not mass (or, equivalently for a classical object, volume). Entropy is normally (but not always) proportional to mass, or energy, which is approximately proportional to the number of elementary particles and thus to the logarithm of the number of possible states. For a black hole (BH), this count is instead proportional to the number of distinct Planck areas in the BH horizon, i.e., the state information is as though it were stored in the horizon, not the bulk.
We don't know how to calculate the entropy of a singularity; the physics breaks down, and it requires quantum gravity to calculate it. But Hawking calculated the temperature of a BH, discovering that a BH radiates as a black body at a temperature given by the inverse of its mass. From there it is easy to get the entropy.
The simplistic version then is that since $dS = dQ/T$, with $Q$ the heat absorbed by the BH and $T$ its black-body radiation-equivalent temperature, if you take Hawking's result as $T = \kappa/M$ for some constant $\kappa$, then $dS = M\,dQ/\kappa$. Since the heat absorbed into the BH is energy, it increases the BH mass (in natural units) as $dQ = dM$. So, substituting, $dS \propto M\,dM$, and then integrating, $S \propto M^2$. You can see other derivations, but when Hawking found his BH radiation temperature dependent on mass, there's then no two ways around it that they could think of, nor anybody else so far.
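The same integration can be written out explicitly in SI units. The sketch below assumes the standard Hawking temperature $T = \hbar c^3/(8\pi G k_B M)$ and takes $dQ = c^2\,dM$; it reproduces exactly the $S_{BH}$ formula quoted in the question.

```python
import sympy as sp

M, Mp = sp.symbols('M Mp', positive=True)
hbar, G, c, k_B = sp.symbols('hbar G c k_B', positive=True)

T = hbar * c**3 / (8 * sp.pi * G * k_B * Mp)   # Hawking temperature of a black hole of mass Mp
dQ_dM = c**2                                   # absorbed heat per unit mass, dQ = c^2 dM (SI units)

S = sp.integrate(dQ_dM / T, (Mp, 0, M))        # S = integral of dQ/T from 0 to M
print(sp.simplify(S))                          # 4*pi*G*k_B*M**2/(hbar*c), i.e. S proportional to M**2
```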
This of course was made rigorous by Hawking and Bekenstein: Hawking found that the radiation is black body with a temperature proportional to the so-called surface gravity at the horizon, which is inversely proportional to the BH mass, and Bekenstein's argument then leads to an entropy proportional to the horizon area, in fact equal to the number of Planck areas multiplied by $k_b/4$ ($k_b$ is the Boltzmann constant), as per the Hawking-Bekenstein equation, also discussed in @heather's answer, and written (slightly off) in the question. This relates BH entropy and physics to thermodynamics – and indeed BH entropy can grow or stay the same, but never decrease, by the second law of thermodynamics. The two BHs which merged on 9/14/15 had a final entropy greater than the sum of the two before, plus some entropy was radiated away in the gravitational radiation. The BH entropy laws also lead to equations for the maximum energy that may be extracted from merging BHs. Bekenstein also showed that the BH entropy is the maximum entropy possible for any region of space with the same volume as the BH.
But the question still remains: why, and how? How is it that the possible states of the BH are encoded in the horizon, if indeed the statistical interpretation holds? For only if it holds would the statistical interpretation be on firm ground, regardless of the thermodynamic relationships. So there have been attempts, and some success (but no proof or certainty yet), in two separate developments in physics.
One is the holographic principle of 't Hooft, Susskind, and others, proposed in the 1990s, still without proof, but with occasional developments. They conjecture that the quantum gravity solution of a spacetime in d+1 dimensions is in a 1-1 correspondence with a Conformal Field Theory (CFT) without gravity on the d-dimensional boundary of the spacetime.
They based this on generalizing the results of the AdS/CFT correspondence found by Maldacena and others, who established the correspondence for string quantum gravity in anti-de Sitter (AdS) spacetime. AdS is a vacuum solution of the Einstein field equations with a negative cosmological constant (de Sitter has a positive cosmological constant; it is the limit of our known universe as its age goes to infinity and the cosmological constant completely dominates). The AdS/CFT correspondence is also called gauge/gravity duality, "gauge" referring to the CFT. CFTs are quantum field theories.
The holographic conjecture and the AdS/CFT correspondence have led people to think that information on the quantum states of the bulk, in some cases or in general, is stored on its boundaries, or surfaces, similar to the way holography works for a 3D object. But there are also some counter-examples, and so in general it is still an interesting approach, but not well understood or accepted. Still, if the general case is valid, the idea is that the states of the BH would be encoded, or imprinted, on its horizon. Possibly that could provide a mechanism for the matter/energy that falls in, and the quantum information that was thought lost to the BH, to reside in the horizon and not be lost, and possibly later be encoded in the Hawking black-body (with something extra, then) radiation.
There is another more recent finding that indicates that the information on the state of the BH may be stored at the horizon. It's by Hawking, Perry and Strominger, from January of 2016. See the paper in arXiv, and the Phys. Rev. article in June 2016 (I've not read the Phys Rev version but the abstracts are the same).
What they claim is new, and is based on re-discovered asymptotic symmetries at conformal infinity. They claim that, based on those symmetries, BHs have conserved quantities they call soft hair. That is, BHs have some hair over and beyond the mass, angular momentum and charge proved by Hawking years ago. Where this differs from what he 'proved' before (there is a reason: the new symmetries break one of his assumptions back then, namely that the vacuum was non-degenerate, i.e. unique) is that BHs actually have what the authors call soft hair, very low energy hair that in the limit carries zero energy, but is still there. The soft hair is due to soft photons or soft gravitons that reside in the horizon. They claim, from part of their abstract:
This Letter gives an explicit description of soft hair in terms of soft gravitons or photons on the black hole horizon, and shows that complete information about their quantum state is stored on a holographic plate at the future boundary of the horizon. Charge conservation is used to give an infinite number of exact relations between the evaporation products of black holes which have different soft hair but are otherwise identical. It is further argued that soft hair which is spatially localized to much less than a Planck length cannot be excited in a physically realizable process, giving an effective number of soft degrees of freedom proportional to the horizon area in Planck units.
Thus they claim to have argued or shown that the entropy is due to the degrees of freedom, or possible states, of the horizon, in accord with BH thermodynamics. Still, they do state, in the body of the arXiv paper, that they have not proven that there are indeed enough degrees of freedom to really store all the information, and that work remains. Their degrees of freedom, the soft hair, come from new symmetries that they re-discovered at the conformal infinity of asymptotically flat spacetime (think of a BH in asymptotically flat spacetime), and which lead to new conserved quantities, the soft hair. Since the BH horizon is one boundary of the asymptotically flat spacetime (i.e., of the region outside the horizon), they can show, using Penrose diagrams, how to calculate the conserved soft hair over the BH horizon.
The soft 'charges' (the conserved quantities of the symmetries) they re-discovered had been identified by Weinberg in 1965, based on conformal symmetries at infinity called the BMS symmetries (after Bondi, van der Burg, Metzner and Sachs), found and published by the first three in one paper, and by Sachs in another, in 1962. See the Living Reviews article. Those four demonstrated in 1962 that there are additional symmetries at conformal infinity in asymptotically flat spacetime, besides the Poincare group. The BMS group was also used by them to define the BMS mass, in asymptotically flat spacetime, at conformal infinity. They found a family of symmetries called supertranslations, and another called superrotations, generalizations of the Poincare group which also include it, basically coming out of the conformally invariant structure. They also found that those symmetries are non-trivial, that they are physical and cannot be transformed away. Those symmetries turn out to be an infinite set of diffeomorphisms, and they lead to the soft hair. They occur for gravitational fields and for electromagnetic fields, so for soft photons and gravitons. Hawking et al. did their calculations for electromagnetic fields in a BH, but argued for the same effect for the gravitational field. They admit there's still a lot to calculate.
So, the most promising (with all the unknowns and yet-unproven theories) explanation for the entropy of a BH being proportional to the horizon area is that the information on the state of the BH, at a quantum level, is imprinted in its horizon. If so, it has to be proportional to area and not mass. [BTW, a personal aside: I'm proud to have had Sachs as my advisor in General Relativity, but it was about 5-6 years later, and he was no longer doing the gravitational wave work he did earlier, nor of course did I]
Bob Bee
I just have to ask: how does this answer the question? I don't think it really does. – heather Aug 2 '16 at 14:38
The question is what are the physical states that provide for the possible states of the system. Once you have that, you count, and if equiprobable you take p x log(p) and add them. It is usually extensive because the sum is over all possible states. So the question is where is that state information stored? In the BH horizon, if those hypotheses are right. Not in the bulk, but in the surface area. That was the point of it all. I tried to give details as to why some physicists claim that. – Bob Bee Aug 2 '16 at 19:00
The question is about the logic behind Bekenstein's choice to use a non-extensive quantity instead of an extensive quantity in his equation for black hole entropy. – heather Aug 2 '16 at 19:01
No. You are missing a lot. He figured that would do it. Hawking proved that indeed it reproduces the black body radiation spectrum and temperature where the entropy is proportional to area. Mass didn't do it. The others, from AdS to the soft hair and other things, back it up. – Bob Bee Aug 2 '16 at 19:08
I know that Hawking proved that, and everything. I'm trying to explain that to the OP! That is, however, the question. – heather Aug 2 '16 at 19:08
I'd like to add to BobBee's insightful answer, which one can summarize as: we need to broaden our notion of extensive for systems that Gibbs, Boltzmann and all the others could not have conceived of.
Another point, which is implicit in the notions that BobBee's answer discusses is that entropy is always extensive in the following broadened sense:
Shannon entropy is additive for the composite of statistically independent systems
simply by construction (the definition). For the pedantic, let's say we multiply the Shannon entropy by the Boltzmann constant, to make it reduce to classical thermodynamic entropy when this latter notion is used (those who say that thermodynamic entropy and Shannon entropy are not the same, please read my answer here about how they are postulated to be the same modulo the Boltzmann constant).
In causal set theory (of which I have only the most fleeting knowledge), I understand that the assumed "atoms" of spacetime causally influence one another, and you of course have pairs of these atoms that are entangled but which lie on either side of the Schwarzschild horizon: one of the pair is inside the black hole and therefore cannot be probed from the outside, whilst the other pair member is in our universe. The pair member observable in our universe therefore has "hidden" state variables, i.e. encoded in the state of the pair member inside the horizon, that add to its von Neumann entropy as we would perceive it outside the horizon. So the theory foretells an entropy proportional to the horizon area (the famous Hawking equation $S = k\,A/4$ in Planck units) because it is the area that is proportional to the number of such pairs that straddle the horizon.
WetSavannaAnimal
@ArtBrown That's a good point, and you are correct. I guess I tend to think of the modern conception of the notion as Shannon's because he was the first to clearly think of the notion of information content. Certainly one needs, for example, the noiseless coding theorem to show that the Boltzmann entropy is proportional to the minimum number of bits needed to encode, with arbitrarily small coding error probability, the full state of a system conditioned on the knowledge of the macrostate when that system comprises statistically independent constituents with identical probability distribution. – WetSavannaAnimal Aug 7 '16 at 6:47
Although the existing answers are extensive (sorry for the pun), I want to add the following thought, which I found in Susskind's book https://en.wikipedia.org/wiki/The_Black_Hole_War :
The reason why the entropy $S_{BH}$ of a black hole is proportional to $M^2$ is not that the entropy of a black hole is counted differently, but lies in the definition of the mass of the black hole.
When we talk about the entropy of a black hole, the mass $M$ means the so-called gravitational mass. But one can also define a so-called baryonic or free mass, by taking all the (baryonic) particles (Susskind is actually talking about strings) an object consists of, weighing them separately, and then adding up all the masses. Counterintuitively, the gravitational mass is lower than the free mass (because of the negative gravitational binding energy), and for very dense objects like a neutron star the gravitational mass $M$ is already quite a lot less (about 20 %) than the so-called baryonic or free mass $M_{b}$; see also
What is the binding energy of a neutron star?
https://www.astro.umd.edu/~miller/teaching/questions/neutron.html
Susskind explains that for a black hole this effect is even more pronounced, namely, as I understand it, $$ M = \sqrt{M_b} $$ And so we come to the conclusion that the entropy $S$ of a black hole only looks like a non-extensive quantity because in the Bekenstein-Hawking formula we relate it to the gravitational mass. In fact, if we relate it to the free mass (gravitational mass + gravitational binding energy), the entropy of a black hole is still an extensive quantity: $$ S \sim M^2 \sim M_b $$
Adobe Photoshop CS6 HACK Free License Key Free For PC
Adobe Photoshop CS6 Crack+ Free [Mac/Win] [Latest]
With a growing interest in the image manipulation tool, Photoshop has become a major tool used by web designers to work with images and create layouts for web designs.
There are over 900,000 free tutorials and online resources for Photoshop available on the Web, ranging from beginner guides to advanced learning materials. Photoshop tutorials are designed to teach Photoshop, but also provide Photoshop users with a wealth of resources for the program.
A Checklist of Photoshop Tutorials
A checklist will help you decide which resources are better suited to your style of learning.
Sort Tutorials by Popularity
Search for Tutorials
Resources are listed by relevancy to the core tutorials, and as you change the filters the results change and are ordered by popularity. Below you can search for tutorials.
Using Photoshop Tutorials
Adobe Photoshop Tutorials
Photoshop is a renowned image editing tool with extensive features that empower designers to manipulate images to produce the best visual effects.
As every amateur online sports bettor can tell you, October is a tricky month for sports betting. As the NHL's playoff schedule unfolds in the coming weeks, the mounting tension in sports betting markets will only increase. Such dynamic races for the Cup will be on display online, as competitors search for the best bet options to jump into the race. As TheScore.com's NHL betting outlook page will outline (see below), there are several NFL odds that could be sneaky plays in this chase.
The worst part is that we could have 5 or 6 of those teams in the playoffs! — Grosjean (@GrosjeanHenry) April 17, 2016
NFL Race to The Playoffs
With a total of 10 teams in the NFL playoff picture, bettors in our market see a top three playoff division between the NFC and AFC. The NFC West and AFC South are both perceived as top-level playoff contenders, but pundits are predicting a wide-open divisional race. While the NFC West and AFC South are expected to have strong showings over the next couple of weeks, the popularity contest between those divisions has NFL playoff betting
Adobe Photoshop CS6 Free
But new users often don't know where to start with Photoshop Elements. They want to use its powerful tools, but can't see how to get started.
In this guide, we have narrowed the focus on using the program's first-rate tools for design: Edit, Image, Adjustments, and Adjustment Layers, to produce original and creative content, such as black and white pictures, animated GIFs, and anime avatars.
Edit a photo with Photoshop Elements
Everything starts with an image file that needs editing. The next sections will help you get started on that. Once you have completed the task of editing an image, check out the sections on using Photoshop Elements to design a new image from scratch.
(Image credits: Joshua Young)
By default, when you open Photoshop Elements, you are presented with an empty canvas. It's time to create your first image with this program. You can use one of these methods:
Crop from a photo with Capture, the Elements tool built in (Image credit: Adobe)
With a photo, select Start, and Photoshop Elements opens a new window that enables you to crop the photo's dimensions. You can crop the photo so that it's bigger than the default dimensions in the window, creating a new image.
The new canvas is larger than your photo. It is filled with the photo's background. To separate your photo from the background, hold down the Ctrl key, click the Canvas Tools on the top toolbar, and then click either the Marquee or Lasso tools. With your mouse pointer over your photo, click and drag a border around your image.
When you are done, click OK to return to your original image. You now have a new empty canvas that is the same size as your photo.
(Image credits: Adobe)
Let's use the tools that are built into Photoshop Elements to create a black and white version of the image below.
To make it easier to understand, you can split the process of editing into three sections:
Tools for enhancing the photo
Tools for designing a new image
Tools for retouching an existing image
1. Tools for enhancing the photo
1.1. Selective Color
Selective Color is a Photoshop Elements tool that is good at making images look more professional and polished. It can help you clean up colors in an
Adobe Photoshop CS6 Crack
. [@r09].
One important class of gauge theories are the strongly coupled ones. The models of holographic QCD [@r10; @r11; @r12; @r13] (for recent reviews see Refs. [@r14; @r15; @r16]) correspond to IR conformal theories. In this context it is natural to think about the dual of the $\mathcal N=4$ super Yang-Mills theory, where the chiral symmetry is broken by the corresponding confinement phase. Let us start with the massless scalar field dual to the photon and the Yang-Mills $SU(N)$ field in the adjoint representation. The static quark-antiquark potential in this theory is [@r15] $$V(r)=-\frac{4\pi}{3} \alpha_s\frac{1}{r}+\frac{\pi^2}{3} \alpha_s^2\frac{1}{r^2}+\ldots~,$$ which in leading order in the $\alpha_s$ expansion is reminiscent of $V(r)$ in four-dimensional asymptotically flat anti-de Sitter spacetime [@r17], $V(r)=\frac{\pi}{12} e^{ -2\, \kappa \,r}-\frac{\kappa^2}{6} e^{ -\, 4\, \kappa\, r}+\ldots$ for a large negative cosmological constant $\kappa^{ -1}$. Therefore, it is not unreasonable to expect that the same holds for the vector and the matter field (i.e., the static potential for the dual of the $N$ M2-branes at the origin of our type IIA background) in the confining phase of the $\mathcal N=4$ theory.
A more phenomenological approach, which was originally proposed in Ref. [@r18], led to a universal correction to $V(r)$ in the confining phase of the theory, based on the idea of "deconfinement, chiral symmetry restoration and restoration of the rotational invariance of the ground state of a quark-antiquark system" [@r18]. The result is $$V(r)=\frac{\pi}{3}\left(\alpha_s-\frac{\
Fibroblast growth factor 23 aggravates diabetic cardiomyopathy.
Diabetic cardiomyopathy is a frequent cause of mortality in patients with diabetic nephropathy and retinopathy. Fibroblast growth factor 23 (FGF23) is a regulator of phosphate and vitamin D metabolism in the kidney and parathyroid gland. In the present study, we investigated whether FGF23 is associated with diabetic cardiomyopathy. Twenty-five diabetic patients (diabetes duration: 28 +/- 13 months) and 25 healthy subjects served as comparison. Patients were divided into three groups: five patients with type 2 diabetes without diabetic cardiomyopathy, nine patients with type 2 diabetes with diabetic cardiomyopathy and 11 patients with impaired glucose tolerance. Plasma FGF23 was increased in the diabetic group compared with the healthy control group (105 +/- 22 versus 28 +/- 5 pg/ml; P File photo
NEW DELHI: Drug dealers have been sharing a unique, and lucrative, method of inviting bulk orders of medicines from buyers online by showing photograph of a jockey smoking a bidi with a pack of smokes in his hand.
"It appears that the drug dealers are trying to change the audience of online buyers," said Sanjeev Sharma, president of the Directorate General of Drugs, the central drug watchdog.
"Bidis are priced quite high but why would anyone pay so much for a pack of cigarettes instead of buying the much cheaper e-cigarettes? They are not aware that bidis are laced with hashish," added Sharma, who is also director of Research and Development at the Department of AYUSH (Ay
For a long time now, the big story in gaming hardware has been the introduction of more powerful hardware into the consumer space. Every year, the new generation of consoles are targeted at the bleeding edge hardware in the hopes of giving their target audience a leap in visual fidelity and gameplay experience. It's a similar story for PC gaming hardware, with the latest games moving towards higher resolutions and multiple monitor setups that fit comfortably on laptops.
With that in mind, I've taken the time to test several configurations across a few popular operating systems to find out how powerful a gaming
The $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z_5}, +, .)$
2. Biological mathematical model
3. The canonical base of the $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$
This is a formal introduction to the genetic code $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z_5}, +, .)$. This mathematical model is defined based on the physicochemical properties of DNA bases (see previous post). This introduction can be complemented with a Wolfram Computable Document Format (CDF) named IntroductionToZ5GeneticCodeVectorSpace.cdf available on GitHub. This is a graphical user interface with an interactive didactic introduction to the mathematical biology background that is explained here. To interact with a CDF, users will need the Wolfram CDF Player or Mathematica. The Wolfram CDF Player is freely available (easy installation on Windows OS and on Linux OS).
If the Watson-Crick base pairings are symbolically expressed by means of the sum "+" operation, in such a way that G + C = C + G = D and U + A = A + U = D hold, then this requirement leads us to define an additive group $(\mathfrak{B}, +)$ on the set of five DNA bases $\mathfrak{B}$. Explicitly, it was required that bases with the same number of hydrogen bonds in the DNA molecule and different chemical types be algebraic inverses in the additive group defined on the set of DNA bases $\mathfrak{B}$. In fact, eight sum tables (like the one shown below), which satisfy these constraints, can be defined on eight ordered sets: {D, A, C, G, U}, {D, U, C, G, A}, {D, A, G, C, U}, {D, U, G, C, A}, {G, A, U, C}, {G, U, A, C}, {C, A, U, G} and {C, U, A, G} [1,2]. The sets originated by these base orders are called the strong-weak ordered sets of bases [1,2] since, for each one of them, the algebraic-complementary bases are DNA complementary bases as well, pairing with three hydrogen bonds (strong, G:::C) and two hydrogen bonds (weak, A::U). We shall denote this collection of sets SW.
The set of extended base triplets is defined as $\mathfrak{B}^3$ = {XYZ | X, Y, Z $\in\mathfrak{B}$}, where, to keep the usual biological notation for codons, the triplet of letters $XYZ\in\mathfrak{B}^3$ denotes the vector $(X,Y,Z)\in\mathfrak{B}^3$ and $\mathfrak{B} =$ {D, A, C, G, U}. An Abelian group on the set of extended triplets can be defined as the direct third power of the group:
$(\mathfrak{B}^3,+) = (\mathfrak{B},+)×(\mathfrak{B},+)×(\mathfrak{B},+)$
where X, Y, Z $\in\mathfrak{B}$, and the operation "+" is as shown in the table [2]. Next, for all elements $\alpha\in\mathbb{Z}_{(+)}$ (the set of non-negative integers) and for all codons $XYZ\in(\mathfrak{B}^3,+)$, the element:
$\alpha \bullet XYZ = \overbrace{XYZ+XYZ+…+XYZ}^{\hbox{$\alpha$ times}}\in(\mathfrak{B}^3,+)$ is well defined. In particular, $0 \bullet X =$ DDD for all $X\in(\mathfrak{B}^3,+)$. As a result, $(\mathfrak{B}^3,+)$ is a three-dimensional (3D) $\mathbb{Z_5}$-vector space over the field $(\mathbb{Z_5}, +, .)$ of the integers modulo 5, which is isomorphic to the Galois field GF(5). Notice that the Abelian groups $(\mathbb{Z}_5, +)$ and $(\mathfrak{B},+)$ are isomorphic. For the sake of brevity, the same notation $\mathfrak{B}^3$ will be used to denote the group $(\mathfrak{B}^3,+)$ and the vector space defined on it.
+ D A C G U
D D A C G U
A A C G U D
C C G U D A
G G U D A C
U U D A C G
This operation is only one of the eight sum operations that can be defined on each one of the ordered sets of bases from SW.
Next, in the vector space $\mathfrak{B}^3$, the vectors (extended codons) $e_1 =$ ADD, $e_2 =$ DAD and $e_3 =$ DDA are linearly independent, i.e., $\sum\limits_{i=1}^3 c_i e_i =$ DDD with $c_1, c_2, c_3 \in\mathbb{Z_5}$ implies $c_1=0, c_2=0$ and $c_3=0$. Moreover, the representation of every extended triplet $XYZ\in\mathfrak{B}^3$ over the field $\mathbb{Z_5}$ as $XYZ=xe_1+ye_2+ze_3$ is unique, and the generating set $e_1, e_2, e_3$ is a canonical base for the $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$. It is said that the elements $x, y, z \in\mathbb{Z_5}$ are the coordinates of the extended triplet $XYZ\in\mathfrak{B}^3$ in the canonical base ($e_1, e_2, e_3$) [3].
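The construction above is easy to experiment with on a computer. Below is a minimal Python sketch (not part of the original post); the only assumption is the identification D, A, C, G, U with 0, 1, 2, 3, 4 in $\mathbb{Z}_5$, which reproduces the sum table shown above.

```python
# Minimal sketch of the (B, +) group and the Z5-vector space B^3 described above.
# Assumption: the base order D, A, C, G, U is identified with 0, 1, 2, 3, 4 in Z5,
# which reproduces the Cayley table given in the text.

BASES = "DACGU"                       # index of each base is its element of Z5
TO_INT = {b: i for i, b in enumerate(BASES)}

def add_bases(x, y):
    """Sum of two bases according to the table (addition mod 5)."""
    return BASES[(TO_INT[x] + TO_INT[y]) % 5]

def add_triplets(t1, t2):
    """Componentwise sum of two extended triplets in (B^3, +)."""
    return "".join(add_bases(a, b) for a, b in zip(t1, t2))

def scalar_mul(alpha, triplet):
    """alpha . XYZ = XYZ + ... + XYZ (alpha times); alpha = 0 gives DDD."""
    return "".join(BASES[(alpha * TO_INT[b]) % 5] for b in triplet)

def coordinates(triplet):
    """Coordinates (x, y, z) of XYZ in the canonical base e1=ADD, e2=DAD, e3=DDA."""
    return tuple(TO_INT[b] for b in triplet)

if __name__ == "__main__":
    print(add_bases("A", "C"))         # G, as in the table
    print(scalar_mul(0, "ACG"))        # DDD, the neutral element
    x, y, z = coordinates("ACG")
    e1, e2, e3 = "ADD", "DAD", "DDA"
    # Reconstruct ACG as x*e1 + y*e2 + z*e3 to check the coordinate representation
    rebuilt = add_triplets(add_triplets(scalar_mul(x, e1), scalar_mul(y, e2)), scalar_mul(z, e3))
    print((x, y, z), rebuilt)          # (1, 2, 3) ACG
```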
José M V, Morgado ER, Sánchez R, Govezensky T. The 24 Possible Algebraic Representations of the Standard Genetic Code in Six or in Three Dimensions. Adv Stud Biol, 2012, 4:119–52.
Sánchez R, Grau R. An algebraic hypothesis about the primeval genetic code architecture. Math Biosci, 2009, 221:60–76.
ZOJ Problem Set - 4052
Time Limit: 2 Seconds Memory Limit: 65536 KB
DreamGrid has $n$ friends which are conveniently numbered from $1$ to $n$. They can be divided into two groups (possibly empty) such that:
Every pair of friends in the first group have to know each other.
Every pair of friends in the second group must not know each other.
Now, given the pairs of friends who know each other, DreamGrid would like to know the number of ways to find a group of friends of maximum size such that every pair of friends in the group know each other, and he would also like to know the number of ways to find a group of friends of maximum size such that every pair of friends in the group do not know each other.
There are multiple test cases. The first line of input contains an integer $T$, indicating the number of test cases. For each test case:
The first line contains two integers $n$ and $m$ $(1 \le n \le 10^5, 0 \le m \le 10^5)$ -- the number of friends and the number of pairs of friends who know each other.
The $i$-th of the following $m$ lines contains two integers $a_i$ and $b_i$ ($1 \le a_i, b_i \le n, a_i \ne b_i$), which denotes that the $a_i$-th friend and the $b_i$-th friend know each other. Note that every unordered pair of ($a, b$) will appear at most once.
It is guaranteed that neither the sum of all $n$ nor the sum of all $m$ exceeds $2 \times 10^6$.
For each test case, output two integers separated by a single space.
The first integer indicates the number of ways to find a group of friends of maximum size such that every pair of friends in this group know each other.
The second integer indicates the number of ways to find a group of friends of maximum size such that every pair of friends in the group do not know each other.
Sample Output
Author: ZHANG, Yong
Source: The 2018 ACM-ICPC Asia Qingdao Regional Contest, Online
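As a purely illustrative aid (and not a solution that meets the stated limits of $n, m \le 10^5$), the two requested quantities can be checked by brute force on very small graphs. The sketch below enumerates all subsets, so it is only usable for roughly $n \le 20$; all names in it are my own and do not come from the problem setters.

```python
# Brute-force illustration of the two requested counts for very small n.
# This only clarifies the definitions; it is far too slow for the stated limits.
from itertools import combinations

def count_extreme_groups(n, edges):
    adj = [[False] * (n + 1) for _ in range(n + 1)]
    for a, b in edges:
        adj[a][b] = adj[b][a] = True

    def count(want_know):
        # want_know=True: every pair knows each other (clique)
        # want_know=False: every pair does not know each other (independent set)
        ways = 1                       # placeholder for the empty group
        for size in range(1, n + 1):
            cnt = 0
            for group in combinations(range(1, n + 1), size):
                if all(adj[u][v] == want_know for u, v in combinations(group, 2)):
                    cnt += 1
            if cnt:
                ways = cnt             # keep the count for the largest feasible size
        return ways

    return count(True), count(False)

if __name__ == "__main__":
    # Triangle 1-2-3 plus an isolated vertex 4: one maximum clique {1,2,3},
    # and three maximum independent sets {1,4}, {2,4}, {3,4}.
    print(count_extreme_groups(4, [(1, 2), (2, 3), (1, 3)]))  # (1, 3)
```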
February 2020, 13(1): 187-210. doi: 10.3934/krm.2020007
Global analytic solutions of the semiconductor Boltzmann-Dirac-Benney equation with relaxation time approximation
Marcel Braukhoff
Institute for Analysis and Scientific Computing, Vienna University of Technology, Wiedner Hauptstrasse 8-10, 1040 Wien, Austria
Received March 2019 Revised July 2019 Published December 2019
Fund Project: The author was partially funded by the Austrian Science Fund (FWF) project F 65
The semiconductor Boltzmann-Dirac-Benney equation
$$\partial_t f + \nabla\epsilon(p)\cdot\nabla_x f - \nabla \rho_f(x,t)\cdot\nabla_p f = \frac{\mathcal F_\lambda(p)-f}{\tau}, \quad x\in\mathbb{R}^d,\ p\in B, \ t>0,$$
is a model for ultracold atoms trapped in an optical lattice. The global existence of a solution is shown for small $\tau>0$, assuming that the initial data are analytic and sufficiently close to the Fermi-Dirac distribution $\mathcal F_\lambda$. This system contains an interaction potential $\rho_f := \int_B f\,dp$ that is significantly more singular than the Coulomb potential, which causes major structural difficulties in the analysis.
The key technique is based on the ideas of Mouhot and Villani, using Gevrey-type norms which vary over time. The global existence result for small initial data is also generalized to
$$\partial_t f + Lf = Q(f),$$
where $L$ is the generator of a $C^0$-group with $\|e^{tL}\|\leq Ce^{\omega t}$ for all $t\in\mathbb{R}$ and some $\omega>0$, and where further analytic properties of $Q$ are assumed.
Keywords: Vlasov-Dirac-Benney equation, Vlasov equation, optical lattice, analytic norms.
Mathematics Subject Classification: Primary: 35F25, 35F20, 35Q20; Secondary: 35Q83.
Citation: Marcel Braukhoff. Global analytic solutions of the semiconductor Boltzmann-Dirac-Benney equation with relaxation time approximation. Kinetic & Related Models, 2020, 13 (1) : 187-210. doi: 10.3934/krm.2020007
N. W. Ashcroft and N. D. Mermin, Solid state physics, Physics Today, 30 (1977), P61. doi: 10.1063/1.3037370. Google Scholar
A. Al-Masoudi, S. Dörscher, S. Häfner, U. Sterr and C. Lisdat, Noise and instability of an optical lattice clock, Phys. Rev. A, 92 (2015), 063814, 7 pages. doi: 10.1103/PhysRevA.92.063814. Google Scholar
N. B. Abdallah and P. Degond, On a hierarchy of macroscopic models for semiconductors, J. Math. Phys., 37 (1996), 3306-3333. doi: 10.1063/1.531567. Google Scholar
E. Bloch, Ultracold quantum gases in optical lattices, Nature Physics, 1 (2005), 23-30. doi: 10.1038/nphys138. Google Scholar
M. Braukhoff, Effective Equations for a Cloud of Ultracold Atoms in an Optical Lattice, Ph.D thesis, University of Cologne, Germany, 2017. Google Scholar
M. Braukhoff, Semiconductor Boltzmann-Dirac-Benney equation with a BGK-type collision operator: Existence of solutions vs. ill-posedness, Kinet. Relat. Models, 12 (2019), 445–482, arXiv 1711.06015 [math.AP]. doi: 10.3934/krm.2019019. Google Scholar
M. Braukhoff and A. Jüngel, Energy-transport systems for optical lattices: Derivation, analysis, simulation, Mathematical Models and Methods in Applied Sciences, 28 (2018), 579-614. doi: 10.1142/S021820251850015X. Google Scholar
C. Bardos and N. Besse, The Cauchy problem for the Vlasov-Dirac-Benney equation and related issues in fluid mechanics and semi-classical limits,, Kinet. Relat. Models, 6 (2013), 893-917. doi: 10.3934/krm.2013.6.893. Google Scholar
C. Bardos and N. Besse, Hamiltonian structure, fluid representation and stability for the Vlasov-Dirac-benney equation, In Hamiltonian Partial Differential Equations and Applications, 1–30, Fields Inst. Commun., 75, Fields Inst. Res. Math. Sci., Toronto, ON, 2015. doi: 10.1007/978-1-4939-2950-4_1. Google Scholar
C. Bardos and N. Besse, Semi-classical limit of an infinite dimensional system of nonlinear Schrödinger equations,, Bull. Inst. Math., Acad. Sin. (N.S.), 11 (2016), 43-61. Google Scholar
C. Bardos and A. Nouri, A Vlasov equation with Dirac potential used in fusion plasmas, J. Math. Phys., 53 (2012), 115621, 16pp. doi: 10.1063/1.4765338. Google Scholar
O. Dutta, M. Gajda, P. Hauke, M. Lewenstein, D.-S. Lühmann, B. Malomed, T. Sowinski and J. Zakrzewski, Non-standard Hubbard models in optical lattices: A review, Rep. Prog. Phys., 78 (2015), 066001, 47 pages. doi: 10.1088/0034-4885/78/6/066001. Google Scholar
K.-J. Engel and R. Nagel, One-Parameter Semigroups for Linear Evolution Equations, Graduate Texts in Mathematics, Springer, 2000. Google Scholar
D. Han-Kwan and T. T. Nguyen, Ill-posedness of the hydrostatic Euler and singular Vlasov equations, Arch. Rational Mech. Anal., 221 (2016), 1317-1344. doi: 10.1007/s00205-016-0985-z. Google Scholar
D. Han-Kwan and F. Rousset, Quasineutral limit for Vlasov-Poisson with Penrose stable data, Ann. Sci. cole Norm. Sup., 49 (2016), 1445-1495. doi: 10.24033/asens.2313. Google Scholar
P.-E. Jabin and A. Nouri, Analytic solutions to a strongly nonlinear Vlasov equation,, C. R., Math., Acad. Sci. Paris, 349 (2011), 541-546. doi: 10.1016/j.crma.2011.03.024. Google Scholar
A. Jaksch, Optical lattices, ultracold atoms and quantum information processing, Contemp. Phys., 45 (2004), 367-381. doi: 10.1080/00107510410001705486. Google Scholar
A. Jüngel, Transport Equations for Semiconductors, Lecture Notes in Physics, 773. Springer-Verlag, Berlin, 2009. doi: 10.1007/978-3-540-89526-8. Google Scholar
T. Kato, Perturbation Theory for Linear Operators, Die Grundlehren der mathematischen Wissenschaften, Band 132 Springer-Verlag New York, Inc., New York, 1966. Google Scholar
S. Mandt, Transport and Non-Equilibrium Dynamics in Optical Lattices. From Expanding Atomic Clouds to Negative Absolute Temperatures, PhD thesis, University of Cologne, 2012. Google Scholar
G. Metivier, Remarks on the well-posedness of the nonlinear Cauchy problem, Geometric Analysis of PDE and Several Complex Variables, 337–356, Contemp. Math., 368, Amer. Math. Soc., Providence, RI, 2005. doi: 10.1090/conm/368/06790. Google Scholar
C. Mouhot and C. Villani, On Landau damping,, Acta Math., 207 (2011), 29-201. doi: 10.1007/s11511-011-0068-9. Google Scholar
N. Ramsey, Thermodynamics and statistical mechanics at negative absolute temperature, Phys. Rev., 103 (1956), 20-28. doi: 10.1103/PhysRev.103.20. Google Scholar
A. Rapp, S. Mandt and A. Rosch, Equilibration rates and negative absolute temperatures for ultracold atoms in optical lattices, Phys. Rev. Lett., 105 (2010), 220405, 4 pages. doi: 10.1103/PhysRevLett.105.220405. Google Scholar
U. Schneider, L. Hackermüller, J. Ph. Ronzheimer, S. Will, S. Braun, T. Best, I. Bloch, E. Demler, S. Mandt, D. Rasch and A. Rosch, Fermionic transport and out-of-equilibrium dynamics in a homogeneous Hubbard model with ultracold atoms, Nature Physics, 8 (2012), 213-218. doi: 10.1038/nphys2205. Google Scholar
June 2018, 15(3): 629-652. doi: 10.3934/mbe.2018028
Optimal individual strategies for influenza vaccines with imperfect efficacy and durability of protection
Francesco Salvarani 1 and Gabriel Turinici 2
Université Paris-Dauphine, PSL Research University, CNRS UMR 7534, CEREMADE, 75016 Paris, France, & Università degli Studi di Pavia, Dipartimento di Matematica, 27100 Pavia, Italy,
Université Paris-Dauphine, PSL Research University, CNRS UMR 7534, CEREMADE, 75016 Paris, France, & Institut Universitaire de France, Paris, France
* Corresponding author: Gabriel Turinici
Received March 2017 Accepted July 29, 2017 Published December 2017
We analyze a model of an agent-based vaccination campaign against influenza with imperfect vaccine efficacy and durability of protection. We prove the existence of a Nash equilibrium by Kakutani's fixed point theorem in the context of non-persistent immunity. Subsequently, we propose and test a novel numerical method to find the equilibrium. Various issues of the model are then discussed, such as the dependence of the optimal policy on the imperfections of the vaccine, as well as the best vaccination timing. The numerical results show that, under specific circumstances, some counter-intuitive behaviors are optimal, such as, for example, an increase of the fraction of vaccinated individuals when the efficacy of the vaccine is decreasing up to a threshold. The possibility of finding optimal strategies at the individual level can help public health decision makers in designing efficient vaccination campaigns and policies.
Keywords: Mean Field Games, SIR model, vaccination persistence, durability of protection, limited immunity, vaccine effectiveness, influenza.
Mathematics Subject Classification: Primary: 92D30; Secondary: 92C42, 60J20, 91A13.
Citation: Francesco Salvarani, Gabriel Turinici. Optimal individual strategies for influenza vaccines with imperfect efficacy and durability of protection. Mathematical Biosciences & Engineering, 2018, 15 (3) : 629-652. doi: 10.3934/mbe.2018028
A. Abakuks, Optimal immunisation policies for epidemics, Advances in Appl. Probability, 6 (1974), 494-511. doi: 10.1017/S0001867800039963. Google Scholar
R. M. Anderson and R. M. May, Infectious Diseases of Humans Dynamics and Control, Oxford University Press, 1992.Google Scholar
J. Appleby, Getting a flu shot? it may be better to wait, CNN, September 15, http://edition.cnn.com/2016/09/26/health/wait-for-flu-shot/index.html, 2016.Google Scholar
N. Bacaër, A Short History of Mathematical Population Dynamics, Springer-Verlag London, Ltd., London, 2011. doi: 10.1007/978-0-85729-115-8. Google Scholar
Y. Bai, N. Shi, Q. Lu, L. Yang, Z. Wang, L. Li, H. Han, D. Zheng, F. Luo, Z. Zhang and X. Ai, Immunological persistence of a seasonal influenza vaccine in people more than 3 years old, Human Vaccines & Immunotherapeutics, 11 (2015), 1648-1653. doi: 10.1080/21645515.2015.1037998. Google Scholar
C. T. Bauch and D. J. D. Earn, Vaccination and the theory of games, Proc. Natl. Acad. Sci. USA, 101 (2004), 13391-13394 (electronic). doi: 10.1073/pnas.0403823101. Google Scholar
C. T. Bauch, A. P. Galvani and D. J. D. Earn, Group interest versus self-interest in smallpox vaccination policy, Proceedings of the National Academy of Sciences, 100 (2003), 10564-10567. doi: 10.1073/pnas.1731324100. Google Scholar
C. T. Bauch, Imitation dynamics predict vaccinating behaviour, Proc Biol Sci, 272 (2005), 1669-1675. doi: 10.1098/rspb.2005.3153. Google Scholar
E. A. Belongia, M. E. Sundaram, D. L. McClure, J. K. Meece, J. Ferdinands and J. J. VanWormer, Waning vaccine protection against influenza a (h3n2) illness in children and older adults during a single season, Vaccine, 33 (2015), 246-251. doi: 10.1016/j.vaccine.2014.06.052. Google Scholar
Adrien Blanchet and Guillaume Carlier, From Nash to Cournot-Nash equilibria via the Monge-Kantorovich problem Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 372 (2014), 20130398, 11pp. doi: 10.1098/rsta.2013.0398. Google Scholar
R. Breban, R. Vardavas and S. Blower, Mean-field analysis of an inductive reasoning game: Application to influenza vaccination Phys. Rev. E, 76 (2007), 031127. doi: 10.1103/PhysRevE.76.031127. Google Scholar
D. L. Brito, E. Sheshinski and M. D. Intriligator, Externalities and compulsary vaccinations, Journal of Public Economics, 45 (1991), 69-90. doi: 10.1016/0047-2727(91)90048-7. Google Scholar
B. Buonomo, A. d'Onofrio and D. Lacitignola, Global stability of an {SIR} epidemic model with information dependent vaccination, Mathematical Biosciences, 216 (2008), 9-16. doi: 10.1016/j.mbs.2008.07.011. Google Scholar
P. Cardaliaguet and S. Hadikhanloo, Learning in mean field games: The fictitious play, ESAIM Control Optim. Calc. Var., 23 (2017), 569-591. doi: 10.1051/cocv/2016004. Google Scholar
F. Carrat and A. Flahault, Influenza vaccine: The challenge of antigenic drift, Vaccine, 25 (2007), 6852-6862. doi: 10.1016/j.vaccine.2007.07.027. Google Scholar
F. H. Chen, A susceptible-infected epidemic model with voluntary vaccinations, Journal of Mathematical Biology, 53 (2006), 253-272. doi: 10.1007/s00285-006-0006-1. Google Scholar
M. L. Clements and B. R. Murphy, Development and persistence of local and systemic antibody responses in adults given live attenuated or inactivated influenza a virus vaccine, Journal of Clinical Microbiology, 23 (1986), 66-72. Google Scholar
C. T. Codeço, P. M. Luz, F. Coelho, A. P Galvani and C. Struchiner, Vaccinating in disease-free regions: a vaccine model with application to yellow fever, Journal of The Royal Society Interface, 4 (2007), 1119-1125. Google Scholar
F. Coelho and C. T. Codeço, Dynamic modeling of vaccinating behavior as a function of individual beliefs PLoS Comput Biol, 5 (2009), e1000425, 10pp. doi: 10.1371/journal.pcbi.1000425. Google Scholar
M.-G. Cojocaru, Dynamic equilibria of group vaccination strategies in a heterogeneous population, Journal of Global Optimization, 40 (2008), 51-63. doi: 10.1007/s10898-007-9204-7. Google Scholar
R. B. Couch and J. A. Kasel, Immunity to influenza in man, Annual Reviews in Microbiology, 37 (1983), 529-549. doi: 10.1146/annurev.mi.37.100183.002525. Google Scholar
N. Cox, Influenza seasonality: Timing and formulation of vaccines, Bulletin of the World Health Organization, 92 (2014), 311-311. doi: 10.2471/BLT.14.139428. Google Scholar
O. Diekmann and J. A. P. Heesterbeek, Mathematical Epidemiology of Infectious Diseases. Model Building, Analysis and Interpretation, Wiley Series in Mathematical and Computational Biology. John Wiley & Sons, Ltd., Chichester, 2000. Google Scholar
Josu Doncel, Nicolas Gast, and Bruno Gaujal, Mean-Field Games with Explicit Interactions, working paper or preprint, 2016.Google Scholar
A. d'Onofrio, P. Manfredi and E. Salinelli, Vaccinating behaviour, information, and the dynamics of SIR vaccine preventable diseases, Theoretical Population Biology, 71 (2007), 301-317. Google Scholar
A. d'Onofrio, P. Manfredi and E. Salinelli, Fatal SIR diseases and rational exemption to vaccination, Mathematical Medicine and Biology, 25 (2008), 337-357. Google Scholar
P. Doutor, P. Rodrigues, M. do Céu Soares and F. A. C. C. Chalub, Optimal vaccination strategies and rational behaviour in seasonal epidemics, Journal of Mathematical Biology, 73 (2016), 1437-1465. doi: 10.1007/s00285-016-0997-1. Google Scholar
J. Dushoff, J. B Plotkin, C. Viboud, D. J. D. Earn and L. Simonsen, Mortality due to influenza in the United States-an annualized regression approach using multiple-cause mortality data, American journal of epidemiology, 163 (2006), 181-187. doi: 10.1093/aje/kwj024. Google Scholar
J. M. Ferdinands, A. M. Fry, S. Reynolds, J. G. Petrie, B. Flannery, M. L. Jackson and E. A. Belongia, Intraseason waning of influenza vaccine protection: Evidence from the us influenza vaccine effectiveness network, 2011-2012 through 2014-2015, Clinical Infectious Diseases, 64 (2017), p544. Google Scholar
P. E. M. Fine and J. A. Clarkson, Individual versus public priorities in the determination of optimal vaccination policies, American Journal of Epidemiology, 124 (1986), 1012-1020. doi: 10.1093/oxfordjournals.aje.a114471. Google Scholar
P. J. Francis, Optimal tax/subsidy combinations for the flu season, Journal of Economic Dynamics and Control, 28 (2004), 2037-2054. doi: 10.1016/j.jedc.2003.08.001. Google Scholar
D. Fudenberg and D. K. Levine, The Theory of Learning in Games volume 2 of MIT Press Series on Economic Learning and Social Evolution, MIT Press, Cambridge, MA, 1998. Google Scholar
S. Funk, M. Salathé and V. A. A. Jansen, Modelling the influence of human behaviour on the spread of infectious diseases: A review, Journal of The Royal Society Interface, 7 (2010), 1247-1256. doi: 10.1098/rsif.2010.0142. Google Scholar
A. P. Galvani, T. C. Reluga and G. B. Chapman, Long-standing influenza vaccination policy is in accord with individual self-interest but not with the utilitarian optimum, Proceedings of the National Academy of Sciences, 104 (2007), 5692-5697. doi: 10.1073/pnas.0606774104. Google Scholar
P.-Y. Geoffard and T. Philipson, Disease eradication: Private versus public vaccination, The American Economic Review, 87 (1997), 222-230. Google Scholar
N. C. Grassly and C. Fraser, Seasonal infectious disease epidemiology, Proceedings of the Royal Society of London B: Biological Sciences, 273 (2006), 2541-2550. doi: 10.1098/rspb.2006.3604. Google Scholar
S. Greenland and R. R. Frerichs, On measures and models for the effectiveness of vaccines and vaccination programmes, International Journal of Epidemiology, 17 (1988), p456.Google Scholar
M. E. Halloran, I. M. Longini and C. J. Struchiner, Design and Analysis of Vaccine Studies, Statistics for Biology and Health. Springer New York, 2009. doi: 10.1007/978-0-387-68636-3. Google Scholar
H. W. Hethcote and P. Waltman, Optimal vaccination schedules in a deterministic epidemic model, Mathematical Biosciences, 18 (1973), 365-381. doi: 10.1016/0025-5564(73)90011-4. Google Scholar
M. Huang, R. P. Malhamé and P. E. Caines, Nash equilibria for large-population linear stochastic systems of weakly coupled agents, In Elkébir Boukas and Roland P. Malhamé, editors,, Analysis, Control and Optimization of Complex Dynamic Systems, Springer US,, 4 (2005), 215-252. doi: 10.1007/0-387-25477-3_9. Google Scholar
M. Huang, R. P. Malhamé and P. E. Caines, Large population stochastic dynamic games: Closed-loop mckean-vlasov systems and the Nash certainty equivalence principle, Commun. Inf. Syst., 6 (2006), 221-252. doi: 10.4310/CIS.2006.v6.n3.a5. Google Scholar
R. Jordan, D. Kinderlehrer and F. Otto, The variational formulation of the Fokker-Planck equation, SIAM J. Math. Anal., 29 (1998), 1-17. doi: 10.1137/S0036141096303359. Google Scholar
S. Kakutani, A generalization of Brouwer's fixed point theorem, Duke Math. J., 8 (1941), 457-459. doi: 10.1215/S0012-7094-41-00838-4. Google Scholar
E. Kissling, B. Nunes, C. Robertson, M. Valenciano, A. Reuss, A. Larrauri, J. M. Cohen, B. Oroszi, C. Rizzo, A. Machado, D. Pitigoi, L. Domegan, I. Paradowska-Stankiewicz, U. Buchholz, A. Gherasim, I. Daviaud, J. K. Horvath, A. Bella, E. Lupulescu, J. O'Donnell, M. Korczynska, A. Moren and I.-MOVE case-control study team, I-move multicentre casecontrol study 2010/11 to 2014/15: Is there within-season waning of influenza type/subtype vaccine effectiveness with increasing time since vaccination?, Euro Surveill., 21 (2016), 30201. doi: 10.2807/1560-7917.ES.2016.21.16.30201. Google Scholar
A. Lachapelle, J. Salomon and G. Turinici, Computation of mean field equilibria in economics, Math. Models Methods Appl. Sci., 20 (2010), 567-588. doi: 10.1142/S0218202510004349. Google Scholar
L. Laguzet and G. Turinici, Global optimal vaccination in the SIR model: Properties of the value function and application to cost-effectiveness analysis, Mathematical Biosciences, 263 (2015), 180-197. doi: 10.1016/j.mbs.2015.03.002. Google Scholar
L. Laguzet and G. Turinici, Individual vaccination as Nash equilibrium in a SIR model with application to the 2009-2010 influenza A (H1N1) epidemic in France, Bulletin of Mathematical Biology, 77 (2015), 1955-1984. doi: 10.1007/s11538-015-0111-7. Google Scholar
J.-M. Lasry and P.-L. Lions, Lions, Jeux à champ moyen. I: Le cas stationnaire,, C. R., Math., Acad. Sci. Paris, 343 (2006), 619-625. doi: 10.1016/j.crma.2006.09.019. Google Scholar
J.-M. Lasry and P.-L. Lions, Lions, Jeux à champ moyen. II: Horizon fini et contrôle optimal,, C. R., Math., Acad. Sci. Paris, 343 (2006), 679-684. doi: 10.1016/j.crma.2006.09.018. Google Scholar
J.-M. Lasry and P.-L. Lions, Mean field games, Japanese Journal of Mathematics, 2 (2007), 229-260. doi: 10.1007/s11537-007-0657-8. Google Scholar
A. S. Monto, S. E. Ohmit, J. G. Petrie, E. Johnson, R. Truscon, E. Teich, J. Rotthoff, M. Boulton and J. C. Victor, Comparative efficacy of inactivated and live attenuated influenza vaccines, New England Journal of Medicine, 361 (2009), 1260-1267. doi: 10.1056/NEJMoa0808652. Google Scholar
R. Morton and K. H. Wickwire, On the optimal control of a deterministic epidemic, Advances in Appl. Probability, 6 (1974), 622-635. doi: 10.1017/S0001867800028482. Google Scholar
J. Müller, Optimal vaccination strategies-for whom?, Mathematical Biosciences, 139 (1997), 133-154. Google Scholar
S. Ng, V. J. Fang, D. K. M. Ip, K.-H. Chan, G. M. Leung, J. S. Malik Peiris and B. J. Cowling, Estimation of the association between antibody titers and protection against confirmed influenza virus infection in children, Journal of Infectious Diseases, 208 (2013), 1320-1324. doi: 10.1093/infdis/jit372. Google Scholar
K. L. Nichol, A. Lind, K. L. Margolis, M. Murdoch, R. McFadden, M. Hauge, S. Magnan and M. Drake, The effectiveness of vaccination against influenza in healthy, working adults, New England Journal of Medicine, 333 (1995), 889-893. doi: 10.1056/NEJM199510053331401. Google Scholar
M. T Osterholm, N. S. Kelley, A. Sommer and E. A. Belongia, Efficacy and effectiveness of influenza vaccines: A systematic review and meta-analysis, The Lancet Infectious Diseases, 12 (2012), 36-44. doi: 10.1016/S1473-3099(11)70295-X. Google Scholar
T. C. Reluga, C. T. Bauch and A. P. Galvani, Evolving public perceptions and stability in vaccine uptake, Math. Biosci., 204 (2006), 185-198. doi: 10.1016/j.mbs.2006.08.015. Google Scholar
T. C. Reluga and A. P. Galvani, A general approach for population games with application to vaccination, Mathematical Biosciences, 230 (2011), 67-78. doi: 10.1016/j.mbs.2011.01.003. Google Scholar
S. P. Sethi and P. W. Staats, Optimal control of some simple deterministic epidemic models, J. Oper. Res. Soc., 29 (1978), 129-136. doi: 10.1057/jors.1978.27. Google Scholar
E. Shim, G. B. Chapman, J. P. Townsend and A. P. Galvani, The influence of altruism on influenza vaccination decisions, Journal of The Royal Society Interface, 9 (2012), 2234-2243. doi: 10.1098/rsif.2012.0115. Google Scholar
D. M. Skowronski, S. Aleina Tweed, S. Aleina Tweed and G. De Serres, Rapid decline of influenza vaccine-induced antibody in the elderly: Is it real, or is it relevant?, The Journal of Infectious Diseases, 197 (2008), 490-502. doi: 10.1086/524146. Google Scholar
N. M. Smith, J. S. Bresee, D. K. Shay, T. M. Uyeki, N. J. Cox and R. A. Strikas, Prevention and control of influenza: Recommendations of the advisory committee on immunization practices (acip), MMWRRecomm Rep, 55 (2006), 1-42. https://www.cdc.gov/mmwr/preview/mmwrhtml/rr5510a1.htm.Google Scholar
P. G. Smith, L. C. Rodrigues and P. E. M. Fine, Assessment of the protective efficacy of vaccines against common diseases using case-control and cohort studies, International Journal of Epidemiology, 13 (1984), 87-93.Google Scholar
C. J. Struchiner, M. E. Halloran, J. M. Robins and A. Spielman, The behaviour of common measures of association used to assess a vaccination programme under complex disease transmission patterns-a computer simulation study of malaria vaccines, International Journal of Epidemiology, 19 (1990), 187-196. doi: 10.1093/ije/19.1.187. Google Scholar
I. Swiecicki, T. Gobron and D. Ullmo, Schrödinger approach to mean field games, Phys. Rev. Lett., 116(2016), 128701. doi: 10.1103/PhysRevLett.116.128701. Google Scholar
J. D Tamerius, J. Shaman, W. J. Alonso, K. Bloom-Feshbach, C. K. Uejio, An. Comrie and C. Viboud, Environmental predictors of seasonal influenza epidemics across temperate and tropical climates, PLoS Pathog, 9 (2013), e1003194.Google Scholar
J. J. Treanor, H. K. Talbot, S. E. Ohmit, L. A. Coleman, M. G. Thompson, P.-Y. Cheng, J. G. Petrie, G. Lofthus, J. K. Meece, J. V. Williams, L. Berman, C. Breese Hall, A. S. Monto, M. R. Griffin, E. Belongia and D. K. Shay, Effectiveness of seasonal influenza vaccines in the United States during a season with circulation of all three vaccine strains, Clinical Infectious Diseases, 55 (2012), 951-959. doi: 10.1093/cid/cis574. Google Scholar
G. Turinici, Metric gradient flows with state dependent functionals: the Nash-MFG equilibrium flows and their numerical schemes, Nonlinear Analysis 165 (2017) 163-181.Google Scholar
R. Vardavas, R. Breban and S. Blower, Can influenza epidemics be prevented by voluntary vaccination?, PLoS Comput Biol, 3 (2007), e85. doi: 10.1371/journal.pcbi.0030085. Google Scholar
G. A. Weinberg and P. G. Szilagyi, Vaccine epidemiology: Efficacy, effectiveness, and the translational research roadmap, Journal of Infectious Diseases, 201 (2010), 1607-1610 doi: 10.1086/652404. Google Scholar
X. Zhao, V. J. Fang, S. E. Ohmit, A. S. Monto, A. R. Cook and B. J. Cowling, Quantifying protection against influenza virus infection measured by hemagglutination-inhibition assays in vaccine trials,, Epidemiology, 27 (2016), 143-151. doi: 10.1097/EDE.0000000000000402. Google Scholar
Figure 1. Two possible forms for the function $A$.
Figure 2. Individual model.
Figure 3. The optimal converged strategy $\xi^{MFG}$ at times $\{t_0,...,t_{N-1} \}$ for subsection 4.2, case $\mathcal{M}_1$. The weight of the non-vaccinating pure strategy (i.e., corresponding to time $t = \infty$) is $88\%$; this means that $12\%$ of the population vaccinates.
Figure 4. The optimal converged strategy $\xi^{MFG}$ at times $\{t_0,...,t_{N-1} \}$ for subsection 4.2, case $\mathcal{M}_2$. Here $15\%$ of the population vaccinates.
Figure 5. Results for Subsection 4.3.1. Top: the optimal converged strategy $\xi^{MFG}$ at times $\{t_0,...,t_{N-1} \}$. The weight of the non-vaccinating pure strategy (i.e., corresponding to time $t = \infty$) is $68\%$. Bottom: the corresponding cost $\mathcal{C}_{\xi^{MFG}}$. The red line corresponds to the cost of the non-vaccinating pure strategy $(\mathcal{C}_{\xi^{MFG}})_{N+1}$.
Figure 6. Results of Subsection 4.3.1. Top: the evolution of the susceptible class $S_n$; bottom: the (total) infected class $I_n$.
Figure 7. The decrease of the incentive to change strategy $E(\xi_k)$. Note that $E(\xi_k)$ does not decrease monotonically. In fact, there is no reason to expect such a behavior, since we are not minimizing $E(\cdot)$ in a monotonic fashion.
Figure 8. Results of Subsection 4.3.2. Top: the optimal converged strategy $\xi^{MFG}$. The weight of the non-vaccinating pure strategy (i.e., corresponding to time $t = \infty$) is $91\%$. Bottom: the corresponding cost $\mathcal{C}_{\xi^{MFG}}$. The thin horizontal line corresponds to the cost of the non-vaccinating pure strategy $(\mathcal{C}_{\xi^{MFG}})_{N+1}$.
Table 1. Results for the Subsection 4.4. Individual vaccination policy with respect to the failed vaccination rate of the vaccine.
Failed vaccination rate $f$ | Vaccination rate $1-\xi_\infty$
$0.00$ | $5.04\%$
$0.55$ | $7.20\%$
The Marcinkievicz Interpolation Theorem for Rearrangement-Invariant Function Spaces and Applications
Franziska Fehér[1]
(1) Rheinisch-Westfälische Technische Hochschule Aachen, Germany
The interpolation theorem of J. Marcinkievicz [17] states that any sublinear operator $T$ which is simultaneously of weak types ($p_1, q_1$) and ($p_2, q_2$) is also a bounded operator from the Lebesgue space $L_p(0, l), 0 < l \leq \infty$, into itself, provided $p_2 < p < p_1$. The aim of this paper is to generalize this theorem to the setting of rearrangement-invariant Banach function spaces, and thus to render the theorem available to a much larger range of applications.
Fehér Franziska: The Marcinkievicz Interpolation Theorem for Rearrangement-Invariant Function Spaces and Applications. Z. Anal. Anwend. 2 (1983), 111-125. doi: 10.4171/ZAA/53
The incidence of necrotic enteritis in turkeys is associated with farm, season and faecal Eimeria oocyst counts
Magne Kaldhusdal1,
Eystein Skjerve2,
Magne Kjerulf Hansen3,
Inger Sofie Hamnes1,
Bruce David4,
Skjalg Arne Hanssen4 &
Atle Løvland4
BMC Veterinary Research volume 17, Article number: 292 (2021)
Specific studies on the epidemiology of necrotic enteritis in turkeys are absent in the literature. Necrotic enteritis is common in turkeys and a leading cause of use of therapeutic antibiotics. This study describes the incidence of necrotic enteritis in turkey farms, and the association between incidence and bird age, season, faecal oocyst counts, grow-out size and feed mill.
Necrotic enteritis was diagnosed post mortem in 20.2 % of 545 grow-outs of commercial female and male B.U.T. 10 turkeys started during the years 2010–2016. 80 % of all cases occurred at four to seven weeks of age. Median (minimum-maximum) age at disease detection was 37 (18–115) days. Turkey age at detection was influenced by season, and varied from 33 days among grow-outs hatched in February to 42 days among those hatched in July-August. The incidence also varied with season, showing peak occurrence among grow-outs hatched during February-March and the lowest incidence in turkeys hatched in July-August. 59 % of all cases were detected in 25 % of the farms. The incidence per farm varied from below 4 to 59 %. A multilevel mixed-effects logistic regression model indicated clear impacts of farm and season on incidence, and border-line impacts of grow-out size and feed mill. Grow-outs diagnosed with necrotic enteritis had higher counts of faecal Eimeria oocysts than grow-outs without a diagnosis. This difference was particularly clear during the high-risk period at five to seven weeks of age. Necrotic enteritis was the cause of treatment with therapeutic antibiotics in 88.2 % of all cases of treatment.
Our data indicate that necrotic enteritis incidence in turkeys can be substantially influenced by risk factors at farm level. The incidence showed two seasonal peaks; a moderate peak in turkeys hatched in October/November and a marked peak in turkeys hatched during February/March. Mitigation measures at the farm may therefore be of particular importance during these months in farms located in the Northern temperate zone. Measures which effectively reduce counts of faecal Eimeria oocyst are likely to be among the more promising actions to take both at the farm and at population level.
Scientific reports dealing with necrotic enteritis (NE) in turkeys are few and in most cases not comprehensive. Clostridium perfringens is considered the causative agent of most cases of NE in turkeys [1] as in broiler chickens. Because turkeys and chickens as well as their production systems differ in several ways, some aspects of NE in turkeys and chickens are also likely to differ. Specific studies on the epidemiology of NE in turkeys are therefore needed to help understand and prevent NE in this bird species.
Although NE in turkeys is rarely reported and discussed in the scientific literature, it is probably quite common [2]. In Norway, NE in turkeys is the leading cause of the use of therapeutic antibiotics in poultry. Due to the risk of development of resistance, it is desirable to reduce the use of therapeutic antibiotics as much as possible. Improved knowledge about, and management of, factors influencing the risk of NE is therefore of importance not only for improved turkey health, welfare and production performance but also for reduced risk of antibiotic resistance. Field data and challenge experiments have documented the role of Eimeria spp. as a risk factor for NE in broilers [3,4,5], and a predisposing role of Eimeria spp. in turkey NE has been suggested based on field data [2, 6] and experimental data [7].
This study aims to describe aspects of the epidemiology of NE in a population of commercial turkeys, including incidence, age distribution and the potential associations between disease occurrence and year, season, grow-out size and feed mill, controlling for the farm effect. The relationship between NE and coccidial oocyst counts is estimated in a subgroup of grow-outs. The prescription pattern for therapeutic antibiotics and the relationship between NE occurrence and production data are also reported.
Bird age at NE detection
Age of grow-outs at NE detection was based on the first instance if NE was diagnosed several times during the same grow-out period. NE was diagnosed in 110 of 545 grow-outs and age at diagnosis was recorded in 107 of these. The median (min-max) age at NE detection was 37 (18–115) days. Mean age was 40 days. 80 % of all cases occurred at four to seven weeks of age (days 28–50). The age at NE detection tended to be higher in grow-outs hatched in June-August than during other parts of the year (Fig. 1b). The difference between grow-outs hatched in February as compared to July-August was significant (p = 0.03). The median age at NE detection among grow-outs hatched in February and July-August was 33 and 42 days, respectively.
Distribution of age in days at detection of necrotic enteritis in 107 grow-outs of commercial slaughter turkeys. a All 107 grow-outs with age data. b Distributions of age at NE diagnosis per month (1 = January, 2 = February and so forth until 12 = December)
Incidence of necrotic enteritis
NE was reported in 110/545 grow-outs (20.2 %) started between August 2010 and October 2016. Among years with complete data (2010–2015) the percentage of grow-outs diagnosed with NE varied from a minimum of 10.6 in 2014 to a maximum of 28.6 in 2012.
Data on monthly incidences of NE (Table 1) indicate a distinct minimum incidence among grow-outs hatched during July-August and peak occurrences among grow-outs hatched during February-March and September-November. Monthly incidences show two steep increases (from grow-outs hatched in January to those hatched in February and from grow-outs hatched in August to those hatched in September) and a prolonged period of five months with continuously diminishing incidence (grow-outs hatched from February to July). Because most cases of NE appeared four to seven weeks after hatch, this means that the highest frequencies of NE were detected in March-April. In contrast, August-September was associated with the lowest NE level. The highest (28 %) and lowest (14 %) quarterly incidences were found in grow-outs hatched during the first and third quarters of the year, respectively. Thus, season appeared to be an important factor in NE incidence (Table 2).
Table 1 Monthly and quarterly incidences (%) of necrotic enteritis (NE)
Table 2 The final statistical model based on data from 545 turkey grow-outs started during 2010–2016
Necrotic enteritis and farm
A total of 32 farms from the most extensive study sample were represented by at least ten grow-outs each and a total of 442 grow-outs. NE was diagnosed in 20 % of these grow-outs, and 59 % of the NE cases were found in 25 % of these farms. NE incidence per farm varied between 0/25 grow-outs (< 4 %) and 10/17 grow-outs (59 %), suggesting a solid influence from farm.
Necrotic enteritis and grow-out size
The median grow-out size was 8200 birds (Table 3). Eight of ten grow-outs were started with 5000 to 13,000 day-old birds. The NE incidence among grow-outs smaller than 6000 birds appeared to be lower than the incidence among grow-outs of 6000 birds or more (Fig. 2; Table 2).
Percentage of grow-outs diagnosed with NE in 13 size categories based on numbers of day-old turkeys per grow-out. Numbers of day-old birds started (grow-out size) are indicated in thousands below each bar. The number of grow-outs per size category varied from 15 (14,000–14,999 day-olds) to 90 (6000–6999 day-olds)
Table 3 Grow-out sizea and age at slaughter of turkeys, based on data from 545 grow-outs
Necrotic enteritis, feed mill and withdrawal of anticoccidial drugs
A total of nine feed mills provided feeds. NE was diagnosed in grow-outs supplied by all five mills providing feed to at least ten grow-outs. More than 91 % of the 545 grow-outs were provided with feeds from three different mills. One of these mills stood out with a higher percentage of NE grow-outs (46.5 % cases among 43 grow-outs) than the two others (16.2 and 21.5 % cases among 291 and 163 grow-outs, respectively). Data on time of anticoccidial drug withdrawal from the feed were available from 11 grow-outs supplied by these major feed mills. Anticoccidial drug was withdrawn at day 56 or later in seven grow-outs, between days 50–55 in two grow-outs and between days 42 and 49 in two grow-outs. These sparse data are supported by personal communications from the relevant feed companies, indicating that the majority of grow-outs were offered feed supplemented with anticoccidial drugs up to day 56 or more.
The roles of variables potentially influencing necrotic enteritis incidence
Based on available data and preliminary examinations the following five categorical variables were formulated and selected for further analyses concerning association with NE incidence: Farm, Season (quarter at hatch), Year housed, Grow-out size (number of day-old birds per grow-out), and Feed mill. In order to examine the relationship between these variables and NE incidence in context, multivariable models with adjusted odds ratio estimates were built. A model including all five explanatory variables with Farm as a random effect indicated that Year housed was not significantly (p = 0.13 or higher for all years) associated with NE incidence. All other tested variables were recorded with p-values at 0.05 or lower in this model. Year housed was therefore removed from the model (Table 2). Removing any of the remaining variables did not improve the fit of the model. The final model was therefore.
$$\mathrm{Necrotic}\;\mathrm{enteritis}\;\mathrm{incidence}\;=\;\mathrm{Farm}\;+\;\mathrm{Season}\;+\;\mathrm{Grow}-\mathrm{out}\;\mathrm{size}\;+\;\mathrm{Feed}\;\mathrm{mill}$$
The logistic model does not give an exact measure of the variance, but based on the Pseudo R2 measure, 6.1 % of the variance of the data could be explained by the model. Random effect of farm was substantial (chi-square = 17.45, p < 0.001). Furthermore, the data indicate a clear impact of Season and more border-line impacts of Grow-out size and Feed mill (Table 2).
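The article does not state which software produced these estimates. As a rough sketch of how a model of this form could be explored in Python on synthetic data, farm can be entered as an ordinary categorical term (a simplification: the study treated farm as a random effect in a multilevel model, which requires a dedicated mixed-model routine). All variable names, category counts and effect sizes below are illustrative assumptions, not the study's data.

```python
# Hedged sketch of a logistic model of the form NE ~ Season + Grow-out size + Feed mill (+ farm),
# fitted on synthetic data. Farm is a plain categorical term here, not a true random effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 545
df = pd.DataFrame({
    "farm": rng.integers(1, 11, n),      # 10 hypothetical farms (fewer than the 32 in the study)
    "season": rng.integers(1, 5, n),     # quarter of hatch, 1-4
    "size_cat": rng.integers(0, 2, n),   # 0: < 6000 day-olds, 1: >= 6000 day-olds
    "feed_mill": rng.integers(1, 4, n),  # three main feed mills
})

# Synthetic outcome loosely mimicking the reported pattern (higher risk in Q1 and in larger grow-outs)
logit_p = -1.3 + 0.6 * (df["season"] == 1) + 0.4 * df["size_cat"] + 0.5 * (df["feed_mill"] == 3)
p = 1.0 / (1.0 + np.exp(-logit_p))
df["ne"] = rng.binomial(1, p.to_numpy())

model = smf.logit("ne ~ C(season) + size_cat + C(feed_mill) + C(farm)", data=df).fit(disp=0)
print(np.exp(model.params))              # adjusted odds ratios for each term
```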
Necrotic enteritis and faecal oocyst counts
The relationship between faecal oocyst counts (OPG) counts and NE incidence was examined in data from 39 grow-outs. Figure 3a indicates that NE (dotted line) was detected during weeks 4 to 7 with the highest level of incidence during weeks 5 to 7. Median and mean age at detection of NE was 38 and 39 days, respectively. These figures are similar to the results from the study population of 545 grow-outs, suggesting that the subsample with OPG data was representative of the larger study sample regarding age pattern of NE occurrence.
Box plots depicting the age dynamics of faecal oocyst counts (median counts: solid black line) per week of age. Y-axis indicates log10 oocysts per gram faeces (OPG). X-axis indicates age in weeks (3 to 8) of the examined turkey groups. a OPG of groups from grow-out diagnosed with necrotic enteritis (NE). NE occurrence is indicated as a dotted line (NE, arrow pointing at the line) in relative levels. The peak value at five weeks of age corresponds to detection of NE in 17.9 % of the grow-outs at that age. b OPG from grow-outs without NE diagnosis. The graphs are based on data from 39 grow-outs raised on 16 different farms
Figure 3a also indicates a general concurrence in a rise of OPG counts and a rise of NE incidence, supporting a predisposing role of Eimeria in turkey NE. However, the picture is complex. Firstly, each grow-out consisted of at least two separate turkey groups; typically one group of females and one group of males. If NE was detected in only one turkey group, the whole grow-out was recorded as NE positive and treated with antibiotics. NE status was not recorded at the group level, hence we cannot analyse the direct relationship between OPG count and NE occurrence in each turkey group. OPG counts from grow-outs with NE in many cases represented a mix of samples from turkey groups with and without NE. However, if a predisposing role of high OPG counts is assumed, this would be expected to be reflected in a generally higher OPG level among grow-outs diagnosed with NE. A comparison of OPG counts of Fig. 3a (grow-outs diagnosed with NE) and Fig. 3b (grow-outs without an NE diagnosis) indicates a higher OPG level among grow-outs diagnosed with NE during the high-risk period weeks five to seven. This difference between grow-outs with and without NE was statistically significant (p = 0.03). The same trend (p = 0.07) was present during the whole age interval from three to eight weeks of age, although there appeared to be no difference in OPG counts between grow-outs with and without NE at three and four weeks of age.
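The paper reports p-values for this OPG comparison without naming the test in this section. One plausible way to compare log-transformed OPG counts between grow-outs with and without NE is a rank-based test, sketched below on made-up numbers; the values are purely illustrative and are not the study's data.

```python
# Illustrative comparison of log10 faecal oocyst counts (OPG) between grow-outs with and
# without an NE diagnosis, using a Mann-Whitney U test on synthetic data. This is only a
# plausible way to perform such a comparison, not necessarily the authors' exact analysis.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
# log10 OPG during weeks 5-7; NE grow-outs assumed to sit somewhat higher on average
log_opg_ne = rng.normal(loc=4.2, scale=0.6, size=20)      # grow-outs with NE
log_opg_no_ne = rng.normal(loc=3.7, scale=0.6, size=19)   # grow-outs without NE

stat, p_value = mannwhitneyu(log_opg_ne, log_opg_no_ne, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
print("median log10 OPG, NE vs no NE:",
      round(float(np.median(log_opg_ne)), 2), round(float(np.median(log_opg_no_ne)), 2))
```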
Necrotic enteritis and antibiotic therapy
A total of 127 of 545 grow-outs were treated at least once with therapeutic antibiotics. NE was the cause of treatment in 112 (88.2 %) of these cases. The two other most common causes of antibiotic treatment were gizzard inflammation and erysipelas.
Phenoxymethylpenicillin and amoxicillin were the two most commonly used therapeutic antibiotics (Table 4). There was an evident change in the prescription pattern during the study period, from the predominant use of amoxicillin to a predominance of phenoxymethylpenicillin.
Table 4 Use of therapeutic antibiotics in Norwegian slaughter turkeys 2010–2016
Necrotic enteritis and foot pad scores
Foot pad scores were recorded at slaughter of 40 hen grow-outs. The median time interval between NE diagnosis and day of hen slaughter was 45.5 days. There was no significant difference (p = 0.52) in hen foot pad scores between grow-outs with (median score: 96) and without (median score: 94) a previous NE diagnosis. Most grow-outs were slaughtered more than three weeks after NE detection. Data from only two grow-outs that were slaughtered less than three weeks after NE detection (mean score: 120) were available.
Necrotic enteritis and production performance
There was no association between a NE diagnosis and mean daily weight gain in female and male turkeys. The feed conversion ratio (FCR) was recorded for males and females taken together. Grow-outs diagnosed with NE tended to have less efficient feed conversion than grow-outs without this diagnosis (0.9 and 1.5 % differences for median and mean FCR, respectively). However, the association was not statistically significant (p = 0.12). A similar but slightly stronger association was found concerning profit per bird to the farmer (8.3 and 6.8 % differences for median and mean profit, respectively) with a p-value of 0.07 (Table 5).
Table 5 Production performance in grow-outs with and without a NE diagnosis
This study provides new and comprehensive data on the epidemiology of NE in commercial turkey meat production. NE was the most commonly diagnosed disease in this turkey population. The disease was deemed severe enough to be treated with antibiotics in the drinking water in 20.2 % of the 545 grow-outs that were started during 2010–2016. Previously published, solid population-based data on NE occurrence are absent (turkeys) or scarce (broilers) [8]. NE incidence seems to vary substantially over time in turkeys as well as in broilers. The incidence of clinical NE in our turkey population (quarterly incidence 14–28 %) appeared to be higher than mean levels in broilers reported before 1990 (5–9 %; these means include occurrence of epidemics), and more similar to peak levels reported during broilers epidemics (12–35 %) [8].
Median and mean age at the first instance of NE detection were 37 and 40 days, respectively. Eighty per cent of all cases appeared at four to seven weeks of age. Previous works [2, 9] have also reported distinct age intervals for the majority of NE cases in turkeys, with few cases appearing before six weeks of age and after 12 weeks of age. In our material the risk interval is earlier, which may be due to differences in production systems; e.g. housing, management, and feeding. Based on their field data, Droual et al. [2] suggest that there may be a resistance to NE in young poults, a finding that agrees with published experimental data [7, 10]. NE in commercial broilers is rarely found in birds younger than two weeks of age [11], and experimental NE in broilers is usually induced at about three weeks of age [5]. The age patterns of NE in turkeys and broilers thus appear to be similar in that NE is rarely found during the first few weeks of life.
Our data on the distribution of NE cases among farms and our final multivariable model (Table 3) indicate an apparent farm effect on NE incidence, suggesting that NE incidence might be substantially reduced through actions taken at farm level. The Farm variable comprises a multitude of factors (e.g. the quality of animal housing and other aspects of the physical environment, farm management quality, and biosecurity measures) whose relative importance cannot be evaluated based on the data available in this study. However, we do have data on two more specific aspects that are also related to the farm and its management: Grow-out size and Feed mill.
Feed mill was included (OR = 3.03, p = 0.05 for one feed mill, Table 3) in the multivariable model with Farm as a random effect. The mill with increased NE incidence supplied only six farms and 43 grow-outs with feeds, which may be seen as a cause for caution in emphasizing a mill effect. However, the findings do suggest that some aspects of feed associated with the mill (e.g. contamination with pathogenic C. perfringens strains, feed processing, feed structure, feed additives, ingredients and nutrients) might have influenced NE occurrence on these commercial turkey farms. This assumption is in line with previous findings in broiler chickens [8, 12, 13]. More work on specific factors related to nutrition and feeding of turkeys and their possible association with NE incidence is needed to determine the importance of such factors.
The binary variable Grow-out size assigned grow-outs to two categories: those started with fewer than 6 000 birds and those started with at least 6 000 birds. Our data suggest an increasing trend in NE occurrence (OR = 2.8, p = 0.05, Table 3) with increased grow-out size. A possible reason for this trend might be that large grow-outs on average comprised higher numbers of turkey groups. Even with the same risk of NE per turkey group in small and large grow-outs, the risk of NE per grow-out would therefore be higher among large grow-outs (illustrated below). An alternative or supplementary explanation could be that a large grow-out is a risk factor per se. If this is the case, the current trend of increasing farm and grow-out size demands even more focus on measures against NE.
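To make this concrete, suppose purely for illustration that every turkey group has the same independent risk p of developing NE; a grow-out containing g groups is then recorded as NE positive with probability

$$P(\text{NE per grow-out}) = 1 - (1 - p)^{g}$$

With p = 0.10, a grow-out with two groups has a risk of 1 − 0.9² = 0.19, whereas one with four groups has a risk of 1 − 0.9⁴ ≈ 0.34, even though the per-group risk is identical. The value of p and the independence assumption are illustrative only, not estimates from our data.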
In our study of Eimeria oocyst counts (OPG) from 39 grow-outs sampled during 2015–2017 (including 36 of the grow-outs from the larger study population), we found that OPG counts and NE incidence peaked more or less concurrently at around five to six weeks of age (Fig. 3). Furthermore, the increase in OPG values was most marked among grow-outs diagnosed with NE. These findings suggest that subclinical coccidiosis may have been a predisposing factor contributing to the peak occurrence of NE at around five to seven weeks of age. This conclusion, based on data from commercial farms, is supported by experimental results indicating that coccidial challenge at five (but not at three) weeks of age combined with Clostridium perfringens challenge is an effective means of reproducing severe intestinal lesions in turkeys [7]. The role of Eimeria as a predisposing factor for NE has also been demonstrated in broiler experiments [5], although different Eimeria species appear to differ in their ability to induce NE mortality [4]. Early withdrawal of in-feed anticoccidial drugs might have contributed to increased OPG counts and NE in some cases. However, based on available information about the timing of withdrawals (not prior to day 42) and peak occurrence of NE (before day 42), anticoccidial drug withdrawal seems an unlikely main driver of NE appearance. In many cases neither high OPG counts nor NE was prevented by in-feed anticoccidial drugs.
Season of the year affected NE occurrence in two ways: age at appearance (Fig. 1) and incidence (Table 2). Median age at NE detection was at or above 40 days in grow-outs hatched during the months June to September (i.e. in grow-outs with NE outbreaks in July to October), and below 40 days during all other months. NE appeared particularly early in grow-outs hatched in February. Grow-outs hatched in February also had the highest incidence of NE (36.8 %), whereas grow-outs hatched in July were recorded with the lowest incidence (9.3 %). These findings indicate that season-dependent factors influence the age pattern and occurrence of NE, with peak NE incidence during March–April (four to seven weeks after hatch in February). Climate is a possible element in this context, by influencing the environment in the turkey house and consequently the conditions for the proliferation of pathogens. Moist litter [14] due to condensation caused by cold weather and restricted ventilation may promote Eimeria proliferation and subclinical coccidiosis, and thus predispose the turkeys to NE [2, 6, 7]. However, cold weather alone cannot explain the peak occurrence in March–April, because January and February are usually the colder months at the locations of these farms (Table 6). A possible explanation might be that heating of the coldest inlet air in January and February leads to low relative humidity, which delays oocyst sporulation. A combination of changing weather conditions and sub-optimal management of indoor climate may explain the peak occurrence of NE in March and April.
Table 6 Location of turkey farms, and temperature (°C) and precipitation (mm) data
Furthermore, it is clear from the data (Fig. 3) that relatively high OPG counts in some turkey groups were not always associated with an NE diagnosis. NE grow-outs comprised only one third (21/64) of the turkey groups with OPG levels at or above 10 000, which suggests that other factors were also significant predictors of an NE diagnosis.
The potential role of haemorrhagic enteritis (caused by turkey adenovirus A species in genus Siadenovirus) as a predisposing factor for NE was discussed by Droual et al. [2]. They argued that because this virus is prevalent in turkeys, and the disease is associated with immunosuppression and occurs during the same age interval as NE, haemorrhagic enteritis may increase the likelihood that NE will occur. Our study was not designed to investigate the role of haemorrhagic enteritis, but it is noteworthy that this disease was diagnosed six days before an NE diagnosis in one of the grow-outs with NE that were examined for OPG counts.
The use of therapeutic antibiotics was, to a large extent, determined by NE incidence. The observed change in prescription pattern from amoxicillin to phenoxymethylpenicillin was the result of a deliberate choice, made because phenoxymethylpenicillin has a narrower spectrum of activity and is therefore more targeted against the Clostridium perfringens causing NE and less likely to induce antibiotic resistance in other pathogens.
NE did not influence the daily weight gain of female or male turkeys. The values of this variable were based on achieved growth from the day of hatch until slaughter at about 85 (females) or 132 (males) days of age. This lack of influence of NE on weight gain does not exclude the possibility that NE temporarily reduced weight gain. Compensatory growth may have taken place during the weeks between NE occurrence and slaughter (48 days difference between median age at NE and median age at hen slaughter). Furthermore, the estimated sex-specific association between NE and weight gain could have been weakened by the likely fact that some grow-outs diagnosed with NE were affected by NE in one sex only. The same considerations apply to the apparent lack of impact of NE on foot pad scores. Published experimental results [15] suggest that diarrhoea caused by coccidial infection can lead to poor litter quality and hence increased severity of foot pad dermatitis in turkeys. Because coccidial infection was a likely predisposing factor for NE in our observational study, NE may also have been associated with higher foot pad scores during the first few weeks following disease outbreaks, as suggested by the higher scores of the only two grow-outs that were slaughtered less than three weeks after NE detection. However, most grow-outs were examined for foot pad lesions six to seven weeks after NE diagnosis, which may have provided time for re-establishment of a satisfactory litter quality and healing of foot pad lesions.
In this study, the feed conversion ratio was estimated based on merged data from males and females in the same grow-out, since most farmers do not have separate feeding systems for males and females. Because most NE cases appeared around five to six weeks of age, the impact of NE on the accumulated feed conversion ratio was likely to be more modest in males than in females. Furthermore, feed conversion results from males were given more weight than the results from females, because the males were substantially larger and had consumed much more feed than the females. These circumstances may have contributed to the lack of a significant (p = 0.12) impact of NE on feed conversion ratio. It is, however, noteworthy that grow-outs diagnosed with NE were estimated to have about 1 % poorer feed conversion than grow-outs without NE. The same considerations apply to profit margin per bird. In this case our data indicate a lower p-value (0.07) and higher estimates (7 to 8 %) for a negative impact of NE.
The incidence of necrotic enteritis in turkeys was strongly influenced by season and farm. The strong farm effect underlines the potential importance of environmental factors and/or management factors in the epidemiology of this disease. Our data suggest that subclinical coccidiosis was an important predisposing factor for NE in the examined turkey population. Although not investigated in this study, variable severity of coccidiosis might also have been associated with farm management and season. The potential roles of diet and grow-out size in the epidemiology of necrotic enteritis in turkeys deserve further studies.
Study design and populations
This observational study comprises two study populations of B.U.T. 10 turkeys.
The largest study population consists of all grow-outs (545 grow-outs from 57 commercial turkey farms) that were started in south-eastern Norway during the period August 2010 to October 2016. All farms were located in the three regions (Hedmarken, Vestfold, Østfold) with the majority of turkey farms in the country (from 2018 onwards, the only such regions). Data on outdoor temperatures and precipitation in these three regions are displayed in Table 6.
A grow-out was defined as the entire group of day-old birds that were housed on the same day on the same farm. Data from these grow-outs were collected routinely by the company (Nortura SA, Norway) slaughtering turkeys from these farms, and made accessible for this study. All grow-outs were started with female turkeys, and 95.2 % of the grow-outs also included males. Females and males were raised separately from the day of hatch until slaughter. All birds were kept on litter floor, and offered free access to feed and water. Some farms divided male and female groups into further sub-groups during the first eight weeks of rearing. This means that whereas some sub-groups were kept on partly the same litter floor during the whole grow-out period, others were moved to other rooms or houses with fresh litter at some time point after initial housing. All grow-outs were started with fresh litter material. Females were usually slaughtered at about 12 weeks of age, and males were mostly slaughtered at about 19 weeks of age. The percentage of grow-outs started per month varied between a minimum of 7.0 % of all started grow-outs in February and a maximum of 9.7 % in January.
A smaller study population of 39 grow-outs started during 2015–2017 at 16 commercial turkey farms was used to study the relationship between NE occurrence and levels of Eimeria oocyst counts per gram of faeces (OPG) in three- to eight-week-old birds. Based on historical data on NE occurrence, farms were selected in order to ensure a sample of grow-outs that was representative of the larger study population, which was confirmed by comparing data on NE frequency and average age of turkeys at NE appearance from the two study populations. Data from 14 of the farms (36 grow-outs) in the smaller study population constituted a subset of the study with 545 grow-outs. Data on withdrawal time of anticoccidial drugs were collected from 12 of the grow-outs in the smaller study population.
All grow-outs in both study populations were started with in-feed anticoccidial drugs, mainly monensin and to a minor extent lasalocid. These anticoccidials were used continuously until withdrawal. Only one type of anticoccidial compound was used in each grow-out; no shuttle or systematic rotation programs were used. Withdrawal took place at six to nine weeks of age, in most cases at about eight weeks of age. No anticoccidial vaccines or antibiotic growth promoters were used.
Incidence of necrotic enteritis in the largest study population
NE was diagnosed by Nortura's field veterinarians, based on gross lesions. Characteristic findings include small intestinal pseudomembranes with a mucoid appearance, often accompanied by gas-filled intestines with watery contents. NE was recorded if the outbreak was deemed severe enough to require treatment with therapeutic antibiotics. NE incidence was defined as the percentage of all 545 grow-outs diagnosed with at least one recorded outbreak during the whole study period or during specific time components (yearly, quarterly or monthly time intervals). Each grow-out was allocated to a time interval based on its date of hatch.
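As an illustration of this definition, the quarterly incidence calculation can be written as a short Python/pandas sketch; the column names (hatch_date, ne_diagnosed) are hypothetical, and the actual calculations were carried out in Excel and Stata.

```python
import pandas as pd

# Hypothetical input: one row per grow-out, with the hatch date and a flag
# indicating at least one recorded NE outbreak during the grow-out period.
growouts = pd.read_csv("growouts.csv", parse_dates=["hatch_date"])

# Allocate each grow-out to a time interval based on its date of hatch.
growouts["quarter"] = growouts["hatch_date"].dt.to_period("Q")

# Incidence = percentage of grow-outs with at least one recorded NE outbreak.
overall_incidence = 100 * growouts["ne_diagnosed"].mean()
quarterly_incidence = 100 * growouts.groupby("quarter")["ne_diagnosed"].mean()

print(f"Overall incidence: {overall_incidence:.1f} %")
print(quarterly_incidence.round(1))
```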
Sampling of faeces and counts of oocyst per gram (OPG) in the smaller study population
OPG counts were estimated and recorded in 39 grow-outs from 16 farms. NE incidence was calculated as the number of detections of NE that led to antibiotic treatment up to and including eight weeks of turkey age. Whereas OPG counts were recorded at the turkey group level (at least two groups per grow-out) on several (mainly three to five) occasions between three and eight weeks of age, NE incidence was recorded at the grow-out level. Pooled samples of fresh faeces were collected from five evenly distributed areas within the floor space shared by each turkey group. Litter was included in the sampling material only to the extent that it was inseparable from wet faeces. Each sample, pooled per turkey group and sampling day, was mixed thoroughly before being examined.
The levels of oocysts per gram faeces (OPG) were determined using a modified McMaster method for parasite egg counting. This method is based on published literature [16]. Briefly, the protocol is based on dilution of faeces and subsequent flotation of oocysts before counting: 30 g of thoroughly mixed pooled sample and 420 ml tap water were mixed in a blender and sieved through a sieve with 250 μm mesh size, followed by centrifugation for 3 min at 3000 rotations/minute, removal of the supernatant from the oocyst-containing sediment, mixing of the oocysts in saturated NaCl with a volume corresponding to the supernatant, and examination of 1.0 ml of this mixture using a Whitlock Universal counting chamber. This method has a theoretical lower detection limit of 15 OPG.
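The multiplication factor implied by this protocol can be made explicit: with 30 g of faeces suspended in 420 ml of water (roughly 450 ml of suspension, assuming about 1 ml per gram of faeces) and 1.0 ml examined in the counting chamber, each oocyst counted represents approximately 450/30 = 15 oocysts per gram, which matches the stated detection limit. A small sketch of the conversion, with the volume assumption made explicit:

```python
def oocysts_per_gram(chamber_count: int,
                     faeces_g: float = 30.0,
                     water_ml: float = 420.0,
                     counted_ml: float = 1.0) -> float:
    """Convert a chamber count to oocysts per gram faeces (OPG).

    Assumes 1 g of faeces contributes roughly 1 ml to the suspension,
    so the total suspension volume is faeces_g + water_ml.
    """
    suspension_ml = faeces_g + water_ml
    opg_per_oocyst_counted = suspension_ml / (faeces_g * counted_ml)
    return chamber_count * opg_per_oocyst_counted

# One oocyst counted corresponds to the theoretical detection limit of 15 OPG.
print(oocysts_per_gram(1))    # 15.0
print(oocysts_per_gram(700))  # 10500.0
```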
Foot pad scoring
Foot pads from 100 feet per grow-out were scored for lesions from 0 to 2 (0: no lesions, 1: moderate lesions, 2: severe lesions) at the slaughter of 260 grow-outs. Sum of foot pad score per grow-out was calculated based on the following formula:
$$\left(\text{number of feet with score 0} \times 0\right) + \left(\text{number of feet with score 1} \times 1\right) + \left(\text{number of feet with score 2} \times 2\right)$$
Range of sum of foot pad scores was therefore 0 to 200. Foot pads were scored at slaughter; i.e. at 68–108 days of age for females and at about 19 weeks of age for males. Because 90 % of all NE cases appeared in grow-outs below 51 days of age, only foot pad scores in females (scored 18–51 days after NE detection) were examined.
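A direct transcription of this scoring formula into code (a hypothetical helper, not part of the original data pipeline) is:

```python
def footpad_sum_score(n_score0: int, n_score1: int, n_score2: int) -> int:
    """Sum of foot pad scores for one grow-out (100 feet scored 0-2).

    Returns a value between 0 (all feet without lesions) and 200
    (all feet with severe lesions).
    """
    assert n_score0 + n_score1 + n_score2 == 100, "100 feet are scored per grow-out"
    return n_score0 * 0 + n_score1 * 1 + n_score2 * 2

# Example: 60 feet without lesions, 30 with moderate and 10 with severe lesions.
print(footpad_sum_score(60, 30, 10))  # 50
```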
Production performance
Daily weight gain was calculated per sex and grow-out, based on mean carcass weight at slaughter. The feed conversion ratio was calculated per grow-out (including both females and males), based on accumulated feed uptake at slaughter and carcass weight at slaughter. Calculation of profit per bird to the farmer was based on costs of day-old poults and feed, and payment per approved carcass.
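These performance measures are simple ratios, spelled out in the sketch below; note that the exact definition of daily weight gain (taken here as mean carcass weight divided by age at slaughter) and the cost components entering the profit calculation are assumptions for illustration, since only the ingredients of the calculations are listed above.

```python
def daily_weight_gain(mean_carcass_weight_kg: float, age_at_slaughter_days: int) -> float:
    # Assumed definition: achieved growth from day of hatch until slaughter,
    # expressed per day of age (kg carcass weight per day).
    return mean_carcass_weight_kg / age_at_slaughter_days

def feed_conversion_ratio(total_feed_kg: float, total_carcass_weight_kg: float) -> float:
    # FCR per grow-out, females and males taken together (kg feed / kg carcass weight).
    return total_feed_kg / total_carcass_weight_kg

def profit_per_bird(total_carcass_payment: float, poult_costs: float,
                    feed_costs: float, n_birds: int) -> float:
    # Based on costs of day-old poults and feed, and payment per approved carcass;
    # other cost items are not included (assumption mirroring the description above).
    return (total_carcass_payment - poult_costs - feed_costs) / n_birds
```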
All data were collated in Excel spreadsheets.
Raw data on explanatory variables in the largest study population were provided by Nortura SA in an Excel spreadsheet format, and were further analysed in Excel before being imported into Stata 14.2 or Stata 16.1 for statistical analysis. The relationship between each variable and NE occurrence was explored and described in text, tables, and figures. The unit of concern in these analyses was grow-out. The outcome was binary (NE yes/no). Explanatory variables were categorical (Farm, Year housed, Season, Feed mill) or were made binary (Grow-out size: number of day-old turkeys per grow-out). A multilevel mixed-effects logistic regression model was built using the melogit procedure in Stata 16.1 with NE as the outcome, and Farm, Season, Feed mill, and Grow-out size as predictors. Farm was included as a random effect to adjust for the repeated structure of data. A backward selection approach was used to build the final model.
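A hedged sketch of the fixed-effects part of this model in Python/statsmodels is shown below; the actual analysis used Stata's melogit, which additionally fits Farm as a random intercept, so this is an approximation with hypothetical column names rather than a re-implementation of the published model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical grow-out level data set: one row per grow-out, with columns
# ne_diagnosed (0/1), season, feed_mill and n_day_old_birds.
growouts = pd.read_csv("growouts.csv")
growouts["large_growout"] = (growouts["n_day_old_birds"] >= 6000).astype(int)

# Fixed-effects logistic regression for NE (yes/no).  The published model was a
# multilevel mixed-effects logistic regression (Stata's melogit) with Farm as a
# random intercept; that random effect is omitted in this simplified sketch.
model = smf.logit("ne_diagnosed ~ C(season) + C(feed_mill) + large_growout",
                  data=growouts).fit()
print(model.summary())
print(np.exp(model.params).round(2))  # adjusted odds ratio estimates
```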
The Kruskal-Wallis rank sum test (kwallis procedure in Stata 14.2) was used to compare performances of grow-outs (N = 545) with and without an NE diagnosis (Table 5), to compare log10 OPG counts (N = 39) and foot pad scores (N = 545) in grow-outs with and without NE, and to compare seasonal differences in age at NE occurrence among grow-outs diagnosed with NE (N = 107).
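The corresponding two-sample comparison can be reproduced with scipy's Kruskal–Wallis implementation, as in this minimal sketch (column names hypothetical):

```python
import pandas as pd
from scipy.stats import kruskal

growouts = pd.read_csv("growouts.csv")  # hypothetical grow-out level data set

# Compare feed conversion ratios of grow-outs with and without an NE diagnosis.
fcr_ne = growouts.loc[growouts["ne_diagnosed"] == 1, "fcr"]
fcr_no_ne = growouts.loc[growouts["ne_diagnosed"] == 0, "fcr"]

statistic, p_value = kruskal(fcr_ne, fcr_no_ne)
print(f"Kruskal-Wallis H = {statistic:.2f}, p = {p_value:.3f}")
```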
Two datasets that support the findings of this article are included as additional files.
FCR:
Feed conversion ratio (kg feed/kg carcass weight)
NE:
Necrotic enteritis
OPG:
Number of Eimeria oocysts per gram faeces
Uzal FA, Senties-Cue CG, Rimoldi G, Shivaprasad HL. Non-Clostridium perfringens infectious agents producing necrotic enteritis-like lesions in poultry. Avian Pathol. 2016;45:326–33.
Droual R, Farver TB, Bickford AA. Relationship of sex, age, and concurrent intestinal disease to necrotic enteritis in turkeys. Avian Dis. 1995;39:599–605.
Hermans PG, Morgan KL. Prevalence and associated risk factors of necrotic enteritis on broiler farms in the United Kingdom; a cross-sectional survey. Avian Pathol. 2007;36:43–51.
Nicholds JF, McQuain C, Hofacre CL, Mathis GF, Fuller AL, Telg BE, Montoya AF, Williams SM, Berghaus RD, Jones MK. The effect of different species of Eimeria with Clostridium perfringens on performance parameters and induction of clinical necrotic enteritis in broiler chickens. Avian Dis. 2021;65:132–7.
Dierick E, Ducatelle R, Van Immerseel F, Goossens E. Research Note: The administration schedule of coccidia is a major determinant in broiler necrotic enteritis models. Poultry Sci. 2021;100(3). Article number 100806. https://doi.org/10.1016/j.psj.2020.10.060.
Droual R, Shivaprasad HL, Chin RP. Coccidiosis and necrotic enteritis in turkeys. Avian Dis. 1994;38:177–83.
Hardy SP, Benestad SL, Hamnes IS, Moldal T, David B, Barta JR, Reperant J-M, Kaldhusdal M. Developing an experimental necrotic enteritis model in turkeys - the impact of Clostridium perfringens, Eimeria meleagrimitis and host age on frequency of severe intestinal lesions. BMC Vet Res. 2020;16:63. https://doi.org/10.1186/s12917-020-2270-5.
Kaldhusdal M, Benestad SL, Løvland A. Epidemiologic aspects of necrotic enteritis in broiler chickens – disease occurrence and production performance. Avian Pathol. 2016;45:271–4. https://doi.org/10.1080/03079457.2016.1163521.
Gazdzinski P, Julian RJ. Necrotic enteritis in turkeys. Avian Dis. 1992;36:792–8.
Fagerberg DJ, George BA, Lance WR. Clostridial enteritis in turkeys. Proc 33rd Western Poultry Disease Conference. Davis: American Association of Avian Pathologists; 1984. p. 20–21.
Opengart K. Necrotic enteritis. In: Saif YM, editor-in-chief. Diseases of poultry, 12th ed. Ames: Blackwell Publishing Ltd; 2008. p. 872–79.
M'Sadeq SA, Wu S, Swick RA, Choct M. Towards the control of necrotic enteritis in broiler chickens with in-feed antibiotics phasing-out worldwide. Anim Nutr. 2015;1:1–11.
Onrust L, Van Driessche K, Ducatelle R, Schwarzer K, Haesebrouck F, Van Immerseel F. Valeric acid glyceride esters in feed promote broiler performance and reduce the incidence of necrotic enteritis. Poultry Sci. 2018;97:2303–11.
Venkateswara PR, Raman M, Gomathinayagam S. Sporulation dynamics of poultry Eimeria oocysts in Chennai. J Parasit Dis. 2015;39:689–92.
Abd El-Wahab A, Visscher CF, Wolken S, Reperant J-M, Beineke A, Beyerbach M, Kamphues J. Foot-pad dermatitis and experimentally induced coccidiosis in young turkeys fed a diet without anticoccidia. Poultry Sci. 2012;91:627–35.
Taylor MA, Coop RL, Wall RL. Chapter 4 Laboratory diagnosis of parasitism. In: Taylor MA, Coop RL, Wall RL, editors. Veterinary parasitology. 4th ed. Chichester: Blackwell Publishing Ltd; 2015.
Thanks to Dag Henning Edvardsen at Norgesfôr, Olof Waldemar Löwgren at Felleskjøpet Agri and Gorm Sanson at Felleskjøpet fôrutvikling for useful background information about turkey feed production.
This study was funded by the Norwegian Research Council grant no. 225177, the Norwegian Agriculture and Food Industry Research Fund, the Norwegian Meat and Poultry Research Centre, Kemin Europa N.V., Felleskjøpet feed development, Nortura SA, Baastad Kalkun AS and the Norwegian Veterinary Institute. The Norwegian Veterinary Institute was involved in the design, analysis and reporting of the study. Nortura SA was involved in reporting of the study.
Department of Food Safety and Animal Health, Norwegian Veterinary Institute, P.O.B. 750, Sentrum, 0106, Oslo, Norway
Magne Kaldhusdal & Inger Sofie Hamnes
Department of Production Animal Sciences, Faculty of Veterinary Medicine, Norwegian University of Life Sciences, P.O.B. 369, Sentrum, 0102, Oslo, Norway
Eystein Skjerve
Norwegian Meat and Poultry Research Centre Animalia, P.O.B. 396, Økern, 0513, Oslo, Norway
Magne Kjerulf Hansen
Nortura SA, P.O.B. 360, Økern, 0513, Oslo, Norway
Bruce David, Skjalg Arne Hanssen & Atle Løvland
Magne Kaldhusdal
Inger Sofie Hamnes
Bruce David
Skjalg Arne Hanssen
Atle Løvland
MK wrote the grant application, conceptualized the study, conducted post mortem examinations and sampled faeces for OPG analyses, participated in data curation, participated in statistical analyses, wrote the first draft of the manuscript, and participated in the revision of the manuscript. ES participated in the statistical analyses and the revision of the manuscript. MKH sampled turkeys and collected field data, conducted post mortem examinations and sampled faeces for OPG analyses, provided data on usage of antibiotics, and participated in the revision of the manuscript. ISH supervised and participated in the OPG analyses, and participated in the revision of the manuscript. BD sampled turkeys and collected field data, conducted post mortem examinations and sampled faeces for OPG analyses, and participated in the revision of the manuscript. SAH conducted post mortem examinations, collected field data and participated in the revision of the manuscript. AL contributed to the grant application, provided and curated field data, and participated in the revision of the manuscript. All authors have read and approved the final manuscript.
Correspondence to Magne Kaldhusdal.
No turkeys used in this study were part of an experiment or subjected to experimental conditions. Detection of necrotic enteritis in the 545 grow-outs of the major study population was based on post mortem examination of turkeys that had died in the barn. Turkeys from the study population comprising 39 grow-outs were transported to the laboratory and euthanized immediately before post mortem examination and sampling for faecal oocyst counts. Written consent was obtained from the animal owners.
Ethical approval for this study was not sought, because the undertaken practices are considered "non-experimental husbandry (agriculture or aquaculture)" and "procedures in normal/common breeding and husbandry" and do not require approval by the Norwegian ethics board according to the Norwegian regulation on animal experimentation, § 2, 5a, d. Euthanasia was carried out by a powerful blow to the head followed immediately by cervical dislocation. This procedure is in accordance with Annex IV of EU Directive 2010/63 ( https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32010L0063#d1e79-72-1). Euthanasia carried out in accordance with Annex IV is not subject to application. The study was carried out in compliance with the ARRIVE guidelines (https://arriveguidelines.org/).
Data supporting conclusions regarding factors associated with necrotic enteritis incidence [Data 545 grow-outs].
Data supporting conclusions regarding an association between faecal oocyst counts and necrotic enteritis incidence [Data OPG].
Kaldhusdal, M., Skjerve, E., Hansen, M.K. et al. The incidence of necrotic enteritis in turkeys is associated with farm, season and faecal Eimeria oocyst counts. BMC Vet Res 17, 292 (2021). https://doi.org/10.1186/s12917-021-03003-8
Accepted: 23 August 2021
OPG
Grow-out size | CommonCrawl |
You don't work out oxidation states by counting the numbers of electrons actually transferred; you assign them with a short set of rules. The original question asked for the oxidation number of: a) O in O2, b) Ag in AgNO3, c) Mn in MnO2, d) Zn in ZnSO4, e) Cl in ClO3−, f) C in CO3^2−, g) Mn in KMnO4, h) S in MgSO3, and i) S in Na2S2O3.

An oxidation number is defined as the charge an atom would carry if the molecule or polyatomic ion were completely ionic. When calculating the oxidation number of an element in a compound, treat all the elements present as if they are present as ions, even if they are clearly part of a covalent molecule. Any atom by itself is neutral, so an element in its uncombined state (O in O2, Zn metal) has an oxidation number of 0. The oxidation numbers of all the atoms in a neutral compound must add up to zero, and the oxidation numbers of all the atoms in a polyatomic ion must add up to the net charge on that ion. In a compound or simple ion, group 1 metals are always +1 and group 2 metals are always +2. The oxidation number of O is usually −2, unless it is part of a peroxide (where it is −1) or bonded to fluorine (where it is positive): in BaO2 the peroxide anion contributes 2 × (−1) = −2, and because BaO2 is a neutral compound the barium must be +2 so that the sum of all oxidation numbers is 0. Accordingly, the oxidation number of O in Li2O and KNO3 is −2, the oxidation number of Cl in NaCl is −1, in CN− the nitrogen is −3 (and the carbon +2), in KClO the chlorine is +1, and sulfur is −2 in sulfides such as ZnS and CS2.

To find the oxidation number of sulfur in SO2, write the oxidation numbers as S = x and O2 = 2 × (−2) = −4; the molecule as a whole is neutral, so x = +4. In addition to the −2 state, sulfur exhibits the +2, +4 and +6 oxidation states, of which +4 and +6 are the most common. In the roasting reaction 2 ZnS + 3 O2 → 2 ZnO + 2 SO2, zinc stays at +2 throughout, the sulfide sulfur (−2) is oxidised to +4 in SO2, and elemental oxygen (0) is reduced to −2. There are four main signs that an element is being oxidised: 1) loss of electrons, 2) an increase in its oxidation number, 3) gain of oxygen, and 4) loss of hydrogen; reduction is the reverse, so reduction decreases the oxidation number (valency). The positive oxidation state counts the total number of electrons that have had to be removed, starting from the element.
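As a worked illustration of these rules, taking items d) and e) from the list above:

$$\text{ZnSO}_4:\; (+2) + x_{\mathrm{S}} + 4 \times (-2) = 0 \;\Rightarrow\; x_{\mathrm{S}} = +6 \qquad \text{ClO}_3^-:\; x_{\mathrm{Cl}} + 3 \times (-2) = -1 \;\Rightarrow\; x_{\mathrm{Cl}} = +5$$

Zinc forms Zn2+ in its compounds, so it contributes +2; the four oxygens contribute −8, leaving sulfur at +6 in the neutral formula unit, and in the chlorate ion chlorine must be +5 for the oxidation numbers to sum to the 1− charge.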
ZnSO4 oxidation number
What is the oxidation state of zinc in ZnSO4? In ZnSO4(aq) the zinc is present as the Zn2+ ion, so its oxidation number is +2. The sulfate ion carries a 2− charge, and since each of its four oxygens is −2, the sulfur must be +6. A common point of confusion is to add all four oxygens (−8) against the metal and conclude that Zn (or Cu in CuSO4) "should be +10"; that is not how the bookkeeping works, because the oxidation numbers within the sulfate ion must sum to its 2− charge and the whole neutral formula unit must sum to zero. The same reasoning explains why the proper name for CuSO4 is copper(II) sulfate: the sulfate ion has a 2− charge, so to make an electrically neutral compound the copper must be present as a 2+ ion and hence has a +2 oxidation number, and in MnSO4 the manganese is likewise +2. Note that an oxidation state is not simply the charge on a whole polyatomic ion; it is the hypothetical charge an atom would have if all bonds to atoms of different elements were 100 % ionic, with no covalent component (McNaught and Wilkinson, 1997).

These assignments make it easy to follow redox reactions involving zinc. In the displacement reaction Zn(s) + CuSO4(aq) → ZnSO4(aq) + Cu(s), zinc goes from 0 to +2 (it is oxidised and acts as the reducing agent) while Cu2+ goes from +2 to 0 (it is reduced, so CuSO4 is the oxidising agent); the sulfate is just a spectator ion and does not participate in the electron transfer. Remember the mnemonic "OIL RIG": oxidation is loss of electrons, reduction is gain. Similarly, when zinc reacts with dilute sulfuric acid, Zn + H2SO4 → ZnSO4 + H2, zinc is oxidised from 0 to +2 and H+ (+1) is reduced to H2 (0), so zinc is again the reducing agent. For the hypothetical displacement Zn(s) + MgSO4(aq) → ZnSO4(aq) + Mg(s), tabulated heats of formation give ΔH = [ΔHf(ZnSO4(aq)) + ΔHf(Mg(s))] − [ΔHf(Zn(s)) + ΔHf(MgSO4(aq))] = [(−1063.17) + 0] − [0 + (−1376.12)] = +312.95 kJ, i.e. the reaction is endothermic, consistent with magnesium being the more reactive metal.

As a compound, zinc sulfate (zinc(II) sulfate) is an inorganic salt whose most common form includes water of crystallization as the heptahydrate, ZnSO4·7H2O. It is used as a dietary supplement to treat zinc deficiency and to prevent the condition in those at high risk; side effects of excess supplementation may include abdominal pain, vomiting, headache, and tiredness.
Acta Biotheoretica
June 2016, Volume 64, Issue 2, pp 197–217
Generalizing Contextual Analysis
Regular Article
Okasha, in Evolution and the Levels of Selection, convincingly argues that two rival statistical decompositions of covariance, namely contextual analysis and the neighbour approach, are better causal decompositions than the hierarchical Price approach. However, he claims that this result cannot be generalized in the special case of soft selection and argues that the Price approach represents in this case a better option. He provides several arguments to substantiate this claim. In this paper, I demonstrate that these arguments are flawed and argue that neither the Price equation nor the contextual and neighbour partitionings sensu Okasha are adequate causal decompositions in cases of soft selection. The Price partitioning is generally unable to detect cross-level by-products and this naturally also applies to soft selection. Both contextual and neighbour partitionings violate the fundamental principle of determinism that the same cause always produces the same effect. I argue that a fourth partitioning widely used in the contemporary social sciences, under the generic term of 'hierarchical linear model' and related to contextual analysis understood broadly, addresses the shortcomings of the three other partitionings and thus represents a better causal decomposition. I then defend this model against the argument that because it predicts that there is some organismal selection in some specific cases of segregation distortion then it should be rejected. I show that cases of segregation distortion that intuitively seem to contradict the conclusion drawn from the hierarchical linear model are in fact cases of multilevel selection 2 while the assessment of the different partitionings are restricted to multilevel selection 1.
Soft selection Price equation Multilevel analysis MLS1 MLS2
I am thankful to Andy Gardner, Charles Goodnight, Samir Okasha and three anonymous reviewer for comments on earlier versions of the manuscript and Peter Godfrey-Smith for his advice on this topic. I am also grateful to Paul Griffiths for his support over the years. This research was supported under Australian Research Council's Discovery Projects funding scheme (Projects DP150102875).
Okasha (2005, 718–719) provides a definition of \(\beta_{4}\) in terms of collective characters and particle characters by demonstrating that there is the following simple relation between \(\beta_{2}\) and \(\beta_{4}\). Given that neighbourhood character is defined as \(X_{jk} = \frac{nZ_{k} - z_{jk}}{n - 1}\), we can deduce that
$$\beta_{4} = \frac{n - 1}{n}\beta_{2}$$
where \(n\) is the number of particles in a collective.
Although Okasha does not provide a demonstration of it, it is also useful to express \(\beta_{3}\) in terms of collective and particle characters. In fact, this allows us to highlight the difference between the direct effect of particle character on particle fitness, controlling for collective character, and the direct effect of particle character on particle fitness, controlling for neighbourhood character. It also highlights the straightforward mathematical link between the contextual and neighbour partitionings. This can be done as follows.
$$w_{jk} = \beta_{3} z_{jk} + \beta_{4} X_{k} = \beta_{3} z_{jk} + \beta_{2} \frac{{nZ_{k} - z_{jk} }}{n }$$
This expression can be rearranged as follows:
$$w_{jk} = \left( {\beta_{3} - \frac{{\beta_{2} }}{n }} \right)z_{jk} + \beta_{2} Z_{k}$$
Because both the contextual and Okasha's version of the neighbourhood regression models are models for particle fitness, we know that:
$$w_{jk} = \beta_{1} z_{jk} + \beta_{2} Z_{k} = \beta_{3} z_{jk} + \beta_{4} X_{k}$$
And thus it follows that:
$$\beta_{1} z_{jk} + \beta_{2} Z_{k} = \left( {\beta_{3} - \frac{{\beta_{2} }}{n }} \right)z_{jk} + \beta_{2} Z_{k}$$
This implies that:
$$\beta_{1} = \beta_{3} - \frac{{\beta_{2} }}{n }$$
and therefore that:
$$\beta_{3} = \beta_{1} + \frac{{\beta_{2} }}{n }$$
Recall that Okasha defines \(\beta_{3}\) as the partial regression coefficient of fitness on particle character, controlling for neighbourhood character. We can now express it verbally in terms of particle and collective characters. Following the definitions of \(\beta_{1}\), \(\beta_{2}\) and \(n\) provided in the main text, \(\beta_{3}\) is the partial regression coefficient of particle fitness on particle character (controlling for collective character) plus the partial regression coefficient of particle fitness on collective character (controlling for particle character) divided by \(n\), the number of particles in the collective; that is, \(\beta_{3} = \beta_{1} + \beta_{2}/n\).
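Because the neighbour regressors (z, X) are an invertible linear transformation of the contextual regressors (z, Z), both regressions have identical fitted values, so these identities hold exactly for least-squares estimates and not only in the error-free case. The short numerical sketch below (Python, toy data rather than any empirical data set) verifies that \(\beta_{3} = \beta_{1} + \beta_{2}/n\) and \(\beta_{4} = \frac{n - 1}{n}\beta_{2}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                 # particles per collective
n_collectives = 200

# Particle characters z, grouped into collectives of size n.
z = rng.normal(size=(n_collectives, n))
Z = z.mean(axis=1, keepdims=True)        # collective character (mean of particles)
X = (n * Z - z) / (n - 1)                # neighbourhood character (mean of the others)
w = 0.7 * z + 1.3 * Z + rng.normal(scale=0.1, size=z.shape)   # particle fitness

def partial_coefficients(w, a, b):
    """Least-squares coefficients of w on regressors a and b (with an intercept)."""
    design = np.column_stack([np.ones(a.size), a.ravel(), b.ravel()])
    coefs, *_ = np.linalg.lstsq(design, w.ravel(), rcond=None)
    return coefs[1], coefs[2]

beta1, beta2 = partial_coefficients(w, z, np.broadcast_to(Z, z.shape))  # contextual
beta3, beta4 = partial_coefficients(w, z, X)                            # neighbour

print(np.isclose(beta3, beta1 + beta2 / n))          # True
print(np.isclose(beta4, (n - 1) / n * beta2))        # True
```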
Bourrat P (2015a) Levels, time and fitness in evolutionary transitions in individuality. Philosophy & Theory in Biology 7. doi: 10.3998/ptb.6959004.0007.001
Bourrat P (2015b) Levels of selection are artefacts of different fitness temporal measures. Ratio 28(1):40–50. doi: 10.1111/rati.12053
Boyd LH, Iversen GR (1979) Contextual analysis: concepts and statistical techniques. Wadsworth Publishing Company, Belmont
Damuth J, Heisler IL (1988) Alternative formulations of multilevel selection. Biol Philos 3(4):407–430
De Leeuw J, Meijer E (2008) Introduction to multilevel analysis. In: De Leeuw J, Meijer E (eds) Handbook of multilevel analysis. Springer, New York, pp 1–75
Falk R, Sarkar S (1992) Harmony from discord. Biol Philos 7(4):463–472
Frank SA (1998) Foundations of social evolution. Princeton University Press, Princeton
Frank SA (2012) Natural selection. IV. The Price equation. J Evol Biol 25(6):1002–1019
Gardner A (2015a) The genetical theory of multilevel selection. J Evol Biol 28(2):305–319
Gardner A (2015b) More on the genetical theory of multilevel selection. J Evol Biol 28(9):1747–1751
Goldstein H (2011) Multilevel statistical models, 4th edn. Wiley, Chichester
Goodnight CJ (2015) Multilevel selection theory and evidence: a critique of Gardner, 2015. J Evol Biol 28(9):1734–1746
Goodnight CJ, Schwartz JM, Stevens L (1992) Contextual analysis of models of group selection, soft selection, hard selection and the evolution of altruism. Am Nat 140(5):743–761
Heisler L, Damuth J (1987) A method for analyzing selection in hierarchically structured populations. Am Nat 130(4):582–602
Hox JJ (2010) Multilevel analysis: techniques and applications. Routledge, New York
Lande R, Arnold SJ (1983) The measurement of selection on correlated characters. Evolution 37(6):1210–1226
Lewontin RC (1970) The units of selection. Annu Rev Ecol Syst 1(1):1–18
Mitchell-Olds T, Shaw RG (1987) Regression analysis of natural selection: statistical inference and biological interpretation. Evolution 41(6):1149–1161
Nunney L (1985) Group selection, altruism, and structured-deme models. Am Nat 126(2):212–230
Okasha S (2004) Multilevel selection and the partitioning of covariance: a comparison of three approaches. Evolution 58(3):486–494
Okasha S (2005) Altruism, group selection and correlated interaction. Br J Philos Sci 56(4):703–725
Okasha S (2006) Evolution and the levels of selection. Oxford University Press, New York
Price GR (1970) Selection and covariance. Nature 227:520–521
Price GR (1972) Extension of covariance selection mathematics. Ann Hum Genet 35(4):485–490
Robertson A (1966) A mathematical model of the culling process in dairy cattle. Anim Prod 8:95–108
Sarkar S (1994) The selection of alleles and the additivity of variance. In: PSA: proceedings of the biennial meeting of the Philosophy of Science Association, pp 3–12
Sarkar S (1998) Genetics and reductionism. Cambridge University Press, Cambridge
Snijders TAB, Bosker RJ (1999) Multilevel analysis: an introduction to basic and advanced multilevel modeling. Sage, London
Sober E, Wilson DS (2011) Adaptation and natural selection revisited. J Evol Biol 24(2):462–468
Stevens L, Goodnight CJ, Kalisz S (1995) Multilevel selection in natural populations of Impatiens capensis. Am Nat 145(4):513–526
Wade MJ (1985) Soft selection, hard selection, kin selection, and group selection. Am Nat 125(1):61–73
Wallace B (1975) Hard and soft selection revisited. Evolution 29(3):465–473
Williams GC (1966) Adaptation and natural selection: a critique of some current evolutionary thought. Princeton University Press, Princeton
1. Department of Philosophy, Unit for the History and Philosophy of Science, Charles Perkins Centre, The University of Sydney, Sydney, Australia
Bourrat, P. Acta Biotheor (2016) 64: 197. https://doi.org/10.1007/s10441-016-9280-5
Received 01 October 2015
First Online 26 May 2016 | CommonCrawl |
Research note
Thermostability and excision activity of polymorphic forms of hOGG1
Kathryn D. Mouzakis (ORCID: orcid.org/0000-0001-6355-8851),
Tiffany Wu &
Karl A. Haushalter (ORCID: orcid.org/0000-0001-7778-063X)
BMC Research Notes volume 12, Article number: 92 (2019)
Reactive oxygen species (ROS) oxidize guanine residues in DNA to form 7,8-dihydro-oxo-2′-deoxyguanosine (8oxoG) lesions in the genome. Human 8-oxoguanine glycosylase-1 (hOGG1) recognizes and excises this highly mutagenic species when it is base-paired opposite a cytosine. We sought to characterize biochemically several hOGG1 variants that have been found in cancer tissues and cell lines, reasoning that if these variants have reduced repair capabilities, they could lead to an increased chance of mutagenesis and carcinogenesis.
We have over-expressed and purified the R46Q, A85S, R154H, and S232T hOGG1 variants and have investigated their repair efficiency and thermostability. The hOGG1 variants showed only minor perturbations in the kinetics of 8oxoG excision relative to wild-type hOGG1. Thermal denaturation monitored by circular dichroism revealed that R46Q hOGG1 had a significantly lower Tm (36.6 °C) compared to the other hOGG1 variants (40.9 °C to 43.2 °C). Prolonged pre-incubation at 37 °C prior to the glycosylase assay dramatically reduces the excision activity of R46Q hOGG1, has a modest effect on wild-type hOGG1, and a negligible effect on A85S, R154H, and S232T hOGG1. The observed thermolability of hOGG1 variants was mostly alleviated by co-incubation with stoichiometric amounts of competitor DNA.
Reactive oxygen species oxidize the DNA base guanine, forming mutagenic 7,8-dihydro-8-oxo-2′-deoxyguanosine (8oxoG) [1, 2]. Mispairing of 8oxoG with adenine during DNA replication results in G-to-T mutations. In humans, 8oxoG is targeted by the base excision repair pathway [3, 4], which is initiated when 8-oxoguanine DNA glycosylase-1 (hOGG1) catalyzes the hydrolysis of the N-glycosidic bond linking 8oxoG and deoxyribose in DNA [5,6,7,8,9,10].
Given the role of hOGG1 in preventing mutagenesis, a connection between deficiencies in hOGG1 activity and cancer seems plausible, but when the evidence for such a connection is examined, the data presents a mixed picture (reviewed in [9, 11,12,13,14,15]). Therefore, additional functional information about the naturally occurring hOGG1 variants would be beneficial for this analysis.
In this study, we investigated the repair efficiency and protein stability of four variants of hOGG1: R46Q, A85S, R154H, and S232T. The R46Q hOGG1 variant was first discovered in a human lung cancer cell line [16] and has reduced repair activity compared to wild-type hOGG1 [17, 18]. The R154H hOGG1 variant arises from somatic mutation and was first identified in a gastric cancer cell line [19]. In addition to having a lower activity with its native substrate (8oxoG base-paired with cytosine), R154H hOGG1 also displays decreased specificity for the base opposite 8oxoG [17, 20]. Molecular dynamics simulations show that both the R46Q hOGG1 variant and the R154H hOGG1 variant feature a reorganized and slightly wider active site compared to wild-type hOGG1 [21]. Less is known about the two final variants studied here: A85S hOGG1, first identified in a lung cancer patient [22]; and S232T hOGG1, first identified in a human kidney tumor [22]. Both of these variants were shown to be capable of excising 8oxoG [18], but this does not exclude the possibility of a more subtle defect in repair kinetics or stability. All four hOGG1 variants in this study were overexpressed in bacteria, purified, and then studied biochemically.
Generating hOGG1 proteins
For detailed experimental methods, see Additional file 1. Briefly, the full-length wild-type α-hOGG1 coding sequence was subcloned into pET-28a (Novagen, Madison, WI) to synthesize a hOGG1/pET-28a construct that produces a fusion protein with an N-terminal hexahistidine tag. Site-directed mutagenesis using the QuikChange Site-Directed Mutagenesis Kit (Stratagene, La Jolla, CA) generated the over-expression plasmids for the hOGG1 variants R46Q, A85S, R154H, and S232T. Bacterial cells transformed with the appropriate plasmid and induced to express protein were harvested by centrifugation and lysed. The resulting protein extract was subjected to a two-column purification protocol to yield purified hOGG1 protein (for gel analysis of purified proteins, see Additional file 2). The remaining N-terminal hexahistidine tag has been shown to have a negligible effect on the DNA glycosylase activity of hOGG1 [23].
Preparation of DNA substrates
Oligonucleotides were purchased from Operon Biotechnologies (Huntsville, AL). For fluorescently labeled substrates, the Cy5 label was incorporated during DNA synthesis; for radiolabeled substrates, the 5′ end of the 8oxoG-containing strand was radiolabeled with γ-32P-ATP using T4 polynucleotide kinase. Duplexes were formed by annealing to the complementary strand. Prior to use, radiolabeled DNA substrates were mixed in a 1/10 ratio with identical, unlabeled DNA duplexes. The sequences of the DNA substrates are listed below:
8oxoG/C
5'-ATCAGTGAG[8oxoG]CAGTCATCAG-3'
3'-TAGTCACTC C GTCAGTAGTC-5'
Cy5-8oxoG/C
5'-[Cy5]ATCAGTGAG[8oxoG]CAGTCATCAG-3'
3'-TAGTCACTC C GTCAGTAGTC-5'
G/C
5'-ATCAGTGAGGCAGTCATCAG-3'
3'-TAGTCACTCCGTCAGTAGTC-5'
DNA glycosylase excision assays
The DNA glycosylase excision assay was adapted from the single- and multiple turnover assays described by David et al. [23] (for more experimental details, see Additional file 1). Briefly, the radiolabeled 8oxoG/C substrate was incubated with hOGG1 protein at 37 °C and, at specified time points, aliquots were removed and quenched. Reaction products were resolved by denaturing polyacrylamide electrophoresis. All time courses were replicated at least three times. For the thermolability studies, the standard hOGG1 cleavage assay was modified by pre-incubating hOGG1 for 90 min at either 4 °C or 37 °C prior to the reaction. In a variation of this experiment, hOGG1 was co-incubated with undamaged DNA (G/C DNA duplex above) during this 90-min step. The products were analyzed as described above for the standard assay, except that the DNA substrate employed was the Cy5-8oxoG/C duplex.
CD spectroscopy
Circular dichroism (CD) spectra of wild-type hOGG1 and hOGG1 variants were obtained with a Jasco-715 spectropolarimeter. Data were recorded as the temperature was increased from 10.0 °C to 95.0 °C at a rate of 1 °C min−1. All denaturations were performed in triplicate. The Jasco-715 software was used to smooth the data and calculate the melting points based on the change in the molar ellipticity [θ] (degree cm2 dmol−1) at 222 nm with rising temperature.
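For illustration, the following is a minimal sketch (in Python, not the Jasco analysis routine used here) of how a melting temperature can be extracted from a CD melt curve as the temperature of the steepest change in [θ]222; the synthetic curve and its Tm are invented for the example.

```python
# A minimal sketch: estimate Tm as the temperature where the first derivative of the
# molar ellipticity at 222 nm is steepest. The synthetic curve below is illustrative only.
import numpy as np

def melting_temperature(temps_c, theta_222):
    """Tm estimated as the temperature where |d[theta]222/dT| is largest."""
    dtheta_dT = np.gradient(theta_222, temps_c)      # first derivative of the melt curve
    return temps_c[np.argmax(np.abs(dtheta_dT))]

# synthetic two-state unfolding curve with an apparent Tm near 41.8 deg C
temps = np.arange(10.0, 95.0, 0.5)
theta = -20000.0 + 18000.0 / (1.0 + np.exp(-(temps - 41.8) / 2.0))
print(melting_temperature(temps, theta))             # prints a value close to 41.8
```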
Kinetic analysis of glycosylase activity
To measure activity of the hOGG1 variants, we utilized a standard DNA glycosylase activity assay in which labeled, double-stranded DNA containing a single 8oxoG opposite cytosine was treated with DNA glycosylase for varying amounts of time and then quenched. A typical time course of product formation from the enzyme reaction, as resolved by gel electrophoresis, is shown in Fig. 1. For each hOGG1 variant, the amount of product formed at each time point was quantified and averaged over a minimum of three replicates, as plotted in Additional file 3. The time course of product (P) formation was fit to Eq. 1, in which a rapid burst phase with amplitude A0 and rate constant k1 is followed by a slower, linear phase with rate constant k2 [24].
Representative time course of hOGG1 activity. One strand of this 20 bp duplex contains 8oxoG at the 10th position from the radiolabeled 5′ end. The substrate was incubated with hOGG1, in this case wild-type, for the time indicated, quenched with hot alkali, and then analyzed by denaturing polyacrylamide gel electrophoresis. The 20-mer band corresponds to non-cleaved DNA, while the 9-mer band corresponds to DNA processed by hOGG1
$$ [P] = A_{0} \left( 1 - e^{-k_{1} t} \right) + k_{2} t $$
The introduced mutations in hOGG1 had a modest effect on the kinetics of the burst phase of the reaction, as judged by the values for the rate constant k1 (Table 1).
Table 1 Summary of DNA glycosylase activity rate constants for hOGG1 variants
Compared to wild-type hOGG1 (k1 = 1.4 ± 0.1 min−1), the most severely affected variants for the burst phase were R154H hOGG1 (k1 = 0.23 ± 0.03 min−1) and S232T hOGG1 (k1 = 0.8 ± 0.1 min−1). The slower linear phase of the reaction was slightly faster for most hOGG1 variants relative to wild-type as judged by the rate constant k2, which may reflect reduced affinity for the product of the reaction.
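As an illustration of the fitting step, the sketch below fits Eq. 1 to a hypothetical time course with SciPy; the time points, product concentrations and starting guesses are assumptions for the example, not the measured data.

```python
# A minimal sketch, assuming arrays of time points (min) and product concentrations (nM),
# of fitting Eq. 1; all numerical values are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def burst_model(t, a0, k1, k2):
    # [P] = A0 * (1 - exp(-k1 * t)) + k2 * t
    return a0 * (1.0 - np.exp(-k1 * t)) + k2 * t

t = np.array([0.5, 1, 2, 4, 8, 15, 30, 60], dtype=float)        # minutes (hypothetical)
p = np.array([5.9, 9.6, 13.1, 14.8, 15.8, 16.7, 18.1, 21.1])    # nM product (hypothetical)

(a0, k1, k2), _ = curve_fit(burst_model, t, p, p0=[15.0, 1.0, 0.1])
print(f"A0 = {a0:.1f} nM, k1 = {k1:.2f} min^-1, k2 = {k2:.3f} nM min^-1")
```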
Thermostability
To see how the mutations in hOGG1 affect protein folding, thermal denaturation experiments monitored by CD spectroscopy were performed for each hOGG1 variant (for unfolding curves see Additional file 4). The resulting data were analyzed to determine the melting temperature (Tm). The R46Q substitution significantly destabilizes hOGG1 relative to wild-type (Tm = 36.6 ± 0.5 °C compared to Tm = 41.8 ± 0.3 °C). In contrast, the A85S and S232T hOGG1 variants (both Tm = 42.2 ± 0.1 °C) have a thermostability similar to that of wild-type hOGG1. Finally, the R154H hOGG1 variant is slightly stabilized (Tm = 43.2 ± 0.3 °C).
To investigate how the observed differences in hOGG1 Tm affect excision activity, a thermolability study was undertaken. Prior to the excision assay, each hOGG1 variant was pre-incubated at either 4 °C or 37 °C for 90 min. The hOGG1 activity was then assessed with the glycosylase activity assay described above. Figure 2 shows a representative gel image (top panel) and the quantified results for each variant (bottom panels). As expected from the thermal denaturation results, the R46Q hOGG1 excision activity was reduced to near background levels following an extended pre-incubation at 37 °C (Fig. 2—compare red open circles and blue open squares). The other hOGG1 variants showed milder reductions or no reduction in excision activity after the 37 °C pre-incubation.
Thermolability of excision activity for hOGG1 variants. The hOGG1 variants' enzyme activities were compared with or without thermal challenge. The top panel shows a representative gel image, in this case for the R46Q hOGG1 variant in the presence of non-specific DNA. In the lower panels, product formation, as measured from the glycosylase activity assay, is plotted as a function of time for each hOGG1 variant. Prior to the excision assay, hOGG1 variants were pre-incubated at either 37 °C (circles) or 4 °C (squares). The pre-incubation was carried out in the absence (open markers) or the presence (filled markers) of stoichiometric undamaged DNA. For each hOGG1 variant the glycosylase assay was replicated a minimum of three times at each condition. Error bars represent the standard deviation
For the MutY DNA glycosylase, incubation with undamaged DNA was previously observed to be protective for enzyme activity [24]. To see if this effect similarly impacts the hOGG1 variants, the thermolability assays were repeated in the presence of stoichiometric competitor DNA lacking 8oxoG (Fig. 2—green filled circles and black filled squares). Significantly, for all variants, regardless of Tm, the differences in activity following the 4 °C and 37 °C pre-incubations were nearly abolished by the addition of undamaged DNA. For the thermolabile R46Q hOGG1 variant, the undamaged DNA provided almost complete protection from thermal denaturation during the 37 °C pre-incubation.
Translating knowledge of variations in DNA repair genes into useful information about cancer susceptibility is a complex problem [25]. In one mathematical model for base excision repair, the steady state prevalence of mutagenic lesions is insensitive to mild variations in the catalytic activity of the DNA glycosylase [26]. According to this model, a 50% reduction in the turnover number of hOGG1 is predicted to lead to only a 3% rise in the steady-state level of DNA damage [26]. Using this model to aid interpretation of our kinetic results, we predict that three of the mutations in hOGG1 studied here (R46Q, A85S, S232T) are unlikely to yield significantly elevated mutagenesis rates due to slower repair of 8oxoG. The hOGG1 variant that could yield significantly elevated rates of mutagenesis is the R154H variant, which retains only ~ 16% of wild-type activity. Furthermore, R154H hOGG1 has been shown previously to have relaxed specificity for the base opposite 8oxoG, which further drives mutagenesis [17, 20].
Additionally, this report shows that the R46Q hOGG1 variant is thermolabile by both CD thermal denaturation and an activity assay. The high-resolution structure of hOGG1 bound to DNA reveals that R46 serves as a stabilizing scaffold to connect three secondary structure elements: αE, βG, and the loop between αA and βB (see Additional file 5 and Ref. [20]). Introducing the R46Q mutation would most likely disrupt the hydrogen bonds that stabilize the secondary structure junction in this region of the protein. The sensitivity of R46Q hOGG1 to thermal denaturation is likely the reason that this variant has been previously reported to have reduced activity [17, 18]. On the other hand, we observed a dramatic reduction in thermolability upon co-incubation with competitor DNA lacking 8oxoG. For hOGG1 variants destabilized by mutations in structurally important residues, such as R46, the added stability gained upon DNA binding is apparently sufficient to stabilize the folded and active conformation of the enzyme [27, 28].
Both R46 and R154 are completely conserved in OGG1 sequences from divergent species (for a multiple sequence alignment see Additional file 6), which is consistent with the detrimental effects of introducing mutations at these positions (this study and References [17, 18, 20]). In contrast, A85 and S232 are not strongly conserved, and it is not surprising that proteins with mutations at these positions show, respectively, no significant difference and only minor differences in activity.
In this study, the R46Q, A85S, R154H, and S232T hOGG1 variants were characterized biochemically in comparison to wild-type hOGG1. The kinetics of 8oxoG excision by the hOGG1 variants were only mildly changed, with R154H hOGG1 having the greatest loss of activity. In addition, one of the variants, R46Q, showed increased thermolability. Binding to undamaged DNA was protective for all hOGG1 variants, including the thermolabile R46Q. Considering these results, carrying one of these variants of hOGG1 is probably not sufficient by itself to significantly increase the risk of carcinogenesis.
The experiments performed here were conducted in vitro with purified proteins and small oligonucleotide substrates, in contrast to the more complex environment of a living cell. The hOGG1 protein is one component of an interdependent process and variant forms of hOGG1 could potentially increase the likelihood of carcinogenesis when combined with other genetic and environmental risk factors.
8oxoG:
7,8-dihydro-8-oxo-2′-deoxyguanosine
hOGG1:
human 8-oxoguanine glycosylase 1
Lindahl T. Instability and decay of the primary structure of DNA. Nature. 1993;362:709–15.
Dizdaroglu M. Oxidatively induced DNA damage: mechanisms, repair and disease. Cancer Lett. 2012;327:26–47.
Schermerhorn KM, Delaney S. A chemical and kinetic perspective on base excision repair of DNA. Acc Chem Res. 2014;47:1238–46.
David SS, O'Shea VL, Kundu S. Base-excision repair of oxidative DNA damage. Nature. 2007;447:941–50.
Lu R, Nash HM, Verdine GL. A mammalian DNA repair enzyme that excises oxidatively damaged guanines maps to a locus frequently lost in lung cancer. Curr Biol. 1997;7:397–407.
Klungland A, Bjelland S. Oxidative damage to purines in DNA: role of mammalian Ogg1. DNA Repair (Amst). 2007;6:481–8.
Radicella JP, Dherin C, Desmaze C, Fox MS, Boiteux S. Cloning and characterization of hOGG1, a human homolog of the OGG1 gene of Saccharomyces cerevisiae. Proc Natl Acad Sci USA. 1997;94:8010–5.
Roldán-Arjona T, Wei YF, Carter KC, Klungland A, Anselmino C, Wang RP, et al. Molecular cloning and functional expression of a human cDNA encoding the antimutator enzyme 8-hydroxyguanine-DNA glycosylase. Proc Natl Acad Sci USA. 1997;94:8016–20.
Boiteux S, Coste F, Castaing B. Repair of 8-oxo-7,8-dihydroguanine in prokaryotic and eukaryotic cells: properties and biological roles of the Fpg and OGG1 DNA N-glycosylases. Free Radical Biol Med. 2017;107:179–201.
Dizdaroglu M, Coskun E, Jaruga P. Repair of oxidatively induced DNA damage by DNA glycosylases: mechanisms of action, substrate specificities and excision kinetics. Mutat Res Rev Mutat Res. 2017;771:99–127.
Nemec AA, Wallace SS, Sweasy JB. Variant base excision repair proteins: contributors to genomic instability. Semin Cancer Biol. 2010;20:320–8.
Wallace SS, Murphy DL, Sweasy JB. Base excision repair and cancer. Cancer Lett. 2012;327:73–89.
Wilson DM, Kim D, Berquist BR, Sigurdson AJ. Variation in base excision repair capacity. Mutat Res. 2011;711:100–12.
Weiss JM, Goode EL, Ladiges WC, Ulrich CM. Polymorphic variation in hOGG1 and risk of cancer: a review of the functional and epidemiologic literature. Mol Carcinog. 2005;42:127–41.
D'Errico M, Parlanti E, Pascucci B, Fortini P, Baccarini S, Simonelli V, et al. Single nucleotide polymorphisms in DNA glycosylases: from function to disease. Free Radical Biol Med. 2017;107:278–91.
Kohno T, Shinmura K, Tosaka M, Tani M, Kim SR, Sugimura H, et al. Genetic polymorphisms and alternative splicing of the hOGG1 gene, that is involved in the repair of 8-hydroxyguanine in damaged DNA. Oncogene. 1998;16:3219–25.
Audebert M, Radicella JP, Dizdaroglu M. Effect of single mutations in the OGG1 gene found in human tumors on the substrate specificity of the Ogg1 protein. Nucleic Acids Res. 2000;28:2672–8.
Audebert M, Chevillard S, Levalois C, Gyapay G, Vieillefond A, Klijanienko J, et al. Alterations of the DNA repair gene OGG1 in human clear cell carcinomas of the kidney. Cancer Res. 2000;60:4740–4.
Shinmura K, Kohno T, Kasai H, Koda K, Sugimura H, Yokota J. Infrequent mutations of the hOGG1 gene, that is involved in the excision of 8-hydroxyguanine in damaged DNA, in human gastric cancer. Jpn J Cancer Res. 1998;89:825–8.
Bruner SD, Norman DP, Verdine GL. Structural basis for recognition and repair of the endogenous mutagen 8-oxoguanine in DNA. Nature. 2000;403:859–66.
Anderson PC, Daggett V. The R46Q, R131Q and R154H polymorphs of human DNA glycosylase/β-lyase hOgg1 severely distort the active site and DNA recognition site but do not cause unfolding. J Am Chem Soc. 2009;131:9506–15.
Chevillard S, Radicella JP, Levalois C, Lebeau J, Poupon MF, Oudard S, et al. Mutations in OGG1, a gene involved in the repair of oxidative DNA damage, are found in human lung and kidney tumours. Oncogene. 1998;16:3083–6.
Leipold MD, Workman H, Muller JG, Burrows CJ, David SS. Recognition and removal of oxidized guanines in duplex DNA by the base excision repair enzymes hOGG1, yOGG1, and yOGG2. Biochemistry. 2003;42:11373–81.
Porello SL, Leyes AE, David SS. Single-turnover and pre-steady-state kinetics of the reaction of the adenine glycosylase MutY with mismatch-containing DNA substrates. Biochemistry. 1998;37:14756–64.
Mohrenweiser HW, Wilson DM, Jones IM. Challenges and complexities in estimating both the functional impact and the disease risk associated with the extensive genetic variation in human DNA repair genes. Mutat Res. 2003;526:93–125.
Sokhansanj BA, Wilson DM. Estimating the effect of human base excision repair protein variants on the repair of oxidative DNA base damage. Cancer Epidemiol Biomarkers Prev. 2006;15:1000–8.
Spolar RS, Record MT. Coupling of local folding to site-specific binding of proteins to DNA. Science. 1994;263:777–84.
Bjørås M, Seeberg E, Luna L, Pearl LH, Barrett TE. Reciprocal "flipping" underlies substrate recognition and catalytic activation by the human 8-oxo-guanine DNA glycosylase. J Mol Biol. 2002;317:171–7.
Larkin MA, Blackshields G, Brown NP, Chenna R, McGettigan PA, McWilliam H, et al. Clustal W and Clustal X version 2.0. Bioinformatics. 2007;23:2947–8.
KDM led the study, participated in site-directed mutagenesis and protein purification, performed the complete biochemical characterization of the hOGG1 variants including data analysis, and drafted the manuscript. TW assisted with the cloning, protein purification, and assay development. KAH assisted with the kinetic assays, performed the sequence analysis, and helped draft the manuscript. All authors read and approved the final manuscript.
We thank F. Y. Chang, M. Hoss, S. Hummel, J. Fukuto, K. Loh, J. Moretti, Y. Pu, E. Sokol, M. C. Gruenig, and D. A. Vosburg for helpful discussions. We would like to thank E. J. Crane and T. Negritto for help with instrumentation. The assistance of E. J. Kennedy and G. L. Verdine with the CD measurements is gratefully acknowledged.
All data generated and analyzed during this study are included in this published article and its additional files.
KDM was supported by the Merck/AAAS Undergraduate Science Research Program. TW was supported by NSF REU Grant CHE-0353662. KAH was supported by the Camille and Henry Dreyfus Foundation (Faculty Start-up Award SU-03-060), Research Corporation (Award CC5975), and the National Science Foundation (MCB-0543598).
Kathryn D. Mouzakis
Present address: Department of Chemistry and Biochemistry, Loyola Marymount University, 1 LMU Drive, LSB #284, Los Angeles, CA, 90045, USA
Tiffany Wu
Present address: Vascular & Interventional Specialists of Orange County, 1140 W. La Veta Avenue, Suite 850, Orange, CA, 92868, USA
Departments of Chemistry and Biology, Harvey Mudd College, 301 Platt Blvd., Claremont, CA, 91711-5990, USA
Karl A. Haushalter
Correspondence to Karl A. Haushalter.
Detailed experimental methods.
SDS-PAGE analysis of purified wild-type hOGG1 and hOGG1 variants. Proteins were over-expressed in bacteria and purified by two chromatography steps. Analysis was performed with a 12% SDS polyacrylamide gel, stained with Coomassie Blue. MW = Kaleidoscope molecular weight ladder (Bio-Rad), WT = wild-type.
Time course of DNA glycosylase activity for the different hOGG1 variants. In each case, DNA substrate (20 nM) was incubated with hOGG1 (100 nM) for varying times prior to the reaction being quenched with sodium hydroxide. The double-stranded 20-mer DNA substrates contained a centrally located single 8oxoG base opposite cytosine. The products of the DNA glycosylase reaction were separated by denaturing polyacrylamide gel electrophoresis and the band intensities quantified. For each hOGG1 variant the glycosylase assay was replicated a minimum of three times. Error bars at the individual time points represent the standard deviation. The resulting data were averaged and fit to Eq. 1. The large variance in product formation for the R46Q hOGG1 variant was observed over numerous replicates of the glycosylase assay.
Thermal denaturation of hOGG1 variants. Circular dichroism (CD) spectra were recorded for each protein sample at a concentration of 0.20 mg mL−1. The molar ellipticity [θ] (degree cm2 dmol−1) at 222 nm was recorded as the temperature was increased from 10.0 to 95.0 °C at a rate of 1 °C min−1 and the resulting data was normalized to provide the fraction denatured. The average values from three replicate denaturations are plotted.
Structural analysis of the amino acid residues mutated in hOGG1 variants. Analysis based on the original published structure of hOGG1 bound to DNA [20].
Multiple sequence alignment for OGG1 from diverse organisms. Yellow bars highlight amino acid residues that were varied in this study (R46, A85, R154, and S232). Residues that participate directly in catalysis (K249 and D268) are marked in red. The secondary structure annotation is based on the high-resolution crystal structure of K249Q hOGG1 bound to DNA [20]. The conserved HhH-GPD motif is highlighted in purple. Sequences were aligned using ClustalW2 [29]. The Genbank accession numbers for the sequences used are as follows: Homo sapiens, [GenBank:AAB61340.1]; Macaca mulatta, [GenBank:XP_001096322.1]; Bos taurus, [GenBank:NP_001073754.2]; Mus musculus, [GenBank:NP_035087.3]; Rattus norvegicus, [GenBank:NP_110497.1]; Arabidopsis thaliana, [GenBank:CAC83625.1]; Drosophila melanogaster, [GenBank:NP_572499.2].
Mouzakis, K.D., Wu, T. & Haushalter, K.A. Thermostability and excision activity of polymorphic forms of hOGG1. BMC Res Notes 12, 92 (2019). https://doi.org/10.1186/s13104-019-4111-9
Accepted: 31 January 2019
8-Oxoguanine
DNA glycosylase
Fragment-based screening identifies molecules targeting the substrate-binding ankyrin repeat domains of tankyrase
Katie Pollock,
Manjuan Liu,
Mariola Zaleska,
Mirco Meniconi,
Mark Pfuhl,
Ian Collins &
Sebastian Guettler
Scientific Reports volume 9, Article number: 19130 (2019)
The PARP enzyme and scaffolding protein tankyrase (TNKS, TNKS2) uses its ankyrin repeat clusters (ARCs) to bind a wide range of proteins and thereby controls diverse cellular functions. A number of these are implicated in cancer-relevant processes, including Wnt/β-catenin signalling, Hippo signalling and telomere maintenance. The ARCs recognise a conserved tankyrase-binding peptide motif (TBM). All currently available tankyrase inhibitors target the catalytic domain and inhibit tankyrase's poly(ADP-ribosyl)ation function. However, there is emerging evidence that catalysis-independent "scaffolding" mechanisms contribute to tankyrase function. Here we report a fragment-based screening programme against tankyrase ARC domains, using a combination of biophysical assays, including differential scanning fluorimetry (DSF) and nuclear magnetic resonance (NMR) spectroscopy. We identify fragment molecules that will serve as starting points for the development of tankyrase substrate binding antagonists. Such compounds will enable probing the scaffolding functions of tankyrase, and may, in the future, provide potential alternative therapeutic approaches to inhibiting tankyrase activity in cancer and other conditions.
Tankyrase enzymes (TNKS/ARTD5 and TNKS2/ARTD6; simply referred to as 'tankyrase' from here on; Fig. 1A) are poly(ADP-ribose)polymerases (PARPs) in the Diphtheria-toxin-like ADP-ribosyltransferase (ARTD) family1,2. PARPs catalyse the processive addition of poly(ADP-ribose) (PAR) onto substrate proteins, which can either directly regulate acceptor protein function or serve as docking platform for PAR-binding proteins that mediate downstream signalling events3. Given the diversity of tankyrase binders and substrates, tankyrase impinges on a wide range of cellular functions2,4,5,6. These include Wnt/β-catenin signalling7,8,9,10, telomerase-dependent telomere lengthening11,12, sister telomere resolution during mitosis13,14, the control of glucose homeostasis15,16,17, mitotic spindle assembly18,19, DNA repair20,21, and the regulation of the tumour-suppressive Hippo signalling pathway22,23,24. Silencing of tankyrase elicits synthetic lethality in BRCA1/2-deficient cancer cells25. Given these links of tankyrase to disease-relevant processes, tankyrase has gained attention as a potential therapeutic target2,26.
(A) Domain organisation of human tankyrase enzymes. Two tankyrase paralogues (TNKS, TNKS2) share an overall sequence identity of 82% (83% across ARCs, 74% across SAM domains, 94% across PARP domains). The ARCs comprise the substrate/protein recognition domain. Several examples of crystal structures of human tankyrase ARCs bound to tankyrase-binding motif (TBM) peptides are shown: TNKS ARC1-3 bound to TBM peptide from LNPEP48 (PDB code 5JHQ), TNKS2 ARC4 bound to TBM peptide from 3BP24 (3TWR), TNKS ARC5 bound to TBM peptide from USP2581 (5GP7). (B) Details of the interaction of TNKS2 ARC4 with a 3BP2 TBM peptide4 (3TWR, modified from reference 4). Four TBM peptide-binding hotspots are shown: the "arginine cradle" (green), "central patch" (orange), "aromatic glycine sandwich" (blue), and "C-terminal contacts" (cyan). TBM octapeptide amino acid positions are numbered.
Many mechanistic aspects of tankyrase function have been revealed by studying its role in the Wnt/β-catenin pathway10. Tankyrase promotes Wnt/β-catenin signalling by PARylating AXIN (axis inhibition protein 1/2)7, a central component of the multi-protein β-catenin destruction complex, which initiates the degradation of the transcriptional co-activator β-catenin under low-Wnt conditions27. PARylation either induces the PAR-dependent ubiquitination and degradation of AXIN7,28,29,30, or promotes the Wnt-induced transformation of the destruction complex into a signalosome complex incapable of initiating β-catenin degradation8,31. Tankyrase thus sensitises cells to incoming Wnt signals32,33. The Wnt/β-catenin pathway is dysregulated in approximately 90% of colorectal cancer cases34. Inhibiting tankyrase has been explored as a strategy to re-tune oncogenically dysregulated Wnt/β-catenin signalling in cancers with mutations in the tumour suppressor and destruction complex component APC (adenomatous polyposis coli)10,35,36,37,38,39,40. Whilst tankyrase catalytic inhibitors can suppress tumour cell growth, in-vivo studies have also pointed to different degrees of tankyrase-inhibitor-induced intestinal toxicity in mice35,39,41. The precise molecular mechanisms by which tankyrase controls Wnt/β-catenin signalling, how tankyrase inhibition can restore oncogenically dysregulated signalling, and the basis of tankyrase inhibitor toxicity are incompletely charted. This warrants the development of different chemical probes to modulate tankyrase function.
To date, drug discovery efforts on tankyrase have focused on inhibiting the catalytic PARP domain2,10,42,43. Catalytic inhibition of tankyrase has complex consequences. As well as inhibiting substrate PARylation, catalytic inhibitors prevent tankyrase auto-PARylation and therefore subsequent PAR-dependent ubiquitination and degradation of tankyrase itself7. Consequently, tankyrase catalytic inhibition typically leads not only to the accumulation of many of its substrates but also of tankyrase6,7,35,37,40. The accumulation of tankyrase and its substrates may further accentuate catalysis-independent functions of tankyrase, which have been emerging recently. One such example is our surprising observation that tankyrase can promote Wnt/β-catenin signalling independently of its catalytic PARP activity, at least when tankyrase levels are high9. Under these conditions, tankyrase catalytic inhibitors do not completely block tankyrase-driven β-catenin-dependent transcription, pointing to both catalytic and non-catalytic (scaffolding) functions of tankyrase. Tankyrase scaffolding functions depend on tankyrase's substrate-binding ankyrin repeat clusters (ARCs) and the polymerisation function of its sterile alpha motif (SAM) domain (see Fig. 1A)9. Tankyrase auto-PARylation has been proposed to limit tankyrase polymerisation44. Tankyrase catalytic inhibition may therefore induce its hyperpolymerisation, which may further promote scaffolding functions. Scaffolding functions of tankyrase likely extend beyond Wnt/β-catenin signalling: not all tankyrase binders are also PARylated, and non-catalytic roles of tankyrase in other processes have been proposed4,5,45,46.
Unravelling the complexity of tankyrase's catalytic vs. non-catalytic functions will require novel tool compounds that block tankyrase-dependent scaffolding. We therefore set out to identify and characterise small molecule fragments that bind to the tankyrase ARC domains, as a first step towards the discovery of compounds capable of blocking the interaction of tankyrase binders and substrates with the ARC domains of tankyrase.
Tankyrase contains five N-terminal ankyrin repeat clusters (ARCs), four of which, namely ARCs 1, 2, 4 and 5, can recruit binders and substrates4,47,48 (Fig. 1A). ARCs bind conserved but degenerate six- to eight-amino-acid peptide motifs, termed the tankyrase-binding motif (TBM, consensus R-X-X-[small hydrophobic or G]-[D/E/I/P]-G-[no P]-[D/E])4,49. Insertions between the residues typically found at positions 1 and 4 can give rise to "non-canonical" TBMs that span more than six or eight amino acids50,51. Depending on the binding partner, ARCs can be functionally redundant, at least at the level of substrate recruitment4, or collaborate in a combinatorial fashion, engaging preferred sets of ARCs in recruiting multivalent tankyrase binders such as AXIN48. The TBM-binding pocket contains several binding hotspots (Fig. 1B). An "arginine cradle" forms the binding site for the TBM's essential arginine residue at position 1. The "central patch" accommodates diverse interactions, including hydrophobic contacts with a small hydrophobic residue at TBM position 4 and contact sites for the residue at TBM position 5. An "aromatic glycine sandwich" refers to the invariant glycine at TBM position 6 sandwiched between two aromatic residues4.
Mutation of the TBM binding sites in the ARCs abrogates tankyrase's ability to bind substrates and drive Wnt/β-catenin signalling9,48. As a further proof of concept for the feasibility of targeting tankyrase via the ARCs, a sequence-optimised4, cell-permeating stapled TBM peptide can compete with AXIN for tankyrase binding and suppress the Wnt-induced expression of a β-catenin-responsive reporter gene in HEK293 cells52. Given the uniqueness of ARCs within the PARP family and the high degree of conservation across both TNKS and TNKS2, interfering with substrate binding would likely provide high target specificity and inhibition of both TNKS and TNKS2, many of whose functions are redundant6,53.
Herein, we report the identification and characterisation of fragments that bind to tankyrase ARCs at the same site as the TBM peptides. The identified fragments provide a starting point for the development of potent, cell-active tankyrase substrate binding antagonists.
Essentiality of the TBM arginine residue
We first considered a peptidomimetic approach to develop TBM peptides into more potent, stable and drug-like competitors of the ARC:TBM interaction. Given the anticipated impairment of cell permeability by the N-terminal TBM arginine, we investigated whether the guanidine group could be substituted. To prioritise synthesis efforts, we followed an in-silico docking approach54, exploring the importance of the positive charge and hydrogen bonding interactions, linker lengths/flexibility and side chain geometry (see Supplementary Materials and Methods for details). From commercially available side chain alternatives, we identified five potential candidates for R replacements: 1H-imidazole-5-pentanoic acid, 1H-imidazole-1-pentanoic acid, 7-aminoheptanoic acid, D-arginine and L-citrulline (Supplementary Fig. 1). We next synthesised 3BP2 TBM octapeptides, incorporating the five arginine substituents at position 1, followed by fluorescence polarisation (FP) assays to assess competition of the peptides with a Cy5-labelled TBM peptide probe (Supplementary Fig. 1). We used a 16-mer TBM peptide (LPHLQRSPPDGQSFRSW, W introduced to measure A280) derived from the model substrate 3BP2, a signalling adapter protein4, as a positive control for a competitor, and a corresponding non-binding TBM peptide bearing a glycine-to-arginine substitution at position 64 as a negative control. Whilst we observed no binding for the G6R negative control, we measured an IC50 of 22 μM for the 3BP2 16-mer peptide (Supplementary Fig. 1). An 8-mer RSPPDGQS TBM peptide displayed an IC50 of 34 μM. Substituting L-arginine for D-arginine caused a five-fold drop in potency to 175 μM, highlighting the importance of side chain geometry. Both imidazole moiety peptides displayed IC50 values in the 500 μM range. The 7-aminoheptanoic acid and citrulline peptides showed poor competition and precipitation at high concentrations (Supplementary Fig. 1). In conclusion, these observations demonstrated that the essential arginine residue of the TBM cannot be easily substituted.
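As a sketch of how such IC50 values can be obtained, the following fits a four-parameter logistic curve to hypothetical FP competition data; the concentrations, polarisation readings and parameter guesses are illustrative assumptions, not the reported measurements or the authors' analysis pipeline.

```python
# A minimal sketch of extracting an IC50 from fluorescence polarisation competition data
# with a four-parameter logistic fit; all numerical values are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic_4p(conc, top, bottom, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)     # competitor, uM
mp   = np.array([195, 188, 170, 136, 105, 82, 74], dtype=float)  # polarisation, mP

popt, _ = curve_fit(logistic_4p, conc, mp, p0=[200.0, 70.0, 30.0, 1.0])
print(f"IC50 ~ {popt[2]:.0f} uM")
```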
Primary fragment screens
Given the anticipated challenges associated with replacing the TBM arginine residue, we pursued a fragment screening strategy to sample a wide range of chemical space, with the aim of finding novel, ligand-efficient small molecules that target tankyrase ARCs and to identify alternative binding 'hotspots' away from the "arginine cradle" of the TBM binding site. We screened The Institute of Cancer Research (ICR) fragment library55 in parallel against TNKS2 ARC5 using differential scanning fluorimetry (DSF), and TNKS2 ARC4 using ligand-observed nuclear magnetic resonance (NMR) spectroscopy techniques. We used the 16-mer 3BP2 TBM peptide and its non-binding mutant variant as positive and negative controls, respectively.
Primary fragment screening by DSF
Our pilot studies showed that among all TNKS2 single ARCs that we could produce recombinantly (ARCs 1, 4 and 5)49, ARC5 displayed the lowest melting temperature (Tm) and the largest shift in melting temperature upon addition of the 3BP2 TBM peptide (ΔTm) (Fig. 2A). Therefore, we chose TNKS2 ARC5 for screening by DSF, anticipating the largest signal window for measuring changes in Tm upon fragment binding. DMSO concentrations up to 10% of total sample volume had a negligible effect on TNKS2 ARC5 Tm (Supplementary Fig. 2A). We explored the stabilisation of TNKS2 ARC5 by the abovementioned TBM peptide derivatives and found a good correlation between the DSF data and the FP data obtained with TNKS2 ARC4, further demonstrating the suitability of the DSF assay (Supplementary Fig. 2B).
(A) Differential scanning fluorimetry (DSF, a.k.a. ThermoFluor) assessment of TNKS2 ARCs 1, 4 and 5 shows that TNKS2 ARC5 is the least stable among these ARCs and experiences the highest degree of thermal stabilisation upon 3BP2 TBM peptide binding. (B) Fragment screen against TNKS2 ARC5 by DSF: the graph shows ΔTm (from the IP method) plotted vs. compound ID for both replicates. DMSO-only controls are coloured blue; hit fragments are coloured green. Lines correspond to the mean, and 2 or 3 standard deviations outside the mean. (C) Example of relaxation-edited spectra for hit compound 1. Signals are reduced in the presence of protein (red), indicating ARC binding, and recovered upon TBM peptide addition (green), indicating competition. (D) Example of waterLOGSY spectra for hit compound 1, showing a negative NOE signal when protein is added (red). Buffer (HEPES) signals were phased as positive peaks in our waterLOGSY spectra. (E) Structures of hit compounds from the DSF screen that bound both TNKS2 ARCs 4 and 5 and were competitive with a TBM peptide, as measured by relaxation-edited ligand-observed NMR.
We screened 1869 compounds in duplicate at a concentration of 500 μM, which we considered a reasonable compromise: high enough to identify weak binders, yet minimising the likelihood of false positives through fragment precipitation/aggregation and non-specific binding. The final DMSO concentration was 5%. We calculated melting temperatures using both the inflection point (IP) method and the maximum of the first derivative of the melt curve. The two methods generally agreed unless the melt curve was biphasic or misshapen, with the first-derivative method showing slightly lower variability (see Materials and Methods for experimental and analysis details).
We determined the melting temperature for the unbound ARC (Tm, 0) from the mean of 12 reference melting curves per plate, with 5% DMSO only. We calculated the change in melting temperature (ΔTm) by subtracting the mean Tm, 0 from Tm, compound. We tested compounds in duplicate, defining fragments as hits if they conferred a ΔTm outside two standard deviations (2σ) from the mean, in one or both replicates. To check for consistency between plates, we ran triplicate peptide controls; however, we excluded peptide ΔTm values from the calculation of the mean ΔTm to avoid skewing the results. We observed mean ΔTm values (IP/1st derivative methods) of −0.127/−0.331 °C, with σ values of 0.997/1.19 °C. A hit cut-off of 2σ gave absolute shifts of +1.87/+2.05 °C for compounds that stabilised and −2.13/−2.71 °C for compounds that destabilised the ARC (Fig. 2B).
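A minimal sketch of this hit-calling logic is shown below, assuming each well has already been reduced to a Tm value; the Tm numbers are invented for illustration.

```python
# A minimal sketch of the hit-calling logic: DeltaTm relative to the mean DMSO-only Tm,
# flagged when it lies more than n_sigma standard deviations from the mean shift.
import numpy as np

def call_hits(tm_compounds, tm_dmso_refs, n_sigma=2.0):
    """Return DeltaTm per compound and a boolean hit flag (|DeltaTm - mean| > n_sigma * sigma)."""
    tm0 = np.mean(tm_dmso_refs)                    # mean Tm of DMSO-only reference wells
    delta_tm = np.asarray(tm_compounds) - tm0      # shift relative to unbound ARC
    mu, sigma = delta_tm.mean(), delta_tm.std()
    return delta_tm, np.abs(delta_tm - mu) > n_sigma * sigma

# invented Tm values (deg C): one stabilising outlier among otherwise unshifted compounds
dtm, hits = call_hits([41.5, 44.9, 41.0, 41.4, 41.9, 41.3, 41.6, 41.2, 41.7, 41.4],
                      [41.3, 41.6, 41.4])
print(np.round(dtm, 2), hits)
```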
We assessed the robustness of the assay for screening. The standard deviation for both the DMSO-only (Tm, 0) and 3BP2 peptide positive control (Tm, peptide) melting temperatures across all plates was approximately 1 °C, indicating that any shifts below 1 °C may be attributable to noise. We calculated the Z factor (Z') using the mean melting temperature and σ for the whole fragment library, with the DMSO-only samples as the baseline and 3BP2 TBM peptide samples as positive controls. A value of Z' = 0.9 was obtained, indicating that the assay was robust.
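The screening window described here can be computed with the standard Z′-factor definition, Z′ = 1 − 3(σpos + σneg)/|μpos − μneg|; the sketch below uses invented Tm values for the DMSO-only and peptide control wells purely for illustration.

```python
# A minimal sketch of the standard Z'-factor calculation; control Tm values are invented.
import numpy as np

def z_prime(positives, negatives):
    pos, neg = np.asarray(positives, float), np.asarray(negatives, float)
    return 1.0 - 3.0 * (pos.std() + neg.std()) / abs(pos.mean() - neg.mean())

# e.g. 3BP2 TBM peptide controls (stabilised Tm) vs DMSO-only baseline wells
print(round(z_prime([49.1, 48.8, 49.3], [41.2, 41.0, 41.4]), 2))   # ~0.86
```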
We prioritised hits that stabilised TNKS2 ARC5 if they had a change in melting temperature of greater than 1.8 °C in at least one replicate (both ΔTm(IP) and ΔTm(1st derivative) methods of analysis), and de-prioritised those that showed a substantial discrepancy between ΔTm(IP) and ΔTm(1st derivative) values, indicating an unusual melting curve shape. Negative shift hits that destabilised the protein were only taken forward if they were significant in both replicates, and with both methods of ΔTm analysis. The higher stringency was applied as molecules that destabilise the protein can be harder to advance and develop into lead-like compounds56,57. We thus progressed 56 hits into validation assays. Of these, 48 conferred a positive thermal shift and stabilised TNKS2 ARC5; 8 had a negative thermal shift and destabilised the protein. We next assessed compound purity and structural integrity of hits from the DSF screen by liquid-chromatography-mass-spectrometry (LC-MS) and measured their solubility by NMR. Five compounds failed the LC-MS quality control, and eight were of insufficient solubility by NMR (<100 μM in aqueous buffer with 5% DMSO). Two compounds did not contain any aromatic protons (required for our NMR solubility assay), and an additional two were no longer commercially available for re-purchase. A total of 17 compounds were therefore excluded from further analysis. An in-silico pan assay interference compounds (PAINS) screen was applied to the hit fragments to highlight any possible issues in carrying the hits forward58. No compounds were flagged as problematic in the PAINS screen.
Fragment binding validation for DSF hits
39 hits from the DSF screen were suitable for follow-up by ligand-observed NMR methods. We re-purchased fragments and performed T2 relaxation-edited (CPMG-edited) and waterLOGSY experiments for each fragment with TNKS2 ARC5. We explored saturation transfer difference (STD) NMR, also using TNKS2 ARC5, but the assay was not sensitive enough to produce a reliable binding signal, likely due to the relatively small size of a single ARC protein (data not shown). High ligand concentrations were required to achieve sufficient signal, which in turn could lead to false-positive hits due to non-specific binding.
Using the relaxation-edited assay, we tested each fragment in three independent measurements (Fig. 2C), unless we obtained two negative results (non-binding) in the first two experiments. We next used waterLOGSY to further evaluate compounds that showed a substantial decrease (>15% reduction in peak integrals) upon protein addition in at least one out of three relaxation-edited experiments. We classified fragments with a negative NOE signal in waterLOGSY as binders (Fig. 2D). As a negative NOE for the compound-only sample could indicate aggregation, we flagged these compounds as potentially problematic. We identified 14 fragments that bound to TNKS2 ARC5 by both relaxation-edited and waterLOGSY methods (0.78% hit rate). We next tested whether binding of these fragments occurred competitively with the 3BP2 TBM peptide, and also if they bound to TNKS2 ARC4, as competition with peptide binders at various different ARCs will be a prerequisite for an efficient substrate binding antagonist. Three fragments (1, 2, 3) bound to both TNKS2 ARC4 and ARC5 and were competitive with the TBM peptide (0.16% hit rate) (Fig. 2E). Three further fragments also bound to both ARCs, but were not TBM competitive by NMR.
Primary fragment screening by NMR
The hit rate for compounds confirmed to bind TNKS2 ARC5 as evaluated by NMR was relatively low, at 0.78%, and only 0.16% for fragments binding TNKS2 ARC4 and 5 competitively with a TBM peptide. Different screening assays often identify distinct hit fragments59. There is no consensus on the most appropriate assays to use for fragment screening, especially against challenging targets such as protein-protein interactions. Often several orthogonal methods are used in series to narrow down fragment hits, or a combination of biophysical and biochemical assays to exclude false positives and identify binders that modulate protein activity55. We therefore carried out an additional primary screen using T2 relaxation-edited ligand-observed NMR on TNKS2 ARC4, probing a subset of molecules from the ICR fragment library that was compatible with NMR. We screened 1100 compounds in pools of four structurally dissimilar molecules with non-overlapping proton resonances (Fig. 3A,B)60. We split the top 100 hits into two groups for individual re-testing: those with a signal change >39% (3σ, 35 compounds), and those with a signal change of 26–39% reduction (2–3σ, 65 compounds) (Fig. 3A). We tested fragments of the first hit group (>3σ) individually using the T2 relaxation-edited NMR assay. We tested those of the second hit group (2–3σ) in a waterLOGSY NMR experiment, reasoning that this may rescue any genuine binders with a relatively small signal in the relaxation-edited assay. Nine out of 35 compounds from hit set 1 displayed a significant intensity change (≥26% reduction) upon protein addition when tested individually. We confirmed seven out of 65 compounds from hit set 2 to bind in the waterLOGSY assay. We tested these 16 compounds in further T2 relaxation-edited and waterLOGSY experiments, and in competition with the 3BP2 TBM peptide. Of the 16 compounds, two (4 and 5) were competitive with the TBM peptide by relaxation-edited NMR; one compound (5) also showed peptide competition by waterLOGSY (Fig. 3C).
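A minimal sketch of the signal-reduction metric and the hit-set thresholds quoted above is given below, assuming integrated ligand peak areas with and without protein; the example areas are invented.

```python
# A minimal sketch of the percent-signal-change metric used to triage the relaxation-edited
# NMR screen; peak areas are invented for illustration.
def signal_reduction(area_free, area_with_protein):
    """Percent loss of ligand signal on protein addition (larger = stronger binding)."""
    return 100.0 * (1.0 - area_with_protein / area_free)

def classify(reduction_pct):
    if reduction_pct > 39.0:      # >3 sigma: re-test individually by relaxation-edited NMR
        return "hit set 1"
    if reduction_pct >= 26.0:     # 2-3 sigma: re-test by waterLOGSY
        return "hit set 2"
    return "non-binder"

for free, bound in [(1.00, 0.55), (1.00, 0.70), (1.00, 0.95)]:
    r = signal_reduction(free, bound)
    print(f"{r:.0f}% -> {classify(r)}")
```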
(A) Summary of ligand-observed NMR screen, showing percentage of signal change vs. compound cocktail ID. Lines correspond to the mean, and 2 or 3 standard deviations outside the mean. (B) Example data for a cocktail of 4 compounds, containing one hit (compound b) and three non-binding fragments. (C) Structures of hit fragments uniquely identified in the NMR screen and TBM-competitive, as assessed by relaxation-edited NMR. (D) Structures of compounds that were identified as hits in both the DSF and NMR primary screens.
In addition to the hits identified uniquely by either the DSF or NMR screens, the two orthogonal screens also shared two common hits (compounds 6 and 7, Fig. 3D). Retrospective analysis of the DSF screening data revealed that compound 5 was excluded during the DSF analysis due to poorly shaped, biphasic melt curves. This resulted in large discrepancies between ΔTm values calculated by the inflection point (1.6 °C) and 1st derivative methods (8.1 °C) when the second peak was used to calculate Tm. We also discounted other compounds for poor melt curve shape; however, none of these were identified in the orthogonal NMR-based screen.
Fragment hit validation and Kd determination
We next tested validated hits from both the DSF and NMR screens against TNKS2 ARC4 using protein-observed NMR. This included 16 compounds in total: 14 fragments identified by DSF and confirmed to bind TNKS2 ARC5 by both relaxation-edited and waterLOGSY methods and two unique fragments identified by NMR and confirmed to bind TNKS2 ARC4 by both ligand-observed NMR methods. We used the 3BP2 TBM peptide as a positive control. To directly identify fragment binding sites on the ARC, we performed a full backbone and partial side-chain assignment of TNKS2 ARC4, doubly labelled with 15N and 13C isotopes. The assignment details have been reported elsewhere61. The control peptide induced significant chemical shift perturbations (CSPs), indicative of peptide binding in both a fast and slow kinetic regime (Supplementary Fig. 3). Interestingly, among the residues that constitute the TBM binding site on the ARC, residues that exhibited the slow-exchange binding mode are part of the "central patch" (D521, S527, F532, D556, L560, H564, N565, S568) and the "aromatic glycine sandwich" (G535, Y536, Y569); a single residue from the "arginine cradle" (F593) displayed slow exchange. This suggests that these areas constitute key TBM:ARC interaction hotspots. Residues that exhibited the fast-exchange binding mode were from the "arginine cradle" (D589, W591, E598) and the "C-terminal contacts" (H571, K604) (Supplementary Fig. 3). The different binding regimes may distinguish primary interaction hotspots that are engaged robustly when a peptide is first recruited (slow exchange) from secondary binding sites in the ARC that become occupied once primary binding hotspots are engaged; these may be more dynamic (fast exchange).
We then titrated compounds against 15N-labelled TNKS2 ARC4, initially at protein:compound ratios of 1:1 and 1:3. Two fragments (3 and 5) induced significant CSPs (data not shown); we used these to perform an eight-point titration and observed concentration-dependent CSPs (Supplementary Fig. 4A,B). We confirmed that the CSPs were not caused by pH changes during the titration by measuring the pH of the peptide and fragment stocks (at 3 mM) in assay buffer. Consistent with TBM-competitive binding of fragments 3 and 5, we identified several peaks that shifted in both fragment and 3BP2 TBM peptide titrations. Compound 5 caused more significant perturbations than compound 3, and they all exhibited a fast-exchange regime (Supplementary Fig. 4C). Peaks that moved significantly (CSPs > Δδtot + 2σ) upon addition of compound 5 are part of the "central patch" and "aromatic glycine sandwich" (S527, F532, G535, Y536, N565; see Supplementary Fig. 4C). However, the solubility of fragments 3 and 5 in assay buffer limited the maximum concentration achievable, and so complete saturation was not reached. This confounded affinity measurements and more extensive analyses of the fragment binding sites by protein-observed NMR.
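For reference, a common way to combine 1H and 15N shift changes into a single CSP value, with a mean + 2σ cut-off as used above, is sketched below; the nitrogen scaling factor (0.2 here) and the per-residue shift changes are assumptions for illustration only.

```python
# A minimal sketch of a weighted, combined 1H/15N chemical shift perturbation and a
# mean + 2*sigma significance cut-off; all per-residue values are invented.
import numpy as np

def combined_csp(delta_h_ppm, delta_n_ppm, alpha=0.2):
    return np.sqrt(delta_h_ppm ** 2 + (alpha * delta_n_ppm) ** 2)

residues = ["A499", "S527", "F532", "Y536", "N565", "G535"]
d_h = np.array([0.012, 0.045, 0.060, 0.052, 0.048, 0.020])   # ppm (hypothetical)
d_n = np.array([0.10, 0.30, 0.25, 0.40, 0.35, 0.15])         # ppm (hypothetical)

csps = combined_csp(d_h, d_n)
cutoff = csps.mean() + 2.0 * csps.std()
print({r: round(c, 3) for r, c in zip(residues, csps)}, "cut-off:", round(cutoff, 3))
```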
Fragment analogue SAR
We next sought close structural analogues of hit fragments 3 and 5 to gain early insights into structure-activity relationships (SAR) and binding modes of hit fragments. We initially tested the analogues using both relaxation-edited and waterLOGSY NMR assays against TNKS2 ARC4 (Table 1), followed by protein-observed NMR if binding was detected in both ligand-observed NMR experiments. Compounds displaying negative NOE signals in the waterLOGSY assay upon protein addition were classed as binders; however, compounds that displayed negative NOE signals in the absence of protein were flagged as potential aggregators. Compounds that displayed no NOE signal were classed as non-binders.
Table 1 Analogues of compounds 3 and 5 tested by ligand-observed and protein-observed NMR.
Substituting the para-fluorine of compound 5 for a methyl group preserved binding (8), as did replacement of the entire Ar-F group with a furan (9). Analogue 9 showed increased solubility over original hits 3 and 5. Contracting the quinoxaline ring by one carbon atom to a benzimidazole (10) abrogated binding. Substitution of the quinoxaline moiety by a triazolopyrimidine (11) also abrogated binding. Shortening the amide linker by one carbon (12) limited solubility. Increasing the linker length by one carbon (13) abolished binding in the relaxation-edited NMR assay; however, it also resulted in a strong waterLOGSY signal. We also observed a strong waterLOGSY signal for compound 13 in the absence of protein, indicating that this compound may aggregate. Methylating the amide nitrogen of compound 9 was not tolerated (14). Additionally substituting the quinoxaline ring at positions 2 and 3 with methyl groups (15) led to a response in relaxation-edited ligand-observed NMR; however, the waterLOGSY data suggested compound aggregation, and no CSPs were observed in protein-observed NMR. In summary, we demonstrated TNKS2 ARC4 binding activity of several quinoxaline analogues of compound 5, confirming this hit series and showing that the Ar-F group of compound 5 could be readily substituted.
For the benzamide fragment (3), moving the fluorine atom from the para to the meta position (16) or adding an ortho-fluorine (17) abolished binding. Substitution of the furan for a pyridine (18) or Ar-F moiety (19) was not tolerated. Reversing or rearranging the amide linker (20, 21, 22) whilst simultaneously changing the furan for a piperidine (20), adding a methyl group to the furan ring (21) or substituting the para-fluorine for a meta-chlorine (22) were also not tolerated.
Compound 9 (Table 1, Fig. 4A) combined features of both fragments 3 and 5, namely the quinoxaline group of fragment 5, the amide linker shared by both fragments and the furan group of fragment 3. We observed that in the relaxation-edited NMR experiments, peaks corresponding to the quinoxaline displayed a larger reduction upon ARC addition than peaks attributed to furan (Fig. 4A). This suggested that the quinoxaline moiety more substantially contributes to the binding, and several analogues of the quinoxaline hit were confirmed to bind to TNKS2 ARC4.
(A) Relaxation-edited NMR for compound 9, showing the largest reduction in peak height for the quinoxaline protons (boxed), indicating that the majority of binding can be attributed to the quinoxaline moiety. (B) Protein-observed NMR for TNKS2 ARC4. Example area of superimposed 1H-15N HSQC NMR spectra, showing the chemical shift perturbations (CSPs) upon TBM peptide or compound 9 titration. (C) Kd estimate of compound 9 by plotting the CSPs of peaks that moved in a concentration-dependent manner. (D) ITC for the titration of compound 9 (5 mM) into TNKS2 ARC4 (200 μM). The Kd for compound 9 was calculated to be 1200 ± 380 μM; the stoichiometry of compound 9:TNKS2 ARC4 was 1.1 (global analysis of n = 5 independent experiments). See Supplementary Fig. 5 for an example titration of compound 9 into buffer. (E) Compound 9 binding to TNKS and TNKS2 ARCs was assessed by relaxation-edited NMR. Total reductions in peak area upon ARC addition are indicated.
Fragment binding affinity
The increased solubility of compound 9 compared to compound 5 allowed complete saturation in a protein-observed NMR titration experiment against TNKS2 ARC4, yielding an apparent Kd of 1050 μM (Fig. 4B,C). We next used isothermal titration calorimetry (ITC) to confirm the compound 9:TNKS2 ARC4 binding affinity, titrating the fragment (5 mM, 1% DMSO) into TNKS2 ARC4 (200 μM, 1% DMSO). A global analysis of 5 experiments confirmed the affinity to be in the region of 1 mM (1200 ± 380 μM) with a stoichiometry of 1.1 (Fig. 4D, Supplementary Fig. 5, Table 2).
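A minimal sketch of how an apparent Kd can be fitted from such a fast-exchange CSP titration, using a single-site model that accounts for ligand depletion, is shown below; the protein concentration, ligand concentrations and CSP values are invented for the example and are not the measured data.

```python
# A minimal sketch of fitting an apparent Kd from a CSP titration with a quadratic,
# single-site binding model; all numerical values are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

P0 = 100.0  # uM, assumed concentration of 15N-labelled TNKS2 ARC4 in this sketch

def single_site(L0, kd, csp_max):
    b = P0 + L0 + kd
    fraction_bound = (b - np.sqrt(b ** 2 - 4.0 * P0 * L0)) / (2.0 * P0)
    return csp_max * fraction_bound

L0  = np.array([100, 200, 400, 800, 1600, 3200, 4800], dtype=float)   # uM fragment
obs = np.array([0.007, 0.013, 0.022, 0.035, 0.050, 0.062, 0.066])     # combined CSP, ppm

(kd, csp_max), _ = curve_fit(single_site, L0, obs, p0=[1000.0, 0.08])
print(f"apparent Kd ~ {kd:.0f} uM, CSP at saturation ~ {csp_max:.3f} ppm")
```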
Table 2 Summary of data for fragments confirmed to bind to TNKS2 ARC4 by protein-observed NMR.
Fragments bind to multiple TNKS and TNKS2 ARCs
The anticipated functional redundancy between ARCs and the existence of two tankyrase paralogues will require efficient substrate binding antagonists to ideally bind all TBM-binding ARCs of both TNKS and TNKS2. The high conservation of the peptide-binding pocket (Supplementary Fig. 6) suggests that this goal should be achievable4. We tested binding of compound 9 to all TNKS and TNKS2 ARCs using the relaxation-edited NMR assay (Fig. 4E, Table 3). Compound 9 bound all ARCs, with the exception of TNKS2 ARC1. Given the invariant residue infrastructure of the peptide-binding pocket in ARC1 of TNKS and TNKS2, this observation is difficult to reconcile with available structural information. It is possible that the presence of glycine-sandwiching phenylalanine rather than tyrosine residues in ARC1 of both tankyrases, paired with the presence of a phenylalanine in TNKS2 (F29TNKS2) as opposed to a leucine in TNKS (L187TNKS) confers this differential behaviour. F29TNKS2/L187TNKS sit in an extended hydrophobic pocket "above" the "central patch" and may, directly or indirectly, participate in fragment binding. However, it remains possible that other differences within the N-terminal capping repeat of ARC1, including the β-hairpin linking to the subsequent repeat, are responsible for the observed differential binding. Whilst the low affinity of the current fragments may sensitise them to small differences between the TBM-binding ARCs, further developed molecules will need to be engineered to resist such variability.
Table 3 Pan-ARC binding activity of compound 9, tested by ligand-observed NMR.
In-silico prediction of potential fragment binding hotspots
We next sought to determine the fragment binding site on the ARC. To gain insights into plausible fragment binding sites and identify potential hotspots, we undertook an in-silico fragment binding experiment by computational solvent mapping using the FTMap programme62,63. FTMap identifies pockets where several different small organic molecule probes bind and cluster together; these consensus sites represent potential hotspots for fragment binding. We docked a set of 16 probe molecules into the crystal structure of TNKS2 ARC4, from the ARC4:3BP2 TBM co-crystal structure4. FTMap identified ten areas of probe clustering, seven of which overlapped with the known peptide-binding groove (Fig. 5A). The lowest-energy consensus site, and hence most ligandable pocket identified, was the primarily hydrophobic "central patch" adjacent to the "glycine sandwich". The second most ligandable site predicted was the "arginine cradle". These predicted hotspots coincide with the experimentally determined hotspots for TBM peptide binding, based on structural data, site-directed mutagenesis and an amino acid scan of the 3BP2 TBM4,50. Another hotspot was detected in an extension to the "central patch", suggesting that it may be possible to grow fragments in a way that utilises this extended pocket. Of note, this "central patch extension" is occupied by a glycerol molecule in the TNKS2 apo-ARC4 crystal structure4. The potential fragment binding sites located by FTMap largely coincide with pockets identified using the program Pocasa, which performs a geometric search based on a three-dimensional grid and rolling probe sphere64 (Fig. 5B). The "central patch" and "central patch extension" were the highest-ranked pockets, followed by the "arginine cradle", with volumes/volume depth values of 126/289, 46/108 and 26/73, respectively.
Figure 5. (A) Seven fragment binding hotspots on TNKS2 ARC4 predicted by FTMap are in the TBM peptide binding site on the ARC, and one in close vicinity. FTMap analysis was done on TNKS2 ARC4 from the ARC4:3BP2 co-crystal structure4 (3TWR). Key residues of the peptide binding site are colour-coded as in Fig. 1B. The TBM peptide from 3BP2 is overlaid in transparent stick representation. The minimum energy hotspot found is in the "central patch" adjacent to the "glycine sandwich". (B) Pocket identification on TNKS2 ARC4 (from the ARC4:3BP2 co-crystal structure, 3TWR) using the Roll algorithm implemented in Pocasa64. The three top-ranking pockets are part of the "central patch", a "central patch extension" and the "arginine cradle". (C) Relaxation-edited NMR of compound 9 with TNKS2 ARC4 peptide binding site mutant variants4. Mutation of the "aromatic glycine sandwich" or the "central patch" abolishes or reduces binding of the compound, respectively, whilst binding is unaffected by mutation of the "arginine cradle". Mutated residues, numbered in (A) and (B), were as follows: "arginine cradle", WFE591/593/598AAA; "central patch", L560W; "aromatic glycine sandwich", YY536/569AA. (D) WaterLOGSY NMR confirms that "glycine sandwich" and "central patch" mutations impair binding of compound 9.
Ligand-observed NMR with mutant TNKS2 ARC4 proteins
We used three previously designed TNKS2 ARC4 mutant variants4 to explore potential binding determinants for fragment 9. A WFE591/593/598AAA triple-mutation abrogates three key residues in the "arginine cradle"; YY536/569AA truncates two tyrosine residues that form the "glycine sandwich", and L560W introduces a bulky residue into the "central patch" that sterically clashes with the TBM peptide (see Fig. 5A,B for locations of the mutated residues). We tested binding of compound 9 to the wild-type and mutant ARCs by ligand-observed NMR, using the relaxation-edited assay (Fig. 5C, Table 4). Whilst mutation of the "arginine cradle" had no effect on fragment binding, mutation of the aromatic residues sandwiching the TBM glycine (YY536/569AA) fully abrogated binding in the relaxation-edited NMR assay. Binding was impaired but not abolished for the "central patch" mutant variant (L560W). We confirmed these results in the orthogonal waterLOGSY assay (Fig. 5D, Table 4).
Table 4 Ligand-observed NMR analysis to assess compound 9 binding to wild-type and mutant variants of TNKS2 ARC4.
Fragment binding site mapping by protein-observed NMR
To directly identify the compound 9 binding site on the ARC, we analysed the titrations of compound 9 with 15N-labelled TNKS2 ARC4 (see Fig. 4B,C). The higher solubility of fragment 9, compared to that of compounds 3 and 5, meant that much larger CSPs could be achieved (Supplementary Fig. 4C). At an ARC4:compound ratio of 1:16, close to signal saturation (see Fig. 4C, [compound 9] = 4800 μM), we observed substantial main-chain CSPs: above 2σ from the mean CSP for S527, T528, F532, Y536, N565 and A566, and within 1–2σ for A499, K501, D521, I522, L530, A534, G535 and L563 (Fig. 6). All CSPs occurred in the fast-exchange regime, and they included those observed for compounds 3 and 5 (Fig. 6, Supplementary Fig. 4C). Mapping the CSPs onto the crystal structure of TNKS2 ARC4 bound to the TBM peptide from 3BP24 revealed the substantial overlap of the fragment and TBM peptide binding sites in the "aromatic glycine sandwich", the "central patch" and residues in close vicinity of these areas, in agreement with the mutagenesis studies (Fig. 6A). In conclusion, compound 9 occupies the most ligandable pocket on the ARC and a major binding hotspot of TBM peptides.
Figure 6. (A) Plot of CSPs in TNKS2 ARC4 (300 μM) induced by the addition of 16-fold excess (4800 μM) of compound 9 (see Fig. 4B,C). Bars corresponding to residues known to bind the TBM4 are colour-coded as in Figs. 1B and 5A,B. CSPs are mapped onto the surface representation of TNKS2 ARC4 bound to the TBM peptide of 3BP2 (shown in stick representation, 3TWR4): the strongest perturbations (>2σ of average) are shown in magenta, weaker ones (>1σ and ≤2σ) in pink. The overlap of the TBM binding pocket and compound 9-induced CSPs is clearly apparent. Unassigned residues are shown in dark grey. Prolines are shown in light blue (none on the peptide-binding face of the ARC). (B) Whole 1H-15N HSQC NMR spectra of TNKS2 ARC4 and selected areas, showing the CSPs upon compound 9 titration.
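To make the σ-based classification used above concrete, the sketch below bins residues by how far their perturbation lies above the mean CSP, in units of the standard deviation over all assigned residues. This is not the analysis code used in this work; the per-residue CSP values and the mean/σ statistics are illustrative placeholders.

```python
# Minimal sketch of the CSP binning described above (not the analysis code used in
# this work). The per-residue CSPs and the mean/sigma are illustrative placeholders;
# in the real analysis they come from all assigned residues of TNKS2 ARC4.
csps = {  # residue -> weighted CSP (ppm) at the 1:16 ARC4:compound-9 ratio
    "S527": 0.065, "T528": 0.060, "F532": 0.055, "Y536": 0.070,
    "N565": 0.058, "A566": 0.052,                                # expected "strong"
    "A499": 0.019, "D521": 0.020, "G535": 0.022, "L563": 0.021,  # expected "moderate"
    "E598": 0.004, "W591": 0.005,                                # essentially unperturbed
}

MEAN_CSP, SIGMA = 0.006, 0.012  # assumed statistics over all assigned residues

strong = sorted(r for r, v in csps.items() if v > MEAN_CSP + 2 * SIGMA)
moderate = sorted(r for r, v in csps.items() if MEAN_CSP + SIGMA < v <= MEAN_CSP + 2 * SIGMA)
print("strong (>2 sigma):   ", strong)
print("moderate (1-2 sigma):", moderate)
```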
In-silico fragment docking
We next performed in-silico docking of fragment 9 to TNKS2 ARC4, aiming to satisfy the CSPs observed in our protein-observed NMR studies. We constrained the GOLD docking software to focus on a region within a distance of 14 Å from the PDGQS sequence of the 3BP2 TBM peptide (positions 4 to 8), as informed by protein-observed NMR. GOLD returned 18 binding poses, which upon visual inspection we clustered into eight distinct binding modes of similar binding poses (Table 5, Supplementary Fig. 7A). To identify the most likely, i.e. energetically most favourable, binding modes, we ranked the 18 poses by performing advanced ab-initio calculations using the fragment molecular orbital (FMO) method. This method quantitatively estimates the individual contributions and the chemical nature of each residue towards ligand binding. Therefore, the energy of binding can be evaluated for each binding hypothesis at a quantum-mechanical level (see Materials and Methods for details and references). We limit our discussion below to the top three binding modes, namely 4, 3 and 5, which the FMO method estimated to be energetically most stable (Table 5, Fig. 7, Supplementary Fig. 7). All three binding modes satisfy the large CSPs (>2σ) observed by compound 9 titration in protein-observed NMR, either by hydrogen bonds or van-der-Waals contacts, with the exception of T528, which is not directly contacted by the compound in any of the binding hypotheses.
Table 5 FMO total interaction energy (TIE) between compound 9 and TNKS2 ARC4, for 18 binding poses proposed by GOLD.
Figure 7. The three energetically most favourable binding modes of compound 9, obtained by in-silico docking to TNKS2 ARC4 and FMO analysis. Each of the three binding modes encompassed several similar poses (see Table 5 and Supplementary Fig. 7A); the poses with the lowest TIE were selected as representatives for each binding mode. Selected key contact residues are labelled. Colouring is as in Fig. 6A. Residues coloured in magenta or pink represent those with strong (>2σ) and moderate (>1σ and ≤2σ) CSPs, respectively, relative to the average CSP observed by protein-observed NMR with TNKS2 ARC4 (see Fig. 6). See Supplementary Fig. 7B for compound 9 binding modes superimposed with the TBM peptide from 3BP2, and Supplementary Fig. 7C for a 2D ligand-protein interaction diagram describing binding mode 4.
In the top-scoring pose of binding mode 4, the pyrazine ring of the quinoxaline sits between the aromatic side chains of the "glycine sandwich" (Y536, Y569), whilst the rest of the compound occupies the adjacent "central patch" (Fig. 7). The breakdown of the total interaction energy (TIE) between compound 9 and TNKS2 ARC4 shows that compound 9 in binding mode 4 can strongly interact with Y536, Y569, G535, F532, S527 and R525; all these residues contribute to the TIE with a term lower than −5 kcal/mol. With the exception of Y569, which did not experience any CSPs, and R525, which was not assigned, the aforementioned residues showed substantial CSPs (moderate CSPs between 1 and 2σ for G535 and strong CSPs of >2σ for the others). A hydrogen bond between the compound 9 amide carbonyl and the S527 side chain mimics an equivalent contact involving the Asp residue at position 5 of the 3BP2 TBM model peptide (see Supplementary Fig. 7B). Binding mode 4 further satisfies moderate CSPs by interactions between the furan group with K501 and the furan and adjacent methylene with D521.
In binding mode 5, compound 9 is more extended and translated such that the quinoxaline group more extensively sits between the aromatic residues of the "glycine sandwich". Hydrogen bonds are established between the amide carbonyl and N565 and between the furan oxygen and S527. Both these hydrogen bonds mimic those observed with the TBM peptide (see Supplementary Fig. 7B), and both N565 and S527 were characterised by large CSPs. FMO calculations estimate very favourable interactions (<−5 kcal/mol) of compound 9 with residues Y536, Y569, G535, N565, S527 and R525.
In binding mode 3, the fragment is flipped relative to its orientation in binding modes 4 and 5. Instead of the quinoxaline, the furan and adjacent methylene and amine are sandwiched by the aromatic residues of the "glycine sandwich". Strikingly, binding mode 3 also features hydrogen bonds with N565 and S527. The hydrogen bond of the compound 9 amide carbonyl with the N565 side chain is preserved and serves as the "pivot point" of the flip relative to binding mode 5. The quinoxaline occupies the "central patch", more specifically the space accommodating the Asp residue at position 5 of the 3BP2 model peptide and a water molecule4, hydrogen-bonding with the S527 side chain. FMO calculations indicate that highly favourable contacts are established with residues Y536, Y569, G535, N565, S527, R525 and D521.
In all three binding modes, the sensitivity of ARC binding to amide nitrogen methylation (see compound 14, Table 1) may be explained by steric hindrance through the methyl group, which may induce a conformational change in the compound that is incompatible with ARC binding.
In conclusion, FMO calculations are in agreement with NMR observations, proposing binding modes 4, 3 and 5 as the most probable.
Conclusions and Discussion
Here we identify a quinoxaline-based set of fragments that bind to the substrate/protein-binding ARCs of tankyrase at the same site as TBM peptides. We show that the fragments bind in the "aromatic glycine sandwich" and "central patch" regions, major known hotspots of TBM binding, and propose several possible binding modes. These fragments, even at their current affinities in the millimolar range, provide a potential starting point for the development of tool compounds to investigate the scaffolding roles of tankyrase, with the aim to validate whether tankyrase substrate binding antagonists are a viable approach to inhibiting tankyrase function.
Synthesising a set of TBM peptide variants as part of an initial peptidomimetic approach, we found that the essential, invariant arginine residue at TBM position 1 is challenging to replace with other groups that are less likely to impair cell permeability. Given that neither of the arginine substituents analysed here sufficiently preserved TBM binding, we took a fragment screening approach. Fragment screening circumvents potential challenges associated with the time-consuming, iterative optimisation of a peptide into a peptidomimetic and enables a diverse chemical space to be screened in an unbiased manner. Our in-silico analyses that preceded fragment screening point to the "central patch" region as the top-ranking, potentially ligandable pocket of the ARC. Indeed, the identified fragment binding site overlaps substantially with the "central patch" and the adjacent "aromatic glycine sandwich", an anchor point for another invariant TBM residue, a glycine residue at TBM position 6. In protein-observed NMR studies, both the "central patch" and "aromatic glycine sandwich" coincide with TBM peptide-induced CSPs in the slow-exchange kinetic regime. This further confirms their critical role in TBM binding and illustrates that the fragments indeed target key determinants of the ARC:TBM interaction.
Effective substrate binding antagonists will likely need to target all TBM-binding ARCs in both tankyrase paralogues. Given the high degree of conservation between TNKS and TNKS2 ARCs, and the nearly identical TBM peptide binding infrastructure between different ARCs4,48 (Supplementary Fig. 6), this appears feasible. We indeed observe multi-ARC-binding activity of compound 9. The non-detectable binding of this fragment to TNKS2 ARC1 may be a consequence of its overall low affinity for tankyrase. Multi-ARC binding will need to be monitored as more potent compounds are developed.
Future studies will focus on the structure-based design of TBM competitors with increased affinity. Fully developed tankyrase substrate binding antagonists will enable the complex mechanisms of tankyrase to be probed in a wide range of its biological functions. In the longer term, substrate binding antagonists may be of potential therapeutic value as they offer an opportunity to block both catalytic and non-catalytic functions and may display pharmacodynamics that substantially differ from compounds targeting the tankyrase PARP catalytic domain.
Protein expression and purification
Tankyrase ARC constructs were produced as previously described (see Pollock et al., 2017 for construct details)49. Uniformly 15N-labelled protein was produced in E. coli grown in M9 minimal media containing 15N ammonium chloride (CK Isotopes). One litre of M9 minimal media was prepared by combining M9 medium (10X stock, 100 mL), trace elements solution (100X, 10 mL), glucose (20% w/v, 20 mL), magnesium sulfate (1 M, 1 mL), calcium chloride (1 M, 0.3 mL), biotin (1 mg/mL, 1 mL), thiamin (1 mg/mL, 1 mL) and making up to 1 L with water. M9 medium (10X) contained disodium hydrogen phosphate (60 g/L), potassium dihydrogen phosphate (30 g/L), sodium chloride (5 g/L), and 15N ammonium chloride (25 g/L).
BL21-CodonPlus (DE3)-RIL E. coli cells were transformed with a pETM30-2 plasmid containing the gene for a His6-GST tagged human tankyrase ARC construct4. A single colony was selected and amplified in LB media (5 mL, Laboratory Support Services, ICR) for 6 h. This culture (1 mL) was then used to inoculate minimal media (200 mL) containing kanamycin (50 μg/mL) and chloramphenicol (34 μg/mL), and grown at 37 °C overnight. This starter culture (25 mL) was then used to inoculate each litre of minimal media, containing antibiotics as before. Cultures were grown at 37 °C with shaking (180 rpm) to an optical density of 0.6, measured at 600 nm. The temperature was reduced to 18 °C, and protein expression was induced by the addition of isopropyl-β-D-1-thiogalactopyranoside (IPTG) (0.5 mM). The cultures were incubated at 18 °C overnight. Cells were harvested by centrifugation (4000 × g, 30 min). The pellet was stored at −80 °C until purification following the previously described method49.
Doubly 15N/13C-labelled protein for the backbone and partial side-chain assignment of TNKS2 ARC4 was prepared as the 15N-labelled protein, except that 13C-D-glucose (Cambridge Isotope Laboratories; at 6 g/L of M9 media) was used as well. Method details have been reported elsewhere61.
Fragment screening using a thermal shift assay
For the screen, a C1000 thermal cycler (Bio-Rad) was used to record melt curves. SYPRO Orange was purchased as a 5000 × stock in DMSO from Sigma Aldrich. The ICR fragment library was available as 100 mM stocks in DMSO, and dispensed (25 nL) using an ECHO acoustic liquid handling system into white 384-well PCR plates (Framestar, 4titude). Wells were backfilled with DMSO (225 nL). Buffer (2.75 μL, 25 mM HEPES-NaOH pH 7.5, 100 mM NaCl, 2 mM TCEP) was added, followed by TNKS2 ARC5 (1 μL, 100 μM stock) and then SYPRO Orange dye (1 μL, 25×). The plate was centrifuged after the addition of each reagent (1 min, 1000 × g). Final assay concentrations were as follows: TNKS2 ARC5 (20 μM); fragment (500 μM); SYPRO Orange dye (5×); DMSO (5% v/v) in a total volume of 5 μL. Peptide control wells (3BP2 TBM 16-mer, 200 μM, sequence LPHLQRSPPDGQSFRSW with C-terminal tryptophan added for photometric concentration measurements, the N-terminus acetylated and the C-terminus amide-capped) were plated in triplicate, and there were 12 blank wells per plate, with DMSO only (250 nL, 5% v/v). Melt curves were recorded from 20–95 °C, with the temperature ramped by 0.5 °C every 15 s. Data were analysed using Vortex (Dotmatics) software, and melting temperatures were calculated from both the inflection point and the maximum of the first-derivative data. Data points were excluded if the melt curve was poor, i.e., if there was no fluorescence signal above baseline, high fluorescence intensity throughout, or if the melt curve was shallow (<1000 rfu difference between baseline and peak maximum). The unbound melting temperature was determined from the mean of 12 reference melting curves, with 5% DMSO only. The change in melting temperature (ΔTm) was calculated by subtracting the mean unbound melting temperature (Tm,0) from the compound melting temperature (Tm,compound). Compounds were tested in duplicate, and fragments were defined as hits if they gave a ΔTm more than 2σ from the mean in one or both replicates.
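A minimal sketch of this hit-calling logic is given below. It is not the Vortex workflow used in the screen; all melting temperatures are illustrative placeholders, and the library-wide mean and standard deviation of ΔTm are assumed values rather than screen statistics.

```python
# Minimal sketch of the DSF hit calling described above (not the Vortex workflow).
import statistics

dmso_refs = [48.1, 48.3, 47.9, 48.2, 48.0, 48.1, 48.2, 47.8, 48.0, 48.3, 48.1, 48.0]
tm_unbound = statistics.mean(dmso_refs)  # mean Tm of the 12 DMSO-only reference wells

tm_compound = {  # compound -> (Tm of replicate 1, Tm of replicate 2), in deg C
    "frag_01": (49.2, 49.4),   # stabiliser
    "frag_02": (48.2, 48.0),   # no shift
    "frag_03": (47.0, 47.2),   # destabiliser
    "frag_04": (48.3, 48.1),   # no shift
}

# delta-Tm per replicate, relative to the unbound melting temperature
delta_tm = {name: [tm - tm_unbound for tm in reps] for name, reps in tm_compound.items()}

MU, SIGMA = 0.0, 0.4  # assumed library-wide mean and s.d. of delta-Tm (placeholders)

# Hit if delta-Tm lies more than 2 sigma from the mean in one or both replicates
hits = sorted(name for name, reps in delta_tm.items()
              if any(abs(d - MU) > 2 * SIGMA for d in reps))
print("hits:", hits)  # expected: ['frag_01', 'frag_03']
```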
For the experiment shown in Fig. 2A, ARC and 3BP2 TBM peptide concentrations were 20 and 200 μM, respectively, in 50 mM HEPES-NaOH pH 7.5, 100 mM NaCl, 2 mM TCEP and a total volume of 25 μL. SYPRO Orange was added at 5×. Data were recorded from 4–95 °C, with the temperature ramped by 0.5 °C every 15 s, using a CFX384 thermal cycler (Bio-Rad). Data were analysed by non-linear regression in GraphPad Prism using a Boltzmann sigmoid with linear baselines. ΔTm values were determined using the inflection point method.
NMR experiments
A Bruker 500 MHz instrument, fitted with a 1.7 mm TXI microprobe, was used for all ligand-observed and protein-observed NMR experiments, with samples in 1.7 mm SampleJET NMR tubes (Bruker).
Fragment solubility assay
Fragments were dispensed into a 384-well plate (250 nL of 100 mM stock in DMSO, 500 μM final concentration) using an ECHO acoustic dispenser. DMSO (2.25 μL) was added, then NMR buffer (47.5 μL, 25 mM HEPES-NaOH pH 7.5, 100 mM NaCl, 1 mM TCEP, 10% D2O). The plate was centrifuged (1 min, 1000 × g) before the solutions were transferred to 1.7 mm NMR tubes using a Gilson liquid handling system. 1H NMR spectra were recorded with the DMSO and water signals dampened. 100 μM caffeine was used as an external standard to quantify the ligand signals.
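The quantification against the caffeine external standard can be sketched as below. This assumes a conventional per-proton integral comparison and is not necessarily the exact processing used here; the integrals and proton counts are illustrative placeholders.

```python
# Minimal sketch of external-standard quantification of fragment solubility
# (assumed per-proton integral comparison; integrals below are placeholders).
CAFFEINE_CONC = 100.0     # uM, external standard concentration
CAFFEINE_INTEGRAL = 3.0   # integral of the caffeine reference signal
CAFFEINE_PROTONS = 3      # number of protons contributing to that signal

def estimate_concentration(frag_integral, frag_protons):
    """Apparent fragment concentration (uM) from the per-proton integral ratio."""
    per_proton_frag = frag_integral / frag_protons
    per_proton_std = CAFFEINE_INTEGRAL / CAFFEINE_PROTONS
    return CAFFEINE_CONC * per_proton_frag / per_proton_std

# A fragment dispensed at a nominal 500 uM but only partially soluble:
print(f"apparent solubility ~ {estimate_concentration(3.2, 1):.0f} uM")
```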
T2 relaxation-edited NMR assay
Fragments were dispensed in duplicate into a 384-well plate (250 nL, 100 mM stock in DMSO). Wells were backfilled with DMSO (2.25 μL). Tankyrase ARC protein (47.5 μL in 25 mM HEPES, pH 7.5, 100 mM NaCl, 2 mM TCEP, 10% D2O; 20 μM final protein concentration in the assay) was added to the 'protein' samples; buffer only (47.5 μL) was added to the 'compound-only' samples. Solutions were transferred to 1.7 mm NMR tubes using a Gilson liquid handling system. 1H NMR spectra were recorded, with the DMSO and water signals dampened. A relaxation spin filter was applied at 400 ms65. Data were processed using Bruker Topspin 3.14. Lines were broadened with LB = 3.0, and the baseline was corrected between 6.0–10.0 ppm. The average integral for all peaks between 6.0–10.0 ppm was calculated, and the difference between compound-only and compound-plus-protein samples was compared. A reduction in signal integrals of ≥15% was classified as a hit. For competitive experiments, 3BP2 16-mer peptide (100 μM) was added, and the spectra were recorded and processed as above. The variability in signal reduction in the relaxation-edited experiment was previously determined as approximately ± 10% (Liu et al., unpublished observations). Therefore, replicates were run to account for this variability and to ensure that compounds that resulted in a weak signal reduction that did not meet the arbitrary cut-off were not erroneously excluded.
WaterLOGSY NMR assay
Fragments were dispensed in duplicate into a 384-well plate (250 nL, 100 mM stock in DMSO). Wells were backfilled with DMSO (2.25 μL). Tankyrase ARC protein (47.5 μL in 25 mM HEPES-NaOH pH7.5, 100 mM NaCl, 2 mM TCEP, 10% D2O; 20 μM final protein concentration in the assay) was added to the samples containing protein; buffer only (47.5 μL) was added to the compound-only samples. Solutions were transferred to 1.7 mm NMR tubes using a Gilson liquid handling system. 1H NMR spectra were recorded, with the DMSO signal dampened. The bulk water signal at 4.7 ppm was selectively inverted. Data were processed using Bruker Topspin 3.1466.
Fragment screening using a T2 relaxation-edited ligand-observed NMR assay
Cocktails of four structurally distinct compounds were created using MNova Screen software to ensure there was no significant overlap of peaks in the region of interest (5.5–9.5 ppm). Fragments were screened at 1 mM each, with 4% v/v DMSO. Compounds were dispensed in duplicate using an ECHO acoustic dispenser (0.5 μL of each, 100 mM stock in DMSO). TNKS2 ARC4 (35 μM in 25 mM HEPES-NaOH pH 7.5, 100 mM NaCl, 2 mM TCEP, 10% D2O) was added to one cocktail, and buffer alone to the other replicate for a control sample of compounds alone. Mixtures were incubated for 20 min at room temperature, and then transferred into 1.7 mm NMR tubes. 1H relaxation-edited NMR spectra were collected with double solvent suppression applied to dampen the water and DMSO solvent signals. 1H spectra of each individual compound were used as reference spectra.
Data were processed using Bruker Topspin 3.14, then analysed using the MNova Screen software. Only peaks between 5.5 and 9.5 ppm were considered. Peaks with a height of <5% maximum peak height within the region of interest were considered noise, and the minimum matched peak level was set at >51%. The relative peak intensity change (I) was calculated by Eq. 1 for all peaks in the 5.5–9.5 ppm region, for each compound.
$${\rm{I}}=({{\rm{I}}}_{{\rm{blank}}}-{{\rm{I}}}_{{\rm{protein}}})/{{\rm{I}}}_{{\rm{blank}}}$$
The average percentage change was then calculated, and compounds designated a hit if the signal was reduced by ≥26% (2 σ). Hit fragments were split into two groups: those with a signal change >39% (3 σ, 35 compounds), and those with a signal change of 26–39% reduction (2–3 σ, 65 compounds). Compounds of the first hit group (>3 σ) were tested individually in a second relaxation-edited assay, and the second hit group was tested in a waterLOGSY experiment, reasoning that this may rescue any genuine binders with a relatively small signal in the relaxation-edited assay, which has an intrinsic variability of ±10%. Protein 1H spectra of TNKS2 ARC4 (200 μM) were measured at 24 h intervals to ensure protein stability and folding for the duration of the screening experiments.
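The triage logic can be sketched as follows. This is not the MNova Screen workflow itself; the peak integrals are illustrative placeholders, and σ is taken as 13% so that 2σ and 3σ correspond to the 26% and 39% cut-offs quoted above.

```python
# Minimal sketch of the cocktail-screen triage described above (not the MNova
# Screen workflow). Peak integrals are illustrative placeholders.
from statistics import mean

SIGMA = 0.13  # assumed screen-wide standard deviation, so 2*sigma = 26%, 3*sigma = 39%

def signal_reduction(blank_peaks, protein_peaks):
    """Average relative intensity change I = (I_blank - I_protein) / I_blank (Eq. 1)."""
    return mean((b - p) / b for b, p in zip(blank_peaks, protein_peaks))

compounds = {
    # name: (peak integrals without protein, peak integrals with TNKS2 ARC4)
    "frag_A": ([1.00, 0.80, 0.95], [0.55, 0.45, 0.50]),   # strong reduction
    "frag_B": ([1.00, 0.90], [0.70, 0.65]),               # moderate reduction
    "frag_C": ([1.00, 1.10], [0.98, 1.07]),               # essentially unchanged
}

relax_followup, waterlogsy_followup = [], []
for name, (blank, prot) in compounds.items():
    i = signal_reduction(blank, prot)
    if i > 3 * SIGMA:
        relax_followup.append(name)        # re-test individually by relaxation-edited NMR
    elif i >= 2 * SIGMA:
        waterlogsy_followup.append(name)   # re-test by waterLOGSY
    print(f"{name}: I = {i:.0%}")

print("relaxation-edited follow-up:", relax_followup)
print("waterLOGSY follow-up:", waterlogsy_followup)
```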
Fragment quality control
An Agilent 1200 Series HPLC coupled to an Agilent 6520 quadrupole time-of-flight (qToF) mass spectrometer, fitted with an ESI/APCI multimode ionisation source, was used. All solvents were modified with 0.1% formic acid. Fragments (2 mM in DMSO) were injected (2 μL) onto a Purospher STAR RP-18 end-capped column (3 μm, 30 × 4 mm, Merck KGaA). Chromatographic separation was carried out over a 4-min gradient elution (90:10 to 10:90 water:methanol) at 30 °C. UV-Vis spectra were measured at 254 nm on a 1200 Series diode array detector (Agilent). The eluent flow was split, with 10% infused into the mass spectrometer. Eluent and nebulising gas were introduced perpendicular to the capillary axis, and applying 2 kV to the charging electrode generated a charged aerosol. The aerosol was dried by infrared emitters (200 °C) and drying gas (8 L/min of N2 at 300 °C, 40 psi), producing ions by ESI. Aerosol and ions were transferred to the APCI zone where solvent and analyte were vaporised. A current of 4 μA was applied, producing a corona discharge between the corona needle and APCI counter electrode, which produced ions by APCI. The multimode source operated in simultaneous APCI/ESI mode. During simultaneous APCI/ESI, ions from both ionisation modes entered the capillary and were analysed simultaneously. The fragmentor voltage was set at 180 V and skimmer at 60 V. Mass spectrometry data were acquired in positive ionisation mode over a scan range of m/z 160–950 with reference mass correction at m/z 622.02896 (Hexakis(2,2-difluoroethoxy)phosphazene). Data were analysed using MassHunter Qualitative Analysis B.06.00 (Agilent). Compound purity was calculated using the highest value of %UV (at 254 nm) or %TIC (total ion count).
Kd determination using chemical shift perturbation
TBM peptide and fragment titrations
15N-labelled TNKS2 ARC4 (488–649) (300 μM final concentration; 5 μL of 3 mM stock) in NMR buffer (45 μL, 25 mM HEPES-NaOH pH 7.5, 100 mM NaCl, 1 mM TCEP, 10% D2O) was used as the baseline sample. Peptide titration samples (Table 6) were prepared by diluting the 16-mer 3BP2 peptide with NMR buffer (45 μL), then adding TNKS2 ARC4 (300 μM final concentration; 5 μL of 3 mM stock). Separate samples were prepared for each concentration point. Fragment titration samples (Table 7) were prepared by diluting fragments in NMR buffer (25 mM HEPES-NaOH pH 7.5, 100 mM NaCl, 1 mM TCEP, 10% D2O) and backfilling with DMSO to keep a constant DMSO concentration of 5%. 15N-labelled TNKS2 ARC4 (300 μM final concentration; 5 μL of 3 mM stock) was added. Protein with 5% DMSO alone was used as the baseline. Separate samples were prepared for each concentration point. 1H-15N HSQC spectra were acquired over 3 h, with 64 scans and a spectrum width of 16.00 ppm for 1H and 29.00 ppm for 15N. The pH of the peptide and fragment stocks (at 3 mM) was confirmed to exclude the possibility that peak shifts were due to changes in pH during the titration.
Table 6 Peptide concentrations for titration in protein-observed NMR. The TNKS2 ARC4 concentration was 300 μM.
Table 7 Fragment concentrations for titration in protein-observed NMR. The TNKS2 ARC4 concentration was 300 μM.
Analysis of protein-observed NMR data
Data were processed in Bruker Topspin 3.14 and analysed using CcpNmr Analysis software v2.4.267.
To enable identification of peptide and fragment binding sites, a full backbone and partial side-chain assignment of 15N-13C-labelled TNKS2 ARC4 (488–649) was performed. Overall, out of 165 amino acids (construct + 3 N-terminal residues introduced by the cloning method), 164 residues were assigned, and backbone amides were missing for only two non-proline residues. Assignment details and methods have been reported elsewhere61.
Peaks that shifted were picked manually in each spectrum. The chemical shifts for each peak were measured and exported into Microsoft Excel, where the change in chemical shift from baseline was calculated for hydrogen and nitrogen shifts. The average Euclidean distance shifted (d) was then calculated using Eq. 2, weighting the different nuclei:
$$d=\sqrt{\tfrac{1}{2}\left[\delta_{\mathrm{H}}^{2}+\left(\alpha\cdot \delta_{\mathrm{N}}^{2}\right)\right]},\quad \mathrm{where}\ \alpha =0.14$$
Values of d were plotted against ligand concentration in GraphPad Prism, and fitted with Eq. 3:
$$\Delta \delta_{\mathrm{obs}}=\Delta \delta_{\max }\,\frac{\left([\mathrm{P}]_{\mathrm{t}}+[\mathrm{L}]_{\mathrm{t}}+K_{\mathrm{d}}\right)-\sqrt{\left([\mathrm{P}]_{\mathrm{t}}+[\mathrm{L}]_{\mathrm{t}}+K_{\mathrm{d}}\right)^{2}-4[\mathrm{P}]_{\mathrm{t}}[\mathrm{L}]_{\mathrm{t}}}}{2[\mathrm{P}]_{\mathrm{t}}}$$
Kd values were calculated individually for each peak that shifted. The mean over all shifting peaks was then taken to give an apparent Kd value68.
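A compact sketch of this per-peak fitting procedure is given below. It is not the analysis pipeline used here (GraphPad Prism was used); it simply combines Eq. 2 and Eq. 3 with the ARC4 concentration fixed at 300 μM, and all titration values are illustrative placeholders rather than measured data.

```python
# Minimal sketch of the per-peak Kd fit described above (illustrative data only).
import numpy as np
from scipy.optimize import curve_fit

P_TOTAL = 300.0  # uM TNKS2 ARC4

def weighted_csp(d_h, d_n, alpha=0.14):
    """Weighted CSP d (Eq. 2, as written above)."""
    return np.sqrt(0.5 * (d_h**2 + alpha * d_n**2))

def isotherm(L, kd, d_max, p=P_TOTAL):
    """One-site binding isotherm (Eq. 3): observed CSP vs total ligand concentration."""
    s = p + L + kd
    return d_max * (s - np.sqrt(s**2 - 4.0 * p * L)) / (2.0 * p)

print(f"example weighted CSP: {weighted_csp(0.02, 0.10):.3f} ppm")

ligand = np.array([0.0, 150, 300, 600, 1200, 2400, 4800])  # uM compound 9
peaks = {  # residue -> weighted CSPs (ppm) along the titration (placeholders)
    "S527": np.array([0.0, 0.007, 0.013, 0.022, 0.034, 0.046, 0.055]),
    "Y536": np.array([0.0, 0.008, 0.015, 0.026, 0.040, 0.054, 0.064]),
}

kds = []
for residue, d_obs in peaks.items():
    (kd, d_max), _ = curve_fit(isotherm, ligand, d_obs, p0=(1000.0, 0.08))
    print(f"{residue}: Kd = {kd:.0f} uM, d_max = {d_max:.3f} ppm")
    kds.append(kd)

print(f"apparent Kd (mean over shifting peaks) = {np.mean(kds):.0f} uM")
```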
Kd determination using isothermal titration calorimetry
An ITC200 instrument (MicroCal) was used, fitted with a twisted syringe needle, stirring at 750 rpm. All solutions were degassed using a ThermoVac before use. The reference cell was filled with buffer (200 μL, 25 mM HEPES-NaOH pH 7.5, 100 mM NaCl, 1 mM TCEP, 1% v/v DMSO). The cell was filled with TNKS2 ARC4 (488–649) (200 μL, 200 μM in identical buffer as above). 20 injections (1 × 0.5 μL, then 19 × 2 μL) of compound 9 (5 mM, 1% v/v DMSO in buffer) were performed, with 180 s between injections. Blank correction was performed by titrating compound into buffer alone (25 mM HEPES-NaOH pH 7.5, 100 mM NaCl, 1 mM TCEP, 1% v/v DMSO) using the same injection protocol as above. The first injections from each run were discarded from data analysis. Data were analysed using Origin software with a one-site binding model. Titrations were repeated n = 5. Global analysis was performed using SEDPHAT software69.
In-silico prediction of fragment hotspots and pockets on TNKS2 ARC4
For the FTMap analysis, TNKS2 ARC4 chain D from the TNKS2 ARC4-3BP2 co-crystal structure (PDB 3TWR)4 was submitted to the FTMap web server (ftmap.bu.edu) and analysed under protein-protein interaction mode, as detailed under the published conditions62,63.
For pocket identification using the Roll algorithm, TNKS2 ARC4 chain D from the TNKS2 ARC4-3BP2 co-crystal structure was submitted to the Pocasa 1.1 web server (altair.sci.hokudai.ac.jp/g6/service/pocasa/) and analysed with the following parameters: probe radius, 2 Å; single point flag, 16; protein depth flag, 18; grid size, 1 Å; atom type, protein.
In-silico fragment docking was performed using the commercial docking software GOLD (version 5.6) distributed by CCDC (https://www.ccdc.cam.ac.uk). The structure of human TNKS2 ARC4 in complex with the TBM peptide from 3BP2 (PDB 3TWR, chain D)4 was used as a protein template. The protein was prepared by adding hydrogens using the protonate-3D function within the software MOE, from CCG (https://www.chemcomp.com). A flexible ligand docking protocol was used, setting the GOLD autoscale parameter to 3, and using the PLP scoring function. We constrained GOLD to focus on a region within a distance of 14 Å from the PDGQS sequence of the 3BP2 TBM peptide (positions 4 to 8), as informed by protein-observed NMR, and performed 25 docking runs. We further refined the GOLD docking poses within MOE, using a rigid receptor approach, and adopting the Amber10:ETH force field along with the R-field solvation model (dielectric constants set to 2 and 80). Duplicate binding hypotheses after refinement were discarded. The remaining binding hypotheses were clustered, upon visual inspection, into eight different binding modes. We consider a "binding mode" to represent a cluster of few similar "binding poses". Binding mode 1 (BM1) included 3 binding poses, BM2 2 poses, BM3 3 poses, BM4 4 poses, BM5 2 poses, BM6 2 poses, BM7 1 pose, and BM8 1 pose (see Supplementary Fig. 7A and Table 5). All 18 binding poses underwent more advanced ab-initio calculations for accurate estimation of binding energy, as described in the next section.
Ab-initio fragment molecular orbital calculations (FMO)
To describe the compound 9:TNKS2 ARC4 interaction more quantitatively, we performed advanced ab-initio calculations using the fragment molecular orbital (FMO) method. The FMO method is a general ab-initio method that can be applied to studying large molecular systems, in particular when a standard quantum-mechanical treatment is unfeasible due to the computational demand. The FMO method can reduce the calculation time by dividing a large biological system into small and more computationally tractable fragments; a protein residue, a ligand, or a water molecule are examples of fragments. A number of successful applications of the FMO method for studying large biological systems have been published in the last decade70,71,72,73,74. Here, the protein residues and compound 9 were each treated as separate fragments, although a strict 1:1 correspondence between an FMO fragment and a protein residue cannot always be established. Fragmentation methods for FMO calculations have been reviewed in detail72,75. Upon fragmentation of the biological system, the FMO procedure (1) performed self-consistent field (SCF) calculations for all fragments and all fragment pairs in the system, (2) evaluated general properties, such as energy and gradient, and (3) computed a pair interaction energy term (PIE) for all the fragment pairs. The PIE between two fragments is a sum of five terms: electrostatic, exchange repulsion, charge transfer, dispersion and solvation. A detailed mathematical description of the method has been reported76,77. Finally, the total interaction energy (TIE) between compound 9 and TNKS2 ARC4 was calculated as the sum of all individual PIEs of the ligand. Given the ab-initio treatment, atom polarisability, charge transfer and quantum effects are considered, and the protein:ligand interaction energy is accurately estimated. Solvation effects were also considered by the use of the polarisable continuum model (PCM)77. Note that the TIE of compound 9 is not the difference between the free energy of the protein:ligand complex and the relative free energies of the isolated elements, as it does not include any estimation of the entropy associated with the binding event. The TIE of compound 9 is rather an estimation of the strength of the interaction between the protein and the ligand in its bound state. The binding pose with the lowest TIE, within a specific binding mode, was chosen as representative of the specific binding mode (BM4_C, BM3_C, BM5_A).
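To make the TIE bookkeeping concrete, the toy decomposition below sums the five PIE terms per residue and adds the per-residue PIEs to give the ligand TIE. This is not output from the GAMESS/FMO runs; the residues shown and all energy values are illustrative placeholders.

```python
# Minimal sketch of the PIE/TIE bookkeeping described above (placeholder energies,
# not results from the GAMESS/FMO calculations).
PIE_TERMS = ("electrostatic", "exchange", "charge_transfer", "dispersion", "solvation")

def pie(terms):
    """Pair interaction energy for one residue:ligand pair (kcal/mol)."""
    return sum(terms[t] for t in PIE_TERMS)

def tie(per_residue_terms):
    """Total interaction energy = sum of all ligand PIEs (kcal/mol)."""
    return sum(pie(t) for t in per_residue_terms.values())

pose = {  # a single docking pose: residue -> PIE decomposition (kcal/mol)
    "Y536": {"electrostatic": -3.1, "exchange": 1.8, "charge_transfer": -1.2,
             "dispersion": -4.0, "solvation": -0.6},
    "S527": {"electrostatic": -4.5, "exchange": 2.0, "charge_transfer": -1.8,
             "dispersion": -1.1, "solvation": -0.4},
    "D521": {"electrostatic": -1.0, "exchange": 0.5, "charge_transfer": -0.3,
             "dispersion": -1.2, "solvation": -0.2},
}

strong = sorted(r for r, t in pose.items() if pie(t) < -5.0)  # < -5 kcal/mol cut-off used above
print(f"TIE = {tie(pose):.1f} kcal/mol; strong contacts: {strong}")
```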
We used the FMO78 code version 5.1 as implemented in the general ab-initio quantum chemistry package GAMESS79, version 2018 R1, developed at the Ames Laboratory, Iowa State University (https://www.msg.chem.iastate.edu/gamess/). Calculations were carried out using the second-order Møller-Plesset perturbation theory (MP2) and the 6-31G* basis set, with the addition of diffuse functions to the COO− group. The PCM was used to treat the solvation effect. Input files were prepared with an MOE SVL script kindly provided by CCG, and the automatic fragmentation method implemented in the script was used for fragmenting the protein. FMO calculations only considered residues within a radius of 4.5 Å from the ligand.
Structural representations
All structural representations were generated using UCSF Chimera80 (developed by the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco, supported by NIH P41-GM103311), unless indicated otherwise.
Hottiger, M. O., Hassa, P. O., Lüscher, B., Schüler, H. & Koch-Nolte, F. Toward a unified nomenclature for mammalian ADP-ribosyltransferases. Trends in Biochemical Sciences 35, 208–219 (2010).
Haikarainen, T., Krauss, S. & Lehtiö, L. Tankyrases: structure, function and therapeutic implications in cancer. Current Pharmaceutical Design 20, 6472–6488 (2014).
Gupte, R., Liu, Z. & Kraus, W. L. PARPs and ADP-ribosylation: recent advances linking molecular functions to biological outcomes. Genes & Development 31, 101–126 (2017).
Guettler, S. et al. Structural basis and sequence rules for substrate recognition by Tankyrase explain the basis for cherubism disease. Cell 147, 1340–1354 (2011).
Li, X. et al. Proteomic Analysis of the Human Tankyrase Protein Interaction Network Reveals Its Role in Pexophagy. Cell Reports 20, 737–749 (2017).
Bhardwaj, A., Yang, Y., Ueberheide, B. & Smith, S. Whole proteome analysis of human tankyrase knockout cells reveals targets of tankyrase-mediated degradation. Nat Comms 8, 2214 (2017).
Huang, S.-M. A. et al. Tankyrase inhibition stabilizes axin and antagonizes Wnt signalling. Nature 461, 614–620 (2009).
Yang, E. et al. Wnt pathway activation by ADP-ribosylation. Nat Comms 7, 11430 (2016).
Mariotti, L. et al. Tankyrase Requires SAM Domain-Dependent Polymerization to Support Wnt-β-Catenin Signaling. Molecular Cell 63, 498–513 (2016).
Mariotti, L., Pollock, K. & Guettler, S. Regulation of Wnt/β-catenin signalling by tankyrase-dependent poly(ADP-ribosyl)ation and scaffolding. Br. J. Pharmacol. 172, 5744 (2017).
Smith, S., Giriat, S., Schmitt, A. & de Lange, T. Tankyrase, a Poly(ADP-Ribose) Polymerase at Human Telomeres. Science 282, 1484–1487 (1998).
Smith, S. & de Lange, T. Tankyrase promotes telomere elongation in human cells. Curr. Biol. 10, 1299–1302 (2000).
Dynek, J. N. & Smith, S. Resolution of sister telomere association is required for progression through mitosis. Science 304, 97–100 (2004).
Canudas, S. et al. Protein requirements for sister telomere association in human cells. EMBO J. 26, 4867–4878 (2007).
Chi, N. W. & Lodish, H. F. Tankyrase is a golgi-associated mitogen-activated protein kinase substrate that interacts with IRAP in GLUT4 vesicles. Journal of Biological Chemistry 275, 38437–38444 (2000).
Yeh, T.-Y. J. et al. Hypermetabolism, hyperphagia, and reduced adiposity in tankyrase-deficient mice. Diabetes 58, 2476–2485 (2009).
Zhong, L. et al. The PARsylation activity of tankyrase in adipose tissue modulates systemic glucose metabolism in mice. Diabetologia 59, 582–591 (2016).
Chang, P., Coughlin, M. & Mitchison, T. J. Tankyrase-1 polymerization of poly(ADP-ribose) is required for spindle structure and function. Nat Cell Biol 7, 1133–1139 (2005).
Chang, P., Coughlin, M. & Mitchison, T. J. Interaction between Poly(ADP-ribose) and NuMA contributes to mitotic spindle pole assembly. Mol. Biol. Cell 20, 4575–4585 (2009).
Nagy, Z. et al. Tankyrases Promote Homologous Recombination and Check Point Activation in Response to DSBs. PLoS Genet 12, e1005791 (2016).
Okamoto, K. et al. MERIT40-dependent recruitment of tankyrase to damaged DNA and its implication for cell sensitivity to DNA-damaging anticancer drugs. Oncotarget 9, 35844–35855 (2018).
Wang, W. et al. Tankyrase Inhibitors Target YAP by Stabilizing Angiomotin Family Proteins. Cell Reports 13, 524–532 (2015).
Troilo, A. et al. Angiomotin stabilization by tankyrase inhibitors antagonizes constitutive TEAD-dependent transcription and proliferation of human tumor cells with Hippo pathway core component mutations. Oncotarget 7, 28765–28782 (2016).
Jia, J. et al. Tankyrase inhibitors suppress hepatocellular carcinoma cell growth via modulating the Hippo cascade. PLoS ONE 12, e0184068 (2017).
McCabe, N. et al. Targeting Tankyrase 1 as a therapeutic strategy for BRCA-associated cancer. Oncogene 28, 1465–1470 (2009).
Riffell, J. L., Lord, C. J. & Ashworth, A. Tankyrase-targeted therapeutics: expanding opportunities in the PARP family. Nat. Rev. Drug Disc. 11, 923–936 (2012).
van Kappel, E. C. & Maurice, M. M. Molecular regulation and pharmacological targeting of the β-catenin destruction complex. Br. J. Pharmacol. 174, 4575–4588 (2017).
Zhang, Y. et al. RNF146 is a poly(ADP-ribose)-directed E3 ligase that regulates axin degradation and Wnt signalling. Nat Cell Biol 13, 623–629 (2011).
Callow, M. G. et al. Ubiquitin ligase RNF146 regulates tankyrase and Axin to promote Wnt signaling. PLoS One 6, e22595 (2011).
DaRosa, P. A. et al. Allosteric activation of the RNF146 ubiquitin ligase by a poly(ADP-ribosyl)ation signal. Nature 517, 223–226 (2015).
Wang, Z., Tacchelly-Benites, O., Yang, E. & Ahmed, Y. Dual Roles for Membrane Association of Drosophila Axin in Wnt Signaling. PLoS Genet 12, e1006494 (2016).
Wang, Z. et al. Wnt/Wingless Pathway Activation Is Promoted by a Critical Threshold of Axin Maintained by the Tumor Suppressor APC and the ADP-Ribose Polymerase Tankyrase. Genetics 203, 269–281 (2016).
Wang, Z. et al. The ADP-ribose polymerase Tankyrase regulates adult intestinal stem cell proliferation during homeostasis in Drosophila. Development 143, 1710–1720 (2016).
Novellasdemunt, L., Antas, P. & Li, V. S. W. Targeting Wnt signaling in colorectal cancer. A Review in the Theme: Cell Signaling: Proteins, Pathways and Mechanisms. American Journal of Physiology - Cell Physiology 309, C511–21 (2015).
Lau, T. et al. A novel tankyrase small-molecule inhibitor suppresses APC mutation-driven colorectal tumor growth. Cancer Research 73, 3132–3144 (2013).
de la Roche, M., Ibrahim, A. E. K., Mieszczanek, J. & Bienz, M. LEF1 and B9L shield β-catenin from inactivation by Axin, desensitizing colorectal cancer cells to tankyrase inhibitors. Cancer Research 74, 1495–1505 (2014).
Elliott, R. J. R. et al. Design and discovery of 3-aryl-5-substituted-isoquinolin-1-ones as potent tankyrase inhibitors. Med. Chem. Commun. 6, 1687–1692 (2015).
Tanaka, N. et al. APC Mutations as a Potential Biomarker for Sensitivity to Tankyrase Inhibitors in Colorectal Cancer. Mol. Cancer Ther. 16, 752–762 (2017).
Norum, J. H. et al. The tankyrase inhibitor G007-LK inhibits small intestine LGR5+ stem cell proliferation without altering tissue morphology. Biol. Res. 51, 3 (2018).
Menon, M. et al. A novel tankyrase inhibitor, MSC2504877, enhances the effects of clinical CDK4/6 inhibitors. Sci. Rep. 9, 923 (2019).
Zhong, Y. et al. Tankyrase Inhibition Causes Reversible Intestinal Toxicity in Mice with a Therapeutic Index <1. Toxicol Pathol 44, 267–278 (2016).
Lehtiö, L., Chi, N.-W. & Krauss, S. Tankyrases as drug targets. FEBS J. 280, 3576–3593 (2013).
Steffen, J. D., Brody, J. R., Armen, R. S. & Pascal, J. M. Structural Implications for Selective Targeting of PARPs. Front. Oncol. 3, 301 (2013).
De Rycker, M. & Price, C. M. Tankyrase polymerization is controlled by its sterile alpha motif and poly(ADP-ribose) polymerase domains. Mol. Cell. Biol. 24, 9802–9812 (2004).
Bisht, K. K. et al. GDP-mannose-4,6-dehydratase is a cytosolic partner of tankyrase 1 that inhibits its poly(ADP-ribose) polymerase activity. Mol. Cell. Biol. 32, 3044–3053 (2012).
Bae, J. Tankyrase 1 Interacts with Mcl-1 Proteins and Inhibits Their Regulation of Apoptosis. Journal of Biological Chemistry 278, 5195–5204 (2002).
Seimiya, H., Muramatsu, Y., Smith, S. & Tsuruo, T. Functional Subdomain in the Ankyrin Domain of Tankyrase 1 Required for Poly(ADP-Ribosyl)ation of TRF1 and Telomere Elongation. Mol. Cell. Biol. 24, 1944–1955 (2004).
Eisemann, T. et al. Tankyrase-1 Ankyrin Repeats Form an Adaptable Binding Platform for Targets of ADP-Ribose Modification. Structure 24, 1679–1692 (2016).
Pollock, K., Ranes, M., Collins, I. & Guettler, S. Identifying and Validating Tankyrase Binders and Substrates: A Candidate Approach. Methods Mol. Biol. 1608, 445–473 (2017).
Morrone, S., Cheng, Z., Moon, R. T., Cong, F. & Xu, W. Crystal structure of a Tankyrase-Axin complex and its implications for Axin turnover and Tankyrase substrate recruitment. Proceedings of the National Academy of Sciences 109, 1500–1505 (2012).
DaRosa, P. A., Klevit, R. E. & Xu, W. Structural basis for tankyrase-RNF146 interaction reveals noncanonical tankyrase-binding motifs. Protein Sci. 27, 1057–1067 (2018).
Xu, W. et al. Macrocyclized Extended Peptides: Inhibiting the Substrate-Recognition Domain of Tankyrase. J. Am. Chem. Soc. 139, 2245–2256 (2017).
Chiang, Y. J. et al. Tankyrase 1 and tankyrase 2 are essential but redundant for mouse embryonic development. PLoS One 3, e2639 (2008).
Jones, G., Willett, P., Glen, R. C., Leach, A. R. & Taylor, R. Development and validation of a genetic algorithm for flexible docking. J. Mol. Biol. 267, 727–748 (1997).
Silva-Santisteban, M. C. et al. Fragment-based screening maps inhibitor interactions in the ATP-binding site of checkpoint kinase 2. PLoS One 8, e65689 (2013).
Simeonov, A. Recent developments in the use of differential scanning fluorometry in protein and small molecule discovery and characterization. Expert Opin Drug Discov 8, 1071–1082 (2013).
Cimmperman, P. et al. A quantitative model of thermal stabilization and destabilization of proteins by ligands. Biophys. J. 95, 3222–3231 (2008).
Baell, J. B. & Holloway, G. A. New substructure filters for removal of pan assay interference compounds (PAINS) from screening libraries and for their exclusion in bioassays. J. Med. Chem. 53, 2719–2740 (2010).
Wielens, J. et al. Parallel screening of low molecular weight fragment libraries: do differences in methodology affect hit identification? J Biomol Screen 18, 147–159 (2013).
Peng, C. et al. Fast and Efficient Fragment-Based Lead Generation by Fully Automated Processing and Analysis of Ligand-Observed NMR Binding Data. J. Med. Chem. 59, 3303–3310 (2016).
Zaleska, M., Pollock, K., Collins, I., Guettler, S. & Pfuhl, M. Solution NMR assignment of the ARC4 domain of human tankyrase 2. Biomol NMR Assign 13, 255–260 (2019).
Brenke, R. et al. Fragment-based identification of druggable 'hot spots' of proteins using Fourier domain correlation techniques. Bioinformatics 25, 621–627 (2009).
Kozakov, D. et al. The FTMap family of web servers for determining and characterizing ligand-binding hot spots of proteins. Nat Protoc 10, 733–755 (2015).
Yu, J., Zhou, Y., Tanaka, I. & Yao, M. Roll: a new algorithm for the detection of protein pockets and cavities with a rolling probe sphere. Bioinformatics 26, 46–52 (2010).
Hajduk, P. J., Olejniczak, E. T. & Fesik, S. W. One-Dimensional Relaxation- and Diffusion-Edited NMR Methods for Screening Compounds That Bind to Macromolecules. J. Am. Chem. Soc. 119, 12257–12261 (1997).
Dalvit, C. et al. Identification of compounds with binding affinity to proteins via magnetization transfer from bulk water. J. Biomol. NMR 18, 65–68 (2000).
Vranken, W. F. et al. The CCPN data model for NMR spectroscopy: development of a software pipeline. Proteins 59, 687–696 (2005).
Williamson, M. P. Using chemical shift perturbation to characterise ligand binding. Progress in Nuclear Magnetic Resonance Spectroscopy 73, 1–16 (2013).
Zhao, H., Piszczek, G. & Schuck, P. SEDPHAT–a platform for global ITC analysis and global multi-method analysis of molecular interactions. Methods 76, 137–148 (2015).
Heifetz, A. et al. Fragment Molecular Orbital Method Applied to Lead Optimization of Novel Interleukin-2 Inducible T-Cell Kinase (ITK) Inhibitors. J. Med. Chem. 59, 4352–4363 (2016).
Heifetz, A., James, T., Morao, I., Bodkin, M. J. & Biggin, P. C. Guiding lead optimization with GPCR structure modeling and molecular dynamics. Current Opinion in Pharmacology 30, 14–21 (2016).
Fedorov, D. G. & Kitaura, K. Extending the power of quantum chemistry to large systems with the fragment molecular orbital method. J Phys Chem A 111, 6904–6914 (2007).
Mazanetz, M. P., Ichihara, O., Law, R. J. & Whittaker, M. Prediction of cyclin-dependent kinase 2 inhibitor potency using the fragment molecular orbital method. J Cheminform 3, 2 (2011).
Heifetz, A. et al. The Fragment Molecular Orbital Method Reveals New Insight into the Chemical Nature of GPCR-Ligand Interactions. J. Chem. Inf. Model. 56, 159–172 (2016).
Fedorov, D. G., Nagata, T. & Kitaura, K. Exploring chemistry with the fragment molecular orbital method. Phys Chem Chem Phys 14, 7562–7577 (2012).
Fedorov, D. G. & Kitaura, K. Pair interaction energy decomposition analysis. J Comput Chem 28, 222–237 (2007).
Nishimoto, Y. & Fedorov, D. G. Three-body expansion of the fragment molecular orbital method combined with density-functional tight-binding. J Comput Chem 38, 406–418 (2017).
Fedorov, D. G. & Kitaura, K. The importance of three-body terms in the fragment molecular orbital method. J Chem Phys 120, 6832–6840 (2004).
Schmidt, M. W. et al. General atomic and molecular electronic structure system. J Comput Chem 14, 1347–1363 (1993).
Pettersen, E. F. et al. UCSF Chimera–a visualization system for exploratory research and analysis. J Comput Chem 25, 1605–1612 (2004).
Xu, D. et al. USP25 regulates Wnt signaling by controlling the stability of tankyrases. Genes & Development 31, 1024–1035 (2017).
We thank Fiona Jeganathan for her help in optimising the DSF assay, Meirion Richards for mass spectrometry analysis, and Rob van Montfort and Rosemary Burke for access to technologies. KP was funded by a Wellcome Trust PhD studentship (WT102360/Z/13/Z). Work in the SG laboratory has been supported by The Institute of Cancer Research (ICR), by Cancer Research UK through a Career Establishment Award (C47521/A16217), followed by a Programme Foundation Award (C47521/A28286), by the Wellcome Trust through an Investigator Award (214311/Z/18/Z), and by The Lister Institute of Preventive Medicine through a Lister Institute Research Prize Fellowship. Work in the IC laboratory has been supported by ICR and by Cancer Research UK through funding to the Cancer Therapeutics Unit (C309/A11566). This project received further funding through a Faringdon Proof of Concept Fund award from ICR.
Author information
Katie Pollock, Mariola Zaleska & Sebastian Guettler: Divisions of Structural Biology & Cancer Biology, The Institute of Cancer Research (ICR), London, SW7 3RP, United Kingdom.
Katie Pollock, Manjuan Liu, Mirco Meniconi & Ian Collins: Division of Cancer Therapeutics, The Institute of Cancer Research (ICR), London, SW7 3RP, United Kingdom.
Mark Pfuhl: School of Cardiovascular Medicine and Sciences and Randall Centre, King's College London, Guy's Campus, London, SE1 1UL, United Kingdom.
Present address of Katie Pollock: Cancer Research UK Beatson Institute, Drug Discovery Programme, Glasgow, G61 1BD, United Kingdom.
Designed and planned experiments: K.P., S.G. and I.C.; protein production: K.P.; fragment screening by thermal shift: K.P.; fragment screening and validation by NMR: K.P. and M.L.; fragment binding site location by protein-observed NMR: K.P., M.Z. and M.P.; in-silico docking and ab-initio FMO calculations: M.M.; data analysis: K.P., M.Z., M.M., S.G. and I.C.; wrote the paper: K.P., S.G. and I.C. with input from all authors.
Correspondence to Ian Collins or Sebastian Guettler.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Pollock, K., Liu, M., Zaleska, M. et al. Fragment-based screening identifies molecules targeting the substrate-binding ankyrin repeat domains of tankyrase. Sci Rep 9, 19130 (2019) doi:10.1038/s41598-019-55240-5
Scientific Reports menu
About Scientific Reports
Guest Edited Collections
Scientific Reports Top 100 2017
Scientific Reports Top 10 2018
Editorial Board Highlights | CommonCrawl |
Quick list
<< Thursday, November 29, 2018 >>
Inequality in Life and Death: Policy and Prospect
Conference/Symposium | November 29 | 8:30 a.m.-12:30 p.m. | Sutardja Dai Hall, Bantao Auditorium
Featured Speaker: Peter Orszag, Vice Chairman of Investment Banking and Global Co-Head of Healthcare, Lazard
Speakers: Yuriy Gorodnichenko, Quantedge Presidential Professor of Economics, UC Berkeley Economics; Hilary Hoynes, Professor of Public Policy and Economics, UC Berkeley Economics; Ronald Lee, Professor of the Graduate School in Demographics and Economy, UC Berkeley Economics; Gabriel Zucman, Assistant Professor of Economics, UC Berkeley Economics
Moderator: Alan Auerbach, Burch Professor of Economics and Law, UC Berkeley Economics
Sponsor: Berkeley Center on the Economics and Demography of Aging (CEDA)
Inequality has become a central focus of policy discussions, but inequality has multiple dimensions and correspondingly many potential policy interventions. This mini-conference will consider inequality from this broad perspective.
RSVP info: RSVP online
Paris/Berkeley/Bonn/Zürich Analysis Seminar: The scalar wave equation on general asymptotically flat spacetimes: Stability and instability results
Seminar | November 29 | 9:10-10 a.m. | 238 Sutardja Dai Hall
Speaker: Georgios Moschidis, Miller Institute and UC Berkeley
Sponsor: Department of Mathematics
In this talk, we will examine how certain geometric conditions on general asymptotically flat spacetimes $(\mathcal M,g)$ are related to stability or instability properties of solutions to the scalar wave equation $\square _g\psi =0$ on $\mathcal M$. First, in the case when $(\mathcal M,g)$ possesses an event horizon with positive surface gravity and an ergoregion which is sufficiently small in... More >
Foreign Language and Area Studies (FLAS) Fellowship Informational Workshop for prospective applicants
Information Session | November 29 | 11 a.m.-12 p.m. | 309 Sproul Hall
Sponsor: Center for African Studies
Foreign Language and Area Studies (FLAS) fellowships provide funding to students to encourage the study of less commonly taught foreign languages in combination with area and international studies. These fellowships are funded by grants from the U.S. Department of Education. The purpose of the FLAS program is to promote the training of students who intend to make their careers in college or... More >
Applied Math Seminar: Renormalization and large eddy simulation for a driven Burgers equation in a hydrodynamic regime
Seminar | November 29 | 11 a.m.-12 p.m. | 732 Evans Hall
Speaker: Alexandre Chorin, UC Berkeley
A real-space renormalization group (RNG) is constructed for a randomly-driven Burgers equation, with irrelevant degrees of freedom eliminated sequentially by stochastic parametrization followed by scaling. The connection with more standard implementations of an RNG is spelled out. The parameters in the equation and in the forcing, as well as the construction of the RNG, are chosen so that the... More >
Oliver E. Williamson Seminar: "Folklore"
Seminar | November 29 | 12-1:30 p.m. | C330 Haas School of Business
Speaker: Stelios Michalopoulos, Brown
Sponsor: Department of Economics
The Oliver E. Williamson Seminar on Institutional Analysis, named after our esteemed colleague who founded the seminar, features current research by faculty, from UCB and elsewhere, and by advanced doctoral students. The research investigates governance, and its links with economic and political forces. Markets, hierarchies, hybrids, and the supporting institutions of law and politics all come... More >
SURF Summer Research Scholarships Info Session
Information Session | November 29 | 12-1 p.m. | 9 Durant Hall
Speaker/Performer: Sean Burns, Director, Office of Undergraduate Research and Scholarships
Sponsor: Office of Undergraduate Research
OURS Staff will discuss eligibility criteria for SURF programs, benefits of the fellowship and tips for a successful application
Harnessing the power of data science and real world evidence for cancer treatment, access, and care
Seminar | November 29 | 12-1 p.m. | Webinar
Speakers/Performers: Raima Mathur, Flatiron Health; Meghna Samant, Flatiron Heatlth
Sponsors: Public Health, School of, Public Health Alumni Association board of directors
Join Meghna Samant and Raina Mathur of Flatiron Health, for this professional health development lunchtime webinar sponsored by the Public Health Alumni Association Board of Directors.
Registration recommended
Registration info: Register online
MEng Info Session: University of California, Los Angeles
Information Session | November 29 | 12-1 p.m. | University of California, Los Angeles, Engineering V, Room 5101
Sponsor: Fung Institute for Engineering Leadership
Sign up to attend the MEng information session on Thursday, November 29, 2018 from 12-1pm in the Engineering V Building. Lunch will be served!
Certificate Program in Teaching English to Speakers of Other Languages Online Information Session
Information Session | November 29 | 12-1 p.m. | Online
Sponsor: UC Berkeley Extension
Learn how UC Berkeley Extension's professional certificate can prepare you for diverse job opportunities—in teaching, business, publishing, travel and more—both in the United States and around the world.
Reservation info: Make reservations online
Harnessing the power of data science and real-world evidence for cancer treatment, access, and care: Professional Development Webinar sponsored by the Public Health Alumni Association
Speakers: Meghna Samant, Flatiron Health; Raina Mathur
Sponsor: UC Berkeley School of Public Health
Professional Development Webinar sponsored by the Public Health Alumni Association
IB Seminar: Large scale phylogenetics, from within STD outbreaks to across the tree of life
Seminar | November 29 | 12:30-1:30 p.m. | 2040 Valley Life Sciences Building
Featured Speaker: Emily Jane McTavish, University of California, Merced
Sponsor: Department of Integrative Biology
Haas Scholars Program Info Session: $13,800 to carry out a final project in *ANY* major
Information Session | November 29 | 1-2 p.m. | 9 Durant Hall
Speaker/Performer: Leah Carroll, Haas Scholars
Learn about how to apply to this research program for your last year!
The Haas Scholars Program supports twenty undergraduates with financial need with their interest for conducting research during their final year at UC-Berkeley. Applicants are evaluated primarily on the merit and originality of their proposal for an independent research or creative project that will serve as the basis for a... More >
Econ 235, Financial Economics Student Seminar: "TBA"
Seminar | November 29 | 1-2 p.m. | 597 Evans Hall
Speakers/Performers: Todd Messer and Peter McCrory, UC Berkeley; Juan Herreno, Columbia University
EHS 201 Biosafety in Laboratories
Course | November 29 | 1:30-3:30 p.m. | 177 Stanley Hall | Note change in date, time, and location
Sponsor: Office of Environment, Health & Safety
This training is required for anyone who is listed on a Biological Use Authorization (BUA) application form that is reviewed by the Committee for Laboratory and Environmental Biosafety (CLEB). A BUA is required for anyone working with recombinant DNA molecules, human clinical specimens or agents that may infect humans, plants or animals. This safety training will discuss the biosafety risk...
How to Write a Research Proposal Workshop
Workshop | November 29 | 2-3 p.m. | 9 Durant Hall
Speaker: Leah Carroll, Haas Scholars Program Manager/Advisor, Office of Undergraduate Research and Scholarships
Need to write a grant proposal? This workshop is for you! You'll get a head start on defining your research question, developing a lit review and project plan, presenting your qualifications, and creating a realistic budget.
Open to all UC Berkeley students.
Seminar 251, Labor Seminar: "What Accounts for the Racial Gap in Time Allocation and Intergenerational Transmission of Human Capital?"
Seminar | November 29 | 2-3:30 p.m. | 648 Evans Hall
Featured Speaker: George-Levi Gayle, Washington University in St. Louis
Sponsor: Center for Labor Economics
Electronic Resources for Chinese Studies in Social Sciences
Information Session | November 29 | 2-3:30 p.m. | East Asian Library, Room 341
Speaker/Performer: Susan Xue, C. V. Starr East Asian Library
Sponsor: Library
Learn to locate books, articles, data and government documents in electronic format for Chinese Studies in the social sciences.
[Lecture Michael Taussig] Killing In America: A Genealogy of Corpse Magic
Lecture | November 29 | 3-5 p.m. | 221 Kroeber Hall
Speaker/Performer: Michael Taussig, Columbia University Anthropology Dept.
Sponsor: Experimental Ethnography Working Group
Columbia University Professor of Anthropology Michael Taussig will offer a lecture on his recent work.
UCDC Info Session: Fall 2019 application deadline, February 21, 2019
Speaker: Mary Crabb, UCDC
Sponsor: The UC Berkeley Washington Program
Come learn about how to spend a semester working and studying in Washington, DC. UCDC sends students from all majors to intern and take classes in DC, earning a full semester of Berkeley credit.
Convergent circuitry for thermoregulation
Seminar | November 29 | 3:30-4:30 p.m. | 101 Life Sciences Addition
Featured Speaker: Lily Jan, University of California, San Francisco
Sponsor: Department of Molecular and Cell Biology
This seminar is partially sponsored by NIH
The Gerald D. and Norma Feldman Annual Lecture: The Life and Death of the Russian Revolution
Lecture | November 29 | 4-6 p.m. | Bancroft Hotel
Location: 2680 Bancroft Way, Berkeley, CA 94704
Speaker: Professor Yuri Slezkine, Jane K. Sather Professor of History, Dept. of History
Sponsors: Institute of European Studies, Department of History, Institute of Slavic, East European, and Eurasian Studies (ISEEES)
This talk will follow the lives of the original Bolsheviks from the time they joined the apocalyptic sect known as "the party of a new type" to the time most of them were arrested for terrorism and treason. It will focus on the connection between private lives and millenarian expectations and attempt to clarify the reasons for socialism's premature demise.
Critical Auralities: Reencountering the Korean War through the Praxis of Listening
Colloquium | November 29 | 4-5:30 p.m. | 180 Doe Library
Speaker: Crystal Baik, University of California, Riverside
Sponsor: Center for Korean Studies (CKS)
Drawing from a chapter of her forthcoming book, Reencounters: On the Korean War & Diasporic Memory Critique, Professor Baik discusses a diasporic repertoire of multigenerational oral history archives that have coalesced in the past twenty years in relation to the un-ended Korean War.
Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor
Special Event | November 29 | 4-5:30 p.m. | Banatao Auditorium, Sutardja Dai Hall
Speaker: Virginia Eubanks
Sponsor: Information, School of
The impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America.
Mathematics Department Colloquium: Persistent homology and applications from PDE to symplectic topology
Colloquium | November 29 | 4:10-5 p.m. | 60 Evans Hall
Speaker: Claude Viterbo, Ecole Normale Supérieure
Persistent homology has emerged in the field of topological data analysis, and in a different formulation in the work of Barannikov in Morse theory. We shall explain what it is, and how this comes into play crucially on one hand in the spectral asymptotics of the Witten Laplacian, and on the other hand in several questions in Hamiltonian dynamics and symplectic topology.
Rahul Verma | Ideology and Identity: The Changing Party Systems of India
Lecture | November 29 | 5-6:30 p.m. | Stephens Hall, 10 (ISAS Conf. Room)
Speaker: Rahul Verma, PhD Candidate, Political Science, University of California, Berkeley
Moderator: Munis Faruqui, Director, Institute for South Asia Studies; Sarah Kailath Professor of India Studies; Associate Professor in the Department of South and Southeast Asian Studies
Sponsors: Institute for South Asia Studies, Sarah Kailath Chair of India Studies, Department of Political Science, Institute of International Studies
A talk by PhD candidate in Political Science at UC Berkeley, Rahul Verma on his upcoming co-authored (with Pradeep Chhibber) book, Ideology and Identity: The Changing Party Systems of India.
Living in a Sacred Cosmos: Indonesia and the Future of Islam
Lecture | November 29 | 5-6:30 p.m. | Graduate Theological Union, Board Room, GTU Library
Location: 2400 Ridge Rd., Berkeley, CA 94709
Speaker: Bernard Adeney-Risakotta, Founding director, Indonesian Consortium for Religious Studies (Yogyakarta)
Sponsors: Center for Southeast Asia Studies, Center for Islamic Studies
Bernard Adeney-Risakotta will discuss his new book, Living in a Sacred Cosmos: Indonesia and the Future of Islam, which was recently published as a monograph by Yale University's Southeast Asia Council.
NPN Social Hour
Workshop | November 29 | 5:15-6:15 p.m. | Hearst Museum of Anthropology, 102 (Kroeber Hall)
Sponsor: Human Resources
Meet at the museum entrance on the building's south side. Bring your staff ID for free admission.
Race and the Apparatus of Disposability
Workshop | November 29 | 5:30-7:30 p.m. | Simon Hall, Goldberg Room (Room 297)
Speaker/Performer: Sherene H. Razack, Distinguished Professor and the Penny Kanner Endowed Chair in Gender Studies, UCLA
5:30pm – Reception
6pm – Lecture
Disposability, a condition written on the body, is a racial project. Populations that stand in the way of the progress of capital accumulation, are targeted for disposability, and relegated to the realm of "sub-humanity." Processes of disposability enable white Europeanness to prevail. In this paper, I pursue what race has to do with disposability through an...
Sponsor: Center for Race and Gender
Lightspeed Venture Partners Info-Session: Fellowship Info-session
Information Session | November 29 | 6-8 p.m. | Cory Hall, Cory-521 Hogan Rm
Sponsor: Electrical Engineering and Computer Sciences (EECS)
Got a new startup idea? Or a vision you can't wait to see take place?
Each year, Lightspeed Venture Partners hosts an intensive summer fellowship program to provide teams with the resources, guidance, and connections they need to grow their ventures. Come join Lightspeed recruiters and Blueprint on 11/29 to learn more about how the Fellowship program can help you accelerate your startup idea.
Interdisciplinary Marxist Working Group
Meeting | September 20 – December 13, 2018 every other Thursday | 6-8 p.m. | 306 Wheeler Hall
Sponsor: Department of English
Please join us for this semester's first meeting of the IMWG on Thursday, Sept 6 from 6-8pm in the Wheeler English lounge. We will be continuing with where we left off last semester in Capital, with plans to finish volume 1 by early October.
No prior knowledge of Capital or Marx is required, and everyone is welcome. I'm attaching a rough schedule, as our readings after Capital vol 1 will...
The Battle Front for the Liberation of Japan—Summer in Sanrizuka
Film - Feature | November 29 | 7-8:40 p.m. | Berkeley Art Museum and Pacific Film Archive
Shinsuke Ogawa "has been unaccountably neglected in the Western world. . . . [His is an] extraordinary, incisive, and deeply committed body of work" (Jed Rapfogel, Anthology Film Archives). In 1968, Ogawa and the new filmmaking collective Ogawa Pro "followed a brigade of student activists and joined the growing movement of resistance by the farmers and their allies against the forced eviction... More >
Exhibits and Ongoing Events
Luminous Disturbances: Paintings by Kara Maria
Exhibit - Painting | September 10 – December 14, 2018 every Monday, Tuesday, Wednesday, Thursday & Friday | Stephens Hall, Townsend Center for the Humanities
Sponsor: Townsend Center for the Humanities
Kara Maria's "cheerfully apocalyptic" paintings engage with a host of political issues, including war and environmental destruction. On display at the Townsend Center for the Humanities Sept 10 - Dec 14, 2018.
Exhibit - Artifacts | October 13, 2017 – May 30, 2019 every Monday, Tuesday, Wednesday, Thursday, Friday & Saturday | Bancroft Library, Rowell Cases, near Heyns Reading Room, 2nd floor corridor between The Bancroft Library and Doe
Let there be laughter! This exhibition features Cal students' cartoons, jokes, and satire from throughout the years, selected from their humor magazines and other publications.
Immigration, Deportation and Citizenship, 1908-2018: Selected Resources from the IGS and Ethnic Studies Libraries
Exhibit - Artifacts | August 31 – December 10, 2018 every day | Moses Hall, IGS Library - 109 Moses
Sponsors: Institute of Governmental Studies Library, Ethnic Studies Library
"Immigration, Deportation and Citizenship, 1908-2018: Selected Resources from the IGS and Ethnic Studies Libraries" contains items from the Ethnic Studies Library and the Institute of Governmental Studies Library addressing historical attitudes and policy around immigration, deportation, and citizens' rights, as well as monographs and ephemera relating to current events.
The Handmaid's Tale: an exhibit at Moffitt Library
Exhibit - Multimedia | September 5 – December 31, 2018 every day | Moffitt Undergraduate Library, 3rd Floor near Elevators
The new Moffitt Library exhibit explores the themes and antecedents of The Handmaid's Tale, this year's On the Same Page program selection. On exhibit are library materials and quotes that demonstrate that not only were we wrong to say "it can't happen here" - it has already happened, all over the world: Berlin, Nazi Germany, Argentina, and yes, here in the US.
Attendance restrictions: UC Berkeley ID required for entrance to Moffitt Library.
Art for the Asking: 60 Years of the Graphic Arts Loan Collection at the Morrison Library
Exhibit - Artifacts | September 17, 2018 – February 28, 2019 every day | Doe Library, Bernice Layne Brown Gallery
Art for the Asking: 60 Years of the Graphic Arts Loan Collection at the Morrison Library will be up in Doe Library's Brown Gallery until March 1st, 2019. This exhibition celebrates 60 years of the Graphic Arts Loan Collection, and includes prints in the collection that have not been seen in 20 years, as well as prints that are now owned by the Berkeley Art Museum. There are also cases dedicated...
Boundless: Contemporary Tibetan Artists at Home and Abroad
Exhibit - Painting | October 3, 2018 – May 26, 2019 every day | Berkeley Art Museum and Pacific Film Archive
Featuring works by internationally renowned contemporary Tibetan artists alongside rare historical pieces, this exhibition highlights the ways these artists explore the infinite possibilities of visual forms to reflect their transcultural, multilingual, and translocal lives. Though living and working in different geographical areas—Lhasa, Dharamsala, Kathmandu, New York, and the Bay Area—the...
Harvey Quaytman: Against the Static
Exhibit - Painting | October 17, 2018 – January 27, 2019 every day | Berkeley Art Museum and Pacific Film Archive
The paintings of Harvey Quaytman (1937–2002) are distinct for their novel explorations of shape, drawing, texture, geometric pattern, and color application. While his works display a rigorous experimentation with formalism and materiality, they are simultaneously invested with rich undertones of sensuality, complexity, and humor. This new retrospective exhibition charts the trajectory of...
Dimensionism: Modern Art in the Age of Einstein
Exhibit - Painting | November 7, 2018 – March 3, 2019 every day | Berkeley Art Museum and Pacific Film Archive
In the early twentieth century, inspired by modern science such as Albert Einstein's theory of relativity, an emerging avant-garde movement sought to expand the "dimensionality" of modern art, engaging with theoretical concepts of time and space to advance bold new forms of creative expression. Dimensionism: Modern Art in the Age of Einstein illuminates the remarkable connections between the...
Art Wall: Barbara Stauffacher Solomon
Exhibit - Painting | August 15, 2018 – March 3, 2019 every day | Berkeley Art Museum and Pacific Film Archive
The 1960s architectural phenomenon Supergraphics—a mix of Swiss Modernism and West Coast Pop—was pioneered by San Francisco–based artist, graphic and landscape designer, and writer Barbara Stauffacher Solomon. Stauffacher Solomon, a UC Berkeley alumna, is creating new Supergraphics for BAMPFA's Art Wall. Land(e)scape 2018 is the fifth in a series of temporary, site-specific works commissioned for...
Old Masters in a New Light: Rediscovering the European Collection
Exhibit - Painting | September 19 – December 16, 2018 every day | Berkeley Art Museum and Pacific Film Archive
Since 1872, the University of California, Berkeley has been collecting works by European artists, building a collection that includes many rare and exceptional works distinguished by artistic innovation, emotional and psychological depth, and technical virtuosity. Consisting mostly of gifts from professors, alumni, and other supporters, the collection continues to evolve, representing artistic...
Well Played! The Math and Science of Improving Your Game
Exhibit - Multimedia | November 17, 2018 – May 18, 2019 every day | Lawrence Hall of Science
Sponsor: Lawrence Hall of Science (LHS)
You don't have to be a pro to know that math and science can help improve your game. In our exhibit, Well Played!, you can experiment with force, angles, and trajectory to get the highest scores you can with classic arcade games such as Skeeball, Pinball, and Basketball.
Want to improve your score? Try our interactive exhibits on the math and science behind force and trajectory, and then head...
Bearing Light: Berkeley at 150
Exhibit - Artifacts | April 16, 2018 – February 28, 2019 every Monday, Tuesday, Wednesday, Thursday & Friday | 8 a.m.-5 p.m. | Bancroft Library, 2nd Floor Corridor
This exhibition celebrates the University of California's sesquicentennial anniversary with photographs, correspondence, publications, and other documentation drawn from the University Archives and The Bancroft Library collections. It features an array of golden bears, including Oski, and explores the illustrious history of UC Berkeley.
Facing West 1: Camera Portraits from the Bancroft Collection
Exhibit - Photography | November 9, 2018 – March 15, 2019 every Monday, Tuesday, Wednesday, Thursday & Friday | 10 a.m.-4 p.m. | Bancroft Library
The first part of a double exhibition celebrating the tenth anniversary of the renewed Bancroft Library and its gallery, Facing West 1 presents a cavalcade of individuals who made, and continue to make, California and the American West. These camera portraits highlight the communities and peoples of Hubert Howe Bancroft's original collecting region, which extended from the Rockies to the Pacific...
Facing West: Camera Portraits from the Bancroft Collection
Exhibit - Photography | November 29, 2018 – March 15, 2019 every Monday, Tuesday, Wednesday, Thursday & Friday | 10 a.m.-4 p.m. | Bancroft Library, Bancroft Gallery
Face to Face: Looking at Objects That Look at You
Exhibit - Multimedia | March 10 – December 9, 2018 every Sunday, Wednesday, Thursday, Friday & Saturday | 11 a.m.-5 p.m. | Hearst Museum of Anthropology | Note change in date
Sponsor: Phoebe A. Hearst Museum of Anthropology
For this Spring 2018 exhibit, entitled Face to Face: Looking at Objects that Look at You, the Hearst staff and 14 UC Berkeley freshmen have co-curated a global selection of objects that depict human faces in different ways. The exhibit asks: Why and how do crafting traditions of the world so often incorporate human faces, and how do people respond to those faces? Objects such as West African...
The Karaite Canon: Manuscripts and Ritual Objects from Cairo
Exhibit - Artifacts | August 28 – December 14, 2018 every Tuesday, Wednesday, Thursday & Friday | 11 a.m.-4 p.m. | Magnes Collection of Jewish Art and Life (2121 Allston Way)
The Karaite Canon highlights a selection from the more than fifty manuscripts brought to California, along with ritual objects belonging to Cairo's Karaite community.
Acquired by The Magnes Collection of Jewish Art and Life in 2017 thanks to an unprecedented gift from Taube Philanthropies, the most significant collection of works by Arthur Szyk (Łódź, Poland, 1894 – New Canaan, Connecticut, 1951) is now available to the world in a public institution for the first time as...
Pièces de Résistance: Echoes of Judaea Capta From Ancient Coins to Modern Art
This exhibition will be continuing in Spring 2019.
Notions of resistance, alongside fears and realities of oppression, resound throughout Jewish history. As a minority, Jews express their political aspirations, ideals of heroism, and yearnings of retaliation and redemption in their rituals, art, and everyday life.
Centering on coins in The Magnes Collection, this exhibition explores how...
Project "Holy Land": Yaakov Benor-Kalter's Photographs of British Mandate Palestine, 1923-1940
Exhibit - Photography | August 28 – December 14, 2018 every Tuesday, Wednesday, Thursday & Friday | 11 a.m.-4:05 p.m. | Magnes Collection of Jewish Art and Life (2121 Allston Way)
For nearly two decades, Yaakov (Jacob) Benor-Kalter (1897-1969) traversed the Old City of Jerusalem, documenting renowned historical monuments, ambiguous subjects in familiar alleyways, and scores of "new Jews" building a new homeland. Benor-Kalter's photographs smoothly oscillate between two worlds, and two Holy Lands, with one lens.
After immigrating from Poland to the British Mandate of...
Effective inundation of continental United States communities with 21st century sea level rise
Kristina A. Dahl (Dahl Scientific, San Francisco, CA, US); Erika Spanger-Siegfried (Union of Concerned Scientists, Cambridge, MA, US); Astrid Caldas (Union of Concerned Scientists, Washington, DC, US); Shana Udvardy
Elementa: Science of the Anthropocene (2017) 5: 37.
https://doi.org/10.1525/elementa.234
Kristina A. Dahl, Erika Spanger-Siegfried, Astrid Caldas, Shana Udvardy; Effective inundation of continental United States communities with 21st century sea level rise. Elementa: Science of the Anthropocene 1 January 2017; 5 37. doi: https://doi.org/10.1525/elementa.234
Recurrent, tidally driven coastal flooding is one of the most visible signs of sea level rise. Recent studies have shown that such flooding will become more frequent and extensive as sea level continues to rise, potentially altering the landscape and livability of coastal communities decades before sea level rise causes coastal land to be permanently inundated. In this study, we identify US communities that will face effective inundation—defined as having 10% or more of livable land area flooded at least 26 times per year—with three localized sea level rise scenarios based on projections for the 3rd US National Climate Assessment. We present these results in a new, online interactive tool that allows users to explore when and how effective inundation will impact their communities. In addition, we identify communities facing effective inundation within the next 30 years that contain areas of high socioeconomic vulnerability today using a previously published vulnerability index. With the Intermediate-High and Highest sea level rise scenarios, 489 and 668 communities, respectively, would face effective inundation by the year 2100. With these two scenarios, more than half of communities facing effective inundation by 2045 contain areas of current high socioeconomic vulnerability. These results highlight the timeframes that US coastal communities have to respond to disruptive future inundation. The results also underscore the importance of limiting future warming and sea level rise: under the Intermediate-Low scenario, used as a proxy for sea level rise under the Paris Climate Agreement, 199 fewer communities would be effectively inundated by 2100.
Keywords: climate change, sea level rise, coastal resilience, socioeconomic vulnerability, United States
1. Introduction

Sea level rise as a consequence of ongoing climate change poses a threat to millions of people worldwide (Hinkel et al. 2014). In the United States alone, the combination of global sea level rise, population growth and land use change is projected to expose between 4 and 13 million people to inundation by the year 2100 (Hauer et al. 2016). Left unabated, rising seas could affect upwards of 20 million US residents through the end of this century and beyond (Strauss et al. 2015).
While the number of people and communities affected by future inundation depends on the pace and magnitude of sea level rise, recurrent tidal flooding is already emerging as one of the most visible and quantifiable present-day signs of climate change. The East and Gulf Coasts of the US experienced some of the world's fastest rates of sea level rise during the 20th century (National Oceanic and Atmospheric Administration 2013a; Dangendorf et al. 2017). These rising seas have caused tidal flooding–coastal flooding that is driven in large part by routine tidal fluctuations rather than precipitation or storm surge–to become an increasingly frequent occurrence in US coastal communities. Whereas minor coastal flooding along the East, Gulf, and West coasts of the US occurred just once every one to five years in the 1950s, it was occurring about once every three months by 2012 (Sweet et al. 2014).
Sea level rise is expected to make recurrent tidal flooding both more frequent and more extensive (Sweet & Park 2014; Moftakhari et al. 2015; Dahl et al. 2017; Kulp & Strauss 2017). While the tidal datums associated with the mean higher high water (MHHW) mark could be revised upward as sea level rises, the water level at which a community begins to flood will not change, thus leading to an increase in flood frequency. With this increase, many areas will flood with such frequency—potentially facing dozens to hundreds of minor coastal floods per year by mid-century—that, in the absence of protective measures, they could be rendered unusable before they actually fall at or below the present day MHHW level.
The definition of an area permanently inundated by the ocean is conceptually and functionally straightforward: Any area that is under water at high tide would be considered permanently inundated. Sea level rise is projected to permanently inundate many coastal areas in the US this century [e.g. NOAA, 2017]. Many communities, however, are already facing disruptive, even transformative flooding long before they will be rendered permanently inundated (Spanger-Siegfried et al. 2014). In places such as Annapolis, Maryland, Norfolk, Virginia, and Miami Beach, Florida, substantial investments of time and money are being made to cope with frequent tidal flooding that disrupts daily life and business operations (City of Annapolis 2011; Applegate 2014; Weiss 2016).
As sea level rises, more coastal communities will begin to see increasingly frequent tidal flooding that is both expansive enough to preclude normal daily life in certain areas (hindering work and school transportation, impeding commerce, damaging property, etc.) and frequent enough to make adjusting to this disruption costly—in some cases prohibitively so—or untenable (Spanger-Siegfried et al. 2014; Sweet & Park 2014; Moftakhari et al. 2015; Moftakhari et al. 2017). Investments into protective measures such as bulkheads or pump systems can make a substantial difference to community-level flood severity (Allen 2016). While not specifically addressed in the present study, such measures have the potential to forestall the onset of disruptive flooding.
The consequences of frequent flooding for communities that already face socioeconomic challenges are likely to be even more disruptive than for those with greater resources. While the causes of socioeconomic vulnerability are complex and can encompass a wide range of variables—including income, race, education, and health insurance coverage—communities with high socioeconomic vulnerability are traditionally more impacted when faced with environmental hazards such as flooding and have fewer resources to cope and adapt (Adger et al. 2009; Lane et al. 2013; Dilling et al. 2015).
In this study, we examine what we call "effective inundation." We consider an effectively inundated area to be one in which flooding is so frequent that it renders the area's current use no longer feasible. In this sense, effective inundation is the point at which a community is forced to make changes to ensure its residents are safe and its infrastructure and services are functional.
Effective inundation exists along an inundation trajectory that begins with no tidal flooding, then shifts as sea level rises to infrequent tidal flooding, then advances further into frequent tidal flooding, which becomes effective inundation, and eventually, permanent inundation (Figure 1).
Sea level rise expands the zone of effective inundation. Areas that fall below the mean higher high water (MHHW; light blue) level today are permanently inundated, as infrastructure below the MHHW level would be inundated, on average, once daily. Areas that lie just above the MHHW level (darker blue) flood regularly enough that their use is limited. These areas are effectively inundated. Compared to today (left panel), sea level rise will expand both the permanent inundation zone and the effective inundation zone (right panel). DOI: https://doi.org/10.1525/elementa.234.f1
Despite a general understanding that sea level rise will bring more frequent flooding to many areas (Dahl et al., 2017 and others) and that permanent inundation is a long-term risk, there are few tools available to communities that assess the growing land area likely to be affected by frequent, disruptive flooding within timeframes associated with community planning horizons.
Previously published tools and studies have focused on either a) the frequency and intensity of coastal flooding, either tidally-driven or from storm surge, at defined time points in the future (e.g. Sweet & Park 2014; Moftakhari et al. 2015; Dahl et al. 2017); b) national-scale visualization and analysis of inundation with defined increments of sea level rise (e.g. 0.5 m, 1.0 m) relative to MHHW irrespective of the fact that sea level is not expected to rise uniformly along our coasts (Marcy et al. 2011; Climate Central 2014; Hauer et al. 2016); or c) local-scale visualizations and analyses reflecting the amount of sea level rise projected locally for a given year (e.g. TMAC, 2015).
The first approach (a) is useful in communicating the magnitude and extent of projected future flooding; however a frequency alone (e.g. 180 floods per year) without a tie to the area affected by such flooding limits how much a community can do with the information. The second approach (b) relies on users to have some a priori knowledge about the local pace of sea level rise as well as different sea level rise scenarios. For users whose expertise lies outside of sea level rise science, having to do additional research to link the mapped increments of sea level rise to timeframes could be an impediment to well-informed decision-making. The third approach (c) has its greatest utility for the communities that have undertaken such efforts, but is not universally available to all communities. With all of these approaches, the link between potential reductions in greenhouse gas emissions and community-level coastal impacts is not explicit.
With these gaps in the existing decision-making tools in mind, we have undertaken a novel analysis that identifies where and when sea level rise effectively inundates coastal communities in the continental US through the end of this century. We also evaluated the intersection of this physical exposure and socioeconomic vulnerability as measured by the Social Vulnerability Index, recognizing that compounding risk factors will create additional challenges for many communities (SoVI; Cutter, Boruff, & Shirley, 2003; Martinich, Neumann, Ludwig, & Jantarasami, 2013).
We do this by:
Developing a method to quantify "effective inundation";
Mapping the extent of effective inundation within the 23 coastal states of the continental US at a series of time steps between now and 2100 using tide gauge-specific sea level rise projections based on three global sea level rise scenarios published for the Third US National Climate Assessment (NCA hereafter) (Parris et al. 2012; Walsh et al. 2014);
Explicitly connecting the concept of reductions in greenhouse gas emissions to effective inundation by using the NCA Intermediate-Low projection as a proxy for sea level rise under a scenario in which future warming is capped at 2°C;
Identifying cohorts of communities at the Census county subdivision level that meet the effectively inundated threshold for each future time horizon;
Evaluating the proportion of exposed communities that contain at least one Census tract with high socioeconomic vulnerability as defined by the SoVI;
Developing a practical online interactive planning tool that allows users to explore the extent of effective inundation at any location with different sea level rise projections at specific years in the future.
2. Methodology

2.1 Determining a frequency threshold
Flood risk tolerance will vary from community to community. In order to conduct a nationally-consistent spatial and temporal analysis, however, we defined a single flooding frequency associated with effective inundation and a land area threshold above which a community would be considered effectively inundated. In addition to reviewing the literature on tipping points in the flood frequencies that communities can cope with (Sweet & Park 2014), we conducted interviews with community experts in East and Gulf Coast communities including Annapolis, Maryland, Charleston, South Carolina, Broad Channel, New York, and consulted publicly available sources such as National Weather Service alerts to determine this frequency. These interviews are summarized in Table S1.
The current frequency of minor coastal flooding—often called nuisance flooding—in these communities ranges from approximately 24 in Charleston to 50 floods per year in Annapolis, on average (Dahl et al. 2017). Despite this large range, and speaking to the issue of different tolerance levels to flooding, each community was already developing or implementing a response to frequent flooding. A city official from Annapolis noted that the city initiated a response to flooding long before reaching the level of 50 floods per year (L. Craig, pers. comm.), while in the 1980s Charleston developed a comprehensive drainage master plan in response to flooding—when flooding was not as frequent as it is today (City of Charleston 2015). In Broad Channel, flooding on certain streets around each full and new moon (about 2 times per month) had driven the neighborhood association to lobby for and secure $28 million for sea walls and road elevation (Katz 2016).
These conversations conveyed that communities were responding to flooding, typically of limited areas, out of necessity and long before it reached the level of 50 events per year. Two of the four communities we spoke with—Charleston and Broad Channel—had taken action by the time they were coping with about 25 flood events per year.
This research suggests that 26 floods or more per year has, for affected communities, required substantial planning and investment. We therefore settled on this frequency as a threshold for defining effectively inundated areas.
In addition to defining a frequency threshold, we defined a threshold of affected land area above which we consider a community to be effectively inundated. Based on an evaluation of our results for present-day effectively inundated communities and conversations with experts within those communities, we posit that if 10% or more of a community's usable land area is flooded 26 times per year or more, major municipal challenges will ensue in many cases. These challenges could include, for example, the need for significant investments in shoreline protection structures; reallocation of land to open space to allow floodwaters to ebb and flow; raising homes, streets, and other infrastructure; or relocating coastal residents to inland areas.
In reality, the impact of flooding on a community has as much or more to do with what is being flooded as with the area being flooded. Based on our initial results of the effectively inundated area today, the communities of Annapolis, Maryland, and Miami Beach, Florida, do not experience flooding of 10% or more of their area 26 times per year or more. And yet frequent flooding of critical areas of those communities has prompted major investments of time and money (City of Annapolis 2011; Weiss 2016). In contrast, there are low-lying coastal communities where much more than 10% of the land area floods 26 or more times per year, but the flooded area is largely rural and uninhabited and thus does not affect the local people.
Our interviews with local experts revealed that there is no one land area threshold that applies universally to all communities. However, 80% of the 91 communities that meet both the frequency and 10% land area thresholds for flooding today largely fall within two regions with well-documented flooding problems: Louisiana and the Eastern Shore of Maryland. Frequent flooding in Isle de Jean Charles, Louisiana, for example, led residents there to seek and receive federal assistance for relocation (Maldonado et al. 2014). And the population on Smith Island, Maryland, has declined by more than one-third since 2010, in part due to frequent flooding (Holland 2016; TownCharts 2017). In speaking with local experts representing the majority of communities that met the effective inundation threshold today, both the area we mapped as effectively inundated and the frequency of inundation within that area were confirmed as consistent with their current observations and experience (Table S1). In one case (West Wildwood, New Jersey), very recent upgrades to bulkheads had reduced flooding below the extent and frequency indicated by our analysis, suggesting the importance of continuing to update local digital elevation models as protective measures are put in place.
Tidal events that exceed the effective inundation threshold could be affected by a number of factors in addition to tidal variability. These factors include storminess, long-term changes in regional climate patterns–such as the prevailing wind direction–or the Pacific Decadal Oscillation. This analysis does not attempt to separate out the differing causes of flood events. Tidally driven flood events tend to cluster around times when a new or full moon coincides with lunar perigee—the point at which the moon is closest to the Earth—because these conditions amplify the normal tidal range. These events tend to occur more in the spring and fall rather than being spaced evenly throughout the year.
2.2 Tide gauge data to identify the water level associated with 26 exceedances per year
In order to determine the physical areas of the US that are inundated at least 26 times per year, we utilized a set of 93 tide gauges (66 on the East and Gulf Coasts, 27 on the West Coast) maintained by the National Ocean Service. Using 20 years (1996–2015) of hourly, verified water level data for each gauge, we determined the threshold water level relative to the present MHHW level that was exceeded 26 ± 1 times annually (Table S2).
The water level at each gauge associated with the 26 floods per year threshold—hereafter referred to as the effective inundation threshold–could be influenced by a number of factors, both natural and anthropogenic in cause. On interannual and interdecadal timescales, the 18.6 year nodal tidal cycle and the 8.85 year cycle of lunar perigee are both known to influence mean sea level and MHHW along the US East and Gulf Coasts and elsewhere (Flick et al. 2003; Haigh et al. 2011; Wadey et al. 2014). The El Niño Southern Oscillation also affects sea level and extreme water levels on both the East and West Coasts of the US (Sweet & Zervas 2011; Hamlington et al. 2015). On shorter timescales, extreme sea levels such as occurred along the US East Coast in 2009–2010 have the potential to influence flood frequency and the water level associated with effective inundation threshold (Sweet et al. 2009; Goddard et al. 2015). By using 20 years of tide gauge data, our results encompass a full nodal tidal cycle, more than two cycles of lunar perigee, and several El Niño events.
While 30 years is typically considered the modern climate epoch, sea level has also risen substantially in that time period, which has caused an increase in the frequency of tidal flooding at the nuisance level and would likely affect the water level exceeded 26 times per year (Church & White 2011; Ezer & Atkinson 2014; Sweet et al. 2014; Hay et al. 2015; Moftakhari et al. 2015). In using 20 years of data, we aimed to use enough data to encompass the 18.6 and 8.85 year cycles mentioned above while also reasonably capturing modern sea level conditions. This is consistent with our previously published research (Dahl et al. 2017) and considerably longer than the tide gauge reference period used for projections of future flood frequency by previous studies (Sweet & Park 2014). Our projections assume that future tidal ranges will not differ substantially from those during the reference period, though there is evidence that sea level rise may increase tidal range (Flick et al. 2003; Passeri et al. 2016). While the sea level rise projections we use incorporate local rates of vertical land movement, we do not model any changes in coastal morphology, although such changes are likely to occur as sea level rises (FitzGerald et al. 2008; Lentz et al. 2016).
The effective inundation threshold was determined for each year (January–December) of the tide gauge record. Years for which 10% or more of the hourly observations were missing were excluded from the analysis (Table S2). The threshold water level for each gauge was determined recursively using a script that counted the number of exceedances of a specified water level, starting with the MHHW level. The script then adjusted the water level in increments of 0.30 mm (0.0010 ft) and recounted the exceedances until their number was 26 ± 1. This script, and all others developed for this analysis, are available in a public GitHub repository at https://github.com/kristydahl/permanent_inundation.
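A minimal Python sketch of this threshold search is shown below. It is not the published script: the function names, the convention of merging consecutive hourly readings above the trial level into a single flood event, and the upward-only search from MHHW are our assumptions.

```python
import numpy as np

def count_flood_events(hourly_levels, level):
    # Count discrete exceedance events of `level` in one year of hourly water levels (m).
    # Consecutive hours above the level are merged into a single event (an assumption).
    above = hourly_levels > level
    rises = np.flatnonzero(np.diff(above.astype(int)) == 1)
    return len(rises) + int(above[0])

def effective_inundation_threshold(hourly_levels, mhhw, target=26, tol=1, step=0.0003):
    # Raise the trial water level from MHHW in 0.30 mm steps until the annual
    # exceedance count falls within target +/- tol; return the height above MHHW (m).
    level = mhhw
    while count_flood_events(hourly_levels, level) > target + tol:
        level += step
    return level - mhhw
```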
We then used the mean threshold water level for all of the years to define the effective inundation threshold relative to MHHW for each gauge. We used the standard deviation about the mean effective inundation threshold for the full set of gauges as a component in the combined linear error used to define the time steps we analyzed from now through 2100 (see section 2.4).
It is important to note that tide gauges record variations in water levels relative to local benchmarks that are ideally situated on bedrock. This is not always the case, however. In Louisiana, where benchmarks are typically located tens of meters below the land surface, gauges are recording water level variations relative to those subsurface benchmarks (Jankowski et al. 2017). While this study does not attempt to correct for this phenomenon, it is important to note that in places like Louisiana, the determination of inundation thresholds could be affected by long-term deep subsidence.
2.3 Elevation data and inundated areas
Mapping the extent of the effectively inundated area based on water levels from tide gauges requires a digital elevation model (DEM). We obtained DEMs for the continental US from the National Oceanic and Atmospheric Administration (Marcy et al. 2011). The resolution of the DEMs varies between ⅓ arc second (~10 meters) and 1/9 arc second (~3 m), though much of the East Coast is at the latter, higher resolution. The DEMs, which were used in the creation of NOAA's Sea Level Rise Viewer, are lidar-based and were conditioned and created specifically for sea level rise mapping (Marcy et al. 2011; NOAA 2017). Because the original data sources vary, so does the vertical uncertainty (root mean square error, or RMSE) of the DEMs. All of the DEMs meet or exceed the 18.5 cm RMSE standard for the National Flood Insurance Program (NOAA 2017). Investigation of the DEM metadata showed that RMSEs were less than 10 cm for most of the East Coast and higher for some parts of the Gulf Coast. Vertical accuracy data was not reported within the metadata of the West Coast DEMs. We assume an average RMSE of 9.25 cm, which we use in our calculations of combined linear error and minimum sea level rise interval below.
2.4 Sea level rise projections
To determine the height of the effective inundation threshold over time, we used local projections based on three global sea level rise projections originally developed for the 3rd National Climate Assessment (NCA; Parris et al. 2012; Walsh et al. 2014). The NCA Highest scenario, which projects 2 m of sea level rise globally by 2100, assumes ocean warming in accordance with IPCC AR4 projections and an estimate of maximum possible ice loss (Pfeffer et al. 2008). The NCA Intermediate-High scenario projects 1.2 m of sea level rise globally by 2100 and assumes warming associated with the upper end of the IPCC AR4 projections while ice loss is modeled using a semi-empirical approach (e.g. Horton et al., 2008; Vermeer & Rahmstorf, 2009; Jevrejeva, Moore, & Grinsted, 2010). The NCA Intermediate-Low scenario assumes that sea level rise is driven primarily by ocean warming with very little contribution of ice loss. This scenario is associated with an average global temperature increase of 1.8°C and a 0.5 m rise in sea level by 2100 (Parris et al. 2012).
No published sea level rise projection has been developed specifically around the goals of the Paris Climate Agreement, namely limiting warming to less than 1.5 or 2°C above pre-industrial levels. One recent study projects a 0.8 m rise above 2000 levels by 2100 with 2°C of warming, for example (Schaeffer et al. 2012). Another states that limiting warming to below 2°C is associated with sea level rise near or below 1 m by 2100 (Strauss et al. 2015). Because the warming associated with the NCA Intermediate-Low scenario is in line with the Paris goals and the scenario can be easily localized using USACE guidelines, we determined it to be the most useful proxy for a Paris Agreement sea level rise scenario.
The NCA scenarios described above represent globally averaged sea level rise. Sea level is not expected to rise uniformly, however, due to regional factors such as land subsidence, tectonics, changes in ocean circulation, gravitational fingerprinting, groundwater pumping, and dredging, which together account for local vertical land movement (Milliken et al. 2008; Moucha et al. 2008; Mitrovica et al. 2009; Konikow 2011; Ezer et al. 2013). We calculated local sea level rise projections (E) at each tide gauge and at each future time using the equation described by the US Army Corps of Engineers (Huber & White 2015):
E(t) = Mt + bt²
– t is years since 1992
– M is the eustatic sea level rise rate (0.0017 m/yr) plus the local vertical land movement rate (Huber & White 2015; Zervas et al. 2013)
– b is a variable that determines the pace of sea level rise. This variable is set to 1.56E-04, 8.71E-05, and 2.71E-05 for the NCA Highest, Intermediate-High, and Intermediate-Low scenarios, respectively (Huber & White 2015).
Estimates of vertical land movement (VLM) come directly from the tide gauge records. These estimates were derived by decomposing the records into a number of components, including seasonal variability and global sea level trends, to calculate average VLM rates over the length of each record (Zervas et al. 2013). In places like Louisiana, where subsidence rates are closely linked to rates of fluid extraction and have varied considerably over the last century, the average VLM rates used here may mask any accelerations or decelerations in subsidence rates over the last 20 years (Kolker et al. 2011). Because this average VLM is held constant when calculating future local sea level rise, this calculation could underestimate or overestimate future sea level rise in locations where VLM is highly variable, such as Louisiana.
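For illustration, the localized projection can be evaluated directly from the quantities above. The b coefficients and the eustatic rate are those quoted in the text; the example gauge, its vertical land movement rate, and the sign convention (subsidence counted as a positive contribution to relative sea level rise) are our assumptions.

```python
# b coefficients (m/yr^2) for the three NCA scenarios (Huber & White 2015)
B = {"Highest": 1.56e-4, "Intermediate-High": 8.71e-5, "Intermediate-Low": 2.71e-5}
EUSTATIC_RATE = 0.0017  # m/yr

def local_slr(year, vlm_rate, scenario):
    # Local relative sea level rise (m above 1992): E(t) = M*t + b*t^2,
    # with M = eustatic rate + local vertical land movement rate.
    t = year - 1992
    return (EUSTATIC_RATE + vlm_rate) * t + B[scenario] * t ** 2

# Example: a hypothetical gauge subsiding at 2 mm/yr, Intermediate-High scenario, year 2100
print(round(local_slr(2100, 0.002, "Intermediate-High"), 2))  # ~1.42 m
```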
2.5 Determining the minimum sea level rise increment
In order for the inundation zones for each time interval to be meaningfully different from each other, they must be spaced far enough apart to be outside of the range of statistical uncertainty associated with the underlying datasets (Gesch 2013). There are several sources of statistical uncertainty in this analysis:
The vertical accuracy of the DEMs (9.25 cm)
Tide gauge measurement errors (3.0 cm; National Oceanic and Atmospheric Administration, 2013a)
Datum uncertainty (1.5 cm; National Oceanic and Atmospheric Administration, 2013b)
Standard deviation about the mean effective inundation threshold (5.3 cm; this study)
Using the average values (reported above) for (1) through (4), we calculated a cumulative vertical error of 11.2 cm using a sum of squares approach. We then calculated a combined linear error by multiplying the cumulative vertical error by 1.28 for the 80% confidence level. This confidence level is lower than that suggested by Gesch (2013), but consistent with the level employed by NOAA using the same underlying DEMs (NOAA 2017). We then calculated the minimum sea level rise interval by multiplying the combined linear error by two (Gesch 2013). These calculations result in an average minimum sea level rise interval of 28.6 cm that we apply across all tide gauges.
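The arithmetic behind these values can be restated in a few lines; the snippet below simply reproduces the sum-of-squares and scaling steps using the four uncertainty terms listed above.

```python
import math

# Average vertical uncertainties, in cm (items (1)-(4) above)
dem_rmse, gauge_err, datum_err, threshold_sd = 9.25, 3.0, 1.5, 5.27

cumulative_error = math.sqrt(dem_rmse**2 + gauge_err**2 + datum_err**2 + threshold_sd**2)
combined_linear_error = 1.28 * cumulative_error      # 80% confidence level
min_slr_interval = 2 * combined_linear_error         # Gesch (2013) mapping criterion

print(round(cumulative_error, 1), round(min_slr_interval, 1))  # 11.2 28.6
```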
For each sea level rise scenario, we used the minimum sea level rise interval and the average of the projected sea level rise values for each year for all of the tide gauges to determine the years for future analysis. Because sea level is projected to rise quickly with the Highest scenario, we analyzed seven future years in addition to the present-day: 2030, 2045, 2060, 2070, 2080, 2090, and 2100. For the Intermediate-High scenario, we analyzed 2035, 2060, 2080, and 2100. For the Intermediate-Low scenario, we analyzed just 2060 and 2100.
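The year selection can be sketched as follows: given a table of projected sea level rise averaged across gauges, keep only years whose projection advances by at least the minimum interval. The actual years analysed in the study also reflect round planning horizons, so this is illustrative rather than a reproduction of the selection.

```python
def analysis_years(slr_by_year, min_interval=0.286, start=2018, end=2100):
    # Keep years whose projected sea level rise (m) is at least `min_interval`
    # above the projection for the previously selected year.
    years, last = [start], slr_by_year[start]
    for yr in range(start + 1, end + 1):
        if slr_by_year[yr] - last >= min_interval:
            years.append(yr)
            last = slr_by_year[yr]
    return years

# e.g., using the projection sketch above with a placeholder VLM of zero:
# analysis_years({yr: local_slr(yr, 0.0, "Intermediate-High") for yr in range(2018, 2101)})
```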
2.6 Spatial analysis of inundated areas
Our spatial analysis largely follows the methods outlined by the National Oceanic and Atmospheric Administration (NOAA Office for Coastal Management 2012). We determined the effectively inundated areas by creating a transect of points perpendicular to the coast at each gauge and assigning the gauges and their associated transect points the height of the effective inundation threshold above MHHW. Analyzing the West and contiguous East and Gulf Coast regions separately, we then interpolated between those points using the natural neighbor method. This yielded a spatially variable water level surface above MHHW, which we then added to a MHHW surface developed and published by NOAA (NOAA 2016) and referenced to the NAVD88 vertical datum. This total water level surface represented the height of the effective inundation threshold above NAVD88. For future time steps, we added the corresponding projected amount of sea level rise for that gauge to the effective inundation threshold value and interpolated again to create a future water level surface.
We then subtracted the DEMs from the total water level surface to create an inundation surface for each time step. To ensure that the inundated areas were hydrologically connected to the ocean, not just low-lying areas that might, in actuality, be disconnected from the ocean by higher elevation barriers, we performed a region grouping and extracted only hydrologically connected areas (Figure S1).
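The study carries out these steps with standard GIS tools; the raster sketch below, using SciPy's connected-component labelling, shows only the logic of the region-grouping step. The array names, the 8-connected neighbourhood, and the boolean ocean mask are our assumptions.

```python
import numpy as np
from scipy import ndimage

def effectively_inundated_mask(dem, water_surface, ocean_mask):
    # Cells that lie below the effective-inundation water surface AND are hydrologically
    # connected to the ocean. Inputs are aligned 2-D arrays in the same vertical datum;
    # ocean_mask is a boolean array marking open-water cells.
    candidates = dem < water_surface
    labels, _ = ndimage.label(candidates, structure=np.ones((3, 3)))  # 8-connected regions
    ocean_ids = np.unique(labels[candidates & ocean_mask])
    return np.isin(labels, ocean_ids[ocean_ids != 0])
```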
2.7 Community-level area analysis
For the purposes of this study, we defined communities using US Census county subdivision areas. The Census defines county subdivisions as "the primary divisions of counties and equivalent entities" (US Census Bureau 2010). County subdivisions vary both in size and population. Unlike Census tracts or counties, county subdivisions tend to represent recognizable towns and their boundaries. Examples include Miami Beach, FL, Atlantic City, NJ, and Galveston, TX.
Using standard spatial analysis tools, we determined the area of each county subdivision above MHHW that was inundated at each time step. In order to assess how much of the inundated area was developed or developable land, we first did the area analysis including all county subdivision land areas above MHHW. We then removed wetlands and areas protected by federal levees from each county subdivision and inundation surface and calculated the non-wetland area above MHHW that was inundated at each time step (US Fish & Wildlife Service 2016; USACE 2017). Leveed areas were removed because any errors in levee height or representation within the DEMs could result in false inundation. Additional protective structures such as bulkheads and seawalls were included to the degree to which they were represented within the DEMs.
For each time step, we define a cohort of effectively inundated communities (EICs) based on the percentage of usable land area inundated, excluding wetlands and leveed areas. The impact of coastal flooding on a community will depend highly on what is being inundated, not just the frequency, as discussed above. Given the variable levels of resilience to the percentage of land area exposed to flooding, we explored using a higher percentage threshold than the 10% discussed above. Using a higher threshold (e.g. 25% or 50%) yields fewer EICs; however, the trend—an increasing number of EICs as sea level rises—remains (see section 3.4).
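A GeoPandas sketch of this community-level bookkeeping is given below. The 'GEOID' column, the equal-area projected coordinate system (so that .area is meaningful), and the layer names are our assumptions rather than the study's actual workflow.

```python
import geopandas as gpd

def flag_effectively_inundated(subdivisions, inundated, wetlands, leveed, threshold=0.10):
    # Fraction of each county subdivision's usable (non-wetland, non-leveed) land that is
    # effectively inundated, plus a flag for communities at or above the area threshold.
    usable = gpd.overlay(subdivisions, wetlands, how="difference")
    usable = gpd.overlay(usable, leveed, how="difference")
    usable["usable_area"] = usable.geometry.area

    flooded = gpd.overlay(usable, inundated, how="intersection")
    flooded_area = flooded.dissolve(by="GEOID").geometry.area.rename("flooded_area")

    out = usable.set_index("GEOID").join(flooded_area)
    out["flooded_area"] = out["flooded_area"].fillna(0.0)
    out["pct_inundated"] = out["flooded_area"] / out["usable_area"]
    out["eic"] = out["pct_inundated"] >= threshold
    return out[["usable_area", "flooded_area", "pct_inundated", "eic"]]
```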
2.8 Analysis of socioeconomically vulnerable communities
Our analysis of socioeconomically vulnerable communities relies on the social vulnerability index (SoVI; Cutter et al., 2003). SoVI provides a relative measure of vulnerability to environmental hazards based on 29 socioeconomic variables. These variables are collected primarily by the US Census Bureau and include economic measures (e.g. per capita income and median household value) as well as demographic measures (e.g. median age and race/ethnicity; see Hazards and Vulnerability Research Institute, 2013 for a full list of underlying variables). Census tract level data were developed and provided by Martinich et al. 2013. For each Census tract, the variables were normalized to z-scores with a mean of zero and a standard deviation of 1, then reduced to an overall SoVI score using a principal components analysis. The overall SoVI score helps to identify places that are significantly above or below mean levels of vulnerability.
Because socioeconomic vulnerability and its causes vary greatly, it does not lend itself to straightforward comparisons across regions. Therefore, the SoVI data were broken into four regions: North Atlantic (ME through VA), South Atlantic (NC through Monroe County, FL), Gulf Coast (Collier County, FL through TX), and Pacific (CA through WA) (Martinich et al. 2013). The overall SoVI scores were normalized within each region such that the mean SoVI score for a region is zero and the standard deviation is 1. We defined tracts with high vulnerability as those with SoVI scores greater than 0.5 standard deviations above the mean, as previous studies have done (Martinich et al. 2013).
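The published SoVI involves additional steps (for example, rotation and sign adjustment of the principal components), so the sketch below only illustrates the z-scoring, dimension reduction, and within-region re-normalization described here; variable names and the 80% explained-variance cut-off are our assumptions.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def sovi_like_scores(tract_vars: pd.DataFrame, region: pd.Series) -> pd.Series:
    # Illustrative SoVI-style score: z-score the socioeconomic variables, reduce them
    # with a principal components analysis, sum the retained components, and
    # re-normalize the combined score within each region (mean 0, sd 1).
    z = StandardScaler().fit_transform(tract_vars)
    components = PCA(n_components=0.80, svd_solver="full").fit_transform(z)
    raw = pd.Series(components.sum(axis=1), index=tract_vars.index, name="sovi")
    return raw.groupby(region).transform(lambda s: (s - s.mean()) / s.std())

# Tracts scoring more than 0.5 standard deviations above their regional mean would be
# treated as highly vulnerable, following the threshold used in the study.
```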
We chose to use SoVI over other environmental justice indices (such as EJ Screen) or a smaller subset of variables because of its extensive use by previous studies (Dunning & Durden 2013; Martinich et al. 2013) and because it encompasses a wide range of social, economic, and demographic variables, all of which can contribute to an overall level of vulnerability (Cutter et al. 2003).
After defining the cohort of EICs for each time horizon and sea level rise projection, we used GIS intersection tools to determine which EICs contain at least one Census tract with high socioeconomic vulnerability. Demographics change over time, and rising seas could force large-scale changes in coastal populations (Hauer et al. 2016). For this reason, we limit our primary analysis of the intersection between tracts with high vulnerability and inundated areas to time steps within the next 30 years (2030 and 2045 for the Highest scenario; 2035 for the Intermediate-High scenario).
2.9 Uncertainty
We assess sources of uncertainty, but do not conduct an explicit error analysis in this study. The primary source of uncertainty in projecting the impact of future sea level rise on coastal communities is likely the future pace and magnitude of the sea level rise itself, which will be a product of both past and future greenhouse gas emissions as well as the Earth system response to those emissions. Because future emissions trajectories are highly uncertain, we do not assign any probability or likelihood to the three sea level rise scenarios analyzed here, but rather see them as a range that brackets uncertainty in future emissions choices, the global ice sheet response to those emissions, and the associated magnitude of sea level rise over the course of this century. Future changes in coastal demographics are an additional source of uncertainty. Because SoVI is a static assessment of socioeconomic vulnerability, we cannot exclude the possibility that time and exposure to flooding will substantially change patterns of socioeconomic vulnerability along the coast.
Other sources of uncertainty derive from the data underlying our analyses–the vertical error in the DEMs, interannual variation in the effective inundation threshold, and tide gauge measurement and datum transformation errors. We have implicitly incorporated these combined errors into our analyses by following conventions for defining the minimum sea level rise interval for mapping (Titus et al. 2009; Gesch 2013). When using the same underlying elevation datasets, previous studies have mapped sea level rise intervals of 12 inches, on par with the 28.6 cm minimum sea level rise interval for this study (Marcy et al. 2011; Climate Central 2014). While recent attempts have been made to quantify errors in spatial sea level rise assessments of limited geographic scope (e.g. Leon, Heuvelink, & Phinn, 2014), there is also precedent for relying on best estimates of uncertainty in studies with a national or regional geographic scope (Weiss et al. 2011; Strauss et al. 2012). Because the underlying DEMs have varying degrees of accuracy, it is likely that the level of uncertainty in our results will also vary along the coasts. A full, strict uncertainty analysis may also be of limited value because future conditions rely on unknowns, such as the magnitude of sea level rise at a given location in a given year (Schmid et al. 2014). Having used the 80% confidence level to calculate the minimum sea level rise interval, we assume that differences in the results we present for sequential years are statistically significant at the 80% confidence level.
3.1 Tide gauge analysis
For the 93 gauges in our set, the mean height of the effective inundation threshold was 0.33 m above MHHW, and the mean standard deviation about that height was 5.27 cm (see supplementary online material). The threshold at most gauges falls between MHHW and the minor coastal flooding threshold set by the National Weather Service, which averages 0.56 m above MHHW for East and Gulf Coast gauges (Table S2).
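The exact derivation of the effective inundation threshold is given in the paper's methods, which are not reproduced here. Purely for illustration, if one assumes the threshold is the water level, relative to MHHW, exceeded on some fixed number of days per year at a gauge (the value 26, roughly every other week, is an assumption used only to make the example concrete), it could be estimated from a record of daily maximum water levels along these lines:

```python
# Illustrative sketch under a stated assumption: treat the effective inundation
# threshold as the height (relative to MHHW) exceeded on N days per year on average.
import numpy as np

def effective_threshold(daily_max_above_mhhw_m, days_per_year=26):
    """Height exceeded on `days_per_year` days in an average year of the record."""
    levels = np.sort(np.asarray(daily_max_above_mhhw_m))[::-1]   # descending order
    years = len(levels) / 365.25
    rank = int(round(days_per_year * years)) - 1                 # index of the Nth-per-year exceedance
    rank = min(max(rank, 0), len(levels) - 1)
    return levels[rank]

# Example with synthetic data for a ten-year record (metres above MHHW)
rng = np.random.default_rng(0)
synthetic_record = rng.normal(loc=-0.35, scale=0.25, size=int(10 * 365.25))
print(round(float(effective_threshold(synthetic_record)), 2))
```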
3.2 Verification of present-day conditions
We identified EICs in which at least 10% of the non-wetland, non-leveed area above MHHW falls below the effective inundation threshold in the present day (Figure 2). Nationally, there are 91 EICs today that cluster into just 29 counties (Figure 3). Nearly two-thirds of the EICs (59) are in Louisiana, where high rates of land subsidence have exacerbated sea level rise to date (Kolker et al. 2011; Zervas et al. 2013). This present-day cohort includes widely reported coastal flooding hot spots such as Somerset and Dorchester Counties in Maryland (Gertner 2016), the Florida Keys (Union of Concerned Scientists 2015), and Terrebonne and St. Mary Parishes in Louisiana (Marshall et al. 2014). For each of the counties that contain EICs, we contacted local experts in an effort to ground truth our present day results. We spoke with representatives from local National Flood Insurance, sustainability, and environmental planning offices, as well as citizens, who confirmed that the extent of effective inundation we had mapped for the present day was representative of the frequency and extent of flooding observed locally (Table S1). It is important to note that additional shoreline protection measures (e.g. bulkheads, seawalls, etc.) would likely change the frequency and extent of flooding a community experiences, as was the case for one community expert with whom we spoke.
Present day effectively inundated areas. Areas below mean higher high water (blue) and below the effective inundation threshold (yellow) for two example regions within the national analysis: northern New Jersey (a); and the Galveston, Texas, region (b). Wetland areas are shown with cross hatching and present day cohort communities are outlined in black. DOI: https://doi.org/10.1525/elementa.234.f2
Effectively inundated communities for present and future time horizons. Effectively inundated communities with the NCA Intermediate-High sea level rise scenario for the present (yellow), and in 2035 (blue), 2060 (green), and 2100 (pink). DOI: https://doi.org/10.1525/elementa.234.f3
3.3 A flooded future
The number of EICs on a national basis increases steadily as sea level rises (Figure 3). By 2035, the number of EICs nearly doubles (to 167) compared to today with the Intermediate-High scenario. That number rises to 272, 365, and 489 in the years 2060, 2080, and 2100, respectively. In addition to the simple rise in the number of EICs, the land area inundated within the EICs increases over the course of the century. Whereas 47 of today's 91 EICs have 25% or more of their land area effectively inundated, by 2100, 60% of the EICs (or 294 in all) are inundated at that level with the Intermediate-High scenario (Table 1). Full lists of communities inundated for each year and scenario can be found in Table S3.
Table 1. Number of effectively inundated communities with each sea level rise scenario. Total number of effectively inundated communities for the present, Intermediate-Low, Intermediate-High, and High scenarios. Number of communities affected to different degrees of inundation reported for four classes of inundation: 10–25%, 25–50%, 50–75%, and >75%. DOI: https://doi.org/10.1525/elementa.234.t1

Within each scenario column, values are listed for that scenario's successive time horizons in chronological order (Intermediate-Low: 2060 and 2100; Intermediate-High: 2035, 2060, 2080, and 2100; Highest: 2030, 2045, 2060, and four later horizons ending in 2100).

% inundation | Present | Int-Low | Int-High | Highest
10–25% | 44 | 63, 112 | 64, 103, 133, 195 | 75, 109, 132, 165, 208, 226, 240
25–50% | 31 | 53, 61 | 49, 71, 76, 102 | 55, 78, 89, 89, 110, 123, 155
50–75% | 12 | 42, 58 | 37, 50, 74, 59 | 30, 44, 71, 69, 71, 81, 76
>75% | 4 | 25, 59 | 17, 48, 82, 133 | 18, 34, 68, 104, 134, 170, 197
Total | 91 | 183, 290 | 167, 272, 365, 489 | 178, 265, 360, 427, 523, 600, 668
3.4 Early EICs: 2030 through 2045
More than 70 new communities face effective inundation by 2035 with the Intermediate-High scenario (Figure 4). These early EICs cluster in several regions: the eastern shore of Maryland, the mainland side of North Carolina's Pamlico Sound, the New Jersey shore, South Carolina's Lowcountry, Louisiana west of New Orleans, and the northern coast of Texas between the Louisiana border and Brazosport.
Number of effectively inundated communities nationwide. The number of effectively inundated communities (EICs) increases as sea level rises. Results shown here are for the NCA Intermediate-High scenario. Bar height is inclusive of all communities with 10% or more effective inundation; colors indicate the number of communities at 10 to 25% (blue) and >25% (pink) effective inundation. DOI: https://doi.org/10.1525/elementa.234.f4
The Highest scenario projects a similar number of effectively inundated communities in 2030 as the Intermediate-High scenario in 2035—178 for the Highest scenario compared with 167 for the Intermediate-High. The clusters of affected communities with the two scenarios are also similar. By 2045, with the Highest scenario, however, the number of EICs expands to 265—a rise of nearly 100 in the 15 years since 2030. And by 2045, with the Highest scenario, the Atlantic Coast of Florida goes from having just one EIC 15 years earlier to having 8. New Jersey also experiences a large increase in EICs between 2030 and 2045—from 26 to 55.
3.5 Mid-century EICs
Between 2035 and 2060, an additional 105 communities face effective inundation with the Intermediate-High scenario. Whereas the clusters of EICs in 2035 tend to simply expand areas with clusters of EICs today, by 2060 entirely new stretches of the coastline are exposed to effective inundation (Figure 3). South Carolina, for example, goes from just 2 EICs in 2035 to 12 in 2060, spanning most of the state's coastline. Likewise, Florida's Atlantic coast goes from just one EIC in 2035 to 8 in 2060, including Miami Beach. The greater Boston area, northern New Jersey, and the Atlantic coast of the Delmarva Peninsula, including Lewes, Delaware, and Ocean City, Maryland, all face effective inundation in 2060. Regional inundation patterns—as well as total numbers of EICs–with the Intermediate-High scenario in 2060 are similar to those of the Highest scenario in 2045.
The Highest scenario would expose an additional 88 communities to effective inundation by 2060 compared to the Intermediate-High scenario (Figure S2). Fifty of these communities—more than half—are concentrated in just three states: Florida (14 communities), New Jersey (18 communities), and North Carolina (18 communities). Additional regions with clusters of communities that would face effective inundation with the Highest scenario but not the Intermediate-High scenario in 2060 include the greater Charleston, SC, area, Iberville and St. Martin parishes in Louisiana, and Alameda, CA.
3.6 End of century EICs
By 2100, 489 communities–including nearly all of the immediate coastal communities in New Jersey, Maryland, northern North Carolina, South Carolina, Georgia, Louisiana, and northern Texas–face effective inundation with the Intermediate-High scenario (Figure 3). The 2100 cohort includes previously unaffected communities in the San Francisco region (San Mateo and Alameda) as well as the greater Los Angeles region (North Coast). Notably, there are 29 EICs with present day populations over 100,000, including Boston, MA, Newark, NJ, and St. Petersburg, FL (US Census Bureau, 2010).
An additional 179 communities would face effective inundation with the Highest scenario that would not be affected with the Intermediate-High scenario (Figure 5). Three-quarters (75%) of these communities fall into eight states—Florida, Louisiana, Maryland, Massachusetts, New Jersey, New York, North Carolina, and Virginia—all of which have 10 or more communities that would be effectively inundated with the Highest scenario but not the Intermediate-High scenario. Significant clusters of communities that fall into this category include the San Francisco Bay Area, much of the Georgia coast and the Florida Panhandle, Hancock County, MS, southern Texas, and Long Island, NY. With the Highest scenario, the number of EICs with present day populations over 100,000 rises to 52, including four of the five boroughs of New York City.
Effectively inundated communities in 2100 with the Intermediate-High and Highest scenarios. Effectively inundated communities with the Intermediate-High scenario in 2100 are shown in pink. Additional communities that would face effective inundation with the Highest scenario are shown in yellow. DOI: https://doi.org/10.1525/elementa.234.f5
3.7 State trends
Today, the 23 states (including the District of Columbia) included in our analysis have a mean of four EICs. Whereas Louisiana has the most EICs (59), the majority (15) of those states have no EICs, and the median number of EICs per state today is zero. By 2100, the mean number of EICs per state with the Intermediate-High scenario is 21: roughly a five-fold increase.
Averages, while useful, obscure stark differences in the number of EICs in each state as well as the pace of growth as sea level rises. For example, while the number of EICs in Louisiana grows rapidly—from 59 today to 131 in 2100 with the Intermediate-High scenario—the rate of the increase in EICs in New Jersey is faster (Figure 6). By 2100, there are 103 EICs in New Jersey compared to just 7 today—an increase of more than one order of magnitude. Several states—South Carolina, Massachusetts, Texas, and Georgia—go from two or fewer EICs today to 10 or more in 2100. By the end of the century, more than 40% (10) of the 23 coastal states are projected to have 10 or more EICs (Table 2).
Effectively inundated communities by state. Effectively inundated communities (EICs) for each state with the Intermediate-High scenario. Total bar height represents the total number of EICs for each state by 2100. Note that states with one or zero EICs by 2100 are not shown. DOI: https://doi.org/10.1525/elementa.234.f6
Table 2. Effectively inundated communities by state. Number of effectively inundated communities per state for the present, Intermediate-Low, Intermediate-High, and Highest scenarios for each year analyzed. DOI: https://doi.org/10.1525/elementa.234.t2

Within each scenario column, values are listed for that scenario's successive time horizons in chronological order (Intermediate-Low: 2060 and 2100; Intermediate-High: 2035, 2060, 2080, and 2100; Highest: 2030, 2045, 2060, and four later horizons ending in 2100).

State | Present | Int-Low | Int-High | Highest
AL | 0 | 0, 0 | 0, 0, 1, 1 | 0, 0, 1, 1, 1, 1, 2
CT | 0 | 0, 0 | 0, 0, 0, 1 | 0, 0, 0, 1, 1, 3, 5
DE | 0 | 0, 1 | 0, 1, 3, 5 | 0, 1, 3, 5, 5, 6, 7
DC | 0 | 0, 0 | 0, 0, 0, 0 | 0, 0, 0, 0, 0, 0, 0
FL | 3 | 5, 19 | 5, 18, 31, 58 | 5, 18, 32, 48, 69, 85, 90
GA | 2 | 4, 6 | 4, 6, 7, 10 | 4, 6, 7, 7, 14, 17, 18
LA | 59 | 97, 112 | 95, 105, 114, 131 | 89, 101, 110, 116, 129, 139, 146
ME | 0 | 0, 0 | 0, 0, 1, 1 | 0, 0, 1, 1, 2, 3, 4
MD | 12 | 23, 30 | 22, 27, 35, 39 | 23, 27, 35, 37, 41, 44, 51
MA | 0 | 0, 5 | 0, 5, 9, 18 | 1, 5, 9, 13, 18, 20, 28
MS | 0 | 0, 0 | 0, 0, 1, 2 | 0, 0, 1, 2, 2, 4, 5
NH | 0 | 0, 0 | 0, 0, 0, 1 | 0, 0, 0, 0, 2, 2, 4
NJ | 7 | 27, 58 | 21, 55, 74, 103 | 26, 55, 73, 87, 110, 120, 131
NY | 0 | 0, 1 | 0, 0, 3, 4 | 0, 0, 3, 4, 6, 9, 14
NC | 6 | 15, 26 | 13, 25, 43, 49 | 20, 26, 43, 47, 53, 61, 63
PA | 0 | 0, 0 | 0, 0, 0, 0 | 0, 0, 0, 0, 0, 0, 0
RI | 0 | 0, 0 | 0, 0, 0, 1 | 0, 0, 0, 0, 2, 3, 3
SC | 0 | 3, 12 | 2, 12, 18, 19 | 3, 10, 18, 19, 20, 20, 22
TX | 1 | 5, 10 | 2, 8, 11, 17 | 3, 7, 10, 15, 18, 20, 26
VA | 1 | 4, 8 | 3, 8, 11, 24 | 4, 7, 11, 20, 25, 34, 38
CA | 0 | 0, 0 | 0, 0, 1, 3 | 0, 0, 1, 2, 3, 6, 7
OR | 0 | 0, 0 | 0, 0, 0, 0 | 0, 0, 0, 0, 0, 0, 0
WA | 0 | 0, 2 | 0, 2, 2, 2 | 0, 2, 2, 2, 2, 3, 4
3.8 Physically exposed and socially vulnerable
Hurricane Katrina and other natural disasters have highlighted the fact that socially vulnerable communities often bear the brunt of disasters and, in the aftermath, face additional challenges to restoring their living situations (Kuhl et al. 2014; Cleetus et al. 2015). Lack of transportation to evacuate a flooded area, living in older, less flood-resistant housing, or working minimum wage service jobs in a flood-prone coastal region are just a few examples of how socioeconomic vulnerability contributes to heightened environmental risk. While extreme events provide a window into the additional challenges facing socially vulnerable communities, sea level's more gradual rise also has the potential to bring these challenges into closer view.
We find that, nationally, 55% of the 2035 EICs (92 out of 167 total under the Intermediate-High scenario) contain at least one Census tract with a high SoVI score (Figure 7). Similar to previous findings, over 40% (39) of these socially vulnerable EICs are in the Gulf Coast region (Martinich et al. 2013). Of those, the vast majority are in the state of Louisiana. Despite the Gulf Coast's concentration of socially vulnerable EICs, there are a number of clusters of EICs in other regions that stand out as well. These include: the Eastern Shore/Chesapeake Coast of Maryland; the mainland side of Pamlico Sound in North Carolina; the New Jersey Shore; Kiawah and Edisto Islands in South Carolina's Lowcountry; and the Florida Keys. At 54%, the percentage of EICs containing a tract with a high SoVI score is similar for both the 2030 and 2045 Highest cohorts. Our results suggest that these regions and communities will require particular attention, and potentially additional resources, as coastal communities begin to build resilience to coastal flooding.
Effectively inundated communities with high socioeconomic vulnerability. Effectively inundated communities in 2035 (pink) with the Intermediate-High scenario. Affected communities with at least one Census tract with a high SoVI score are shown in blue. Note that the regions shown in panels A and G do not have any effectively inundated communities in 2035 with high SoVI. DOI: https://doi.org/10.1525/elementa.234.f7
The demographic variables driving high SoVI scores vary from place to place. Within the Gulf Coast region, for example, which has a large African-American population, high SoVI scores tend to be driven by poverty and race. Along Maryland's Eastern Shore, a large elderly population, likely with reduced mobility, contributes to high social vulnerability. The varying suite of factors contributing to social vulnerability within our cohort of EICs suggests that resilience building and/or coastal retreat strategies will need to vary in accordance with the specific social vulnerability challenges each community faces. A comprehensive analysis of the factors contributing to social vulnerability is beyond the scope of this work. However, the range of causes of social vulnerability noted here contributes to calls for tailored initiatives for enhancing preparedness and adaptive capacity in physically exposed, socially vulnerable areas (Emrich & Cutter 2011).
3.9 Comparisons with the Intermediate-Low scenario
The pace at which sea level rises has a great bearing on the number of communities that face effective inundation this century. Differences in the number of EICs between the three scenarios we analyzed are significant by 2060 and dramatic by 2100 (Figure 8). In 2060, the Intermediate-High scenario projects 272 EICs. That figure is 32% higher (360) with the Highest scenario and 33% lower (183) with the Intermediate-Low scenario. The percentage differences are similar for 2100, with the Highest scenario projecting 37% more EICs (668 in total) than the Intermediate-High, and the Intermediate-Low projecting 41% fewer EICs (290 in total).
Number of effectively inundated communities for each sea level rise scenario. Number of effectively inundated communities by year for the three scenarios analyzed in this study: Highest (yellow); Intermediate-High (pink); Intermediate-Low (green). DOI: https://doi.org/10.1525/elementa.234.f8
With all three scenarios, and for all years, between 52 and 64% of EICs have 25% or more of their land area subject to effective inundation. While these percentages are relatively unvarying, there are large differences in the total numbers of EICs with 25% or more inundation. The Intermediate-High scenario projects 169 EICs with 25% or more inundation by 2060, and 294 by 2100. With the Highest scenario, those numbers rise to 228 and 428, respectively. With the Intermediate-Low scenario, they fall to 120 and 178.
In 2060, there are several clusters of communities that could be spared effective inundation with the Intermediate-Low scenario relative to the Intermediate-High (Figure S3). These clusters include the greater Boston area, northern New Jersey and 13 communities along the New Jersey Shore, the Atlantic coast of Florida (including Miami Beach) and the Gulf Coast of Florida off the coast of Cape Coral.
By 2100, the clusters of spared communities mentioned above grow in area (Figure 9). Large stretches of the Delaware, Maryland, Virginia, North Carolina, Florida, and Texas coasts also stand to gain greatly if sea level rise follows the trajectory of the Intermediate-Low scenario rather than the Intermediate-High. Large population centers (>100,000 people today) also stand to gain greatly from a slower pace of sea level rise (Figure 10). With the Intermediate-Low scenario, only 3 of the 29 large population centers included in the 2100 Intermediate-High cohort would face effective inundation. Communities that would be spared inundation would include four of the five boroughs of New York City, Miami, and San Mateo.
Comparison of effectively inundated communities in 2100 with the Intermediate-Low and Intermediate-High scenarios. Effectively inundated communities with the Intermediate-High scenario are shown in pink. Communities shown in green would be effectively inundated with the Intermediate-High scenario, but spared with the Intermediate-Low. DOI: https://doi.org/10.1525/elementa.234.f9
Effectively inundated communities with populations over 100,000. Locations of effectively inundated communities in 2100 with populations over 100,000 with the three scenarios analyzed in this study: Highest (yellow); Intermediate-High (pink); Intermediate-Low (green). DOI: https://doi.org/10.1525/elementa.234.f10
Using the Intermediate-Low scenario as one potential approximation of the magnitude of sea level rise if the goals of the Paris Climate Agreement were met, these results suggest that the emissions choices we make in the coming decades—and ice sheet responses to those choices–could have profound impacts on communities in the coastal US.
3.10 Online tool
Links to interactive maps showing the extent of inundation at each future time horizon and for each scenario can be found at http://www.ucsusa.org/RisingSeasHitHome (Union of Concerned Scientists 2017). Examples from the tool are shown in Figure S4 and Figure S5.
In this study, we have defined effective inundation and mapped its extent for the continental United States for three distinct and localized sea level rise scenarios. Our approach yielded national-level snapshots of the communities most exposed to sea level rise for specific time horizons through the end of this century that can be used for assessing effective inundation at the local level. This community-focused, time horizon-based approach fills a gap in the existing suite of publicly available tools in that it allows users to visualize future inundation based on specific future time horizons and scenarios.
These results show that, in the absence of measures to manage increased flooding, effective inundation of coastal communities could become widespread within the next 40 years and encompass much of the coast by the end of the century. The growth of effective inundation suggests that communities will face stark choices about their ways of life in the decades to come. From homes and streets being elevated at high cost in Broad Channel, NY, and Norfolk, VA, to the value of real estate declining in flood-prone parts of Miami-Dade County, FL, the cost of adapting to rising seas and more frequent flooding is already becoming apparent (Gregory 2013; Urbina 2016; Ruggeri 2017). In places such as Tangier Island, VA, and coastal Louisiana, there are ongoing public discourses about the cost and practicality of saving homes and communities from complete inundation (Gertner 2016; Coastal Protection and Restoration Authority of Louisiana 2017).
Over half of the effectively inundated communities we project for the year 2035 are home to socioeconomically vulnerable populations, which suggests that resources for building climate resilience will need to account for the fact that many communities face not only physical exposure to climate hazards, but also socioeconomic challenges to building resilience.
Using the NCA Intermediate-Low scenario as a proxy for projected sea level rise under a scenario where global warming was capped at 2°C, these results suggest that hundreds of communities in the US could be spared effective inundation were the international community to adhere to the goals of the Paris Agreement.
Whether or not those goals are met, in the coming decades, local, state, and federal governments will need comprehensive plans to provide resources and safe options for communities facing effective inundation, with particular attention to areas with vulnerable populations.
Python scripts: public GitHub repository www.github.com/kristydahl/permanent_inundation.
The authors thank a number of people for providing data and methodological support. Matt Pendleton, Doug Marcy, Billy Brooks, and Billy Sweet (NOAA Office for Coastal Management and NOAA CO-OPS) provided DEM data, methodological input, and reviews. Paul Kirshen and Ellen Douglas (University of Massachusetts, Boston) also provided methodological input and reviews. Jeremy Martinich, Lindsay Ludwig, and Stefani Penn (EPA) provided Census tract-level SoVI data, and Susan Cutter (University of South Carolina) provided advice about the development and use of SoVI. Brenda Ekwurzel, Rachel Cleetus, and Nicole Hernandez-Hammer (Union of Concerned Scientists) provided valuable advice and input throughout the course of the research.
This work was funded by grants to the Union of Concerned Scientists' Climate and Energy Program from the Barr Foundation, the Energy Foundation, the Common Sense Fund, and members of the Union of Concerned Scientists.
Contributed to conception and design: KD, ESS
Contributed to acquisition of data: KD
Contributed to analysis and interpretation of data: KD, ESS, AC, SU
Drafted and/or revised the article: KD, ESS, AC, SU
Approved submitted version for publication: KD, ESS, AC, SU
References

Nested and teleconnected vulnerabilities to environmental change. Frontiers in Ecology and the Environment.
As waters rise, Miami Beach builds higher streets and political willpower.
For city of Norfolk, park becomes wetlands once again. Available at: http://hamptonroads.com/2014/02/city-norfolk-park-becomes-wetlands-once-again [Accessed April 3, 2015].
Sea-Level Rise from the Late 19th to the Early 21st Century. Surveys in Geophysics.
City of Annapolis. Flood mitigation strategies for the City of Annapolis, MD: City Dock and Eastport area. Available at: http://www.annapolis.gov/docs/default-source/dnep-documents-pdfs/03–01–2011-sea-level-study.pdf?sfvrsn=6
City of Charleston. Sea level rise strategy. Available at: http://www.charleston-sc.gov/DocumentCenter/View/10089 [Accessed February 22, 2017].
Surviving and Thriving in the Face of Rising Seas. Available at: http://www.ucsusa.org/global-warming/prepare-impacts/communities-on-front-lines-of-climate-change-sea-level-rise
Surging Seas: Sea level rise analysis by Climate Central. Available at: http://sealevel.climatecentral.org/ [Accessed April 4, 2015].
Coastal Protection and Restoration Authority of Louisiana. Louisiana's Comprehensive Master Plan for a Sustainable Coast. Available at: http://coastal.la.gov/wp-content/uploads/2017/04/2017-Coastal-Master-Plan_Web-Book_Final_Compressed-04252017.pdf
Boruff. Social vulnerability to environmental hazards. Social Science Quarterly. https://doi.org/10.1111/1540-6237.8402002
Spanger-Siegfried. Sea level rise drives increased tidal flooding frequency at tide gauges along the U.S. East and Gulf Coasts: Projections for 2030 and 2045. Available at: http://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0170949&type=printable
Dangendorf et al. Reassessment of 20th century global mean sea level rise. https://doi.org/10.1073/pnas.161007114
Dilling. The dynamics of vulnerability: why adapting to climate variability will not always prepare us for climate change. WIREs Climate Change. https://doi.org/10.1002/wcc.341. Available at: http://sciencepolicy.colorado.edu/admin/publication_files/2015.24.pdf [Accessed February 22, 2017].
Dunning. Social vulnerability analysis: A comparison of tools. US Army Engineer Institute for Water Resources. Available at: http://www.iwr.usace.army.mil/Portals/70/docs/iwrreports/Social_Vulnerability_Analysis_Tools.pdf
Social vulnerability to climate-sensitive hazards in the southern United States. Weather, Climate, and Society. https://doi.org/10.1175/2011WCAS1092.1
Gulf Stream's induced sea level rise and variability along the U.S. mid-Atlantic coast. Journal of Geophysical Research: Oceans. https://doi.org/10.1002/jgrc.20091
Accelerated flooding along the U.S. East Coast: On the impact of sea-level rise, tides, storms, the Gulf Stream, and the North Atlantic Oscillations. Earth's Future. https://doi.org/10.1002/2014EF000252. Available at: http://doi.wiley.com/10.1002/2014EF000252 [Accessed April 3, 2015].
Coastal Impacts Due to Sea-Level Rise. Annual Review of Earth and Planetary Sciences. https://doi.org/10.1146/annurev.earth.35.031306.140139
Trends in United States Tidal Datum Statistics and Tide Range. Journal of Waterway, Port, Coastal, and Ocean Engineering. https://doi.org/10.1061/(ASCE)0733-950X(2003)129:4(155)
Gertner. Should the United States save Tangier Island from oblivion? Available at: https://www.nytimes.com/2016/07/10/magazine/should-the-united-states-save-tangier-island-from-oblivion.html [Accessed May 17, 2017].
Consideration of vertical uncertainty in elevation-based sea-level rise assessments: Mobile Bay, Alabama case study. Journal of Coastal Research. https://doi.org/10.2112/SI63-016.1
An extreme event of sea-level rise along the Northeast coast of North America in 2009–2010. https://doi.org/10.1038/ncomms7346. Available at: http://www.ncbi.nlm.nih.gov/pubmed/25710720
Where streets flood with the tide, a debate over city Aid. Available at: http://www.nytimes.com/2013/07/10/nyregion/debate-over-cost-and-practicality-of-protecting-part-of-queens-coast.html?_r=0 [Accessed May 17, 2017].
Pattiaratchi. Global influences of the 18.61 year nodal cycle and 8.85 year cycle of lunar perigee on high tidal levels. https://doi.org/10.1029/2010JC006645. Available at: http://doi.wiley.com/10.1029/2010JC006645 [Accessed June 20, 2016].
Hamlington. The effect of the El Niño-Southern Oscillation on U.S. regional and coastal sea level. Available at: http://doi.wiley.com/10.1002/2014JC010602
Millions projected to be at risk from sea-level rise in the continental United States. https://doi.org/10.1038/nclimate2961. Available at: http://www.nature.com/doifinder/10.1038/nclimate2961 [Accessed January 26, 2017].
Probabilistic reanalysis of twentieth-century sea-level rise.
Hazards and Vulnerability Research Institute. Social Vulnerability Index for the United States 2006–10. Available at: http://webra.cas.sc.edu/hvri/products/sovi.aspx [Accessed February 22, 2017].
Hinkel. Coastal flood damage and adaptation costs under 21st century sea-level rise.
Study to look at Smith Island flooding solutions. delmarvanow. Available at: http://www.delmarvanow.com/story/news/local/maryland/2016/09/06/study-look-smith-island-flooding-solutions/89921176/ [Accessed May 17, 2017].
Sea level rise projections for current generation CGCMs based on the semi-empirical method. Available at: http://doi.wiley.com/10.1029/2007GL032486 [Accessed April 3, 2015].
Sea Level Change Curve Calculator (2015.46) User Manual. United States Army Corps of Engineers. Available at: http://www.corpsclimate.us/docs/Sea_Level_Change_Curve_Calculator_User_Manual_2015_46_FINAL.pdf
Törnqvist. Vulnerability of Louisiana's coastal wetlands to present-day rates of relative sea-level rise. https://doi.org/10.1038/ncomms14792
Jevrejeva, Grinsted. How will sea level respond to changes in natural and anthropogenic forcings by 2100?
Borough President Katz's Broad Channel street raising task force reviews project progress. Borough of Queens, City of New York. Available at: http://www.queensbp.org/borough-president-katzs-broad-channel-street-raising-task-force-reviews-project-progress/ [Accessed February 22, 2017].
Kolker. An evaluation of subsidence rates and sea-level variability in the northern Gulf of Mexico.
Konikow. Contribution of global groundwater depletion since 1900 to sea-level rise.
Evacuation as a climate adaptation strategy for environmental justice communities. Available at: http://link.springer.com/10.1007/s10584-014-1273-2 [Accessed March 21, 2017].
Rapid escalation of coastal flood exposure in US municipalities from sea level rise. Available at: http://link.springer.com/10.1007/s10584-017-1963-7
Health effects of coastal storms and flooding in urban areas: a review and vulnerability assessment. Journal of Environmental and Public Health. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23818911 [Accessed February 22, 2017].
Evaluation of dynamic coastal response to sea-level rise modifies inundation likelihood. Available at: http://www.nature.com/doifinder/10.1038/nclimate2957
Heuvelink, Phinn. Incorporating DEM uncertainty in coastal inundation mapping.
The impact of climate change on tribal communities in the US: Displacement, relocation, and human rights. In: Climate Change and Indigenous Peoples in the United States: Impacts, Experiences and Actions.
New mapping tool and techniques for visualizing sea level rise and coastal flooding impacts. Proceedings of the 2011 Solutions to Coastal Disasters Conference, June 26–29, 2011. American Society of Civil Engineers.
Available at: http://projects.propublica.org/louisiana/ [Accessed February 22, 2017].
Martinich. Risks of sea level rise to disadvantaged communities in the United States. Mitigation and Adaptation Strategies for Global Change.
A new composite Holocene sea-level curve for the northern Gulf of Mexico. Geological Society of America Special Papers. https://doi.org/10.1130/2008.2443(01)
The sea-level fingerprint of West Antarctic collapse. https://doi.org/10.1126/science.1166510
Moftakhari. Increased nuisance flooding along the coasts of the United States due to sea level rise: Past and future.
Cumulative hazard: The case of nuisance flooding.
Dynamic topography and long-term sea-level variations: There is no such thing as a stable continental platform. https://doi.org/10.1016/j.epsl.2008.03.056
Sea Level Trends – NOAA Tides and Currents. Available at: http://co-ops.nos.noaa.gov/sltrends/sltrends.html [Accessed April 4, 2015].
Inundation Analysis – NOAA Tides & Currents. Available at: http://tidesandcurrents.noaa.gov/inundation/ [Accessed April 4, 2015].
Inundation Mapping Tidal Surface – Mean Higher High Water – NOAA Data Catalog. Available at: https://data.noaa.gov/dataset/inundation-mapping-tidal-surface-mean-higher-high-water4b2f9 [Accessed February 22, 2017].
Digital Coast Sea Level Rise and Coastal Flooding Impacts Viewer: Frequent Questions. Available at: https://coast.noaa.gov/data/digitalcoast/pdf/slr-faq.pdf [Accessed May 17, 2017].
NOAA Office for Coastal Management. Mapping coastal inundation primer. Available at: https://coast.noaa.gov/data/digitalcoast/pdf/coastal-inundation-guidebook.pdf
Global Sea Level Rise Scenarios for the United States National Climate Assessment. NOAA Technical Report OAR CPO-1. Available at: http://cpo.noaa.gov/sites/cpo/Reports/2012/NOAA_SLR_r3.pdf
Tidal hydrodynamics under future sea level rise and coastal morphology in the Northern Gulf of Mexico.
O'Neel. Kinematic constraints on glacier contributions to 21st-century sea-level rise.
Miami's fight against rising seas. BBC Future. Available at: http://www.bbc.com/future/story/20170403-miamis-fight-against-sea-level-rise [Accessed May 17, 2017].
Long-term sea-level rise implied by 1.5 C warming levels.
Mapping and Portraying Inundation Uncertainty of Bathtub-Type Models. https://doi.org/10.2112/JCOASTRES-D-13-00118.1. Available at: http://www.bioone.org/doi/abs/10.2112/JCOASTRES-D-13-00118.1
Encroaching Tides: How Sea Level Rise and Tidal Flooding Threaten U.S. East and Gulf Coast Communities over the Next 30 Years. Available at: http://ucsusa.org/encroachingtides
Tidally adjusted estimates of topographic vulnerability to sea level rise and flooding for the contiguous United States. Environmental Research Letters. https://doi.org/10.1088/1748-9326/7/1/014033
Levermann. Carbon choices determine US cities committed to futures below sea level.
Sea Level Rise and Nuisance Flood Frequency Changes around the United States. NOAA Technical Report NOS CO-OPS 073.
From the extreme to the mean: Acceleration and tipping points of coastal inundation from sea level rise. Available at: http://onlinelibrary.wiley.com/doi/10.1002/2014EF000272/abstract
Zervas. Cool-Season Sea Level Anomalies and Storm Surges along the U.S. East Coast: Climatology and Comparison with the 2009/10 El Niño. Monthly Weather Review. https://doi.org/10.1175/MWR-D-10-05043.1
Elevated East Coast Sea Level Anomaly: June – July 2009. Available at: http://tidesandcurrents.noaa.gov/publications/EastCoastSeaLevelAnomaly_2009.pdf
Coastal sensitivity to sea level rise: a focus on Mid-Atlantic Region. U.S. Climate Change Science Program, Synthesis and Assessment Product 4.1.
Future conditions risk assessment and modeling. Available at: https://www.fema.gov/media-library-data/1454954261186-c348aa9b1768298c9eb66f84366f836e/TMAC_2015_Future_Conditions_Risk_Assessment_and_Modeling_Report.pdf
TownCharts. Smith Island, Maryland demographics data. Available at: http://www.towncharts.com/Maryland/Demographics/Smith-Island-CDP-MD-Demographics-data.html [Accessed May 17, 2017].
Encroaching Tides in the Florida Keys (2015). Available at: http://www.ucsusa.org/sites/default/files/attach/2015/10/encroaching-tides-florida-keys.pdf [Accessed February 22, 2017].
When Rising Seas Hit Home. Online at: http://www.ucsusa.org/RisingSeasHitHome
Urbina. Perils of climate change could swamp coastal real estate. Available at: https://www.nytimes.com/2016/11/24/science/global-warming-coastal-real-estate.html [Accessed May 17, 2017].
National Levee Database. Available at: http://nld.usace.army.mil/egis/f?p=471:1 [Accessed February 1, 2017].
US Census Bureau. Geographic Terms and Concepts – County Subdivision. Available at: https://www.census.gov/geo/reference/gtc/gtc_cousub.html [Accessed February 22, 2017].
US Fish and Wildlife Service. National Wetlands Inventory. Available at: https://www.fws.gov/wetlands/data/State-Downloads.html [Accessed February 22, 2017].
Rahmstorf. Global sea level linked to global temperature.
Wadey. A century of sea level data and the UK's 2013/14 storm surges: an assessment of extremes and clustering using the Newlyn tide gauge record. https://doi.org/10.5194/os-10-1031-2014. Available at: www.ocean-sci.net/10/1031/2014/ [Accessed June 21, 2016].
Climate Change Impacts in the United States: The Third National Climate Assessment. Available at: http://nca2014.globalchange.gov/report/our-changing-climate/introduction
Miami Beach's $400 million sea-level rise plan is unprecedented, but not everyone is sold. Available at: http://www.miaminewtimes.com/news/miami-beachs-400-million-sea-level-rise-plan-is-unprecedented-but-not-everyone-is-sold-8398989 [Accessed May 17, 2017].
Implications of recent sea level rise science for low-elevation areas in coastal cities of the conterminous U.S.A: A letter.
Estimating Vertical Land Motion from Long-Term Tide Gauge Records. Center for Operational Oceanographic Products and Services, NOAA Technical Report NOS CO-OPS 065. Available at: https://tidesandcurrents.noaa.gov/publications/Technical_Report_NOS_CO-OPS_065.pdf
Copyright: © 2017 The Author(s)
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.
Supplementary Data (zip file)
Table S1. Interview summary regarding present day flooding: uploaded as online supporting information.
Table S2. Tide gauge information: uploaded as online supporting information.
Table S3. Percentage of inundation within communities for all years and scenarios: uploaded as online supporting information.
Figure S1. Schematic diagram of the spatial analyses underlying this study.
Figure S2. Effectively inundated communities in 2060 with the Intermediate-High and Highest scenarios.
Figure S3. Effectively inundated communities in 2060 with the Intermediate-High and Intermediate-Low scenarios.
Figure S4. Inundated areas of Miami Beach, Florida in 2060 as indicated in online tool.
Figure S5. Inundated areas of Oakland, California in 2100 as indicated in online tool.
| CommonCrawl
Luis C. García-Naranjo
Departamento de Matemáticas y Mecánica, IIMAS, UNAM, Apdo. Postal 20-126, Col. San Angel, Mexico City, 01000, MEXICO
Received September 2019 Revised March 2020 Published July 2020
Fund Project: The author acknowledges support for his research from the Program UNAM-DGAPA-PAPIITIN115820 and from the Alexander von Humboldt Foundation for a Georg Forster Experienced Researcher Fellowship that funded a research visit to TU Berlin where part of this work was done
The concept of centre of mass of two particles in 2D spaces of constant Gaussian curvature is discussed by recalling the notion of "relativistic rule of lever" introduced by Galperin [6] (Comm. Math. Phys. 154 (1993), 63–84), and comparing it with two other definitions of centre of mass that arise naturally on the treatment of the 2-body problem in spaces of constant curvature: firstly as the collision point of particles that are initially at rest, and secondly as the centre of rotation of steady rotation solutions. It is shown that if the particles have distinct masses then these definitions are equivalent only if the curvature vanishes and instead lead to three different notions of centre of mass in the general case.
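For orientation (this relation is background and is not quoted from the paper itself): in the flat limit all three characterisations above reduce to the ordinary Euclidean lever rule,
$$ m_1\, r_1 \;=\; m_2\, r_2 \qquad (\kappa = 0), $$
where $ r_i $ is the distance from particle $ i $ to the centre of mass; Eqs. (3)–(5), referenced in Figure 2 below, correspond to the three characterisations for general $ \kappa $.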
Keywords: Constant curvature spaces, centre of mass, celestial mechanics, relativistic rule of lever, collision point, centre of steady rotation.
Mathematics Subject Classification: Primary: 70F05; Secondary: 70A05.
Citation: Luis C. García-Naranjo. Some remarks about the centre of mass of two particles in spaces of constant curvature. Journal of Geometric Mechanics, 2020, 12 (3) : 435-446. doi: 10.3934/jgm.2020020
A. V. Borisov, I. S. Mamaev and A. A. Kilin, Two-body problem on a sphere. Reduction, stochasticity, periodic orbits, Regul. Chaotic Dyn., 9 (2004), 265-279. doi: 10.1070/RD2004v009n03ABEH000280. Google Scholar
A. V. Borisov, L. C. García-Naranjo, I. S. Mamaev and J. Montaldi, Reduction and relative equilibria for the two-body problem on spaces of constant curvature, Celest. Mech. Dyn. Astr., 130 (2018), 36 pp. doi: 10.1007/s10569-018-9835-7. Google Scholar
J. F. Cariñena, M. F. Rañada and M. Santander, Central potentials on spaces of constant curvature: The Kepler problem on the two-dimensional sphere $S^2$ and the hyperbolic plane $H^2$, J. Math. Phys., 46 (2005), 052702. doi: 10.1063/1.1893214. Google Scholar
F. Diacu, The non-existence of centre of mass and linear momentum integrals in the curved $N$-body problem, Libertas Math., 32 (2012), 25-37. doi: 10.14510/lm-ns.v32i1.30. Google Scholar
F. Diacu, E. Pérez-Chavela and J. G. Reyes, An intrinsic approach in the curved $n$-body problem. The negative curvature case, J. Differential Equations, 252 (2012), 4529-4562. doi: 10.1016/j.jde.2012.01.002. Google Scholar
G. A. Galperin, A concept of the mass center of a system of material points in the constant curvature spaces, Comm. Math. Phys., 154 (1993), 63-84. doi: 10.1007/BF02096832. Google Scholar
L. C. García-Naranjo, J. C. Marrero, E. Pérez-Chavela and M. Rodríguez-Olmos, Classification and stability of relative equilibria for the two-body problem in the hyperbolic space of dimension 2, J. Differential Equations, 260 (2016), 6375-6404. doi: 10.1016/j.jde.2015.12.044. Google Scholar
L. C. García-Naranjo and J. Montaldi, Attracting and repelling 2-body problems on a family of surfaces of constant curvature, J. Dyn. Diff. Equat., (2020). doi: 10.1007/s10884-020-09868-x. Google Scholar
V. V. Kozlov and A. O. Harin, Kepler's problem in constant curvature spaces, Celestial Mech. Dynam. Astronom., 54 (1992), 393-399. doi: 10.1007/BF00049149. Google Scholar
C. Lim, J. Montaldi and R. M. Roberts, Relative equilibria of point vortices on the sphere, Physica D, 148 (2001), 97-135. doi: 10.1016/S0167-2789(00)00167-6. Google Scholar
J. E. Marsden and T. S. Ratiu, Introduction to Mechanics and Symmetry, 2$^{nd}$ edition, Springer-Verlag, New York, 1994. doi: 10.1007/978-0-387-21792-5. Google Scholar
J. Montaldi, R. M. Roberts and I. Stewart, Periodic solutions near equilibria of symmetric Hamiltonian systems, Phil. Trans. Roy. Soc. London., 325 (1988), 237-293. doi: 10.1098/rsta.1988.0053. Google Scholar
J. Montaldi and R. M. Roberts, Relative equilibria of molecules, J. Nonlinear Sci., 9 (1999), 53-88. doi: 10.1007/s003329900064. Google Scholar
Figure 1. Illustration of the centre of mass $ \boldsymbol{\bar {q}} $ according to the characterisations C1, C2 and C3
Figure 2. The value of $ r_2 $ as a function of $ \kappa $ according to Eqs. (3), (4) and (5) under the assumption that $ 2\mu_1 = \mu_2 $ and $ r_1 = 1 $. Note that for $ \kappa>0 $ there are two branches for (5) as described in the text. The shaded area corresponds to values of $ (\kappa, r_2) $ that are forbidden since they violate the restriction that $ r = 1+r_2<\pi/ \sqrt{\kappa} $
| CommonCrawl
02 - Hamming Codes (Implementation)
The preliminary requirements for this assignment:
1. Git installed (see Dev > Git),
2. Python installed (see Dev > Python),
3. The template cloned (see Dev > template),
4. The first assignment handed in (see Assignment 1 - Hamming Codes (Theory)).
In this assignment you'll have to implement an encoder and decoder for a systematic Hamming Code $(10, 6)$ with an additional parity bit. The implementation has to be capable of encoding and decoding input words, detecting errors and correcting single-bit errors if they occur. Also, the implementation has to be done in Python using the template provided in ./src/hamming_code.py. You will implement and run the program on your computer.
Deadline for submission: Sunday, November 17th 2019, 23:59 // 11:59 pm
Please upload your solution into your own Gitlab repository.
Generator matrix
Use the following non-systematic generator matrix $G'_{6,10}$ for your implementation: \[ G' = \left( \begin{matrix} 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \end{matrix} \right) \]
Do not use external packages like NumPy or other for matrices and vector arithmetic!
Everything can be done with simple plain python.
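In line with the plain-Python requirement, one way (a sketch, not the template's required layout) to store $G'$ inside the HammingCode class is a list of lists of ints:

```python
# Non-systematic generator matrix G' (6 x 10) as plain Python; no NumPy required.
G_PRIME = [
    [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0, 1, 0, 0, 0, 0],
    [0, 1, 0, 1, 0, 0, 1, 1, 0, 0],
    [0, 1, 0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1, 0, 0, 1],
]
```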
Make yourself familiar with the project structure and classes, then do the following steps:
Add above matrix $G'_{6,10}$ to your class HammingCode.
Bring $G'_{6,10}$ into the systematic form through implementing and executing the transformation steps listed below the sub-tasks.
Use row reduction and column/row swapping (we need all options this time).
Use the pre-defined method __convert_g() for your implementation.
Derive $H_{4,10}$ from the generated matrix $G_{6,10}$.
Use the pre-defined method __derive_h() for your implementation.
Transformation steps to implement
Subtract row 1 from: row 2, row 6
Subtract row 2 from: row 1, row 3, row 4, row 5
Swap rows 4 and 6
Swap columns 5 and 6
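Over GF(2), "subtracting" one row from another is a bitwise XOR, so the listed steps boil down to three small helpers. The sketch below applies the steps literally; the helper names are illustrative and not taken from the template, and you should verify the resulting matrix against what __convert_g() is expected to return.

```python
# Elementary GF(2) matrix operations for __convert_g(); indices here are 0-based,
# while the step list above counts rows and columns from 1.

def xor_rows(m, src, dst):
    """Subtract (XOR) row `src` into row `dst`."""
    m[dst] = [a ^ b for a, b in zip(m[dst], m[src])]

def swap_rows(m, i, j):
    m[i], m[j] = m[j], m[i]

def swap_cols(m, i, j):
    for row in m:
        row[i], row[j] = row[j], row[i]

def apply_listed_steps(g_prime):
    g = [row[:] for row in g_prime]        # work on a copy
    xor_rows(g, 0, 1); xor_rows(g, 0, 5)   # subtract row 1 from rows 2 and 6
    for dst in (0, 2, 3, 4):               # subtract row 2 from rows 1, 3, 4, 5
        xor_rows(g, 1, dst)
    swap_rows(g, 3, 5)                     # swap rows 4 and 6
    swap_cols(g, 4, 5)                     # swap columns 5 and 6
    return g
```

Once $G$ is in the systematic form $[I_6 \mid A]$, the parity-check matrix needed in __derive_h() can be assembled as $H = [A^{T} \mid I_4]$.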
Implement the encoder in the function encode() in ./src/hamming_code.py:
Add the missing logic for encoding given input words.
Do not overwrite the existing signature of the method (tests depend on that).
Don't forget to calculate and add the additional parity bit $p_{5}$ using odd parity.
Make sure to return the final code as a Tuple and not as a List.
Test your implementation with unit-tests (see Task 4).
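A minimal sketch of the encoding step follows. It assumes `g` is the systematic $6 \times 10$ generator matrix from Task 1; the function signature is illustrative and will need to be adapted to the encode() method already present in the template.

```python
def encode_word(word, g):
    """Encode a 6-bit word: c = word * g over GF(2), then append the odd-parity bit p5."""
    assert len(word) == 6 and len(g) == 6 and len(g[0]) == 10
    code = [sum(bit * g[row][col] for row, bit in enumerate(word)) % 2
            for col in range(10)]
    p5 = 0 if sum(code) % 2 == 1 else 1   # odd parity: total count of 1s, incl. p5, is odd
    return tuple(code + [p5])             # return a Tuple, as required
```

For example, `encode_word((0, 1, 1, 0, 1, 1), g)` should yield an 11-bit tuple.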
Implement the decoder in the function decode() in ./src/hamming_code.py:
Add the missing logic for decoding given input words.
Calculate the additional parity bit of the encoded word using odd parity.
Calculate the syndrome vector. Remove $p_{5}$ beforehand.
Check the syndrome vector against $H_{4,10}$ and conclude if the encoded word had an error.
If the encoded word was without an error, return the decoded word and VALID.
If the encoded word had a single error, return the corrected decoded word and CORRECTED.
If the encoded word had multiple errors, return None and UNCORRECTABLE.
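One common decision rule for a Hamming code extended with an overall parity bit is sketched below. Names such as `HCResult` are placeholders (the template presumably defines its own status type), and the exact rule should be checked against the expectations of the unit tests in Task 4.

```python
from enum import Enum

class HCResult(Enum):        # placeholder for the status type used in the template
    VALID = 1
    CORRECTED = 2
    UNCORRECTABLE = 3

def decode_word(encoded, h):
    """Decode an 11-bit tuple using the 4x10 parity-check matrix `h` (systematic code assumed)."""
    assert len(encoded) == 11
    code, p5 = list(encoded[:10]), encoded[10]
    parity_ok = (sum(code) + p5) % 2 == 1                      # odd parity over all 11 bits
    syndrome = [sum(h[row][col] * code[col] for col in range(10)) % 2
                for row in range(4)]

    if not any(syndrome):
        # Zero syndrome: either error-free, or the single error sits in p5 itself.
        return tuple(code[:6]), (HCResult.VALID if parity_ok else HCResult.CORRECTED)
    if parity_ok:
        # Non-zero syndrome but consistent overall parity -> at least two bit errors.
        return None, HCResult.UNCORRECTABLE
    for col in range(10):                                      # single-bit error: locate and flip it
        if [h[row][col] for row in range(4)] == syndrome:
            code[col] ^= 1
            return tuple(code[:6]), HCResult.CORRECTED
    return None, HCResult.UNCORRECTABLE
```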
Now we look into unit tests and update the file ./src/test.py.
First, import everything from hamming_code.py.
For every given test case, implement the corresponding logic.
Use asserts and pre-defined expectations (e.g. simple variable holding the value) for the checks.
Encode the following codes (you can check them in a single function):
$(0, 1, 1, 0, 1, 1)$
Decode the following codes:
$(0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0)$
Analyze the output.
Does the output of decode() match the input of encode() before?
Come up with at least two more test cases for codes being corrected and for an uncorrectable code.
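A skeleton for ./src/test.py could look roughly like this; the HammingCode constructor and the exact shape of decode()'s return value are assumptions based on the description above and should be adjusted to the actual template.

```python
import unittest
from hamming_code import *   # as required: import everything from hamming_code.py

class TestHammingCode(unittest.TestCase):
    def setUp(self):
        self.hc = HammingCode()          # constructor arguments, if any, are assumed absent

    def test_encode_decode_roundtrip(self):
        word = (0, 1, 1, 0, 1, 1)
        encoded = self.hc.encode(word)
        self.assertIsInstance(encoded, tuple)
        self.assertEqual(len(encoded), 11)
        decoded, status = self.hc.decode(encoded)
        self.assertEqual(decoded, word)  # decode() should invert encode()

    def test_single_bit_error_is_corrected(self):
        word = (0, 1, 1, 0, 1, 1)
        corrupted = list(self.hc.encode(word))
        corrupted[3] ^= 1                # flip a single bit
        decoded, status = self.hc.decode(tuple(corrupted))
        self.assertEqual(decoded, word)

if __name__ == "__main__":
    unittest.main()
```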
Do not exchange source code! Keep it private!
We do not tolerate plagiarism.
Plagiarism of any form will get you disqualified from the lab. | CommonCrawl |
Applied Mathematics & Optimization
A Nonsmooth Optimization Approach for Hemivariational Inequalities with Applications to Contact Mechanics
Michal Jureczka
Anna Ochal
In this paper we introduce an abstract nonsmooth optimization problem and prove existence and uniqueness of its solution. We present a numerical scheme to approximate this solution. The theory is later applied to a sample static contact problem describing an elastic body in frictional contact with a foundation. This contact is governed by a nonmonotone friction law with dependence on normal and tangential components of displacement. Finally, computational simulations are performed to illustrate obtained results.
Nonmonotone friction · Optimization problem · Error estimate · Finite element method · Numerical simulations
Mathematics Subject Classification
35Q74 49J40 65K10 65M60 74S05 74M15 74M10 74G15
In the literature we can find many models describing the deformation of a body that is partly in contact with another object, the so-called foundation. In various contact models, boundary conditions are imposed on the part of the body that is in contact with the foundation. The functions that occur in these conditions model the response of the foundation in the direction normal to the contact boundary and in the direction tangential to it (the friction law). In many cases these functions are monotone, such as when Coulomb's law of dry friction is considered, but in applications this may not always be the case. What is more, the friction bound may change as the penetration of the foundation by the body increases. The nonmonotonicity of the functions describing the contact laws, together with the influence of the normal displacement of the body on the friction law, causes some difficulties in the analytical and numerical treatment of the considered problems.

In this paper we introduce an abstract framework that can be used to numerically approximate a solution to a class of mechanical contact problems. We present a nonsmooth optimization problem and prove existence and uniqueness of its solution. Next we present a numerical scheme approximating this solution and provide a numerical error estimate. We apply this theory to a static contact problem describing an elastic body in contact with a foundation. This contact is governed by a nonmonotone friction law that depends on the normal and tangential components of the displacement. The weak formulation of the introduced contact problem is presented in the form of a hemivariational inequality. Finally, we show the results of computational simulations and describe the numerical algorithm used to obtain them.
Let us now briefly review the related literature. The definition and properties of the Clarke subdifferential, and the tools used to solve optimization problems, were introduced in [6]. A comparison of nonsmooth and nonconvex optimization methods can be found in [1], and details on computational contact mechanics are presented in [19]. The theory of hemivariational inequalities was developed in [17], and the idea of using the Finite Element Method to solve these inequalities was presented in [13]. Another early study of vector-valued hemivariational problems in the context of FEM can be found in [14]. More recent analysis of hemivariational and variational–hemivariational inequalities was presented in [10, 15, 16], whereas numerical analysis of such problems can be found, for example, in the papers [2, 3, 4, 8, 9, 11, 12].

A mechanical model similar to the one described in this paper was already considered in [15], where the authors prove only the existence of a solution, using a surjectivity result for a pseudomonotone, coercive multifunction, without requiring any smallness assumption.

An error estimate for stationary variational–hemivariational inequalities was presented in [8]. In our case the variational part of the inequality is not present and the inequality is unconstrained; however, the error estimate had to be generalized to reflect the dependence of the friction law on the normal component of the displacement.

A numerical treatment of a mechanical problem leading to a hemivariational inequality, using two approaches (nonsmooth and nonconvex optimization, and a quasi-augmented Lagrangian method), is presented in [2]. As the smallness assumption is not required there, uniqueness is again not guaranteed, and the formulation leads to a nonconvex optimization problem. There, the authors assume the contact to be bilateral and consider a friction law which does not depend on the normal component of the displacement.

This paper is organized as follows. Section 2 contains a general differential inclusion problem and an optimization problem. We show that under the introduced assumptions both problems are equivalent and have a unique solution. In Sect. 3 we proceed with a discrete scheme that approximates the solution to the introduced optimization problem, and we prove a theorem concerning the numerical error estimate. An application of the presented theory in the form of a mechanical contact model is given in Sect. 4, along with its weak formulation. Finally, in Sect. 5, we describe the computational algorithm used to solve the mechanical contact problem and present simulations for a set of sample data.
2 A General Optimization Problem
Let us start with basic notation used in this paper. For a normed space X, we denote by \(\Vert \cdot \Vert _X\) its norm, by \(X^*\) its dual space and by \(\langle \cdot ,\cdot \rangle _{X^*\times X}\) the duality pairing of \(X^*\) and X. By \(c>0\) we denote a generic constant (value of c may differ in different equations).
Let us now assume that \(j:X \rightarrow \mathbb {R}\) is locally Lipschitz continuous. The generalized directional derivative of j at \(x \in X\) in the direction \(v\in X\) is defined by
$$\begin{aligned}&j^0(x;v) := \limsup _{y \rightarrow x, \lambda \searrow 0} \frac{j(y+\lambda v) - j(y)}{\lambda }. \end{aligned}$$
The generalized subdifferential of j at x is a subset of the dual space \(X^*\) given by
$$\begin{aligned}&\partial j(x) := \{\xi \in X^*\, | \, \langle \xi , v\rangle _{X^*\times X}\le j^0(x;v) \ \text{ for } \text{ all } v \in X \}. \end{aligned}$$
If \(j:X^n \rightarrow \mathbb {R}\) is a locally Lipschitz function of n variables, then we denote by \(\partial _i j\) and \(j_i^0\) the Clarke subdifferential and generalized directional derivative with respect to i-th variable of j, respectively.
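A standard one-dimensional illustration (not taken from the paper): for \(j(x) = |x|\) on \(X = \mathbb {R}\),
$$\begin{aligned} j^0(0;v) = \limsup _{y \rightarrow 0,\ \lambda \searrow 0} \frac{|y+\lambda v| - |y|}{\lambda } = |v|, \qquad \partial j(0) = [-1, 1], \end{aligned}$$
so at a point where \(j\) is not differentiable the generalized subdifferential is a genuine set rather than a single element.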
Let now V be a reflexive Banach space and X be a Banach space. Let \(\gamma \in \mathcal {L}(V, X)\) be linear and continuous operator from V to X, and \(c_{\gamma }:=\Vert \gamma \Vert _{\mathcal {L}(V, X)}\). We denote by \(\gamma ^* :X^* \rightarrow V^*\) the adjoint operator to \(\gamma \). Let \(A :V \rightarrow V^*\), \(J :X \times X \rightarrow \mathbb {R}\) and \(f\in V^*\). We formulate the operator inclusion problem as follows.
Problem \(\varvec{P_{incl}}\). Find \(u \in V\) such that
$$\begin{aligned}&A u + \gamma ^* \partial _2 J(\gamma u, \gamma u) \ni f. \end{aligned}$$
In the study of Problem \(P_{incl}\) we make the following assumptions.
\(\underline{H(A)}:\) The operator \(A :V \rightarrow V^*\) is such that
(a) A is linear and bounded,
(b) A is symmetric, i.e. \(\langle A u, v\rangle _{V^*\times V} = \langle A v , u \rangle _{V^*\times V}\) for all \(u, v \in V\),
(c) there exists \(m_A>0\) such that \(\langle A u, u\rangle _{V^*\times V} \ge m_A \Vert u\Vert _V^2\) for all \(u \in V\).
\(\underline{H(J)}:\) The functional \(J :X \times X \rightarrow \mathbb {R}\) satisfies
(a) J is locally Lipschitz continuous with respect to its second variable,
(b) there exist \(c_{0}, c_{1}, c_{2} \ge 0\) such that
$$\begin{aligned} \Vert \partial _2 J(w, v)\Vert _{X^*} \le c_{0} + c_{1}\Vert v \Vert _X + c_{2}\Vert w \Vert _X\quad \hbox {for all} w, v \in X, \end{aligned}$$
(c) there exist \(m_\alpha , m_L \ge 0\) such that
$$\begin{aligned}&J_2^0(w_1, v_1; v_2 - v_1) + J_2^0(w _2, v_2; v_1 - v_2)\\&\quad \le m_\alpha \Vert v_1- v_2\Vert _X^2 + m_L\Vert w_1 - w_2 \Vert _{X} \Vert v_1 - v_2\Vert _{X} \end{aligned}$$
for all \( w_1, w_2, v_1, v_2 \in X\).
\(\underline{H(f)}: \quad f \in V^*\).
\(\underline{(H_s)}: \quad m_A > (2m_\alpha + m_L) c_\gamma ^2\).
We remark that H(J)(c) is equivalent to the following condition
$$\begin{aligned} \begin{aligned}&\langle \partial _2 J(w_1, v_1)- \partial _2 J(w_2, v_2), v_1 - v_2 \rangle _{X^*\times X} \\&\quad \ge - m_\alpha \Vert v_1- v_2\Vert _X^2 - m_L\Vert w_1- w_2\Vert _X \Vert v_1- v_2\Vert _X \end{aligned} \end{aligned}$$
for all \(w_1, w_2, v_1, v_2 \in X\). Moreover, in a special case when J does not depend on its first variable, condition H(J)(c) holds with \(m_L = 0\) and we obtain the well known relaxed monotonicity condition, i.e. for all \(v_1, v_2 \in X\)
$$\begin{aligned}&\langle \partial J(v_1) - \partial J(v_2), v_1 - v_2 \rangle _{X^*\times X} \ge - m_\alpha \Vert v_1- v_2\Vert _X^2. \end{aligned}$$
So, condition H(J)(c) is more general than (2.2).
We start with a uniqueness result for Problem \(P_{incl}\), provided that a solution exists.
Lemma 1
Assume that H(A), H(J), H(f) and \((H_s)\) hold. If Problem \(P_{incl}\) has a solution \(u \in V\), then it is unique and satisfies
$$\begin{aligned}&\Vert u \Vert _V \le c \, (1+\Vert f \Vert _{V^*}) \end{aligned}$$
with a positive constant c.
Let \( u \in V\) be a solution to Problem \(P_{incl}\). This means that there exists \( z \in \partial _2 J(\gamma u , \gamma u )\) such that
$$\begin{aligned}&A u + \gamma ^* z = f . \end{aligned}$$
From the definition of generalized directional derivative of \(J(\gamma u, \cdot )\) we have for all \( v \in V\)
$$\begin{aligned}&\langle f - A u , v \rangle _{V^*\times V} = \langle \gamma ^* z , v \rangle _{V^*\times V} = \langle z , \gamma v \rangle _{X^*\times X} \le J_2^0(\gamma u , \gamma u ; \gamma v ). \end{aligned}$$
Let us now assume that Problem \(P_{incl}\) has two different solutions \(u_1\) and \(u_2\). For a solution \(u_1\) we set \( v = u _2 - u _1\) in (2.4) to get
$$\begin{aligned}&\langle f , u _2- u _1 \rangle _{V^*\times V} - \langle A u _1, u _2 - u _1 \rangle _{V^*\times V} \le J_2^0(\gamma u _1, \gamma u _1; \gamma u _2-\gamma u _1). \end{aligned}$$
For a solution \(u_2\) we set \( v = u _1 - u _2\) in (2.4) to get
$$\begin{aligned}&\langle f , u _1- u _2 \rangle _{V^*\times V} - \langle A u _2, u _1 - u _2 \rangle _{V^*\times V} \le J_2^0(\gamma u _2, \gamma u _2; \gamma u _1-\gamma u _2). \end{aligned}$$
Adding the above inequalities, we obtain
$$\begin{aligned}&\langle A u _1 - A u _2, u _1 - u _2 \rangle _{V^*\times V}\\&\quad \le J_2^0(\gamma u _1, \gamma u _1; \gamma u _2-\gamma u _1) + J_2^0(\gamma u _2, \gamma u _2; \gamma u _1-\gamma u _2). \end{aligned}$$
Hence, H(A)(c) and H(J)(c) yield
$$\begin{aligned}&m_A \Vert u _1 - u _2\Vert _V^2 \le (m_\alpha + m_L)\Vert \gamma u _1-\gamma u _2\Vert _X^2, \end{aligned}$$
$$\begin{aligned}&\big (m_A - (m_\alpha + m_L)c_\gamma ^2\big ) \Vert u _1 - u _2\Vert _V^2 \le 0. \end{aligned}$$
Under assumption \((H_s)\), we obtain that if Problem \(P_{incl}\) has a solution, it is unique.
Now, in order to prove (2.3), we set \( v = - u \) in (2.4) to obtain
$$\begin{aligned}&\langle A u , u \rangle _{V^*\times V} \le J_2^0(\gamma u , \gamma u ; -\gamma u ) + \langle f , u \rangle _{V^*\times V}. \end{aligned}$$
Using H(J)(b) and (c), we get
$$\begin{aligned}&J_2^0(\gamma u , \gamma u ;-\gamma u ) \le (m_\alpha + m_L) \Vert \gamma u \Vert _X^2 - J_2^0(0 , 0 ; \gamma u ) \nonumber \\&\quad \le (m_\alpha + m_L) \Vert \gamma u \Vert _X^2 + c_{0} \Vert \gamma u \Vert _X. \end{aligned}$$
Combining (2.5) and (2.6), we have
$$\begin{aligned} m_A\Vert u \Vert _V^2&\le (m_\alpha + m_L) \Vert \gamma u \Vert _X^2 + c_{0}\Vert \gamma u \Vert _X + \Vert f \Vert _{V^*}\Vert u \Vert _V \end{aligned}$$
$$\begin{aligned} \big (m_A - (m_\alpha + m_L) c_\gamma ^2\big )\Vert u \Vert _V \le c \, (1+\Vert f \Vert _{V^*}). \end{aligned}$$
From \((H_s)\) we obtain the required estimate. \(\square \)
We now consider an optimization problem, which will be equivalent to Problem \(P_{incl}\) under introduced assumptions. To this end, let the functional \(\mathcal {L}: V \times V \rightarrow \mathbb {R}\) be defined for all \( w , v \in V\) as follows
$$\begin{aligned} \mathcal {L}( w , v ) = \frac{1}{2} \langle A v , v \rangle _{V^*\times V} - \langle f , v \rangle _{V^*\times V} + J(\gamma w ,\gamma v ). \end{aligned}$$
The next lemma collects some properties of the functional \(\mathcal {L}\).
Lemma 2

Under assumptions H(A), H(J), H(f) and \((H_s)\), the functional \(\mathcal {L}:V \times V \rightarrow \mathbb {R}\) defined by (2.7) satisfies
(i) \(\mathcal {L}(w,\cdot )\) is locally Lipschitz continuous for all \(w \in V\),
(ii) \(\partial _2 \mathcal {L}( w , v ) \subseteq A v - f + \gamma ^* \partial _2 J(\gamma w , \gamma v )\) for all \(w,v \in V\),
(iii) \(\mathcal {L}(w,\cdot )\) is strictly convex for all \(w \in V\),
(iv) \(\mathcal {L}(w,\cdot )\) is coercive for all \(w \in V\).
The proof of (i) is immediate since for a fixed \(w \in V\) the functional \(\mathcal {L}(w,\cdot )\) is locally Lipschitz continuous as a sum of locally Lipschitz continuous functions with respect to v.
For the proof of (ii), we observe that from H(A) and H(f), the functions
$$\begin{aligned} f_1:V \ni v \mapsto \frac{1}{2} \langle A v , v \rangle _{V^*\times V} \in \mathbb {R}, \qquad f_2:V \ni v \mapsto \langle f , v \rangle _{V^*\times V} \in \mathbb {R} \end{aligned}$$
are strictly differentiable and
$$\begin{aligned} f_1'( v ) = A v ,\qquad f_2'( v ) = f . \end{aligned}$$
Now, using the chain rule for generalized subgradient (cf. Propositions 3.35 and 3.37 in [15]), we obtain
$$\begin{aligned} \partial _2\mathcal {L}(w,v)&= f_1'(v)-f_2'(v)+\partial _2(J\circ \gamma )(\gamma w, v)\\&\subseteq A v - f + \gamma ^* \partial _2 J(\gamma w , \gamma v ), \end{aligned}$$
which concludes (ii).
To prove (iii), first, we will show that for any fixed \(w \in V\), the operator \(v \mapsto \partial _2 \mathcal {L}( w , v )\) is strongly monotone. Then, we will use Theorem 3.4 in [7] to deduce that \(\mathcal {L}( w , \cdot )\) is strongly convex for all \(w \in V\). Finally, the latter implies condition (iii). Hence, it remains to show that \(v \mapsto \partial _2 \mathcal {L}( w , v )\) is strongly monotone. To this end, let us fix \( w , v _i \in V\) with \(i=1, 2\). We take \( \zeta _i \in \partial _2 \mathcal {L}( w , v _i)\). From (ii) there exist \( z _i \in ~\partial _2J(\gamma w , \gamma v _i)\) such that
$$\begin{aligned} \zeta _i = A v_i - f + \gamma ^* z_i. \end{aligned}$$
Hence, using H(A)(c) and (2.2), we obtain
$$\begin{aligned}&\langle \zeta _1 - \zeta _2, v _1 - v _2 \rangle _{V^*\times V}\nonumber \\&\quad =\langle A v _1 - A v _2, v _1 - v _2 \rangle _{V^*\times V} + \langle \gamma ^* z _1 - \gamma ^* z _2, v _1 - v _2 \rangle _{V^*\times V}\nonumber \\&\quad \ge m_A\Vert v _1 - v _2\Vert _V^2 + \langle z _1 - z _2, \gamma v _1 - \gamma v _2 \rangle _{X^*\times X} \nonumber \\&\quad \ge m_A\Vert v _1 - v _2\Vert _V^2 - m_\alpha \Vert \gamma v _1 - \gamma v _2\Vert _X^2\nonumber \\&\quad \ge (m_A - m_\alpha c_\gamma ^2)\Vert v _1 - v _2\Vert _V^2. \end{aligned}$$
From \((H_s)\) we see that \(\partial _2 \mathcal {L}( w , \cdot )\) is strongly monotone for every \( w \in V\).
For the proof of (iv), let us fix \(w, v \in V\). From H(A)(c) and H(f) we obtain
$$\begin{aligned}&\mathcal {L}( w , v) \ge \frac{1}{2} m_A \Vert v\Vert _V^2 - \Vert f\Vert _{V^*}\Vert v\Vert _V+ J(\gamma w, \gamma v). \end{aligned}$$
Now, using the Lebourg mean value theorem (cf. Proposition 3.36 in [15]), we get that there exists \(\lambda \in (0,1)\) and \(\eta \in \partial _2(J\circ \gamma )(\gamma w, \lambda v)\) such that
$$\begin{aligned}&J(\gamma w, \gamma v) = \langle \eta , v \rangle _{V^*\times V} + J(\gamma w, 0). \end{aligned}$$
Since \(\partial _2(J\circ \gamma )(\gamma w, \lambda v) \subseteq \gamma ^* \partial _2 J(\gamma w , \lambda \gamma v )\) we have \(\eta \in \gamma ^* \partial _2 J(\gamma w , \lambda \gamma v )\). Then there exists \(z_1 \in \partial _2 J(\gamma w , \lambda \gamma v )\) such that \(\eta = \gamma ^* z_1\). Taking \(z_2 \in \partial _2 J(\gamma w , 0)\) and by (2.1), we obtain
$$\begin{aligned}&\lambda \langle \gamma ^* z_1 - \gamma ^* z_2, v \rangle _{V^*\times V} = \langle z_1 - z_2, \lambda \gamma v \rangle _{X^*\times X} \ge - m_\alpha \Vert \lambda \gamma v \Vert _X^2 \ge - m_\alpha \lambda ^2 c_\gamma ^2 \Vert v \Vert _V^2, \end{aligned}$$
and this, along with the fact that \(\lambda \in (0,1)\), leads to
$$\begin{aligned}&\langle \eta , v \rangle _{V^*\times V} \ge - m_\alpha c_\gamma ^2 \Vert v \Vert _V^2 + \langle z_2, \gamma v \rangle _{X^*\times X}. \end{aligned}$$
Using H(J)(b), we get
$$\begin{aligned}&\langle z_2, \gamma v \rangle _{X^*\times X} \ge -|\langle z_2, \gamma v \rangle _{X^*\times X}| \ge - \Vert \partial _2 J(\gamma w, 0)\Vert _{X^*}\Vert \gamma v \Vert _X \ge -c(1 + \Vert w \Vert _V)\Vert v \Vert _V. \end{aligned}$$
Combining (2.8)–(2.11) and because \(J(\gamma w, 0)\) is bounded from below for fixed w, we get
$$\begin{aligned}&\mathcal {L}( w , v) \ge \left( \frac{1}{2} m_A - m_\alpha c_\gamma ^2\right) \Vert v \Vert _V^2 -c\Vert v \Vert _V - c. \end{aligned}$$
From \((H_s)\) we see that \(\mathcal {L}( w , \cdot )\) is coercive for every \( w \in V\). \(\square \)
The problem under consideration reads as follows.
Problem \(\varvec{P_{opt}}\). Find \( u \in V\) such that
$$\begin{aligned} 0 \in \partial _2 \mathcal {L}( u , u ). \end{aligned}$$
We are now in a position to prove the existence and uniqueness result for the above optimization problem.
Lemma 3

Assume that H(A), H(J), H(f) and \((H_s)\) hold. Then Problem \(P_{opt}\) has a unique solution \( u \in V\).
We introduce operator \(\Lambda :V \rightarrow V\) defined for all \( w \in V\) as follows
$$\begin{aligned} \Lambda w = \mathop {\mathrm{arg\,min}\,}\limits _{ v \in V} \mathcal {L}( w , v ). \end{aligned}$$
From Lemma 2(i), (iv) we see that for a fixed w the functional \(\mathcal {L}(w, \cdot )\) is proper, lower semicontinuous and coercive. This implies that it attains a global minimum. The uniqueness of this minimum is guaranteed by Lemma 2(iii). We conclude that the operator \(\Lambda \) is well defined. Now we prove that it is a contraction. Take \(u _i \in V\) for \(i=1, 2\) and let \( {\widehat{u}} _i = \Lambda u _i\). Because of the strict convexity of \(\mathcal {L}( w , \cdot )\), we have
$$\begin{aligned} \widehat{u} _i = \mathop {\mathrm{arg\,min}\,}\limits _{ v \in V} \mathcal {L}( u _i, v ) \quad \text{ if } \text{ and } \text{ only } \text{ if }\quad 0 \in \partial _2 \mathcal {L}( u _i, \widehat{u} _i) \end{aligned}$$
(see Theorem 1.23 in [13]). From similar arguments to those used in proofs of Lemmata 1 and 2 with fixed first argument of functional \(\mathcal {L}\), we have for all \( v \in V\)
$$\begin{aligned}&\langle f- A \widehat{u} _i, v \rangle _{V^*\times V} \le J_2^0(\gamma u _i, \gamma \widehat{u} _i; \gamma v ). \end{aligned}$$
Taking for \(i=1\) value \( v = \widehat{u}_2 - \widehat{u} _1\), for \(i=2\) value \( v = \widehat{u}_1 - \widehat{u}_2\) and adding these inequalities, we obtain
$$\begin{aligned}&\langle A \widehat{u} _1 - A \widehat{u} _2, \widehat{u} _1 - \widehat{u} _2 \rangle _{V^*\times V} \\&\quad \le J_2^0(\gamma u _1, \gamma \widehat{u} _1; \gamma \widehat{u} _2 - \gamma \widehat{u} _1) + J_2^0(\gamma u _2, \gamma \widehat{u} _2; \gamma \widehat{u} _1 - \gamma \widehat{u} _2). \end{aligned}$$
From assumptions H(A)(c) and H(J)(c), we get
$$\begin{aligned}&m_A\Vert \widehat{u} _1 - \widehat{u} _2\Vert _V^2 \le m_\alpha \Vert \gamma \widehat{u} _1 - \gamma \widehat{u} _2\Vert _X^2 + m_L\Vert \gamma u _1 - \gamma u _2\Vert _X\Vert \gamma \widehat{u} _1 - \gamma \widehat{u} _2\Vert _X. \end{aligned}$$
Using the elementary inequality \(ab \le \frac{a^2}{2} + \frac{b^2}{2}\), we obtain
$$\begin{aligned}&m_A\Vert \widehat{u} _1 - \widehat{u} _2\Vert _V^2 \le m_\alpha c^2_\gamma \Vert \widehat{u} _1 - \widehat{u} _2\Vert _V^2 + \frac{m_Lc_\gamma ^2}{2}(\Vert u _1 - u _2\Vert _V^2 + \Vert \widehat{u} _1 - \widehat{u} _2\Vert _V^2). \end{aligned}$$
Because of \((H_s)\), we can rearrange these terms to get
$$\begin{aligned}&\Vert \widehat{u} _1 - \widehat{u} _2\Vert _V^2 \le \frac{m_L c_\gamma ^2}{2m_A - 2m_\alpha c_\gamma ^2 - m_L c_\gamma ^2}\Vert u _1 - u _2\Vert _V^2. \end{aligned}$$
Using assumption \((H_s)\) once more, we obtain that the operator \(\Lambda \) is a contraction. From the Banach fixed point theorem we know that there exists a unique \( u ^* \in V\) such that \(\Lambda u ^* = u ^*\), so \( 0 \in \partial _2 \mathcal {L}( u ^*, u ^*)\). \(\square \)
Let us conclude the results from Lemmata 1, 2 and 3 in the following theorem.
Theorem 4
Assume that H(A), H(J), H(f) and \((H_s)\) hold. Then Problems \(P_{incl}\) and \(P_{opt}\) are equivalent, they have a unique solution \(u\in V\) and this solution satisfies
$$\begin{aligned}&\Vert u \Vert _V \le c(1+\Vert f \Vert _{V^*}) \end{aligned}$$
Lemma 2(ii) implies that every solution to Problem \(P_{opt}\) solves Problem \(P_{incl}\). Using this fact together with Lemmata 1 and 3, we see that the unique solution to Problem \(P_{opt}\) is also the unique solution to Problem \(P_{incl}\). Because of the uniqueness of the solution to Problem \(P_{incl}\), we get that Problems \(P_{incl}\) and \(P_{opt}\) are equivalent. The estimate in the statement of the theorem follows from Lemma 1. \(\square \)
3 Numerical Scheme
Let \(V^h \subset V\) be a finite dimensional subspace with a discretization parameter \(h>0\). We present the following discrete scheme of Problem \(P_{opt}\).
Problem \(\varvec{P_{opt}^{h}}\). Find \(u^h \in V^h\) such that
$$\begin{aligned} 0 \in \partial _2 \mathcal {L}(u^h, u^h). \end{aligned}$$
We remark that the existence of a unique solution to Problem \(P_{opt}^h\) and its equivalence to the discrete version of Problem \(P_{incl}\) follow from an application of Theorem 4 in this new setting. Let us now present the main theorem concerning the error estimate of the introduced numerical scheme.
Theorem 5

Assume that H(A), H(J), H(f) and \((H_s)\) hold. Then for the unique solutions u and \(u^h\) to Problems \(P_{opt}\) and \(P_{opt}^h\), respectively, there exists a constant \(c>0\) such that
$$\begin{aligned} \Vert u - u ^h\Vert _V^2 \le c\,\inf \limits _{ v ^h \in V^h} \Big \{ \Vert u - v ^h \Vert _V^2 + \Vert \gamma u - \gamma v ^h \Vert _X + R( u , v ^h) \Big \}, \end{aligned}$$
where a residual quantity is given by
$$\begin{aligned} R( u , v ^h) = \langle A u , v ^h - u \rangle _{V^*\times V} + \langle f , u - v ^h \rangle _{V^*\times V}. \end{aligned}$$
Let u be a solution to Problem \(P_{opt}\) and \(u^h\) be a solution to Problem \(P_{opt}^h\). Then they are solutions to corresponding inclusion problems and satisfy respectively
$$\begin{aligned} \langle f - Au, v \rangle _{V^*\times V}&\le J_2^0(\gamma u, \gamma u; \gamma v) \quad \text{ for } \text{ all } v \in V, \end{aligned}$$
$$\begin{aligned} \langle f - Au^h, v \rangle _{V^*\times V}&\le J_2^0(\gamma u^h, \gamma u^h; \gamma v) \quad \text{ for } \text{ all } v \in V^h. \end{aligned}$$
Taking (3.3) with \(v=u^h-u\), and (3.4) with \(v=v^h-u^h\), then adding these inequalities, we obtain for all \( v^h \in V^h\)
$$\begin{aligned}&\langle f, v ^h - u \rangle _{V^*\times V} + \langle Au^h - Au, u^h - u \rangle _{V^*\times V} - \langle A u ^h, v ^h - u \rangle _{V^*\times V} \nonumber \\&\quad \le J_2^0(\gamma u , \gamma u ; \gamma u ^h - \gamma u ) + J_2^0(\gamma u ^h, \gamma u ^h; \gamma v ^h - \gamma u ^h). \end{aligned}$$
We observe that by subadditivity of generalized directional derivative (cf. [15], Proposition 3.23(i)) and H(J)(c), we have
$$\begin{aligned}&J_2^0(\gamma u, \gamma u; \gamma u ^h - \gamma u) + J_2^0(\gamma u^h, \gamma u^h; \gamma v^h - \gamma u^h )\nonumber \\&\quad \le J_2^0(\gamma u, \gamma u; \gamma u^h - \gamma u) + J_2^0(\gamma u^h, \gamma u^h; \gamma u - \gamma u^h) + J_2^0(\gamma u^h, \gamma u^h; \gamma v^h - \gamma u)\nonumber \\&\quad \le (m_\alpha + m_L) \Vert \gamma u^h -\gamma u \Vert _X^2 + \left( c_0 + (c_1+c_2) \Vert \gamma u^h\Vert _X \right) \Vert \gamma v ^h - \gamma u \Vert _X. \end{aligned}$$
From the statement of Lemma 1 applied to discrete version of Problem \(P_{incl}\) we get that \(\Vert \gamma u^h\Vert _X \le c_{\gamma }\Vert u^h\Vert _V\le c\,(1+\Vert f\Vert _{V^*})\) is uniformly bounded with respect to h. Hence, returning to (3.5) and using (3.6), we obtain for all \( v ^h \in V^h\)
$$\begin{aligned}&\langle Au^h - Au, u^h - u \rangle _{V^*\times V} \le \langle A u ^h - Au, v ^h - u \rangle _{V^*\times V} + \langle A u, v ^h - u \rangle _{V^*\times V} \nonumber \\&\quad + \langle f, u-v^h \rangle _{V^*\times V} + (m_\alpha + m_L)c^2_\gamma \Vert u^h - u \Vert _V^2 + c \, \Vert \gamma v^h - \gamma u\Vert _X . \end{aligned}$$
By assumption H(A) and definition (3.2), we get for all \(v^h\in V^h\)
$$\begin{aligned}&m_A \Vert u ^h - u \Vert _V^2 \le c \, \Vert u ^h - u \Vert _V\Vert v ^h - u \Vert _V + R( u , v ^h) \nonumber \\&\quad + (m_\alpha + m_L) c_\gamma ^2\Vert u - u ^h \Vert _V^2 + c \, \Vert \gamma u - \gamma v ^h \Vert _X. \end{aligned}$$
Finally, the elementary inequality \(ab\le \varepsilon a^2 + \frac{b^2}{4\varepsilon }\) with \(\varepsilon > 0\) yields
$$\begin{aligned}&m_A \Vert u - u ^h\Vert _V^2 \le \varepsilon \Vert u - u ^h\Vert _V^2 + \frac{c^2}{4\varepsilon }\Vert u - v ^h\Vert _V^2 + R( u , v ^h) \\&\quad + (m_\alpha + m_L) c_\gamma ^2\Vert u - u ^h \Vert _V^2 + c \, \Vert \gamma u - \gamma v ^h \Vert _X. \end{aligned}$$
This is equivalent for all \( v ^h \in V^h\) to
$$\begin{aligned}&\Big (m_A - (m_\alpha + m_L) c_\gamma ^2 - \varepsilon \Big ) \Vert u - u ^h\Vert _V^2 \le \frac{c}{\varepsilon }\Vert u - v ^h \Vert _V^2 + R( u , v ^h) + c \, \Vert \gamma u - \gamma v ^h \Vert _X. \end{aligned}$$
Taking sufficiently small \(\varepsilon \) and using \((H_s)\) we obtain the desired conclusion. \(\square \)
4 Application to Contact Mechanics
In this section we apply the results of previous sections to a sample mechanical contact problem. Let us start by introducing the physical setting and notation.
An elastic body occupies a domain \(\Omega \subset \mathbb {R}^{d}\), where \(d = 2, 3\) in applications. We assume that its boundary \(\Gamma \) is Lipschitz continuous and divided into three disjoint measurable parts \(\Gamma _{D}, \Gamma _{C}, \Gamma _{N}\), where the part \(\Gamma _D\) has positive measure. Since \(\Gamma \) is Lipschitz continuous, the outward normal vector \(\varvec{\nu }\) to \(\Gamma \) exists a.e. on the boundary. The body is clamped on \(\Gamma _{D}\), i.e. its displacement is equal to \(\varvec{0}\) on this part of the boundary. A surface force of density \(\varvec{f}_N\) acts on the boundary \(\Gamma _{N}\) and a body force of density \(\varvec{f}_0\) acts in \(\Omega \). The contact phenomenon on \(\Gamma _{C}\) is modeled by general subdifferential inclusions. We are interested in finding the displacement of the body in a static state.
Let us denote by "\(\cdot \)" and \(\Vert \cdot \Vert \) the scalar product and the Euclidean norm in \(\mathbb {R}^{d}\) or \(\mathbb {S}^{d}\), respectively, where \(\mathbb {S}^{d} = \mathbb {R}^{d \times d}_{sym}\). Indices i and j run from 1 to d and the index after a comma represents the partial derivative with respect to the corresponding component of the independent variable. Summation over repeated indices is implied. We denote the divergence operator by \(\text {Div }\varvec{\sigma } = (\sigma _{ij,j})\). The standard Lebesgue and Sobolev spaces \(L^2(\Omega )^d = L^2(\Omega ;\mathbb {R}^d)\) and \(H^1(\Omega )^d = H^1(\Omega ;\mathbb {R}^d)\) are used. The linearized (small) strain tensor for displacement \(\varvec{u} \in H^1(\Omega )^d\) is defined by
$$\begin{aligned} \varvec{\varepsilon }(\varvec{u})=(\varepsilon _{ij}(\varvec{u})), \quad \varepsilon _{ij}(\varvec{u}) = \frac{1}{2}(u_{i,j} + u_{j,i}). \end{aligned}$$
Let \(u_\nu = \varvec{u}\cdot \varvec{\nu }\) and \(\sigma _\nu = \varvec{\sigma }\varvec{\nu } \cdot \varvec{\nu }\) be the normal components of \(\varvec{u}\) and \(\varvec{\sigma }\), respectively, and let \(\varvec{u}_\tau =\varvec{u}-u_\nu \varvec{\nu }\) and \(\varvec{\sigma }_\tau =\varvec{\sigma }\varvec{\nu }-\sigma _\nu \varvec{\nu }\) be their tangential components, respectively. In what follows, for simplicity, we sometimes do not indicate explicitly the dependence of various functions on the spatial variable \(\varvec{x}\).
Now let us introduce the classical formulation of the considered mechanical contact problem.
Problem \(\varvec{P}\): Find a displacement field \(\varvec{u}:\Omega \rightarrow \mathbb {R}^{d}\) and a stress field \(\varvec{\sigma }:\Omega \rightarrow \mathbb {S}^{d}\) such that
$$\begin{aligned} \varvec{\sigma } = \mathcal {A}(\varvec{\varepsilon }(\varvec{u})) \qquad&\text { in } \Omega \end{aligned}$$
$$\begin{aligned} \text {Div }\varvec{\sigma } + \varvec{f}_{0} = \varvec{0} \qquad&\text { in } \Omega \end{aligned}$$
$$\begin{aligned} \varvec{u} = \varvec{0} \qquad&\text { on } \Gamma _{D} \end{aligned}$$
$$\begin{aligned} \varvec{\sigma }\varvec{\nu } = \varvec{f}_{N} \qquad&\text { on } \Gamma _{N} \end{aligned}$$
$$\begin{aligned} -\sigma _{\nu } \in \partial j_{\nu }(u_{\nu }) \qquad&\text { on } \Gamma _{C} \end{aligned}$$
$$\begin{aligned} -\varvec{\sigma _{\tau }} \in h_{\tau }(u_{\nu })\,\partial j_{\tau }(\varvec{u_{\tau }}) \qquad&\text { on } \Gamma _{C} \end{aligned}$$
Here, Eq. (4.1) represents an elastic constitutive law, where \(\mathcal {A}\) is an elasticity operator. The equilibrium equation (4.2) reflects the fact that the problem is static. Equation (4.3) represents the clamped boundary condition on \(\Gamma _{D}\), and (4.4) represents the action of the traction on \(\Gamma _{N}\). Inclusion (4.5) describes the response of the foundation in the normal direction, whereas friction is modeled by inclusion (4.6), where \(j_\nu \) and \(j_\tau \) are given superpotentials and \(h_\tau \) is a given friction bound.
We consider the following Hilbert spaces
$$\begin{aligned}&\mathcal {H} = L^2(\Omega ;\mathbb {S}^{d}), \qquad V = \{\varvec{v} \in H^1(\Omega )^d\ |\ \varvec{v} = \varvec{0} \text { on } \Gamma _{D}\}, \end{aligned}$$
endowed with the inner scalar products
$$\begin{aligned}&(\varvec{\sigma },\varvec{\tau })_{\mathcal {H}} = \int _\Omega \sigma _{ij}\tau _{ij} \, dx, \qquad (\varvec{u},\varvec{v})_V = (\varvec{\varepsilon }(\varvec{u}),\varvec{\varepsilon }(\varvec{v}))_{\mathcal {H}}, \end{aligned}$$
respectively. The fact that the space V equipped with the norm \(\Vert \cdot \Vert _V\) is complete follows from Korn's inequality, whose application is justified because we assume that \(meas(\Gamma _{D}) > 0\). We consider the trace operator \(\gamma :V \rightarrow L^2(\Gamma _{C})^d=X\). By the Sobolev trace theorem we know that \(\gamma \in \mathcal {L}(V, X)\), and we denote its norm by \(c_\gamma \).
Now we present the hypotheses on data of Problem P.
\(\underline{H({\mathcal {A}})}:\) \({\mathcal {A}} :\Omega \times {{\mathbb {S}}}^d \rightarrow {{\mathbb {S}}}^d\) satisfies
(a) \(\mathcal {A}(\varvec{x},\varvec{\tau }) = (a_{ijkh}(\varvec{x})\tau _{kh})\) for all \(\varvec{\tau } \in {{\mathbb {S}}}^d\), a.e. \(\varvec{x}\in \Omega ,\ a_{ijkh} \in L^{\infty }(\Omega ),\)
(b) \(\mathcal {A}(\varvec{x},\varvec{\tau }_1) \cdot \varvec{\tau }_2 = \varvec{\tau }_1 \cdot \mathcal {A}(\varvec{x},\varvec{\tau }_2)\) for all \(\varvec{\tau }_1, \varvec{\tau }_2 \in {{\mathbb {S}}}^d\), a.e. \(\varvec{x}\in \Omega \),
(c) there exists \(m_{\mathcal {A}}>0\) such that \(\mathcal {A}(\varvec{x},\varvec{\tau }) \cdot \varvec{\tau } \ge m_{\mathcal {A}} \Vert \varvec{\tau }\Vert ^2\) for all \(\varvec{\tau } \in {{\mathbb {S}}}^d\), a.e. \(\varvec{x}\in \Omega \).
\(\underline{H(j_{\nu })}:\) \(j_{\nu } :\Gamma _C \times \mathbb {R} \rightarrow \mathbb {R}\) satisfies
(a) \(j_{\nu }(\cdot , \xi )\) is measurable on \(\Gamma _C\) for all \(\xi \in \mathbb {R}\) and there exists \(e \in L^2(\Gamma _C)\) such that \(j_{\nu }(\cdot ,e(\cdot ))\in L^1(\Gamma _C)\),
(b) \(j_{\nu }(\varvec{x}, \cdot )\) is locally Lipschitz continuous on \(\mathbb {R}\) for a.e. \(\varvec{x} \in \Gamma _C\),
(c) there exist \(c_{\nu 0}, c_{\nu 1} \ge 0\) such that
$$\begin{aligned} |\partial _2 j_{\nu }(\varvec{x}, \xi )| \le c_{\nu 0} + c_{\nu 1}|\xi |\quad \hbox {for all }\xi \in \mathbb {R},\hbox { a.e. }\varvec{x} \in \Gamma _C, \end{aligned}$$
(d) there exists \(\alpha _{\nu } \ge 0\) such that
$$\begin{aligned} (j_{\nu })_2^0(\varvec{x},\xi _1;\xi _2-\xi _1) + (j_{\nu })_2^0(\varvec{x},\xi _2;\xi _1-\xi _2)\le \alpha _{\nu }|\xi _1-\xi _2|^2 \end{aligned}$$
for all \(\xi _1, \xi _2 \in \mathbb {R}\), a.e. \(\varvec{x} \in \Gamma _C\).
\(\underline{H(j_{\tau })}:\) \(j_{\tau } :\Gamma _C \times \mathbb {R}^{d} \rightarrow \mathbb {R}\) satisfies
(a) \(j_{\tau }(\cdot , \varvec{\xi })\) is measurable on \(\Gamma _C\) for all \(\varvec{\xi } \in \mathbb {R}^{d}\) and there exists \(\varvec{e} \in L^2(\Gamma _C)^{d}\) such that \(j_{\tau }(\cdot ,\varvec{e}(\cdot ))\in L^1(\Gamma _C)\),
(b) there exists \(c_{\tau }>0\) such that
$$\begin{aligned} |j_{\tau }(\varvec{x}, \varvec{\xi }_1) - j_{\tau }(\varvec{x}, \varvec{\xi }_2)| \le c_{\tau } \Vert \varvec{\xi }_1 - \varvec{\xi }_2\Vert \quad \hbox { for all }\varvec{\xi }_1, \varvec{\xi }_2 \in \mathbb {R}^d,\hbox { a.e. }\varvec{x} \in \Gamma _C, \end{aligned}$$
(c) there exists \(\alpha _{\tau } \ge 0\) such that
$$\begin{aligned} (j_{\tau })_2^0(\varvec{x},\varvec{\xi }_1;\varvec{\xi }_2-\varvec{\xi }_1) + (j_{\tau })_2^0(\varvec{x},\varvec{\xi }_2;\varvec{\xi }_1-\varvec{\xi }_2)\le \alpha _\tau \Vert \varvec{\xi }_1-\varvec{\xi }_2\Vert ^2 \end{aligned}$$
for all \(\varvec{\xi }_1, \varvec{\xi }_2 \in \mathbb {R}^{d}\), a.e. \(\varvec{x} \in \Gamma _C\).
\(\underline{H(h)}:\) \(h_{\tau } :\Gamma _C \times \mathbb {R} \rightarrow \mathbb {R}\) satisfies
(a) \(h_{\tau }(\cdot , \eta )\) is measurable on \(\Gamma _C\) for all \(\eta \in \mathbb {R}\),
(b) there exists \(\overline{h_{\tau }} > 0\) such that \(0 \le h_{\tau }(\varvec{x}, \eta ) \le \overline{h_{\tau }}\) for all \(\eta \in \mathbb {R}\), a.e. \(\varvec{x} \in \Gamma _C\),
(c) there exists \(L_{h_\tau }>0\) such that
$$\begin{aligned} |h_{\tau }(\varvec{x}, \eta _1) - h_{\tau }(\varvec{x}, \eta _2)| \le L_{h_\tau } |\eta _1 - \eta _2|\quad \hbox { for all }\eta _1, \eta _2 \in \mathbb {R},\hbox { a.e. }\varvec{x} \in \Gamma _C. \end{aligned}$$
\((\underline{H_0}): \quad \varvec{f}_0 \in L^2(\Omega )^d, \quad \varvec{f}_N \in L^2(\Gamma _N)^d\).
We remark that condition \(H(j_{\tau })\)(b) is equivalent to the fact that \(j_{\tau }(\varvec{x},\cdot )\) is locally Lipschitz continuous and there exists \(c_{\tau } \ge 0\) such that \(\Vert \partial _2 j_{\tau }(\varvec{x} , \varvec{\xi } )\Vert \le c_{\tau }\) for all \(\varvec{\xi } \in \mathbb {R}^{d}\) and a.e. \(\varvec{x} \in \Gamma _C\). Moreover, condition H(h)(b) is sufficient to obtain the presented mathematical results, but from the mechanical point of view we should additionally assume that \(h_{\tau }(r)=0\) for \(r\le 0\). This corresponds to the situation when the body is separated from the foundation and the friction force vanishes.
Using the standard procedure, the Green formula and the definition of the generalized subdifferential, we obtain a weak formulation of Problem P in the form of a hemivariational inequality.
Problem \(\varvec{P_{hvi}}\). Find a displacement \(\varvec{u} \in V\) such that for all \(\varvec{v} \in V\)
$$\begin{aligned}&\langle A\varvec{u}, \varvec{v} \rangle _{V^*\times V} +\int _{\Gamma _C} j_3^0 (\varvec{x}, \gamma \varvec{u}(\varvec{x}), \gamma \varvec{u}(\varvec{x}); \gamma \varvec{v}(\varvec{x}))\, da \ge \langle \varvec{f}, \varvec{v} \rangle _{V^*\times V}. \end{aligned}$$
Here, the operator \(A :V \rightarrow V^*\) and \(\varvec{f} \in V^*\) are defined for all \(\varvec{w},\varvec{v} \in V\) as follows
$$\begin{aligned} \langle A\varvec{w}, \varvec{v} \rangle _{V^*\times V}&= (\mathcal {A}(\varvec{\varepsilon }(\varvec{w})),\varvec{\varepsilon }(\varvec{v}))_{\mathcal {H}},\\ \langle \varvec{f}, \varvec{v} \rangle _{V^* \times V}&= \int _{\Omega }\varvec{f}_{0}\cdot \varvec{v}\, dx + \int _{\Gamma _{N}}\varvec{f}_{N}\cdot \gamma \varvec{v}\, da \end{aligned}$$
and \(j:\Gamma _C\times \mathbb {R}^d \times \mathbb {R}^d \rightarrow \mathbb {R}\) is defined for all \(\varvec{\eta }, \varvec{\xi } \in \mathbb {R}^d\) and \(\varvec{x}\in \Gamma _C\) by
$$\begin{aligned} j(\varvec{x}, \varvec{\eta }, \varvec{\xi }) = j_{\nu }(\varvec{x}, \xi _{\nu }) + h_{\tau }(\varvec{x}, \eta _{\nu })\, j_{\tau }(\varvec{x}, \varvec{\xi }_{\tau }). \end{aligned}$$
It is easy to check that under assumptions \(H(\mathcal {A})\) and \((H_0)\), the operator A and the functional \(\varvec{f}\) satisfy H(A) and H(f), respectively. We also define the functional \(J :L^2(\Gamma _C)^d \times L^2(\Gamma _C)^d \rightarrow \mathbb {R}\) for all \(\varvec{w}, \varvec{v} \in L^2(\Gamma _C)^d \) by
$$\begin{aligned} J(\varvec{w}, \varvec{v}) = \int _{\Gamma _C} j(\varvec{x}, \varvec{w}(\varvec{x}), \varvec{v}(\varvec{x}))\, da. \end{aligned}$$
Below we present some properties of the functional J.
Lemma 6

Assumptions \(H(j_{\nu })\), \(H(j_{\tau })\) and H(h) imply that the functional J defined by (4.8)–(4.9) satisfies H(J).
We first observe that from \(H(j_\nu )\)(a),(b), \(H(j_\tau )\)(a),(b) and H(h)(a),(c) the function \(j(\cdot ,\varvec{\eta },\varvec{\xi })\) is measurable on \(\Gamma _C\), there exist \(\varvec{e}_1, \varvec{e}_2 \in L^2(\Gamma _C)^d\) such that \(j(\cdot , \varvec{e}_1(\cdot ), \varvec{e}_2(\cdot ))\in L^1(\Gamma _C)\), \(j(\varvec{x}, \cdot , \varvec{\xi })\) is continuous and \(j(\varvec{x}, \varvec{\eta }, \cdot )\) is locally Lipschitz. Moreover, by \(H(j_\nu )\)(c), \(H(j_\tau )\)(c) and H(h)(b) we easily conclude
$$\begin{aligned} \Vert \partial _3 j(\varvec{x}, \varvec{\eta },\varvec{\xi })\Vert&\le |\partial _2 j_\nu (\varvec{x}, \xi _\nu )| + h_\tau (\varvec{x},\eta _\nu ) \Vert \partial _2 j_\tau (\varvec{x},\varvec{\xi }_\tau )\Vert \le c_{\nu 0}+(c_{\nu 1}+\overline{h_\tau } \, c_\tau ) \Vert \varvec{\xi }\Vert . \end{aligned}$$
Applying Corollary 4.15 in [15], we obtain that functional J is well defined, locally Lipschitz with respect to the second variable and the growth condition H(J)(b) holds with \(c_{0}= \sqrt{2\, meas(\Gamma _C)}\,c_{\nu 0}\), \(c_{1}=\sqrt{2}\, (c_{\nu 1}+ \overline{h_{\tau }} c_{\tau })\) and \(c_{2}=0\).
To prove H(J)(c), we take \(\varvec{\eta }_i, \varvec{\xi }_i \in \mathbb {R}^d\), \(i = 1, 2\), and by the sum rules (cf. Proposition 3.35 in [15]) and from \(H(j_\nu )\)(d), \(H(j_\tau )\)(b),(c) and H(h)(b),(c), we obtain
$$\begin{aligned}&j_3^0(\varvec{x},\varvec{\eta }_1, \varvec{\xi }_1; \varvec{\xi }_2- \varvec{\xi }_1) + j_3^0(\varvec{x},\varvec{\eta }_2, \varvec{\xi }_2; \varvec{\xi }_1- \varvec{\xi }_2)\\&\quad \le (j_ {\nu })_2^0(\varvec{x},\xi _{1\nu }; \xi _{2\nu } - \xi _{1\nu }) + (j_{\nu })_2^0(\varvec{x},\xi _{2\nu }; \xi _{1\nu } - \xi _{2\nu }) \\&\qquad + h_{\tau }(\varvec{x},\eta _{1\nu })\left( (j_{\tau })_2^0(\varvec{x},\varvec{\xi }_{1\tau }; \varvec{\xi }_{2\tau } - \varvec{\xi }_{1\tau }) + (j_{\tau })_2^0(\varvec{x},\varvec{\xi }_{2\tau }; \varvec{\xi }_{1\tau } - \varvec{\xi }_{2\tau })\right) \\&\qquad + \big (h_{\tau }(\varvec{x}, \eta _{2\nu })-h_{\tau }( \varvec{x},\eta _{1\nu })\big )\,(j_{\tau })_2^0(\varvec{x},\varvec{\xi }_{2\tau }; \varvec{\xi }_{1\tau } - \varvec{\xi }_{2\tau }) \\&\quad \le (\alpha _\nu + \overline{h_\tau }\,\alpha _\tau ) \, \Vert \varvec{\xi }_1 - \varvec{\xi }_2\Vert ^2 + L_{h_\tau } c_\tau \Vert \varvec{\eta }_1 - \varvec{\eta }_2\Vert \, \Vert \varvec{\xi }_1 - \varvec{\xi }_2\Vert . \end{aligned}$$
And consequently, since
$$\begin{aligned} J_2^0(\varvec{w}, \varvec{v}; \varvec{z}) \le \int _{\Gamma _C} j_3^0 (\varvec{x}, \varvec{w}(\varvec{x}), \varvec{v}(\varvec{x}); \varvec{z}(\varvec{x}))\, da \end{aligned}$$
(cf. Corollary 4.15(iii) in [15]), we have
$$\begin{aligned}&J_2^0(\varvec{w}_1, \varvec{v}_1; \varvec{v}_2- \varvec{v}_1) + J_2^0(\varvec{w}_2, \varvec{v}_2; \varvec{v}_1- \varvec{v}_2) \\&\quad \le \int _{\Gamma _{C}} \left( (\alpha _{\nu } + \overline{h_{\tau }} \alpha _{\tau }) \Vert \varvec{v}_1(\varvec{x})- \varvec{v}_2(\varvec{x})\Vert ^2 + L_{h_\tau }c_{\tau } \Vert \varvec{w}_1(\varvec{x}) - \varvec{w}_2(\varvec{x})\Vert \, \Vert \varvec{v}_1(\varvec{x}) - \varvec{v}_2(\varvec{x})\Vert \right) \, da. \end{aligned}$$
Hence, by the Hölder inequality, we obtain H(J)(c) with \(m_\alpha = \alpha _{\nu } + \overline{h_{\tau }} \alpha _{\tau }\) and \(m_L = L_{h_\tau }c_{\tau }\). \(\square \)
With the above properties, we have the following existence and uniqueness result for Problem \(P_{hvi}\).
Theorem 7

If assumptions \(H(\mathcal {A})\), \(H(j_{\nu })\), \(H(j_{\tau })\), H(h), \((H_0)\) and \((H_s)\) hold, then Problems \(P_{hvi}\) and \(P_{incl}\) are equivalent. Moreover, they have a unique solution \(\varvec{u}\in V\) and this solution satisfies
$$\begin{aligned}&\Vert \varvec{u} \Vert _V \le c\, (1+\Vert \varvec{f} \Vert _{V^*}) \end{aligned}$$
We notice that the assumptions of Theorem 4 are satisfied. This implies that Problem \(P_{incl}\) has a unique solution. By (2.4) and Corollary 4.15(iii) in [15] we get that every solution to Problem \(P_{incl}\) solves Problem \(P_{hvi}\). Using a technique similar to that in the proof of Lemma 1, we can show that if Problem \(P_{hvi}\) has a solution, it is unique. Combining these facts, we obtain our assertion. \(\square \)
We conclude this section by providing a sample error estimate under additional assumptions on the solution regularity. We consider a polygonal domain \(\Omega \) and a space of continuous piecewise affine functions \(V^h\). We introduce the following discretized version of Problem \(P_{hvi}\).
Problem \(\varvec{P_{hvi}^h}\) Find a displacement \(\varvec{u}^h \in V^h\) such that for all \(\varvec{v}^h \in V^h\)
$$\begin{aligned}&\langle A\varvec{u}^h, \varvec{v}^h \rangle _{V^*\times V} + \int _{\Gamma _{C}} j_3^0(\varvec{x},\gamma \varvec{u}^h(\varvec{x}), \gamma \varvec{u}^h(\varvec{x}); \gamma \varvec{v}^h(\varvec{x}))\, da \ge \langle \varvec{f}, \varvec{v}^h \rangle _{V^*\times V}. \end{aligned}$$
Theorem 8

Assume that \(H(\mathcal {A})\), \(H(j_{\nu })\), \(H(j_{\tau })\), H(h), \((H_0)\) and \((H_s)\) hold, and assume the solution regularity \(\varvec{u} \in H^2(\Omega )^d\), \(\gamma \varvec{u} \in H^2(\Gamma _C)^d\), \(\varvec{\sigma }\varvec{\nu } \in L^2(\Gamma _C)^d\). Additionally, assume that \(\Gamma _C\) is a flat component of the boundary \(\Gamma \). Then, for the solution \(\varvec{u}\) to Problem \(P_{hvi}\) and the solution \(\varvec{u}^h\) to Problem \(P_{hvi}^h\), there exists a constant \(c>0\) such that
$$\begin{aligned} \Vert \varvec{u} - \varvec{u}^{h} \Vert _V \le c\, h. \end{aligned}$$
We denote by \(\Pi ^h \varvec{u} \in V^h\) the finite element interpolant of \(\varvec{u}\). By the standard finite element interpolation error bounds (see [5]) we have for all \(\varvec{\eta } \in H^2(\Omega )^d\) such that \(\gamma \varvec{\eta } \in H^2(\Gamma _C)^d\)
$$\begin{aligned} \Vert \varvec{\eta } - \Pi ^h\varvec{\eta }\Vert _V&\le c\,h\,\Vert \varvec{\eta }\Vert _{H^2(\Omega )^d}, \end{aligned}$$
$$\begin{aligned} \Vert \gamma \varvec{\eta } - \gamma \Pi ^h\varvec{\eta }\Vert _{L^2(\Gamma _C)^d}&\le c\,h^2\,\Vert \gamma \varvec{\eta }\Vert _{H^2(\Gamma _C)^d}. \end{aligned}$$
We now bound the residual term defined by (3.2), using a procedure similar to the one described in [8]. Let \(\varvec{v} = \pm \varvec{w}\) in inequality (4.7), where \(\varvec{w}\in V\) is an arbitrary function such that \(\varvec{w} \in C^\infty ({\overline{\Omega }})^d\) and \(\varvec{w} = \varvec{0}\) on \(\Gamma _D \cup \Gamma _C\). Then we obtain the identity
$$\begin{aligned}&\langle A\varvec{u}, \varvec{w} \rangle _{V^*\times V} = \langle \varvec{f}, \varvec{w} \rangle _{V^*\times V}. \end{aligned}$$
From this identity, using the fundamental lemma of the calculus of variations, we can deduce that
$$\begin{aligned} \text {Div }\mathcal {A}(\varvec{\varepsilon }(\varvec{u})) + \varvec{f}_{0}&= \varvec{0} \qquad \text { in } \Omega , \end{aligned}$$
$$\begin{aligned} \varvec{\sigma }\varvec{\nu }&= \varvec{f}_{N} \qquad \text { on } \Gamma _{N}. \end{aligned}$$
We multiply equation (4.13) by \(\varvec{v}^h - \varvec{u}\), integrate over \(\Omega \) and use the Green formula to obtain
$$\begin{aligned} \int _{\Gamma } \varvec{\sigma }\varvec{\nu } \cdot (\gamma \varvec{v}^h - \gamma \varvec{u})\, da - \int _{\Omega } \mathcal {A}(\varvec{\varepsilon }(\varvec{u})) \cdot \varvec{\varepsilon }(\varvec{v}^h - \varvec{u})\, dx + \int _{\Omega } \varvec{f}_{0} \cdot (\varvec{v}^h - \varvec{u})\, dx = 0. \end{aligned}$$
Using the homogeneous Dirichlet boundary condition satisfied by \(\varvec{v}^h - \varvec{u}\) on \(\Gamma _D\) and the traction boundary condition given by (4.14), we have
$$\begin{aligned} \langle A \varvec{u} , \varvec{v}^h - \varvec{u} \rangle _{V^*\times V} = \int _{\Gamma _{C}} \varvec{\sigma }\varvec{\nu }\cdot (\gamma \varvec{v}^h - \gamma \varvec{u})\, da + \langle \varvec{f} , \varvec{v}^h - \varvec{u} \rangle _{V^*\times V}. \end{aligned}$$
Using this and (3.2) we obtain
$$\begin{aligned}&R(\varvec{u}, \varvec{v}^{h}) = \int _{\Gamma _{C}} \varvec{\sigma }\varvec{\nu }\cdot (\gamma \varvec{v}^h - \gamma \varvec{u})\, da \le c\,\Vert \gamma \varvec{u} - \gamma \varvec{v}^h\Vert _{L^2(\Gamma _C)^d}. \end{aligned}$$
From inequalities (3.1), (4.11), (4.12) and (4.17) we get
$$\begin{aligned}&\Vert \varvec{u} - \varvec{u}^{h} \Vert _V^{2} \le c \, \Big (\Vert \varvec{u} - \Pi ^h\varvec{u} \Vert _V^2 + \Vert \gamma \varvec{u} - \gamma \Pi ^h\varvec{u} \Vert _{L^2(\Gamma _C)^d} \Big ) \le c\,h^2, \end{aligned}$$
and we obtain the required estimate. \(\square \)
5 Simulations
In this section we present the results of our computational simulations. From Theorems 4 and 7 we know that Problems \(P_{hvi}\) and \(P_{opt}\) are equivalent. Hence, we can apply the numerical scheme \(P_{opt}^h\) and use Theorem 5 to approximate the solution of \(P_{hvi}\). We employ the Finite Element Method and use the space \(V^h\) of continuous piecewise affine functions as a family of approximating subspaces. The algorithm used to calculate the solution of the discretized problem is based on the proof of Lemma 3 and is described by Algorithm 1.
In order to minimize the (not necessarily differentiable) function \(\mathcal {L}(\varvec{w}^h, \cdot )\) we use Powell's conjugate direction method. This method was introduced in [18] and does not require differentiability of the optimized function. Other, more refined nonsmooth optimization algorithms, described for example in [1], could also be adapted. As a starting point \(\varvec{u}_0^h\) we take the solution of the problem with \(\varvec{\sigma }\varvec{\nu }=\varvec{0}\) on \(\Gamma _{C}\), although it can be chosen arbitrarily.
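To make the structure of Algorithm 1 concrete, the following minimal Python sketch shows the outer fixed-point loop combined with a derivative-free Powell minimization. It is our own illustration, not the authors' implementation: the names `A`, `f`, `J_disc` and `u0` are hypothetical placeholders for the assembled finite element stiffness matrix, the load vector, the discretized boundary functional and the starting point.

```python
import numpy as np
from scipy.optimize import minimize

def solve_fixed_point(A, f, J_disc, u0, tol=1e-8, max_iter=100):
    """Fixed-point iteration suggested by the proof of Lemma 3: u_{k+1} = argmin_v L(u_k, v).

    A      -- assembled stiffness matrix (n x n), symmetric and positive definite
    f      -- load vector of length n
    J_disc -- callable J_disc(w, v) approximating the boundary functional J(gamma w, gamma v)
    u0     -- starting vector, e.g. the solution of the problem with zero contact traction
    """
    w = u0.copy()
    for _ in range(max_iter):
        # L(w, .) is convex and coercive under (H_s) but possibly nondifferentiable,
        # so a derivative-free Powell minimization is used for the inner problem.
        def L(v):
            return 0.5 * v @ A @ v - f @ v + J_disc(w, v)

        u = minimize(L, w, method="Powell").x
        if np.linalg.norm(u - w) <= tol * max(1.0, np.linalg.norm(w)):
            return u
        w = u
    return w
```

Under \((H_s)\) the outer map is a contraction (see the proof of Lemma 3), so the loop can be expected to terminate after a moderate number of iterations.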
We set \(d=2\) and consider a rectangular set \(\Omega = [0,2] \times [0,1]\) with the following parts of the boundary
$$\begin{aligned}&\Gamma _{D} = \{0\} \times [0,1], \quad \Gamma _{N} = ([0,2] \times \{1\}) \cup (\{2\} \times [0,1]), \quad \Gamma _{C} = [0,2] \times \{0\}. \end{aligned}$$
The elasticity operator \(\mathcal {A}\) is defined by
$$\begin{aligned}&\mathcal {A}(\varvec{\tau }) = 2\eta \varvec{\tau } + \lambda \text{ tr }(\varvec{\tau })I,\qquad \varvec{\tau } \in \mathbb {S}^2. \end{aligned}$$
Here I denotes the identity matrix, \(\text{ tr }\) denotes the trace of a matrix, and \(\lambda \) and \(\eta \) are the Lamé coefficients, \(\lambda , \eta >0\). In our simulations we take the following data
$$\begin{aligned} \lambda&= \eta = 4, \\ \varvec{u}_{0}(\varvec{x})&= (0,0), \quad \varvec{x} \in \Omega ,\\ j_\nu (\varvec{x}, \xi )&= \left\{ \begin{array}{ll} 0, &{}\xi \in (-\infty ,\, 0), \\ 10\, \xi ^2, &{}\xi \in [0,\, 0.1), \\ 0.1, &{}\xi \in [0.1,\, \infty ), \\ \end{array} \right. \quad \varvec{x} \in \Gamma _C,\\ j_{\tau }(\varvec{x}, \varvec{\xi })&= \ln (\Vert \varvec{\xi }\Vert + 1), \quad \varvec{\xi } \in \mathbb {R}^2,\ \varvec{x} \in \Gamma _C,\\ h_{\tau }(\varvec{x}, \eta )&= \left\{ \begin{array}{ll} 0, &{}\eta \in (-\infty , 0), \\ 8\, \eta , &{}\eta \in [0,0.1), \\ 0.8, &{}\eta \in [0.1,\infty ), \\ \end{array} \right. \quad \varvec{x} \in \Gamma _C,\\ \varvec{f}_0(\varvec{x})&= (-1.2,\, -0.9), \quad \varvec{x} \in \Omega ,\\ \varvec{f}_N(\varvec{x})&= (0,0), \quad \varvec{x} \in \Gamma _N. \end{aligned}$$
Both functions \(j_{\nu }\) and \(j_{\tau }\) are nondifferentiable and nonconvex. Our aim is to investigate the reaction of the body to various modifications of the input data.
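As a side note, these contact laws translate directly into code. The sketch below is our own illustration (with hypothetical function names) of how \(j_\nu \), \(j_\tau \) and \(h_\tau \) from the data set above can be implemented; it is not part of the original solver.

```python
import numpy as np

def j_nu(xi):
    # normal contact superpotential: zero for separation, quadratic response
    # up to penetration 0.1, constant (no further response) beyond it
    if xi < 0.0:
        return 0.0
    if xi < 0.1:
        return 10.0 * xi ** 2
    return 0.1

def j_tau(xi_tau):
    # tangential superpotential, nondifferentiable at 0 and nonconvex
    return np.log(np.linalg.norm(xi_tau) + 1.0)

def h_tau(eta):
    # friction bound depending on the normal displacement, capped at 0.8
    return np.clip(8.0 * eta, 0.0, 0.8)

def j_total(eta_vec, xi_vec, nu):
    # density j(x, eta, xi) = j_nu(xi_nu) + h_tau(eta_nu) * j_tau(xi_tau), cf. (4.8)
    xi_nu = xi_vec @ nu
    xi_t = xi_vec - xi_nu * nu
    eta_nu = eta_vec @ nu
    return j_nu(xi_nu) + h_tau(eta_nu) * j_tau(xi_t)
```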
In Fig. 1 we present the output obtained without any modifications. We push the body down and to the left with the force \(\varvec{f}_0\). As a result the body penetrates the foundation, but because of frictional forces it is squeezed to the left more in its upper part than in its lower part. In Fig. 2 we modify the function \(h_\tau \) to be given by
$$\begin{aligned}&h_{\tau }(\varvec{x}, \eta ) = \left\{ \begin{array}{ll} 0, &{}\eta \in (-\infty , 0), \\ 16\, \eta , &{}\eta \in [0,0.1), \\ 1.6 &{}\eta \in [0.1,\infty ) \\ \end{array} \right. \varvec{x} \in \Gamma _C. \end{aligned}$$
As a result we see that the penetration of the foundation does not change, but the increased friction prevents the body from sliding to the left on \(\Gamma _C\). In Fig. 3 we return to the original data and modify only the function \(j_\nu \) to the following
$$\begin{aligned}&j_\nu (\varvec{x}, \xi )= \left\{ \begin{array}{ll} 0, &{}\xi \in (-\infty ,\, 0), \\ 30\, \xi ^2, &{}\xi \in [0,\, 0.1), \\ 0.3, &{}\xi \in [0.1,\, \infty ), \\ \end{array} \right. \varvec{x} \in \Gamma _C. \end{aligned}$$
We can observe that the response of the foundation is more significant and the body moves downward only slightly. At the same time the friction decreases due to the influence of the function \(h_\tau \), which depends on the normal component of the displacement. In Fig. 4 we once more return to the original data and slightly increase the force \(\varvec{f}_0\) so that it is equal to
$$\begin{aligned}&\varvec{f}_0(\varvec{x}) = (-1.2,\, -1.0), \quad \varvec{x} \in \Omega . \end{aligned}$$
As a result, the body breaks through the threshold of the quadratic response of the function \(j_\nu \) into the part where this function is constant. This reflects the situation when there is no response of the foundation in the normal direction (e.g. the foundation is broken) and causes the penetration to increase drastically.
In order to illustrate the error estimate obtained in Sect. 4, we present a comparison of the numerical errors \(\Vert \varvec{u} - \varvec{u}^h\Vert _V\) computed for a sequence of solutions to the discretized problems. We use a uniform discretization of the problem domain according to the spatial discretization parameter h, so that the boundary \(\Gamma _C\) of \(\Omega \) is divided into 1/h equal parts. We start with \(h = 1\), which is successively halved. The numerical solution corresponding to \(h = 1/512\) was taken as the "exact" solution \(\varvec{u}\). The numerical results are presented in Fig. 5, where the dependence of the error \(\Vert \varvec{u} - \varvec{u}^h\Vert _V\) on h is plotted on a log–log scale. First order convergence can be observed, providing numerical evidence for the theoretical optimal order error estimate obtained at the end of Sect. 4.
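For completeness, the observed convergence order can be estimated from such an error table by a least-squares fit in log–log coordinates. The short sketch below is a generic illustration; the error values in it are placeholders, not the data plotted in Fig. 5.

```python
import numpy as np

# mesh sizes h = 1, 1/2, 1/4, ... and corresponding errors ||u - u^h||_V
h = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])
err = np.array([0.32, 0.17, 0.083, 0.042, 0.021])   # placeholder values only

# the slope of log(err) versus log(h) estimates the convergence order;
# a value close to 1 is consistent with the O(h) bound from Sect. 4
order, _ = np.polyfit(np.log(h), np.log(err), 1)
print(f"observed convergence order ~ {order:.2f}")
```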
The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 823731 CONMECH, and from the National Science Center of Poland under Maestro Project No. UMO-2012/06/A/ST1/00262.
References

1. Bagirov, A., Karmitsa, N., Mäkelä, M.M.: Introduction to Nonsmooth Optimization: Theory, Practice and Software. Springer, New York (2014)
2. Barboteu, M., Bartosz, K., Kalita, P.: An analytical and numerical approach to a bilateral contact problem with nonmonotone friction. Int. J. Appl. Math. Comput. Sci. 23(2), 263–276 (2013)
3. Barboteu, M., Bartosz, K., Kalita, P., Ramadan, A.: Analysis of a contact problem with normal compliance, finite penetration and nonmonotone slip dependent friction. Commun. Contemp. Math. 16(1), 1350016 (2014)
4. Barboteu, M., Han, W., Migórski, S.: On numerical approximation of a variational-hemivariational inequality modeling contact problems for locking materials. Comput. Math. Appl. 77(11), 2894–2905 (2019)
5. Ciarlet, P.G.: The Finite Element Method for Elliptic Problems. North-Holland, Amsterdam (1978)
6. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley Interscience, New York (1983)
7. Fan, L., Liu, S., Gao, S.: Generalized monotonicity and convexity of non-differentiable functions. J. Math. Anal. Appl. 279, 276–289 (2003)
8. Han, W.: Numerical analysis of stationary variational-hemivariational inequalities with applications in contact mechanics. Math. Mech. Solids, 1–15 (2017)
9. Han, W., Sofonea, M.: Numerical analysis of hemivariational inequalities in contact mechanics. Acta Numer. 28, 175–286 (2019)
10. Han, W., Migórski, S., Sofonea, M.: Advances in Variational and Hemivariational Inequalities, Advances in Mechanics and Mathematics, vol. 33. Springer, New York (2015)
11. Han, W., Sofonea, M., Barboteu, M.: Numerical analysis of elliptic hemivariational inequalities. SIAM J. Numer. Anal. 55(2), 640–663 (2017)
12. Han, W., Sofonea, M., Danan, D.: Numerical analysis of stationary variational-hemivariational inequalities. Numer. Math. 139(3), 563–592 (2018)
13. Haslinger, J., Miettinen, M., Panagiotopoulos, P.D.: Finite Element Method for Hemivariational Inequalities. Theory, Methods and Applications. Kluwer Academic Publishers, Boston (1999)
14. Miettinen, M., Haslinger, J.: Finite element approximation of vector-valued hemivariational problems. J. Glob. Optim. 10(1), 17–35 (1997)
15. Migórski, S., Ochal, A., Sofonea, M.: Nonlinear Inclusions and Hemivariational Inequalities: Models and Analysis of Contact Problems, vol. 26. Springer, New York (2013)
16. Migórski, S., Ochal, A., Sofonea, M.: A class of variational-hemivariational inequalities in reflexive Banach spaces. J. Elast. 127(2), 151–178 (2017)
17. Panagiotopoulos, P.D.: Hemivariational Inequalities, Applications in Mechanics and Engineering. Springer, New York (1993)
18. Powell, M.J.D.: An efficient method for finding the minimum of a function of several variables without calculating derivatives. Comput. J. 7(2), 155–162 (1964)
19. Wriggers, P.: Computational Contact Mechanics. Wiley, Chichester (2002)
Faculty of Mathematics and Computer Science, Jagiellonian University in Krakow, Krakow, Poland
Jureczka, M. & Ochal, A. Appl Math Optim (2019). https://doi.org/10.1007/s00245-019-09593-y
Health literacy among older adults is associated with their 10-years' cognitive functioning and decline - the Doetinchem Cohort Study
Bas Geboers (ORCID: orcid.org/0000-0001-5764-2096)1,
Ellen Uiters2,
Sijmen A. Reijneveld1,
Carel J. M. Jansen3,
Josué Almansa1,
Astrid C. J. Nooyens2,
W. M. Monique Verschuren2,4,
Andrea F. de Winter1 &
H. Susan J. Picavet2
Many older adults have low levels of health literacy which affects their ability to participate optimally in healthcare. It is unclear how cognitive decline contributes to health literacy. To study this, longitudinal data are needed. The aim of this study was therefore to assess the associations of cognitive functioning and 10-years' cognitive decline with health literacy in older adults.
Data from 988 participants (mean age = 65.3) of the Doetinchem Cohort Study were analyzed. Health literacy was measured by the Brief Health Literacy Screening. Memory, mental flexibility, information processing speed, and global cognitive functioning were assessed at the same time as health literacy and also 10 years earlier. Logistic regression analyses were performed, adjusted for age, gender, and educational level.
Higher scores on tests in all cognitive domains were associated with a lower likelihood of having low health literacy after adjustment for confounders (all ORs < 0.70, p-values<.001). Similar associations were found for past cognitive functioning (all ORs < 0.75, p-values<.05). Before adjustment, stronger cognitive decline was associated with a greater likelihood of having low health literacy (all ORs > 1.37, p-values<.05). These associations lost significance after adjustment for educational level, except for the association of memory decline (OR = 1.40, p = .023, 95% CI: 1.05 to 1.88).
Older adults with poorer cognitive functioning and stronger cognitive decline are at risk for having low health literacy, which can affect their abilities to promote health and self-manage disease. Low health literacy and declining cognitive functioning might be a barrier for person-centered care, even in relatively young older adults.
Many older adults in developed countries have low health literacy [1, 2], defined as "the degree to which people are able to access, understand, appraise and communicate information to engage with the demands of different health contexts to promote and maintain health across the life-course [3]." The high prevalence of low health literacy among older adults (up to 59% [1]) is problematic, as this population is in greater need of health information, for example due to the high prevalence of chronic diseases in this group. Low health literacy among older adults is associated with various undesirable health outcomes such as poorer self-rated health [4] and higher mortality [5]. It may also affect older adults' ability to benefit optimally from healthcare services, in which patients are nowadays expected to participate actively.
Poor cognitive functioning or cognitive decline may affect the health literacy level of older adults [6]. Lower educational level contributes to low health literacy as well [1], but this does not explain why health literacy tends to gradually decline over time in older adults [7, 8]. Specific domains of cognitive functioning that might play a role in the health literacy of older adults are memory, information processing speed, and mental flexibility. Cross-sectional studies have shown that poor cognitive functioning is related to low health literacy in older adults, for example among patients with diabetes [9]. Other studies suggest that poor cognitive functioning partially explains the association between low health literacy and undesirable health outcomes like poor physical health [10]. However, evidence lacks about how a decline in cognitive functioning affects health literacy at an older age. In order to answer such research questions, longitudinal data is needed.
Current evidence on the longitudinal association of cognitive decline with health literacy is very limited. One study has shown that cognitive abilities at age 11 are positively associated with health literacy at age 67 [11], but this finding does not concern cognitive decline. Only two studies addressed associations of cognitive decline with low health literacy [12] and with health literacy decline [7], but both had a relatively short follow-up time, i.e. less than 6 years. Evidence on the associations of cognitive decline with health literacy over longer follow-up periods is entirely lacking.
The aim of this study is to assess the associations of cognitive functioning and 10-years' cognitive decline with health literacy in older adults, by taking into account global cognitive functioning and the domains of memory, mental flexibility, and information processing speed.
Design, setting, and study population
Data from the prospective Doetinchem Cohort Study (DCS) were used for our analyses [13]. All participants are inhabitants of Doetinchem, a city in the East of the Netherlands (population around 57,000). The general aim is to study the impact of lifestyle and biological risk factors on health. The first round of data collection took place in 1987–1991 and included 7768 participants aged 20–59 years (response rate: 62%). Follow-up rounds took place every 5 years, with response rates of > 75%. The fifth round (2008–2012) included 4018 participants (51.7% of the primary sample). The sixth round is ongoing. The DCS was conducted according to the guidelines laid down in the Helsinki Declaration, with all procedures approved by the external Medical Ethics Committees. Written informed consent was obtained from all participants before the start of the study.
For our analyses, we included participants whose data from the sixth round were available by February 2016 (n = 1028). We excluded participants who reported to have had a stroke (2.7%, n = 28), as strokes are known to have an impact on cognitive functioning. As one of the cognitive tests was language specific, we also excluded participants who moved to the Netherlands after the age of 18 (1.2%, n = 12), leaving a set of 988 participants.
Data on cognitive functioning were collected from round three onwards from participants above the age of 45. We used data from round six for current cognitive functioning and from round four for past cognitive functioning. In this paper, round four will be described as the first measurement and round six will be described as the second measurement. The follow-up time between these measurements was 10 years (mean = 9.77, SD = 0.18). Cognitive data in the first measurement were collected from 737 participants (74.6% of our set). This lower number is the result of some participants not having reached the age of 45 when this measurement took place.
We used data on health literacy, cognitive functioning, and covariates.
Health literacy was measured during the second measurement by the validated Brief Health Literacy Screening [14, 15]. It consists of three items:
"How often do you have someone help you read hospital materials?"
"How confident are you filling out medical forms by yourself?"
"How often do you have problems learning about your medical condition because of difficulty understanding written information?"
Participants answered these questions on a 5-point scale. We added up the scores of all questions, which led to a continuous scale (3–15). We then categorized participants into groups with high health literacy (13 or higher, 81.8%) and low health literacy (12 or lower, 18.2%), based on previous research [16].
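A minimal sketch of this scoring and dichotomization step (our own illustration; it assumes the three item scores are already coded 1–5 with higher values indicating better health literacy) could look as follows.

```python
def classify_health_literacy(item1, item2, item3):
    """Sum the three BHLS items (each scored 1-5) and dichotomize the total."""
    total = item1 + item2 + item3              # continuous scale 3-15
    return "high" if total >= 13 else "low"    # cut-off of 13 based on previous research [16]
```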
Cognitive functioning
Four tests were used to measure cognitive functioning: the 15-word Verbal Learning Test (VLT) [17], the Stroop Color Word Test (SCWT) [18], the Verbal Fluency Test (VFT) [19], and the Letter Digit Substitution Test (LDST) [20]. The VLT consists of showing the participant 15 monosyllabic words three times, with a free recall test after every presentation and an additional free recall test after a 15 min delay. Scores are determined by the number of words that are correctly recalled. The SCWT consists of three subtasks. In the first subtask, participants are asked to read 40 written color names. In the second subtask, they are asked to name the color of 40 colored patches. The third subtask consists of naming the ink color in which 40 color names are shown incongruously (e.g., the word "red" shown in green). Scores are determined by the time it took the participant to complete each task. Scores of the SCWT were reversed, whereby a higher score indicated a better performance; scores were then normalized by log transformation, because of the skewed distribution. The VFT consists of asking the participant to name as many animals as possible within 1 minute. The LDST consists of a sheet of paper containing a table in which nine letters are matched with a digit (1 to 9) and a separate list of letters. The participant is asked to match the letters in the list with the corresponding digits as quickly as possible. The score is based on the number of letters that were matched correctly. All these tests are known to be sensitive to detect age-related cognitive decline, also in the middle-age range.
Scores on all tests were standardized. Scores from the first measurement were standardized based on the means and standard deviations of the same tests in the second measurement. Scores were then combined into three cognitive domains and a score for global cognitive functioning [21]. The following formulas were used:
$$
\begin{aligned}
\text{Memory} &= \left(\text{VLT}_{\text{Total}} + \text{VLT}_{\text{Maximum}} + \text{VLT}_{\text{Delayed}}\right)/3\\
\text{Information processing speed} &= \left(\text{Stroop}_{\text{Color names}} + \text{Stroop}_{\text{Color patches}} + \text{LDST}\right)/3\\
\text{Mental flexibility} &= \text{Stroop}_{\text{Ink color}}\\
\text{Global cognitive functioning} &= \left(\text{Stroop}_{\text{Ink color}} + \text{LDST} + \text{VLT}_{\text{Total}} + \text{VLT}_{\text{Delayed}} + \text{VFT}\right)/5
\end{aligned}
$$
Cognitive decline was calculated as the domain score at the first measurement minus the domain score at the second measurement, so that a higher score indicates stronger cognitive decline.
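The score construction above can be expressed compactly with pandas. This is a sketch under stated assumptions: the column names are hypothetical, standardization uses the second-measurement means and SDs as described in the text, and the reversal plus log transformation of the Stroop completion times is implemented here as score = -log(time), which is one reasonable reading of the text rather than the authors' exact procedure.

import numpy as np
import pandas as pd

def standardize(second, first=None):
    # Z-score test scores against the mean/SD of the second measurement,
    # as described in the text; first-measurement scores (if given) are
    # standardized against the same reference values.
    mu, sd = second.mean(), second.std()
    z_second = (second - mu) / sd
    z_first = (first - mu) / sd if first is not None else None
    return z_second, z_first

def stroop_score(time_seconds):
    # Reversal + log transform of Stroop completion times (assumption:
    # -log(time), so that higher = better and the skew is reduced).
    return -np.log(time_seconds)

def domain_scores(z):
    # Combine standardized test scores (hypothetical '*_z' columns) into
    # the domain composites given by the formulas above.
    out = pd.DataFrame(index=z.index)
    out["memory"] = z[["vlt_total_z", "vlt_max_z", "vlt_delayed_z"]].mean(axis=1)
    out["speed"] = z[["stroop_names_z", "stroop_patches_z", "ldst_z"]].mean(axis=1)
    out["flexibility"] = z["stroop_ink_z"]
    out["global"] = z[["stroop_ink_z", "ldst_z", "vlt_total_z",
                       "vlt_delayed_z", "vft_z"]].mean(axis=1)
    return out

# Decline per domain: first-measurement composite minus second-measurement
# composite, so a higher value means stronger decline.
# decline = domain_scores(z_first) - domain_scores(z_second)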
Covariates
Educational level was used as a covariate in our study. It was assessed based on the highest completed level of education in five categories: Very low (elementary school or less), low (lower vocational education), medium (medium general secondary education), high (medium vocational education to pre-university education), and very high (higher vocational education or university). Further potential confounders were age and gender. To control for potential non-linear effects of age, age squared was also used.
First, we compared the age, gender, educational level, and current cognitive functioning of the participants by level of health literacy, by using chi-square tests and independent sample t-tests. We also tested whether cognitive decline had taken place between the two measurement waves by using paired samples t-tests. Second, we assessed the associations of the various cognitive domains (both current and past) with health literacy by using logistic regression analyses, adjusted for covariates, and with health literacy as the outcome variable. Finally, we repeated the analyses with cognitive decline in all domains as the predictor variables, additionally adjusted for past cognitive functioning.
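A sketch of the regression step for a single domain (memory), using synthetic data and the statsmodels formula interface; the data frame, column names, and effect sizes here are illustrative stand-ins rather than the study data, and in the actual analyses each cognitive domain is entered in a separate model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in for the analysis dataset (one row per participant).
df = pd.DataFrame({
    "age": rng.uniform(50, 80, n),
    "gender": rng.integers(0, 2, n),
    "education": rng.integers(1, 6, n),        # 1 = very low ... 5 = very high
    "memory_current": rng.normal(0, 1, n),
    "memory_past": rng.normal(0, 1, n),
})
df["memory_decline"] = df["memory_past"] - df["memory_current"]
df["low_hl"] = (rng.uniform(size=n) < 0.18).astype(int)   # 1 = low health literacy

# Current functioning -> low health literacy, adjusted for age, age squared,
# gender, and educational level.
m1 = smf.logit("low_hl ~ memory_current + age + I(age**2) + C(gender) + C(education)",
               data=df).fit(disp=0)

# Decline -> low health literacy, additionally adjusted for past functioning.
m2 = smf.logit("low_hl ~ memory_decline + memory_past + age + I(age**2) "
               "+ C(gender) + C(education)", data=df).fit(disp=0)

# Report as odds ratios with 95% confidence intervals.
print(np.exp(m1.params), np.exp(m1.conf_int()), sep="\n")
print(np.exp(m2.params["memory_decline"]))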
Sensitivity analyses
We conducted two sensitivity analyses. First, we checked whether the use of an alternative cut-off point for high health literacy would lead to changes in our results. For this purpose, we used a cut-off point of 14 or higher (63.5% high health literacy vs. 81.8% in the main analyses). Second, we checked whether the use of data from a different wave for past cognitive functioning would change our results. For this purpose, we studied cognitive functioning at round three (15 year follow-up time, n = 643).
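Continuing the illustrative sketch above, the first sensitivity check amounts to re-deriving the dichotomous health literacy variable with the alternative cut-off and refitting the same models; bhls_total is a hypothetical column holding the 3–15 sum score, filled with synthetic values here.

# Sensitivity check: high health literacy defined as a total score of 14 or
# higher, so 'low' becomes total < 14; the model specification is unchanged.
df["bhls_total"] = rng.integers(3, 16, n)                 # synthetic stand-in
df["low_hl_alt"] = (df["bhls_total"] < 14).astype(int)

m1_alt = smf.logit("low_hl_alt ~ memory_current + age + I(age**2) "
                   "+ C(gender) + C(education)", data=df).fit(disp=0)
print(np.exp(m1_alt.params)["memory_current"])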
Characteristics of the sample at the second measurement and their associations with health literacy are presented in Table 1. Participants with low health literacy were older, more often had a (very) low level of education, and had poorer cognitive functioning in all domains. Participants who had also participated in cognitive testing in the first measurement (n = 737) had a mean age of 68.5 years at the second measurement, and showed cognitive decline in all domains between the two measurement waves (all p-values<.0001).
Table 1 Background characteristics and current cognitive functioning of participants by level of health literacy (n = 988)
Associations between cognitive functioning and health literacy
Associations of current and past cognitive functioning with health literacy are shown in Table 2. Poorer current cognitive functioning in all domains was significantly associated with low health literacy after adjustment for age and gender. These associations weakened after adjustment for educational level, but they all remained statistically significant. The same pattern of results was found for the associations between past cognitive functioning and health literacy.
Table 2 Likelihood of having low health literacy per one standard deviation better cognitive functioning among older adults, in odds ratios (and 95% confidence intervals)
Associations between cognitive decline and health literacy
Associations between cognitive decline and health literacy are presented in Table 3. After adjustment for age, gender, and past cognitive functioning, decline in all cognitive domains was statistically significantly associated with low health literacy. Further adjustment for educational level weakened all associations. Only the association between memory decline and health literacy then remained statistically significant (OR = 1.40, p = .023).
Table 3 Likelihood of having low health literacy per one standard deviation stronger cognitive decline among older adults, in odds ratios (and 95% confidence intervals)
The sensitivity analyses with the alternative cut-off point for low health literacy did not lead to relevant changes in the associations between current and past cognitive functioning and health literacy. The associations between cognitive decline and health literacy before adjustment for educational level were weakened, with only the association between health literacy and memory decline remaining statistically significant (OR = 1.30, p-value<.05), and the other associations losing significance (ORs < 1.42, all p-values between .055 and .11). After adjustment for educational level, none of the associations reached statistical significance (all ORs < 1.26, p-values>.14).
Using data from a different measurement wave for past cognitive functioning (round three) did not change our results regarding the associations of current and past cognitive functioning with health literacy. The only relevant change in the associations between cognitive decline and health literacy was that, after adjustment for educational level, the association between memory decline and health literacy lost significance (OR = 1.26, p-value = .15), while the association between mental flexibility decline and health literacy remained statistically significant (OR = 1.36, p-value<.05).
In this longitudinal study, we assessed the associations of current and past cognitive functioning and cognitive decline with health literacy in older adults. Our study shows that poorer cognitive functioning in both the present and the past is associated with low health literacy. Stronger cognitive decline is also associated with low health literacy, but after adjustment for educational level, only the association between health literacy and memory decline remains statistically significant.
Our study showed associations between poorer current cognitive functioning and low health literacy among older adults for global cognitive functioning and all its domains (i.e., memory, information processing speed, and mental flexibility). This confirms previous findings that cognitive functioning and health literacy are related [9, 10, 12] and suggests that health literacy in older adults is dependent on a variety of cognitive abilities. The identified associations between poorer past cognitive functioning and low health literacy are in line with the findings of Mõttus et al. [11], who found that cognitive functioning in childhood is associated with health literacy at an older age. Our findings support their conclusion that health literacy reflects lifelong general cognitive ability [11]. All associations weakened after adjustment for educational level, which could be explained by the associations of educational level with both health literacy [1] and cognitive functioning [22] found in earlier studies. However, in our study, the association between cognitive functioning and health literacy is for the most part independent of educational level, probably because educational level is determined not only by capabilities, but also by opportunities.
To our knowledge, our study is the first to suggest that associations between cognitive decline and low health literacy already exist in relatively young older adults. In our study, the associations between cognitive decline and health literacy seem to be partially independent of educational level. This may partially explain why health literacy gradually declines over time among older adults [7, 8]. This issue requires further study.
The relationships of cognitive functioning and cognitive decline with health literacy might also go beyond a one-way cause-and-effect relationship in which cognition affects the health literacy of older adults. Among older adults, having a low level of health literacy might also lead to poorer cognitive functioning. For example, older adults with low health literacy might engage in less physical activity [23], which might then contribute to lower cognitive health [24]. A degree of conceptual overlap may also exist between cognitive functioning and health literacy. However, empirical studies show that cognitive functioning and health literacy are separate constructs that have independent effects on various outcomes [25, 26].
This was the first longitudinal study on cognitive functioning and health literacy with a long follow-up time of 10 years. An important strength of our study was the use of a community-based sample. The longitudinal nature of the dataset used allowed us to study past cognitive functioning and cognitive decline. Also, we used a set of sensitive instruments to measure cognitive functioning.
Our study also had some limitations. First, our results may be affected by selective response and drop-outs. However, response rates were high in all rounds of the DCS (> 75%), limiting the potential for this bias. Additionally, we studied associations, which are much less affected by such bias than prevalences [27]. Second, we used a self-report instrument to measure health literacy. This may have led to an underestimation of some associations, as people may not always be aware of their health literacy limitations. However, the Brief Health Literacy Screening we used has been validated [14, 15] and is frequently used in research [28, 29].
Many healthcare services are increasingly shifting towards a more person-centered approach [30]. Person-centered care is defined as "an approach to care that consciously adopts the perspectives of individuals, families and communities, and sees them as participants as well as beneficiaries of trusted health systems that respond to their needs and preferences in humane and holistic ways" [30]. This approach to care assumes that patients actively take part in their own care and make their own decisions with regard to treatment options. However, low health literacy and declining cognitive functioning in older adults might be important barriers for person-centered care. It is therefore important that healthcare professionals involved in prevention and care for older adults are aware of the important impact of cognitive functioning on health literacy. Proactive planning of care is essential for older adults with both low health literacy and declining cognitive functioning. As cognitive decline is even associated with low health literacy in relatively young older adults, health professionals should be aware that early identification of the most vulnerable group is especially important.
The associations between poor cognitive functioning and low health literacy also suggest that interventions to mitigate the negative impacts of low health literacy should focus on lowering the cognitive demands of health information. This may include strategies like the use of plain language in health documents, but also the use of spoken animations [31] and teach-back methods [32]. It might be important to address both cognition and health literacy in order to optimize person-centered prevention and health care.
To further disentangle the intricate relationship between cognitive functioning and health literacy, future studies should measure both factors repeatedly over the course of several years. Additionally, such studies could examine the potential confounding or mediating role of health status in these relations. Future research should also focus on the possibility of limiting health literacy decline among older adults by using cognitive interventions, such as cognitive training [33].
Our findings show that, in older adults, poor cognitive functioning in both the present and the past is associated with low health literacy. This holds for memory, information processing speed, and mental flexibility, and most strongly for global cognitive functioning. Further, even in relatively young older adults, cognitive decline is associated with low health literacy, also when taking educational level into account. This has implications for prevention and care regarding older adults with low cognitive functioning.
DCS: Doetinchem Cohort Study
LDST: Letter Digit Substitution Test
SCWT: Stroop Color Word Test
VFT: Verbal Fluency Test
VLT: Verbal Learning Test
Kutner M, Greenberg E, Jin Y, Paulsen C. The health literacy of America's adults: results from the 2003 National Assessment of adult literacy (NCES 2006–483). Washington, DC: National Center for Education Statistics, US Department of Education; 2006.
HLS-EU Consortium. Comparative report of health literacy in eight EU member states. The European Health Literacy Survey HLS-EU. 2012. Available from: http://ec.europa.eu/chafea/documents/news/Comparative_report_on_health_literacy_in_eight_EU_member_states.pdf. Online publication.
Kwan B, Frankish J, Rootman I, Zumbo B, Kelly K, Begoray D, Kazanjian A, Mullet J, Hayes M. The development and validation of measures of "health literacy" in different populations. Vancouver, BC: University of British Columbia, Institute of Health Promotion Research, and University of Victoria, Centre for Community Health Promotion Research; 2006.
Toci E, Burazeri G, Jerliu N, Sørensen K, Ramadani N, Hysa B, Brand H. Health literacy, self-perceived health and self-reported chronic morbidity among older people in Kosovo. Health Promot Int. 2015;30:667–74.
Bostock S, Steptoe A. Association between low functional health literacy and mortality in older adults: longitudinal cohort study. BMJ. 2012;344:e1602.
von Wagner C, Steptoe A, Wolf MS, Wardle J. Health literacy and health actions: a review and a framework from health psychology. Health Educ Behav. 2009;36:860–77.
Kobayashi LC, Wardle J, Wolf MS, von Wagner C. Cognitive function and health literacy decline in a cohort of aging English adults. J Gen Intern Med. 2015;30:958–64.
Morris NS, Maclean CD, Littenberg B. Change in health literacy over 2 years in older adults with diabetes. Diabetes Educ. 2013;39:638–46.
Nguyen HT, Kirk JK, Arcury TA, Ip EH, Grzywacz JG, Saldana SJ, Bell RA, Quandt SA. Cognitive function is a risk for health literacy in older adults with diabetes. Diabetes Res Clin Pract. 2013;101:141–7.
Serper M, Patzer RE, Curtis LM, Smith SG, O'Conor R, Baker DW, Wolf MS. Health literacy, cognitive ability, and functional health status among older adults. Health Serv Res. 2014;49:1249–67.
Mõttus R, Johnson W, Murray C, Wolf MS, Starr JM, Deary IJ. Towards understanding the links between health literacy and physical health. Health Psychol. 2014;33:164–73.
Boyle PA, Yu L, Wilson RS, Segawa E, Buchman AS, Bennett DA. Cognitive decline impairs financial and health literacy among community-based older persons without dementia. Psychol Aging. 2013;28:614–24.
Verschuren WMM, Blokstra A, Picavet HSJ, Smit HA. Cohort profile: the Doetinchem Cohort Study. Int J Epidemiol. 2008;37:1236–41.
Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004;36:588–94.
Chew LD, Griffin JM, Partin MR, Noorbaloochi S, Grill JP, Snyder A, Bradley KA, Nugent SM, Baines AD, Vanryn M. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med. 2008;23:561–6.
Geboers B, de Winter AF, Luten KA, Jansen CJM, Reijneveld SA. The association of health literacy with physical activity and nutritional behavior in older adults, and its social cognitive mediators. J Health Commun. 2014;19(Suppl 2):61–76.
van der Elst W, van Boxtel MPJ, van Breukelen GJP, Jolles J. Rey's verbal learning test: normative data for 1855 healthy participants aged 24-81 years and the influence of age, sex, education, and mode of presentation. J Int Neuropsychol Soc. 2005;11:290–302.
van der Elst W, Van Boxtel MPJ, Van Breukelen GJP, Jolles J. The Stroop color-word test: influence of age, sex, and education; and normative data for a large sample across the adult age range. Assessment. 2006;13:62–79.
van der Elst W, Van Boxtel MPJ, Van Breukelen GJP, Jolles J. Normative data for the animal, profession and letter M naming verbal fluency tests for Dutch speaking participants and the effects of age, education, and sex. J Int Neuropsychol Soc. 2006;12:80–9.
van der Elst W, van Boxtel MPJ, van Breukelen GJP, Jolles J. The letter digit substitution test: normative data for 1,858 healthy participants aged 24-81 from the Maastricht aging study (MAAS): influence of age, education, and sex. J Clin Exp Neuropsychol. 2006;28:998–1009.
Nooyens ACJ, Bueno-de-Mesquita HB, van Boxtel MPJ, van Gelder BM, Verhagen H, Verschuren WMM. Fruit and vegetable intake and cognitive decline in middle-aged men and women: the Doetinchem Cohort Study. Br J Nutr. 2011;106:752–61.
Evans DA, Beckett LA, Albert MS, Hebert LE, Scherr PA, Funkenstein HH, Taylor JO. Level of education and change in cognitive function in a community population of older persons. Ann Epidemiol. 1993;3:71–7.
Bennett JS, Boyle PA, James BD, Bennett DA. Correlates of health and financial literacy in older adults without dementia. BMC Geriatr. 2012;12:30.
Andel R, Crowe M, Pedersen NL, Fratiglioni L, Johansson B, Gatz M. Physical exercise at midlife and risk of dementia three decades later: a population-based study of Swedish twins. J Gerontol A Biol Sci Med Sci. 2008;63:62–6.
Hawkins MW, Dolansky MA, Levin JB, Schaefer JT, Gunstad J, Redle JD, Josephson R, Hughes JW. Cognitive function and health literacy are independently associated with heart failure knowledge. Heart Lung. 2016;45:386–91.
Cohn JA, Shah AS, Goggins KM, Simmons SF, Kripalani S, Dmochowski RR, Schnelle JF, Reynolds WS. Health literacy, cognition, and urinary incontinence among geriatric inpatients discharged to skilled nursing facilities. Neurourol Urodyn. 2017;ahead of print.
Boshuizen HC, Viet AL, Picavet HS, Botterweck A, van Loon AJ. Non-response in a survey of cardiovascular risk factors in the Dutch population: determinants and resulting biases. Public Health. 2006;120:297–308.
Lubetkin EI, Zabor EC, Isaac K, Brennessel D, Kemeny MM, Hay JL. Health literacy, information seeking, and trust in information in Haitians. Am J Health Behav. 2015;39:441–50.
Adams AS, Parker MM, Moffet HH, Jaffe M, Schillinger D, Callaghan B, Piette J, Adler NE, Bauer A, Karter AJ. Communication barriers and the clinical recognition of diabetic peripheral neuropathy in a diverse cohort of adults: the DISTANCE study. J Health Commun. 2016;21:544–53.
World Health Organization. WHO global strategy on people-centred and integrated health services: interim report. Geneva, Switzerland: World Health Organization; 2015.
Meppelink CS, van Weert JCM, Haven CJ, Smit EG. The effectiveness of health animations in audiences with different health literacy levels: an experimental study. J Med Internet Res. 2015;17:e11.
Kornburger C, Gibson C, Sadowski S, Maletta K, Klingbeil C. Using "teach-back" to promote a safe transition from hospital to home: an evidence-based approach to improving the discharge process. J Pediatr Nurs. 2013;28:282–91.
Willis SL, Tennstedt SL, Marsiske M, Ball K, Elias J, Koepke KM, Morris JN, Rebok GW, Unverzagt FW, Stoddard AM, et al. Long-term effects of cognitive training on everyday functional outcomes in older adults. JAMA. 2006;296:2805–14.
The authors would like to thank the epidemiologists and fieldworkers of the Municipal Health Service in Doetinchem for their contribution to the data collection for this study. The authors would also like to thank Klaske Wynia, PhD, for her useful comments on our manuscript.
The DCS is supported by the Dutch Ministry of Health, Welfare and Sport and the National Institute for Public Health and the Environment. The Ministry had no role in the design or conduct of the study; the collection, analysis, or interpretation of the data; or the writing or approval of the manuscript.
Due to ethical restrictions related to participant consent, all relevant data are available upon request to the principal investigator of the Doetinchem Cohort Study: professor WMM Verschuren (email: [email protected]).
Department of Health Sciences, University Medical Center Groningen, University of Groningen, Hanzeplein 1, FA10, P.O. Box 30.001, 9700 RB, Groningen, the Netherlands
Bas Geboers, Sijmen A. Reijneveld, Josué Almansa & Andrea F. de Winter
Centre for Nutrition, Prevention and Health Services, National Institute for Public Health and the Environment (RIVM), Bilthoven, the Netherlands
Ellen Uiters, Astrid C. J. Nooyens, W. M. Monique Verschuren & H. Susan J. Picavet
Department of Communication and Information Studies, Faculty of Arts, University of Groningen, Groningen, the Netherlands
Carel J. M. Jansen
Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, the Netherlands
W. M. Monique Verschuren
Bas Geboers
Ellen Uiters
Sijmen A. Reijneveld
Josué Almansa
Astrid C. J. Nooyens
Andrea F. de Winter
H. Susan J. Picavet
BG conducted the analyses and drafted the manuscript. EU, SAR, CJMJ, AFW, and HSJP were closely involved in the conceptualization and planning of the study and the interpretation of the results. JA and ACJN contributed substantially to the preparation of the final dataset and the development and performance of the statistical procedures, and they reflected critically on the quality and accuracy of the final analyses. CJMJ and WMMV critically reviewed the manuscript for content and had an important advisory role throughout the writing process. HSJP and WMMV were responsible for data collection. All authors contributed substantially to the writing of the manuscript and have read and approved the final version.
Correspondence to Bas Geboers.
Written informed consent was obtained from all participants. Only participants who were capable of independently giving their consent at baseline were included in the Doetinchem Cohort Study. The Medical Ethics Committees of the Netherlands Organization of Applied Scientific Research and the University of Utrecht approved the study.
Geboers, B., Uiters, E., Reijneveld, S.A. et al. Health literacy among older adults is associated with their 10-years' cognitive functioning and decline - the Doetinchem Cohort Study. BMC Geriatr 18, 77 (2018). https://doi.org/10.1186/s12877-018-0766-7
Cognitive decline | CommonCrawl |
TX Alcoholic Beverage Code 246
Alcoholic Beverage
Alcohol or any beverage containing more than one-half of one percent of alcohol by volume.
Distilled Spirits
Alcohol or any liquor produced in whole or in part by the process of distillation and includes spirit coolers that may have an alcohol content as low as 4% alcohol by volume.
Illicit Beverage
An alcoholic beverage imported or transported in violation of the code; on which a tax imposed by the laws of this state has not been paid and to which the tax stamp hasn't been applied.
Permittee
A person who is the holder of a permit for liquor or any agent, servant, or employee of that person.
Licensee
A person who is the holder of a license for beer or any agent, servant, or employee of that person.
Beer
A malt beverage containing more than 1/2 of 1% of alcohol by volume and not more than 4% of alcohol by weight.
Ale or Malt Liquor
A malt beverage containing more than 4% alcohol by weight.
Mixed Beverage
A beverage in which two or more ingredients are mixed.
A bottled drink made from wine, fruit juice, and carbonated water.
Wine and Vinous Liquor
The product obtained from the alcoholic fermentation of juice of sound ripe or dried grapes, fruits, berries or honey.
What is needed by TABC to take administrative action?
Date, Time, Location, reason for being on premise, primary observations, ID of persons involved, statements, evidence, arrests, completion of reports.
101.07 Peace Officer...
All peace officers in the state SHALL enforce the provisions of this code and cooperate with and assist the commission in detecting violations and apprehending offenders.
1.05 General Penalty
A person who violates a provision of this code for which a "specific penalty" is not provided is guilty of a misdemeanor and on conviction is punishable by a fine of not less than $100 nor more than $1000 OR by confinement in the county jail for not more than one year OR both.
5.331 Public Disturbance Reports
Local law enforcement agencies in each county with a population of 3.3 million or more shall send to the commission reports and other data concerning shootings, stabbings, and other public disturbances that occur on the premises of a permittee or licensee. The reports and data shall be incorporated into the record of the permittee or licensee.
11.01 Permit Required
A person may manufacture, distill, brew, sell, import, export, transport, distribute, warehouse, store, possess, possess for the purpose of sale, bottle, rectify, blend, treat, fortify, mix, or process liquor, or possess equipment or material designed for or capable of use for manufacturing liquor, if the right or privilege of doing so is granted by this code.
11.04 Must Display Permit
All permits shall be displayed in a conspicuous place at all times on the licensed premises.
11.041 Warning Sign Required
Each holder of a permit shall display in a prominent place on the permit holder's premises a sign giving notice that it is unlawful for a person to carry a weapon on the premises unless the weapon is a concealed handgun the person is licensed to carry or the person is a peace officer. Applies if more than 51% of the business's income is from alcohol sales.
11.042 Health Risks Warning Sign
The commission by rule shall require the holder of a permit authorizing the sale of alcoholic beverages for on-premises consumption to display a warning sign on the door to each restroom on the permitted premises that informs the public of the risks of drinking alcohol during pregnancy.
11.06 Privlages Limited to Licensed Premises
No person may use a permit or exercise any privileges granted by the permit except at the place, address, premises, or location for which the permit is issued, except as otherwise provided by this code.
22.10 Opening Containers Prohibited
Except as authorized under Section 52.01 of this code, no person may break or open a container containing liquor or beer or possess an opened container of liquor or beer on the premises of a package store.
22.11 Consumption on Premises Prohibited
Except as authorized under Section 52.01, no person may sell, barter, exchange, deliver, or give away any drink or drinks of alcoholic beverages from a container that has been opened or broken on the premises of a package store.
22.13 Age of package store employees
A package store permittee may not knowingly utilize or employ any person under the age of 21 to work on the premises of a package store in any capacity or to deliver alcohol off the premises of a package store. This section shall not apply to a person who is employed by the person's parent or legal guardian to work in a package store that is owned by the parent or legal guardian.
22.17 Sale to customer in store at closing
Notwithstanding any other provision of this code, if a customer has entered a package store during hours in which the package store may sell alcohol and is still in the store at the time the hours of legal sale end, the permittee may allow the customer to remain in the store for a reasonable amount of time to finish shopping, and the permittee may sell an alcoholic beverage to that customer even though the sale occurs after the designated end of the hours of legal sale.
25.01 Wine and Beer Retailers Permit Authorized Activities
The holder of a wine and beer retailer's permit may sell:
(1) for consumption on or off the premises where sold, but not for resale, wine, beer, and malt liquors containing alcohol in excess of one-half of one percent by volume and not more than 17 percent by volume; and
(2) for consumption on the premises traditional port or sherry containing alcohol in excess of one-half of one percent by volume and not more than 24 percent by volume.
26.01 Wine and Beer Retailer's Off Premises Permit Authorized Activities
The holder of a wine and beer retailer's off- premise permit may sell for off-premises consumption only, in unbroken original containers, but not for resale, wine, beer, and malt liquors containing alcohol in excess of one-half of one percent by volume but not more than 17 percent by volume.
28.01 Mixed Beverage Permit
The holder of a mixed beverage permit may sell, offer for sale, and possess mixed beverages, including distilled spirits, for consumption on the licensed premises. The permit holder may also purchase wine, beer, ale, and malt liquor containing alcohol of not more than 21 percent by volume in containers of any legal size.
28.081 Substitution of brand without consent of consumer prohibited
A permit holder, or the holder's agent, servant, or employee, commits an offense if he or she substitutes one brand of alcoholic beverage for a brand that has been specifically requested by a consumer, unless the consumer is notified and consents to the substitution.
1.04 Consignment Sale
the delivery of alcoholic beverages under an agreement, arrangement, condition, or system by which the person receiving the beverages has the right at any time to relinquish possession to them or to return them to the shipper and in which title to the beverages remains in the shipper
1.04 Alcoholic Beverage
means alcohol, or any beverage containing more than one-half of one percent of alcohol by volume, which is capable of use for beverage purposes, either alone or when diluted.
1.04 Distilled Spirits
means alcohol, spirits of wine, whiskey, rum, brandy, gin, or any liquor produced in whole or in part by the process of distillation, including all dilutions or mixtures of them, and includes spirit coolers that may have an alcoholic content as low as four percent alcohol by volume and that contain plain, sparkling, or carbonated water and may also contain one or more natural or artificial blending or flavoring ingredients.
29.01 Mixed Beverage Late Hours permit
The holder of a mixed beverage late hours permit may sell mixed beverages on Sunday between the hours of 1:00 a.m. and 2 a.m. and on any other day between the hours of 12 midnight and 2 a.m. if the premises covered by the permit are in an area where the sale of mixed beverages during those hours is authorized by this code.
61.01 License Required General Provision
No person may manufacture or brew beer for the purpose of sale, import it into this state, distribute or sell it, or possess it for the purpose of sale without having first obtained an appropriate license or permit as provided in this code. Each licensee shall display his license at all times in a conspicuous place at the licensed place of business.
69.01 Retail Dealers On Premises License
The holder of a retail dealer's on-premise license may sell beer in or from any lawful container to the ultimate consumer for consumption on or off the premises where sold. The licensee may not sell beer for resale.
70.01 Retail Dealers On Premises Late Hours License
The holder of a retail dealer's on-premise late hours license may sell beer for consumption on the premises on Sunday between the hours of 1:00 a.m. and 2 a.m. and on any other day between the hours of 12 midnight and 2 a.m.
71.01 Retail Dealers Off Premises License
The holder of a retail dealer's off-premise license may sell beer in lawful containers to consumers, but not for resale and not to be opened or consumed on or near the premises where sold.
101.63 Sale to certain persons
A person commits an offense if the person with criminal negligence sells an alcoholic beverage to an habitual drunkard or an intoxicated or insane person.
(b) Except as provided in Subsection (c) of this section, a violation of this section is a misdemeanor punishable by a fine of not less than $100 nor more than $500, by confinement in jail for not more than one year, or by both.
101.71 Inspection of Vehicle
No holder of a permit issued under Title 3, Subtitle A, of this code, may refuse to allow the commission or its authorized representative or a peace officer, on request, to make a full inspection, investigation, or search of any vehicle.
101.72 Consumption of Alcoholic Beverage on Premises Licensed for Off-Premises Consumption
A person commits an offense if the person knowingly consumes liquor or beer on the premises of a holder of a wine and beer retailer's off-premise permit or a retail dealer's off-premise license.
103.01 Illicit Beverages Prohibited
No person may possess, manufacture, transport, or sell an illicit beverage.
103.03. SEIZURE OF ILLICIT BEVERAGES, ETC.
A peace officer may seize without a warrant:
(1) any illicit beverage, its container, and its packaging;
(2) any vehicle, including an aircraft or watercraft, used to transport an illicit beverage;
(3) any equipment designed for use in or used in manufacturing an illicit beverage; or
(4) any material to be used in manufacturing an illicit beverage.
105.01 Hours of Sale
Except as provided in Sections 105.02, 105.03, 105.04, and 105.08, no person may sell, offer for sale, or deliver any liquor:
(1) on New Year's Day, Thanksgiving Day, or Christmas Day;
(2) on Sunday; or
(3) before 10 a.m. or after 9 p.m. on any other day.
(b) When Christmas Day or New Year's Day falls on a Sunday, Subsection (a) of this section applies to the following Monday.
105.03 Hours of Sale Mixed Beverage
(a) No person may sell or offer for sale mixed beverages at any time not permitted by this section.
(b) A mixed beverage permittee may sell and offer for sale mixed beverages between 7 a.m. and midnight on any day except Sunday. On Sunday he may sell mixed beverages between midnight and 1:00 a.m. and between 10 a.m. and midnight, except that an alcoholic beverage served to a customer between 10 a.m. and 12 noon on Sunday must be provided during the service of food to the customer.
(c) In a city or county having a population of 800,000 or more, according to the last preceding federal census, or 500,000 or more, according to the 22nd Decennial Census of the United States, as released by the Bureau of the Census on March 12, 2001, a holder of a mixed beverage late hours permit may also sell and offer for sale mixed beverages between midnight and 2 a.m. on any day.
105.04 Hours of Sale WIne & Beer Retailers
The hours of sale and delivery for alcoholic beverages sold under a wine and beer retailer's permit or a wine and beer retailer's off-premise permit are the same as those prescribed for the sale of beer under Section 105.05 of this code, except that no sale shall be allowed between 2 a.m. and noon on Sunday.
105.05 Hours of Sale Beer
A person may sell, offer for sale, or deliver beer between 7 a.m. and midnight on any day except Sunday. On Sunday he may sell beer between midnight and 1:00 a.m. and between noon and midnight, except that permittees or licensees authorized to sell for on-premise consumption may sell beer between 10:00 a.m. and noon if the beer is served to a customer during the service of food to the customer.
(c) In a city or county having a population of 800,000 or more, according to the last preceding federal census, or 500,000 or more, according to the 22nd Decennial Census of the United States, as released by the Bureau of the Census on March 12, 2001, a holder of a retail dealer's on-premise late hours license may also sell, offer for sale, and deliver beer between midnight and 2 a.m. on any day
105.06 Hours of Consumption
In a standard hours area, a person commits an offense if he consumes or possesses with intent to consume an alcoholic beverage in a public place at any time on Sunday between 1:15 a.m. and 12 noon or on any other day between 12:15 a.m. and 7 a.m.
(c) In an extended hours area, a person commits an offense if he consumes or possesses with intent to consume an alcoholic beverage in a public place at any time on Sunday between 2:15 a.m. and 12 noon and on any other day between 2:15 a.m. and 7 a.m.
(d) Proof that an alcoholic beverage was possessed with intent to consume in violation of this section requires evidence that the person consumed an alcoholic beverage on that day in violation of this section.
(e) An offense under this section is a Class C misdemeanor.
105.10 Penalty
(a) A person commits an offense if the person:
(1) sells or offers for sale an alcoholic beverage during prohibited hours; or
(2) consumes or permits the consumption of an alcoholic beverage on the person's licensed or permitted premises during prohibited hours.
(b) An offense under this section is a Class A misdemeanor.
106.02 Purchase of Alcohol By A Minor
A minor commits an offense if the minor purchases an alcoholic beverage. A minor does not commit an offense if the minor purchases an alcoholic beverage under the immediate supervision of a commissioned peace officer engaged in enforcing the provisions of this code.
106.025 Attempt to purchase Alcohol by minor
A minor commits an offense if, with specific intent to commit an offense under Section 106.02 of this code, the minor does an act amounting to more than mere preparation that tends but fails to effect the commission of the offense intended.
106.03 Sale to Minors
A person commits an offense if with criminal negligence he sells an alcoholic beverage to a minor.
(b) A person who sells a minor an alcoholic beverage does not commit an offense if the minor falsely represents himself to be 21 years old or older by displaying an apparently valid proof of identification that contains a physical description and photograph consistent with the minor's appearance, purports to establish that the minor is 21 years of age or older, and was issued by a governmental agency. The proof of identification may include a driver's license or identification card issued by the Department of Public Safety, a passport, or a military identification card.
(c) An offense under this section is a Class A misdemeanor.
106.06 Purchasing Alcohol for a Minor; Furnishing Alcohol to a Minor
A person commits an offense if, with criminal negligence, he purchases an alcoholic beverage for, or gives or makes available an alcoholic beverage to, a minor, unless the person is the minor's adult parent, guardian, or spouse, or an adult in whose custody the minor has been committed by a court, and is visibly present when the minor possesses or consumes the alcoholic beverage.
106.07 Misrepresentation of age by a Minor
A minor commits an offense if he falsely states that he is 21 years of age or older or presents any document that indicates he is 21 years of age or older to a person engaged in selling or serving alcoholic beverages.
103.04 Illicit Beverage: Arrest of Person in Possession
A peace officer may arrest without a warrant any person found in possession of an illicit beverage.
109.33 Sales near schools, churches, or hospitals
The commissioners court of a county or the governing body of a municipality may adopt regulations prohibiting the sale of alcoholic beverages by a dealer whose place of business is within:
(1) 300 feet of a church, public or private school, or public hospital;
(2) 1,000 feet of a public school, if the commissioners court or the governing body receives a request from the board of trustees of a school district under Section 38.007, Education Code [Refer to Appendix for this citation]; or
(3) 1,000 feet of a private school if the commissioners court or the governing body receives a request from the governing body of the private school.
109.331 Sales near a child-care facility or day care
This section applies only to a permit or license holder under Chapter 25, 28, 32, 69, or 74 who does not hold a food and beverage certificate.
(b) Except as provided by this subsection, the provisions of Section 109.33 relating to a public school also apply to a day-care center and a child-care facility as those terms are defined by Section 42.002, Human Resources Code [Refer to Appendix for this citation]. Sections 109.33(a)(2) and (c) do not apply to a day-care center or child-care facility.
109.36 Consumption of Alcohol near homeless shelter or substance abuse centers
The commissioners court of a county or the governing body of a municipality may adopt regulations prohibiting the possession of an open container or the consumption of an alcoholic beverage on a public street, public alley, or public sidewalk within 1,000 feet of the property line of a homeless shelter that is not located in a central business district or a substance abuse treatment center that is not located in a central business district.
5.001 Administration of Code: Texas Alcoholic Beverage Commission
The Texas Alcoholic Beverage Commission is an agency of the state.
101.75 Consumption of Alcoholic Beverage near a School
A person commits an offense if the person possesses an open container or consumes an alcoholic beverage on a public street, public alley, or public sidewalk within 1,000 feet of the property line of a facility that is a public or private school [Refer to Appendix for this citation], including a parochial school, that provides all or any part of prekindergarten through twelfth grade.
101.02 Arrest without a Warrant
A peace officer may arrest without a warrant any person he observes violating any provision of this code or any rule or regulation of the commission. The officer shall take possession of all illicit beverages the person has in his possession or on his premises as provided in Chapter 103 of this code.
103.05 Report of Seizure
A peace officer who makes a seizure under Section 103.03 of this code shall make a report in triplicate which lists each item seized and the place and name of the owner, operator, or other person from whom it is seized. One copy of the report shall be verified by oath.
106.01 Definition of Minor
In this code, "minor" means a person under 21 years of age.
106.071 Punishment for alcohol-related offenses by a minor
Class C Misdemeanor
106.05 Possesion of Alcohol by a Minor
Except as provided in Subsection (b) of this section, a minor commits an offense if he possesses an alcoholic beverage.
106.041 Driving or operating a watercraft under the influence of alcohol by a minor
A minor commits an offense if the minor operates a motor vehicle in a public place, or a watercraft, while having any detectable amount of alcohol in the minor's system.
104.01 Lewd, immoral, or indecent conduct
No person authorized to sell beer at retail, nor the person's [his] agent, servant, or employee, may engage in or permit conduct on the premises of the retailer which is lewd, immoral, or offensive to public decency, including, but not limited to, any of the following acts:
(1) the use of loud and vociferous or obscene, vulgar, or indecent language, or permitting its use;
(2) the exposure of a person or permitting a person to expose himself or herself;
(3) rudely displaying or permitting a person to rudely display a pistol or other deadly weapon in a manner calculated to disturb persons in the retail establishment;
(4) solicitation of any person to buy drinks for consumption by the retailer or any of the retailer's [his] employees;
(5) being intoxicated on the licensed premises;
(6) permitting lewd or vulgar entertainment or acts;
(7) permitting solicitations of persons for immoral or sexual purposes;
(8) failing or refusing to comply with state or municipal health or sanitary laws or ordinances; or
(9) possession of a narcotic or any equipment used or designed for the administering of a narcotic or permitting a person on the licensed premises to do so.
(b) For purposes of Subsection (a)(4), a solicitation is presumed if an alcoholic beverage is sold or offered for sale for an amount in excess of the retailer's listed, advertised, or customary price. The presumption may be rebutted only by evidence presented under oath.
106.15 Prohibited Activities by persons younger than 18
A permittee or licensee commits an offense if he employs, authorizes, permits, or induces a person younger than 18 years of age to dance with another person in exchange for a benefit, as defined by Section 1.07, Penal Code [Refer to Appendix for this citation], on the premises covered by the permit or license.
(b) An offense under Subsection (a) is a Class A misdemeanor.
106.09 Employment of Minors
Except as provided by Subsections (b), (c), (e), and (f), no person may employ a person under 18 years of age to sell, prepare, serve, or otherwise handle liquor, or to assist in doing so.
(b) A holder of a wine only package store permit may employ a person 16 years old or older to work in any capacity.
(c) A holder of a permit or license providing for the on-premises consumption of alcoholic beverages may employ a person under 18 years of age to work in any capacity other than the actual selling, preparing, or serving of alcoholic beverages.
104.07 Posting of certain notices required
The holder of a permit or license under Chapter 25, 26, 28, 32, 69, or 71, other than the holder of a food and beverage certificate, shall display a sign containing the following notice in English and in Spanish:
WARNING: Obtaining forced labor or services is a crime under Texas law. Call the national human trafficking hotline: 1-888-373-7888. You may remain anonymous.
101.04 Consent to Inspection; Penalty
By accepting a license or permit, the holder consents to the commission, an authorized representative of the commission, or a peace officer entering the licensed premises at any time to conduct an investigation or inspect the premises for the purpose of performing any duty imposed by this code.
(b) A person commits an offense if the person refuses to allow the commission, an authorized representative of the commission, or a peace officer to enter a licensed or permitted premises as required by Subsection (a). An offense under this section is a Class A misdemeanor.
71.10 Warning Signs Required
Each holder of a retail dealer's off-premise license shall display in a prominent place on his premises a sign stating in letters at least two inches high: IT IS A CRIME (MISDEMEANOR) TO CONSUME LIQUOR OR BEER ON THESE PREMISES.
(b) A licensee who fails to comply with this section commits a misdemeanor punishable by a fine of not more than $25.
1.03 Public Policy
This code is an exercise of the police power of the state for the protection of the welfare, health, peace, temperance, and safety of the people of the state. It shall be liberally construed to accomplish this purpose.
Number of items: 582.
Moorthy, Bhagavatula and Madyastha, Prema and Madyastha, Madhava K (1989) Hepatotoxicity of pulegone in rats: Its effects on microsomal enzymes, in vivo. In: Toxicology, 55 (3). pp. 327-337.
Nair, Vijayakumaran P (1989) Development of Nonsocial Behaviour in the Asiatic Elephant. In: Journal of Navigation, 42 (2). pp. 278-290.
Agrawal, VK and Patnaik, LM and Goel, PS (1989) Specification-driven approach for protocol design of distributed computing systems. In: Journal of Microcomputer Applications, 12 (2). pp. 107-126.
Ajith, Kamath V and Appaji Rao, N and Vaidyanathan, CD (1989) Enzyme catalysed non-oxidative decarboxylation of aromatic acids II. Identification of active site residues of 2,3-dihydroxybenzoic acid decarboxylase from Aspergillus niger. In: Biochemical and Biophysical Research Communications, 165 (1). pp. 20-26.
Akila, R and Jacob, KT (1989) An SOx (x = 2, 3) sensor using β-alumina/Na2SO4 couple. In: Sensors and Actuators, 16 (4). pp. 311-323.
Al-Dhahir, TA and Sood, AK and Bhat, HL (1989) Incommensurate-commensurate phase transition in ferroelastic CsIO4. In: Solid State Communications, 70 (9). 863 -868.
Albert, IDL and Ramasesha, S (1989) Numerically Exact Study of Polarizabilities and Hyperpolarizabilities of Correlated Conjugated Organic Models. In: Molecular Crystals & Liquid Crystals, 168 . pp. 95-101.
Albert, IDL and Ramasesha, S (1989) Intermolecular interactions in π-conjugated systems: Application to polyenes. In: Physical Review B: Condensed Matter, 40 (12). pp. 8516-8521.
Alex, TK and Shrivastava, SK (1989) On-board correction of systematic error of Earth sensors. In: IEEE Transactions on Aerospace and Electronic Systems, 25 (3). pp. 373-379.
Amalendu, Chandra and Biman, Bagchi (1989) Exotic dielectric behavior of polar liquids. In: Journal of Chemical Physics, 91 (5). pp. 3056-3060.
Amalendu, Chandra and Biman, Bagchi (1989) Molecular theory of solvation and solvation dynamics of a classical ion in a dipolar liquid. In: Journal of Physical Chemistry, 93 (19). pp. 6996-7003.
Anand, Lalitha and Vithayathil, Paul Joseph (1989) Purification and properties of β-glucosidase from a thermophilic fungus Humicola lanuginosa (Griffon and Maublanc) Bunce. In: Journal of Fermentation and Bioengineering, 67 (6). pp. 380-386.
Anavekar, RV and Devaraj, N and Parthasarathy, G and Gopal, ESR and Ramakrishna, J (1989) Temperature and pressure dependence of direct current electrical resistivity in the Na2O-ZnO-B2O3 glass system. In: Physics and Chemistry of Glasses, 30 (5). pp. 172-179.
Angirasa, D and Srinivasan, J (1989) Natural convection flows due to the combined buoyancy of heat and mass diffusion in a thermally stratified medium. In: Journal of Heat Transfer, 111 (3). pp. 657-663.
Annakutty, KS and Kishore, K (1989) Synthesis and properties of flame retardant polyphosphate esters - a review. In: Journal of Scientific & Industrial Research, 48(10), (10). pp. 479-493.
Anvekar, Dinesh K and Sonde, BS (1989) Transducer Output Signal Processing Using Dual and Triple Microprocessor Systems. In: IEEE Transactions on Instrumentation and Measurement, 38 (3). pp. 834-836.
Aoki, K and Yanagitani, A and Masumoto, T and Chattopadhyay, K (1989) Formation and crystallization of hydrogen-induced amorphous SmFe2H3.6 alloy. In: Journal of the Less Common Metals, 147 (1). pp. 105-111.
Aradhya, KSS and Badarinarayana, K and Srinath, LS (1989) Mixed-mode stress intensity factors for arbitrarily oriented cracks in cylindrical shells under axial pull and torsion. In: Engineering Fracture Mechanics, 33 (3). pp. 445-449.
Arvind, Murching M and Srikant, YN (1989) Incremental attribute evaluation through recursive procedures. In: Computer Languages, 14 (4). pp. 225-237.
Asokan, S and Prasad, MVN and Parthasarathy, G and Gopal, ESR (1989) Mechanical and Chemical Thresholds in IV-VI Chalcogenide Glasses. In: Physical Review Letters, 62 (7). pp. 808-810.
Asokan, T and Nagabhushana, GR (1989) Negative resistance characteristics of CdO-based ceramics. In: Journal of Materials Science Letters, 8 (3). pp. 257-260.
Avudaithaia, M and Kutty, TRN (1989) Sacrificial photolysis of water on Ti1−xSnxO2 ultrafine powders. In: Materials Research Bulletin, 24 (9). pp. 1163-1170.
Badarinarayana, K and Aradhya, KSS (1989) On the investigation of singular stress field around radial cracks in annulii subjected to diametrical tension. In: Engineering Fracture Mechanics, 33 (3). pp. 437-443.
Bagchi, Biman (1989) Dynamics of Solvation and Charge Transfer Reactions in Dipolar Liquids. In: Annual Review of Physical Chemistry, 40 . pp. 115-141.
Bagchi, Biman and Aberg, Ulf and Sundstrom, Villy (1989) Analysis of Differing Experimental Results in Barrierless Reactions in Solution. In: Chemical Physics Letters, 162 (3). pp. 227-232.
Bagchi, Biman and Castner, Edward W and Fleming, Graham R (1989) On the generalized continuum model of dipolar solvation dynamics. In: Journal of Molecular Structure, 194 . pp. 171-181.
Bagchi, Biman and Chandra, Amalendu (1989) Polarization relaxation, dielectric dispersion, and solvation dynamics in dense dipolar liquid. In: Journal of Chemical Physics, 90 (12). pp. 7338-7345.
Bagchi, Biman and Chandra, Amalendu (1989) Solvation of an ion and of a dipole in a dipolar liquid: How different are the dynamics. In: Chemical Physics Letters, 155 (6). pp. 533-538.
Balakrishna, MS and Prakasha, TK and Krishnamurthy, SS (1989) Diphosphazanes as Ligands. Symbiosis of Phosphorus Chemistry and Organometallic Chemistry. In: Adv. Organomet, 57BUA9 . pp. 205-211.
Balakrishnan, N and Zrnic, Dusan S and Goldhirsh, Julius and Rowland, John (1989) Comparison of Simulated Rain Rates from Disdrometer Data Employing Polarimetric Radar Algorithms. In: Journal of Atmospheric and Oceanic Technology, 6 (3). pp. 476-486.
Balasubrahmanyam, SN and Rajendran, N and Singh, DK (1989) Correlation of $^{13}C$ Shifts with Substituent Parameters in 3,4-Diphenyl-1,2,5-oxadiazole 2-Oxides Substituted at the Para-Positions of Either or Both Phenyl Rings. In: Bulletin of the Chemical Society of Japan, 62 (10). pp. 3334-3342.
Balasubramanian, S and Rao, KJ (1989) Electronegativities of constituent atoms and Tc of superconductors. In: Solid State Communications, 71 (11). pp. 979-982.
Balasubramanian, SV and Easwaran, KRK (1989) Aggregation of calcium ionophore (A23187) in phospholipid vesicles. In: Biochemical and Biophysical Research Communications, 158 (3). pp. 891-897.
Balasubramanyan, DR and Bhat, SV (1989) High-pressure NMR investigations of the protonic conductor (NH4)4Fe(CN)6.1.5H2O. In: Journal of Physics: Condensed Matter, 1 (8). pp. 1495-1502.
Bandyopadhyay, T and Sarma, DD (1989) Calculation of Coulomb interaction strengths for 3d transition metals and actinides. In: Phys. Rev. B, 39 (6). pp. 3517-3521.
Banerjee, S and Gunasekaran, MK and Raychaudhuri, AK (1989) Absolute velocity measurement using ultrasonic spectrometer. In: National Symposium on Acoustics '89, 14-16 December 1989, Calcutta, India, pp. 336-338.
Bansal, Manju and Pattabiraman, N (1989) Molecular mechanics studies on poly(purine) poly(pyrimidine) sequences in DNA: Polymorphism and local variability. In: Biopolymers, 28 (2). pp. 531-548.
Baranidharan, S and Sundaramoorthy, M and Sekhar, JA and Gopal, ESR and Sasisekharan, V (1989) X-Ray studies of $Al_6CuLi_3$ quasicrystal. In: Phase Transitions, 16 (1-4). pp. 615-620.
Baranidharan, S and Balagurusamy, VSK and Srinivasan, A and Gopal, ESR and Sasisekharan, V (1989) Non-periodic tilings in 2-dimensions: 4- and 7-fold symmetries. In: Phase Transitions: A Multinational Journal, 16 . pp. 621-626.
Barnes, AJ and Rao, CNR and Ratajczak, H (1989) Special Issue - Molecular Spectroscopy and Molecular Structure - A Collection of Invited Papers in Honor of Orville-Thomas, W.J. In: Journal of Molecular Structure, 198. R9-R10.
Baruah, Jubaraj B and Samuelson, Ashoka G (1989) Copper(I) promoted C–C bond forming reactions: direct activation of allyl alcohols. In: Journal of Organometallic Chemistry, 361 (3). C57-C60.
Baskaran, N and Prakash, V and Rao, Appu AG and Radhakrishnan, AN and Savithri, HS and Rao, Appaji N (1989) Mechanism of interaction of O-amino-D-serine with sheep liver serine hydroxymethyltransferase. In: Biochemistry, 28 (25). 9607 -9612.
Baskaran, N and Prakash, V and Savithri, HS and Radhakrishnan, AN and Rao, Appaji N (1989) Mode of Interaction of Aminooxy Compounds with Sheep Liver Serine Hydroxymethyltransferase. In: Biochemistry, 28 (25). pp. 9613-9617.
Behera, N and Malik, RP and Kaul, RK (1989) Genus-two correlators for critical Ising model. In: Physical Review D: Particles and Fields, 40 (6). pp. 1993-2003.
Bhanu, VA and Kishore, K (1989) A demonstration of the inhibitory role of oxygen during the room-temperature radical polymerization of styrene initiated by cobalt(II)-sodium borohydride redox system. In: Macromolecules, 22 (8). pp. 3491-3492.
Bhaskar, Dasgupta and Veni, Madhavan CE (1989) An approximate algorithm for the minimal vertex nested polygon problem. In: Information Processing Letters, 33 (1). pp. 35-44.
Bhaskar, Vijaya K and Rao, Subba GSR (1989) Vinyl radical induced Michael additions: Total synthesis of (±)-seychellene. In: Tetrahedron Letters, 30 (2). pp. 225-228.
Bhaskarwar, Ashok N (1989) General population balance model of dissolution of polydisperse particles. In: AIChE Journal, 35 (4). pp. 658-661.
Bhat, PJ and Moudgal, NR (1989) Isolation and characterization of a gonadotropin receptor binding inhibitor from porcine follicular fluid. In: International Journal of Peptide & Protein Research, 33 (1). pp. 59-66.
Bhat, GS and Narasimha, R and Arakeri, VH (1989) A new method of producing local enhancement of buoyancy in liquid flows. In: Experiments in Fluids, 7 (2). pp. 99-102.
Bhat, NB and Nandy, SK (1989) Special Purpose Architecture for Accelerating Bitmap DRC. In: 26th Conference on Design Automation, 1989, 25-29 June, Las Vegas,Nevada,USA, pp. 674-677.
Bhat, SV and Srinivasu, VV and Rao, CNR (1989) Certain novel features of the R.F. response of the YBa2Cu3O7−x superconductors. In: Physica C: Superconductivity, 162-164 (Part 2). pp. 1571-1572.
Bhattacharya, TK and Mahapatra, PR (1989) A Powerful Range-Doppler Clutter Rejection Strategy for Navigational Radars. In: IEEE National Aerospace and Electronics Conference, 1989. NAECON 1989, 22-26 May, Dayton,OH, Vol.1, 132-137.
Bhattacharya, TK and Mahapatra, PR and Balakrishnan, N (1989) Design of Clutter-Optimum Discrete Phase Coded Radar signals Via Integer Programming. In: Proceedings of the 1989 International Symposium on Noise and Clutter Rejection in Radars and Imaging Sensors, 1989, IEICE, pp. 227-232.
Bhattacharyya, D and Bansal, Manju (1989) A self-consistent formulation for analysis and generation of non-uniform DNA structure. In: Journal of Biomolecular Structure & Dynamics, 6 (4). pp. 635-653.
Bhattacharyya, S and Nath, G (1989) Generalised vortex motion of compressible fluid with and without magnetic field and suction. In: International Journal of Engineering Science, 27 (12). pp. 1639-1650.
Bhavani, K and Karande, Anjali A and Shaila, MS (1989) Preparation and characterization of monoclonal antibodies to nucleocapsid protein N and H glycoprotein of rinderpest virus. In: Virus Research, 12 (4). pp. 331-348.
Biswas, NN (1989) Maximum compatible classes from compatibility matrices. In: Sadhana, 14 (3). pp. 213-218.
Brahma, KK and Dash, PK and Dattaguru, B (1989) Observation of crack closure using a crack mouth opening displacement gauge. In: International Journal of Fatigue, 11 (1). pp. 37-41.
Brahmachari, SK and Mishra, RK and Bagga, R and Ramesh, N (1989) DNA duplex with the potential to change handedness after every half a turn. In: Nucleic Acids Research, 17 (18). pp. 7273-7281.
Brahmachari, Vani and Nagasuma, R and Brahmachari, Samir K (1989) Preparation of megabase DNA from adult insects and mammalian spleen for pulsed-field gel electrophoresis. In: Journal of Genetics, 68 (3). pp. 185-186.
Budkuley, JS and Patil, KC (1989) Synthesis and Thermoanalytical Properties of Mixed Metal sulfite Hydrazinate Hydrates I. In: Synthesis and Reactivity in Inorganic and Metal-Organic Chemistry, 19 (9). pp. 909-922.
Budkuley, Jayant S and Patil, KC (1989) Thermal properties of magnesium bisulphite hydrazinate hydrate. In: Thermochimica Acta, 153 . pp. 419-422.
Chakrabarti, A (1989) A Simplified Approach to a Travelling Wave Problem. In: Journal of Applied Mathematics and Mechanics, 69 (3). pp. 165-167.
Chakrabarti, A (1989) On Some Dual Integral-Equations Involving Bessel-Function Of Order One. In: Indian Journal of Pure and Applied Mathematics, 20 (5). pp. 483-492.
Chakrabarti, A (1989) Solution of Two Singular Integral Equations Arising in Water Wave Problems. In: Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM), 69 (12). pp. 457-459.
Chakrabarti, A (1989) A note on the porous-wavemaker problem. In: Acta Mechanica, 77 (1-2). pp. 121-129.
Chanda, Manas and Rempel, GL (1989) Attaching chelating ligands to polybenzimidazole via epoxidation to obtain metal selective sorbents. In: Journal of Polymer Science Part A: Polymer Chemistry, 27 (10). pp. 3237-3250.
Chandra, AK (1989) Excited-state structures of molecules with nearby electronic states. In: Journal of Molecular Structure: THEOCHEM, 202 . pp. 249-263.
Chandra, Amalendu and Bagchi, Biman (1989) Breakdown of Onsager's conjecture on distance dependent polarization relaxation in solvation dynamics. In: Journal of Chemical Physics, 91 (4). pp. 2594-2598.
Chandra, Amalendu and Bagchi, Biman (1989) Force constants of solvent polarization fluctuations: Softening at intermediate wave vectors. In: Journal of Chemical Physics, 91 (11). pp. 7181-7186.
Chandra, Amalendu and Bagchi, Biman (1989) Microscopic expression for frequency and wave vector dependent dielectric constant of a dipolar liquid. In: Journal of Chemical Physics, 90 (3). pp. 1832-1840.
Chandra, Amalendu and Bagchi, Biman (1989) Microscopic expression for time-dependent solvation energy of ions and dipoles in dense polar liquids. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 101 (1). pp. 83-88.
Chandra, Amalendu and Bagchi, Biman (1989) A molecular theory of collective orientational relaxation in pure and binary dipolar liquids. In: Journal of Chemical Physics, 91 (3). pp. 1829-1842.
Chandrapa, KG and Muralidhara, MK and Muralidhara, BK and Seshan, S (1989) Physical and thermal properties of sodium silicate-bonded self-setting sands. In: Fonderie-Fondeur d'Aujourd'hui, 83 . pp. 25-34.
Chandrasekhar, Sosale and Mukkamala, Ravindranath and Neela, BS and Suryanarayana, Ramakuma Rao and Viswamitra, MA (1989) Cycloaddition of C,N-diphenylnitrone with 6,6-diphenylfulvene: concerted or stepwise? X-ray crystal structure of the product. In: Journal of Chemical Research (8). pp. 252-253.
Chandrasekhar, Indira and Sasisekharan, V (1989) The nomenclature and conformational analysis of lipids and lipid analogs. In: Molecular and Cellular Biochemistry, 91 (1-2). pp. 173-182.
Chandrasekhar, Sosale and Ravindranath, Mukkamala (1989) 'Preferential spontaneous resolution' of p-anisyl $\alpha$-methylbenzyl ketone. In: Tetrahedron Letters, 30 (45). pp. 6207-6208.
Chandrashekhara, K (1989) Analysis of Long Cantilever Cylindrical Shell Subjected to Wind Loading. In: Journal of Engineering Mechanics, 115 (9). pp. 2101-2105.
Changkakoti, R and Pappu, SV (1989) Towards optimum diffraction efficiency for methylene blue sensitized dichromated gelatin holograms. In: Optics & Laser Technology, 21 (4). pp. 259-263.
Changkakoti, Rupak and Pappu, Sastry V (1989) Methylene blue sensitized dichromated gelatin holograms: a study of their storage life and reprocessibility. In: Journal of the Optical Society of America B, 28 (2). pp. 340-344.
Charles, Kanakam C and Neelakandha, Mani S and Halasya, Ramanathan and Subba, Rao GSR (1989) Syntheses based on cyclohexadienes. Part 2. Convenient synthesis of 6-alkylsalicylates, 6-alkyl 2,4-dihydroxybenzoate, and 2,5-dialkylresorcinols. In: Journal of the Chemical Society, Perkin Transactions 1 (11). pp. 1907-1913.
Chattopadhyay, K and Swarnamba, PR and Srivastava, JPN (1989) The evolution of microstructure in the undercooled Zn-Sn entrained droplets. In: Metallurgical Transactions A, 20 (10). pp. 2109-2115.
Chaudhuri, UR and Ramkumar, K and Satyam, M (1989) Degradation of characteristics of tin oxide films deposited by spray pyrolysis. In: Journal of Physics D:Applied Physics, 22 (9). pp. 1413-1414.
Chaudhuri, Uma Ray and Ramkumar, K and Satyam, M (1989) Electrical conduction in a tin-oxide-silicon interface prepared by spray pyrolysis. In: Journal of Applied Physics, 66 (4). pp. 1748-1752.
Chauhan, VS and Uma, K and Kaur, Paramjeet and Balaram, P (1989) Conformations of dehydrophenylalanine containing peptides: nmr studies of an acyclic hexapeptide with two delta Z-Phe residues. In: Biopolymers, 28 (3). pp. 763-771.
Chetal, AR and Mahto, P and Sarode, PR (1989) EXAFS studies of some gadolinium systems. In: Physica B: Condensed Matter, 158 (1-3). pp. 219-220.
Chitralekh, S and Avudainayagam, KV and Pappu, SV (1989) Rotation sensitivity of Lau fringes: an analysis based on coherence theory. In: Optics & Laser Technology, 21 (4). pp. 265-267.
Chitralekha, Sundergopal and Avudainayagam, Kodikullam V and Pappu, Sastry V (1989) Role of spatial coherence on the rotation sensitivity of Lau fringes: an experimental study. In: Journal of the Optical Society of America B, 28 (2). pp. 345-349.
Choudhuri, Arnab Rai (1989) Locating the seat of the solar dynamo. In: 142nd Symp Of The International Astronomical Union : Basic Plasma Processes On The Sun, DEC 01-05, 1989, Bombay, INDIA.
Choudhuri, Arnab Rai (1989) The evolution of loop structures in flux rings within the solar convection zone. In: Solar Physics, 123 (2). pp. 217-239.
Chouhan, Harish M and Anand, GV (1989) Bearing Estimation Of A Sound Source In A Shallow Water Channel. In: Fourth IEEE Region 10 International Conference TENCON '89, 22-24 November, Bombay,India, pp. 259-262.
D'Silva, S and Choudhuri, Arnab Rai (1989) Effect of Turbulence on Emerging Magnetic Flux Tubes in the Convection Zone. In: 142nd Symp Of The International Astronomical Union : Basic Plasma Processes On The Sun, DEC 01-05, 1989, Bangalore, India.
Damodaran, KV and Rao, KJ (1989) Elastic Properties of Alkali Phosphomolybdate Glasses. In: Journal of the American Ceramic Society, 72 (4). pp. 533-539.
Damodaran, KV and Rao, KJ (1989) The mixed alkali effect in the elastic properties of phosphomolybdate glasses. In: Physics and Chemistry of Glasses, 30 (4). pp. 130-134.
Damodaran, KV and Rao, KJ (1989) Elastic properties of phosphotungstate glasses. In: Journal of Materials Science, 24 (7). pp. 2380-2386.
Das, P (1989) Computer Programs for the Boltzmann Collision Matrix Elements. In: Computer Physics Communications, 55 (2). pp. 177-187.
Dasappa, S and Shrinivasa, U and Baliga, BN and Mukunda, H (1989) Five-kilowatt wood gasifier technology: Evolution and field experience. In: Sadhana, 14 (3). pp. 187-212.
Dasgupta, Chandan (1989) Variational calculation for the spin-1/2 Heisenberg antiferromagnet on a square lattice. In: Physical Review B: Condensed Matter, 39 (1). pp. 386-391.
Datta, G and Hosur, RV and Verma, NC and Khetrapal, CL and Gurnani, S (1989) Mechanism of interaction of the antileukemic drug cytosine arabinoside with aromatic peptides: role of sugar conformation and peptide backbone. In: Physiological Chemistry & Physics & Medical NMR, 21 (4). pp. 279-288.
Deevi, Sarojini and Kishore, K and Verneker, Pai VR (1989) Role of aluminum-magnesium alloys in humid aging of composite solid propellants. In: Journal of Propulsion and Power, 5 (4). pp. 411-420.
Desayi, Prakash and Rao, Balaji K (1989) Probabilistic Analysis of Cracking Moment of Reinforced Concrete Beams. In: ACI Structural Journal, 86 (3). pp. 235-241.
Desayi, Prakash and Rao, Balaji K (1989) Reliability of reinforced concrete beams in limit state of cracking— failure rate analysis approach. In: Materials and Structures, 22 (4). pp. 269-279.
Deshpande, RJ and Nayak, UB (1989) Flotation of low-grade phyllite deposits of tungsten from Degana, Rajasthan. In: Transactions of the Indian Institute of Metals, 42 (2). pp. 109-114.
Devajyothi, C and Brahmachari, Vani (1989) Modulation of DNA methyltransferase during the life cycle of a mealybug Planococcus lilacinus. In: FEBS Letters, 250 (2). pp. 134-138.
Dey, J (1989) A simple technique for approximate solutions of the Falkner-Skan equation. In: Acta Mechanica, 77 (3-4). pp. 299-305.
Dhanasekaran, N and Moudgal, NR (1989) Biochemical and histological validation of a model to study follicular atresia in rats. In: Endocrinologia Experimentalis, 23 (3). pp. 155-166.
Dhanasekaran, N and Moudgal, NR (1989) Studies on follicular atresia: role of gonadotropins and gonadal steroids in regulating cathepsin-D activity of preovulatory follicles in the rat. In: Molecular and Cellular Endocrinology, 63 (1-2). pp. 133-142.
Diallo, Adama and Barrett, Thomas and Barbron, Monique and Subbarao, Shaila M and Taylor, William P (1989) Differentiation of rinderpest and peste des petits ruminants viruses using specific cDNA clones. In: Journal of Virological Methods, 23 (2). pp. 127-136.
Dickerson, RE and Bansal, Manju (1989) Definitions and nomenclature of nucleic acid structure parameters. In: EMBO Journal, 8 (1). pp. 1-4.
Diehl, P and Wasser, HR and Nagana, Gowda GA and Suryaprakash, N and Khetrapal, CL (1989) Metal-ion-ligand interactions in thermotropic liquid crystals. In: Chemical Physics Letters, 159 (2-3). pp. 199-201.
Diehl, P and Wasser, HR and Nagana, Gowda GA and Suryaprakash, N and Khetrapal, CL (1989) An NMR study of the coexistence of nematic and "induced" smectic phases in mixtures of nematics. In: Chemical Physics Letters, 159 (4). pp. 318-320.
Divakar, S (1989) Determination of distances of sugar protons from Mn2+ in concanavalin A. In: Indian J Biochem Biophys, 26 (3). pp. 190-195.
Dorai, C and Sastry, PS (1989) A Parallel Distributed Processing Model for 2D Shape Recognition. In: National Conference on Circuits and Systems, Nov. 1989, Roorkee University.
Drakshayani, DN and Sankar, Chitra and Mallya, RM (1989) The reduction of manganese nodules by hydrogen. In: Thermochimica Acta, 144 (2). pp. 313-328.
Dutt, Narayana D (1989) Linear segmentation of velocity profile in sea water. In: Journal of Sound and Vibration, 132 (1). pp. 161-163.
Dutt, Vinayak and Anand, GV (1989) Tomographic Approach to Acoustic Noise Mapping in Shallow Sea. In: Fourth IEEE Region 10 International Conference,TENCON '89, 22-24 November, Bombay,India, pp. 581-584.
Ekambareswara, Rao and Ramesh, N and Choudhury, D and Brahmachari, SK and Sasisekharan, V (1989) Role of the environment in the interaction of nonintercalators with Z-DNA. In: Journal of Biomolecular Structure & Dynamics, 7 (2). pp. 335-345.
Floreanini, R and Percacci, R and Rajaraman, R (1989) Four-dimensional current algebra from Chern-Simons theory. In: Physics Letters B, 231 (1-2). pp. 119-124.
Fujii, I and Tsuchiya, K and Shikakura, Y and Murthy, MS (1989) Consideration on thermal decomposition of calcium hydroxide pellets for energy storage. In: Journal of Solar Energy Engineering, 111 (3). pp. 245-250.
Gadagkar, Raghavendra (1989) An undesirable property of Hill's diversity index $N_2$. In: Oecologia, 80 (1). pp. 140-141.
Ganguli, AK and Manivannan, V and Sood, AK and Rao, CNR (1989) New family of thallium cuprate superconductors not containing calcium or barium: $TlSr_{n+1-x}Ln_xCu_nO_{2n+3+\delta}$ (Ln=La, Pr, or Nd). In: Applied Physics Letters, 55 (25). pp. 2664-2666.
Ganguli, AK and Nagarajan, R and Ranga, Rao G and Vasanthacharya, NY and Rao, CNR (1989) Elusive superconductivity in polycrystalline samples of layered lanthanum nickelates. In: Solid State Communications, 72 (2). pp. 195-197.
Ganguli, AK and Nanjundaswamy, KS and Rao, CNR and Sequeira, A and Rajagopal, H (1989) A neutron diffraction study of the superconductor, $Tl_{0.5}Pb_{0.5}CaSr_2Cu_2O_y$. In: Materials Research Bulletin, 24 (7). pp. 883-888.
Ganguli, AK and Rao, CNR and Sequelra, A and Rajagopal, H (1989) An investigation of the PrBa2Cu3O7-δ system. In: Zeitschrift für Physik B: Condensed Matter, 74 (2). pp. 215-219.
Ganguli, AK and Vijayaraghavan, R and Rao, CNR (1989) Novel series of thallium cuprate superconductors. In: Physica C: Superconductivity, 162-164 (2). pp. 867-868.
Ganguli, AK and Vijayaraghavan, R and Rao, CNR (1989) Novel 1122 thallium cuprates showing high Tc superconductivity: $TlCa_{1-x}Y_xBa_2Cu_2O_y$, $Tl_{1-x}Pb_xCaSr_2Cu_2O_y$ and $TlCa_{0.5}Ln_{0.5}Sr_2Cu_2O_y$. In: Phase Transitions: A Multinational Journal, 19 (4). pp. 213-222.
Ganguly, K and Sreedhar, K and Raju, AR and Demazeau, G and Hagenmuller, P (1989) Electron paramagnetic resonance studies of some ternary oxides of copper (II). In: Journal of Physics: Condensed Matter, 1 (1). pp. 213-226.
Ganju, Ramesh K and Vithayathil, Paul J and Murthy, SK (1989) Purification and characterization of two xylanases from Chaetomium thermophile var. coprophile. In: Canadian Journal of Microbiology, 35 (9). pp. 836-842.
Gannabathula, Prasad and Murthy, ISN (1989) ARMA Order Selection for Eeg - an Empirical Comparison Of Three Order Selection Algorithms. In: the Annual International Conference of the IEEE Engineering in Engineering in Medicine and Biology Society, 1989. Images of the Twenty-First Century, 9-12 November, Seattle,WA, Vol.5, 1686-1687.
Gaonkar, Gopal H and Nagaraja, CS and Nagabhushanam, J (1989) Prediction of inplane damping from deterministic and stochastic models. In: Vertica, 13 (2). pp. 143-158.
Ghose, D and Dam, B and Prasad, UR (1989) A Spread Acceleration Guidance Scheme for Command Guided Surface-To-Air Missiles. In: IEEE 1989 National Aerospace and Electronics Conference. NAECON 1989, 22-26 May, Dayton,OH, Vol.1, 202-208.
Ghose, D and Prasad, UR (1989) Multicriterion differential games with applications to combat problems. In: Computers & Mathematics with Applications, 18 (1-3). pp. 117-126.
Ghose, D and Prasad, UR (1989) Solution concepts in two-person multicriteria games. In: Journal of Optimization Theory and Applications, 63 (2). pp. 167-189.
Ghoshal, SK and Gupta, M and Rajaraman, V (1989) A parallel multistep predictor-corrector algorithm for solving ordinary differential equations. In: Journal of Parallel and Distributed Computing, 6 (3). pp. 636-648.
Gibbons, AN and Srikant, YN (1989) A class of problems efficiently solvable on mesh-connected computers including dynamic expression evaluation. In: Information Processing Letters, 32 (6). pp. 305-311.
GopalKrishna, * and Wiita, Paul J and Saripalli, L (1989) The Formation, Numbers And Radio Output Of Giant Radio Galaxies. In: Monthly Notices of the Royal Astronomical Society, 239 (1). pp. 173-187.
Gopalan, R and Somashekar, BR and Dattaguru, B (1989) Environmental effects on fibre-Polymer composites. In: Polymer Degradation and Stability, 24 (4). pp. 361-371.
Goverdhan, Mehta and Chebiyyam, Prabhakar and Natarajan, Padmaja and Suryanarayan, Ramakumar Rao and Mysore, Viswamitra A (1989) From cages to wedges and clefts: Design of some novel hosts based on the triquinane framework. In: Tetrahedron Letters, 30 (49). pp. 6895-6898.
Govindarajan, R and Kumar, R and Kumar, D and Patnaik, LM (1989) PROMIDS: A PROtotype multi-rIng data flow system for functional programming languages. In: Microprocessing and Microprogramming, 26 (3). pp. 161-173.
Gullapalli, S and Shivaswamy, V and Ramasarma, T and Kurup, CK (1989) Increase in alpha-glycerophosphate dehydrogenase and other oxidoreductase activities of hepatic mitochondria on administration of vanadate to the rat. In: Indian Journal of Biochemistry & Biophysics, 26 (4). pp. 227-233.
Gunasekaran, MK and Jayalakshmi, Y (1989) Ratio transformer bridge for the measurement of dielectric constant of lossy liquids. In: Journal of Physics E: Scientific Instruments, 22 (12). pp. 1000-1004.
Gupta, Sen DP (1989) Rural Electrification in India: The Achievements and the Shortcomings. In: Fourth IEEE Region 10 International Conference TENCON '89, 22-24 November, Bombay,India, pp. 752-755.
Gurunath, B and Biswas, NN (1989) An Algorithm for Multiple Output Minimization. In: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 8 (9). pp. 1007-1013.
Hajra, JP (1989) Thermodynamic consistency of the ternary functions. In: Metallurgical and Materials Transactions A, 20 (10). pp. 2047-2055.
Harshavardhan, Solomon K and Hegde, MS (1989) On photooxidation of Ge in obliquely deposited Ge and Ge25X75(X = S, Se, Te) thin films. In: Solid State Communications, 69 (1). pp. 117-120.
Harshavardhan, Solomon K and Yalamanchi, RS and Rao, Kameswara L (1989) Formation of crystalline diamond from amorphous diamond-like carbon films by pulsed laser irradiation. In: Applied Physics Letters, 55 (4). pp. 351-353.
Hatwalne, Yashodhan and Krishnamurthy, HR and Pandit, Rahul and Ramaswamy, Sriram (1989) Small-angle grain boundaries in quasicrystals. In: Physical Review Letters, 62 (23). pp. 2699-2702.
Hegde, MS and Barboux, P and Tarascon, JM and Venkatesan, T and Chang, CC and Wu, XD and Inam, A (1989) Electronic structure of high-Tc Ba0.6K0.4BiO3 by x-ray photoelectron spectroscopy. In: Physical Review B: Condensed Matter, 39 (7). pp. 4752-4755.
Hemant, Yennawar P and Viswamitra, MA (1989) Crystal structure of L-lysyl-L-glutamic acid dihydrate. In: International Journal of Peptide & Protein Research, 34 (1). pp. 42-45.
Hohmann, G (1989) Comparative Study of Vocal Communication in Two Asian Leaf Monkeys, Presbytis johnii and Presbytis entellus. In: Folia Primatologica, 52 (1-2). pp. 27-57.
Hohmann, G (1989) Group fission in Nilgiri langurs (Presbytis johnii). In: International Journal of Primatology, 10 (5). pp. 441-454.
Hohmann, G (1989) Vocal communication of wild bonnet macaques (Macaca radiata). In: Primates, 30 (3). pp. 325-345.
Iyengar, RR and Bhattacharya, PK (1989) Thermodynamic View Of Hydrophobic Association Of Side-Chains Of Aromatic Amino-Acid. In: Indian Journal of Chemistry - Section A: Inorganic, Physical, Theoretical and Analytical Chemistry, 28 (6). pp. 445-451.
Iyengar, RN (1989) Response of nonlinear systems to narrow-band excitation. In: Structural Safety, 6 (2-4). pp. 177-185.
Iyengar, RN and Manohar, CS (1989) Probability Distribution of the Eigenvalues of the Random String Equation. In: Journal of Applied Mechanics, 56 (1). pp. 202-207.
Iyengar, Sundara Raja KT and Raghuprasad, BK and Ananthan, H (1989) Effect of interaction of macrocracks on the stress intensity factor in a beam. In: Engineering Fracture Mechanics, 32 (3). pp. 379-386.
Iyer, Suman B and Kumar, Vikram and Harshavardhan, KS (1989) Interface State Density Distribution in Amorphous/Crystalline Silicon Heterostructures. In: Japanese Journal of Applied Physics, 28, Part-2 (5). L744-L746.
Iyer, Anantha GV and Gananath, SN (1989) EPR spectroscopic studies of calcic plagioclases from the Archean anorthosites of Holenarasipur, Karnataka Craton, south India. In: Current Science, 58 (16). pp. 915-917.
Jacob, KT and Kale, GM and Iyengar, GNK (1989) Chemical potentials of oxygen for fayalite-quartz-iron and fayalite-quartz-magnetite equilibria. In: Metallurgical and Materials Transactions B, 20 (5). pp. 679-685.
Jacob, KT and Mathews, T (1989) Applications of solid electrolytes in galvanic sensors. In: High conductivity solid ionic conductors: recent trends and applications, 1989, Singapore.
Jacob, KT and Sheela, Ramasesha K (1989) Design of temperature-compensated reference electrodes for non-isothermal galvanic sensors. In: Solid State Ionics, 34 (3). pp. 161-166.
Jacob, KT and Swaminathan, K and Sreedharan, OM (1989) Stability constraints in the design of galvanic cells using composite electrolytes and auxiliary electrodes. In: Solid State Ionics, 34 (3). pp. 167-173.
Jagadish, N and Kumar, Mohan J and Patnaik, LM (1989) An Efficient Scheme for Interprocessor Communication Using Dual-Ported RAMs. In: IEEE Micro, 9 (5). pp. 10-19.
Jagannadha, Rao A and Chakraborti, R (1989) Studies on non-specific interference due to serum in the avidin-biotin microELISA for monkey chorionic gonadotropin and a method for its elimination. In: Journal of Immunological Methods, 125 (1-2). pp. 261-264.
Jagannathan, NR (1989) Carbon-13 chemical shielding tensors in alkanedicarboxylic acids. Influence of molecular geometry on the carboxyl carbon tensors in alkanedicarboxylic acids and related compounds. In: Magnetic Resonance in Chemistry, 27 (10). pp. 941-946.
Jagannathan, R and Simon, R and Sudarshan, ECG and Mukunda, N (1989) Quantum theory of magnetic electron lenses based on the Dirac equation. In: Physics Letters A, 13 (8-9). pp. 457-464.
Jain, Sampat R and Mimani, Tanu and Vittal, Jagadese J (1989) Chemical Aspects of the Synergistic Hypergolic Ignition in Hybrid Systems with N2O4 as Oxidizer. In: Combustion Science and Technology, 64 (1-3). pp. 29-41.
Jain, SR and Oommen, C (1989) Thermal ignition studies on metallized fuel-oxidizer systems. In: Journal of Thermal Analysis and Calorimetry, 35 (4). pp. 1119-1128.
Jamadagni, HS and Sonde, BS (1989) Near-Optimum Synchronisers for DPSK Modulation Schemes. In: Fourth IEEE Region 10 International Conference,TENCON '89, 22-24 November, Bombay,India, pp. 145-149.
Jamadagni, HS and Sonde, BS and Shah, AV (1989) Simulation of All-Digital DPSK Modems and Synchronizers. In: Fourth IEEE Region 10 International Conference,TENCON '89, 22-24 November, Bombay,India, pp. 150-153.
Jayalakshmi, V and Guha, S and Gopal, ESR (1989) Electrical and Dielectric Behaviour at the Critical Point of Binary Liquids: Acetonitrile + Cyclohexane. In: Berichte der Bunsen-Gesellschaft für Physikalische Chemie, 93 (4). pp. 513-520.
Jayannavar, AM and Vijayagovindan, GV and Kumar, N (1989) Energy dispersive backscattering of electrons from surface resonances of a disordered medium and 1/f noise. In: Zeitschrift für Physik B: Condensed Matter, 75 (1). pp. 77-79.
Jayaram, V (1989) Design And Fabrication Of An Electron-Energy Loss Spectrometer For Investigating Electronic-Structures Of Solid-Surfaces. In: Indian Journal of Pure & Applied Physics, 27 (7-8). pp. 429-437.
Jayaram, V and Kulkarni, GU and Rao, CNR (1989) Study of the electronic structures of high Tc cuprate superconductors by electron energy loss and secondary electron emission spectroscopies. In: Solid State Communications, 72 (1). pp. 101-105.
Jebaraj, PM and Srinivasan, MN and Seshadri, MR (1989) General and Intergranular Corrosion of Austenitic Stainless Steel Castings. In: Corrosion, 45 (11). pp. 938-942.
Jha, Animesh and Abraham, KP (1989) Dephosphorisation of Iron–Chrome Alloy with Ca–CaF2 Melt during Electro Slag Refining. In: ISIJ International, 29 (4). pp. 300-308.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) 3-D Geodesics on Convex Quadrics for Surface Ray Propagation: A Turbo Basic Package for Computer-Aided Instruction. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.1, 223-226.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) Analytical Evaluation Of Element Coupling Coefficients on General Paraboloids Of Revolution. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.2, 1008-1011.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) Applicability Of Artificial Intelligence Languages to Solving the Scattering and Diffraction Problems Using a Personal Computer. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.2, 738-741.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) Closed Form Evaluation of Element Coupling Coefficients in Conformal Arrays on General Quadric Cylinders. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.2, 1004-1007.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) Closed form Expressions for Integral Ray Geometric Parameters for Wave Propagation on General Quadric Cylinders. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.1, 203-206.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) Closed form Expressions for Integral Ray Geometric Parameters for Wave Propagation on General Quadric Surfaces of Revolution. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.1, 207-210.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) Closed form Surface Ray Analysis for Antennas Located on a Class of Aircraft Wings. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.1, 356-359.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) Closed form Surface Ray Tracing on Ogival Surfaces. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.3, 1294-1297.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) Geodesic Splitting on General Paraboloid of Revolution And its Implications to the Surface Ray Analysis. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.1, 196-198.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) Reduction of Search Order in Surface Ray Analysis for a Class of Nondevelopable Satellite Launch Vehicles. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.1, 352-355.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) A Surface Modeling Paradigm for Electromagnetic Applications in Aerospace Structures. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.1, 227-230.
Jha, RM and Bokhari, SA and Sudhakar, V and Mahapatra, PR (1989) Surface Ray Contribution to Bistatic Radar Cross Section of a General Paraboloid of Revolution. In: Antennas and Propagation Society International Symposium, 1989. AP-S. Digest, 26-30 June, San Jose,CA, Vol.1, 219-222.
Jyothi, N and Sudha, KN and Natarajan, KA (1989) Electrochemical Aspects of Selective Bioleaching of Sphalerite and Chalcopyrite from Mixed Sulfides. In: International Journal of Mineral Processing, 27 (3-4). pp. 189-203.
Kale, GM and Jacob, KT (1989) Gibbs energy of formation of $Co_2SiO_4$ and phase relations in the system Co-Si-O. In: Mineral Processing and Extractive Metallurgy (IMM Transactions Section C), 98. C117-C122.
Kale, GM and Jacob, KT (1989) Gibbs Energies of Formation of $CuYO_2$ and $Cu_2Y_2O_5$ and Phase Relations in the System Cu-Y-O. In: Chemistry of Materials, 1 (5). pp. 515-519.
Kale, GM and Jacob, KT (1989) Phase relations and thermodynamic properties of compounds in the pseudobinary system BaO-Y2O3. In: Solid State Ionics, 34 (4). pp. 247-252.
Kale, GM and Jacob, KT (1989) Thermodynamic Stability of Potassium-beta-Alumina. In: Metallurgical and Materials Transactions B, Process Metallurgy and Materials Processing Science, 20 (5). pp. 687-691.
Kale, GM and Jacob, KT (1989) Thermodynamic partial properties of Na2O in Nasicon solid solution, Na1+xZr2SixP3-xO12. In: Journal of Materials Research, 4 (2). pp. 417-422.
Kalyanaraman, D and Bapu, G and Ramesh, NK and Subramanian, R (1989) Wear behavior of nickel-titanium dioxide composite coating. In: Bulletin of Electrochemistry, 5 (9). pp. 700-702.
Kalyanasundaram, P and Jayakumar, T and Raj, B and Murthy, CRL and Krishnan, A (1989) Acoustic emission technique for leak detection in an end shield of a pressurised heavy water reactor. In: International Journal of Pressure Vessels and Piping, 36 (1). pp. 65-74.
Kannan, C and Subramanian, S and Lahiri, AK (1989) Studies on slag-metal interface in electroslag casting using thermocouples. In: Transactions of the Indian Institute of Metals, 42 (5). pp. 499-502. (Unpublished)
Kannan, AM and Shukla, AK and Sathyanarayana, S (1989) Oxide-based bifunctional oxygen electrode for rechargeable metal/air batteries. In: Journal of Power Sources, 25 (2). pp. 141-150.
Kanta, Rama and Rangarajan, SK (1989) Chronopotentiometry with power-law perturbation functions at an expanding plane electrode with and without a preceding blank period for systems with a coupled first-order homogeneous chemical reaction. In: Journal of Electroanalytical Chemistry, 265 (1-2). pp. 39-65.
Kar, S and Selvarajan, A (1989) On the CNET: a rearrangeable nonblocking optical interconnection network. In: Electronics Letters, 25 (4). pp. 280-281.
Karle, IL and Anderson, Flippen JL and Kishore, R and Balaram, P (1989) Cystine peptides Antiparallel β-sheet conformation of the cyclic biscystine peptide [Boc-Cys-Ala-Cys-NHCH3]2. In: International Journal of Peptide & Protein Research, 34 (1). pp. 37-41.
Karle, IL and Flippen-Anderson, JL and Uma, K and Balaram, P (1989) Solvated Helical Backbones: X-Ray Diffraction Study of Boc-Ala-Leu-Aib-Ala-Leu-Aib-Ome.$H_2O$. In: Biopolymers, 28 (3). pp. 773-781.
Karle, Isabella L and Flippen-Anderson, Judith L and Uma, K and Balaram, P (1989) Modular Design of Synthetic Protein Mimics. Characterization of the Helical Conformation of a 13-Residue Peptide in Crystals. In: Biochemistry, 28 (16). pp. 6696-6701.
Karle, Isabella L and Flippen-Anderson, Judith L and Uma, Kuchibhotla and Balaram, Hemalatha and Balaram, Padmanabhan (1989) $\alpha$-Helix and Mixed $3_{10}$/$\alpha$-helix in Cocrystallized Conformers of Boc-Aib-Val-Aib-Aib-Val-Val-Val-Aib-Val-Aib-OMe. In: Proceedings of the National Academy of Sciences of the United States of America, 86 (3). pp. 765-769.
Karuturi, Satyanarayana and Rao, Satyanarayana MR (1989) Immunochemical detection of Z-DNA in rat pachytene spermatocytes. In: Experimental Cell Research, 185 (2). pp. 319-326.
Kasiviswanathan, SR and Ramachandra, Rao A (1989) Exact solution for the unsteady flow and heat transfer between eccentrically rotating disks. In: International Journal of Engineering Science, 27 (6). pp. 731-736.
Kasturi, TR and Saibaba, R (1989) Total Synthesis Of Substituted 5,8-Seco-6,7-Bisnor-13-Ethylsteroids. In: Indian Journal of Chemistry - Section B: Organic and Medicinal Chemistry, 28 (3). pp. 208-213.
Kaur, Paramjeet and Uma, K and Balaram, P and Chauhan, VS (1989) Synthetic and conformational studies on dehydrophenylalanine containing model peptides. In: International Journal of Peptide & Protein Research, 33 (2). pp. 103-109.
Kaushal, Kishore and Sivasubramanian, Sankaralingam and Adamali, Sameena Begum (1989) Studies on the ageing behaviour of Polyvinylchloride/ammonium perchlorate composite solid propellant. In: Fuel, 68 (11). pp. 1476-1479.
Kazemi, Amir AD and Murthy, Srinivas M and Raju, Govinda N (1989) Stress intensity factor determination of radially cracked circular rings subjected to tension using photoelastic technique. In: Engineering Fracture Mechanics, 32 (3). pp. 403-408.
Keshava Murthy, K and Giridhar, DP (1989) Inverted V-Notch: Practical Proportional Weir. In: Journal of Irrigation and Drainage Engineering, 115 (6). pp. 1035-1050.
Khan, Nayeem Ullah and Bhat, Manzoor Ahmad and Vaidyanathan, CS (1989) Phenylalanine Ammonia-Lyase - An Update On Its Kinetics. In: Current Science (Bangalore), 58 (8). pp. 427-430.
Khandke, Kiran M and Vithayathil, PJ and Murthy, SK (1989) Degradation of larchwood xylan by enzymes of a thermophilic fungus, Thermoascus aurantiacus. In: Archives of Biochemistry and Biophysics, 274 (2). pp. 501-510.
Khandke, Kiran M and Vithayathil, PJ and Murthy, SK (1989) Purification and characterization of an alpha-D-glucuronidase from a thermophilic fungus, Thermoascus aurantiacus. In: Archives of Biochemistry and Biophysics, 274 (2). pp. 511-517.
Khandke, Kiran M and Vithayathil, PJ and Murthy, SK (1989) Purification of xylanase, $\beta$-glucosidase, endocellulase, and exocellulase from a thermophilic fungus, Thermoascus aurantiacus. In: Archives of Biochemistry and Biophysics, 274 (2). pp. 491-500.
Kishore, K and Mohandas, K (1989) Effect of fire-retardant additives on the time-resolved processes of polymer combustion. In: Fire and Materials, 14 (2). pp. 39-41.
Kishore, K and Nagarajan, R (1989) Effect of metal salicylates on the ignition and combustion of polystyrene. In: Fire Safety Journal, 15 (5). pp. 391-401.
Kishore, Kaushal and Rajalingam, Ponswamy (1989) Polymer–filler interactions in hydroxy-terminated polybutadiene–ammonium perchlorate system in the presence of a bonding agent. In: Journal of Applied Polymer Science, 37 (10). pp. 2845-2853.
Koshy, Abraham and Kumar, R and Gandhi, KS (1989) Effect of drag-reducing agents on drop breakage in stirred dispersions. In: Chemical Engineering Science, 44 (10). pp. 2113-2120.
Krishnamurthy, SS (1989) Bicyclic Phosphazenes Derived from (Amino) Cyclotetraphosphazenes. In: Phosphorus, Sulfur, and Silicon and the Related Elements, 41 (3-4). pp. 375-391.
Krishnamurthy, Savita and Vithayathil, Paul Joseph (1989) Purification and characterization of endo-1,4-β-xylanase from Paecilomyces varioti Bainier. In: Journal of Fermentation and Bioengineering, 67 (2). pp. 77-82.
Krishnan, VV and Murali, N and Kumar, Anil (1989) A diffusion equation approach to spin diffusion in biomolecules. In: Journal of Magnetic Resonance, 84 (2). pp. 255-267.
Krishnan, V and Nema, RS (1989) A Study of Short-term Partial Discharge Aging of Polypropylene Film. In: IEEE Transactions on Electrical Insulation, 24 (6). pp. 1133-1140.
Krishnarajulu, B and Muralidhar, GK and Mohan, S and Menon, AG (1989) Mass analyser optics for wide-beam ion sources in ion implantation systems. In: Journal of Physics E - Scientific Instruments, 22 (8). pp. 666-668.
Kulkarni, GU and Sankar, G and Rao, CNR (1989) Analysis of EXAFS data of complex systems. In: Zeitschrift für Physik B: Condensed Matter, 73 (4). pp. 529-537.
Kulkarni, GU and Sankar, G and Rao, CNR (1989) Nature of Pb in superconducting cuprates containing lead: A Pb L3 x‐ray absorption near‐edge spectroscopy study. In: Applied Physics Letters, 55 (4). pp. 388-389.
Kumar, Anurag (1989) Component Inventory Costs in an Assembly Problem with Uncertain Supplier Lead-Times. In: IIE Transactions, 21 (2). pp. 112-121.
Kumar, R (1989) Should chemical engineers warm up to hot superconductors? In: Indian Chemical Engineer, 31 (2). pp. 3-17.
Kumar, Anurag (1989) Adaptive load control of the central processor in a distributed system with a star topology. In: IEEE Transactions on Computers, 38 (11). pp. 1502-1512.
Kumar, N (1989) Cold fusion: is there a solid state effect? In: Current Science, 58 (15). pp. 833-835.
Kumar, Senthil A and Vasu, RM (1989) Imaging with oriented photographic diffusers. In: Optics Communications, 70 (2). pp. 77-81.
Kumar, Udaya K and Gundappa, Pradeep and Bai, Pramila BN and Natarajan, KA and Biswas, SK (1989) Laboratory studies on the wear of grinding media. In: Tribology International, 22 (3). pp. 219-225.
Kumari, M and Nath, G (1989) Double diffusive unsteady free convection on two-dimensional and axisymmetric bodies in a porous medium. In: International Journal of Energy Research, 13 (4). pp. 379-391.
Kumari, M and Nath, G (1989) Doubly diffusive unsteady mixed convection flow over a vertical plate embedded in a porous medium. In: International Journal of Energy Research, 13 (4). pp. 419-430.
Kumari, M and Nath, G (1989) Non-Darcy mixed convection boundary layer flow on a vertical cylinder in a saturated porous medium. In: International Journal of Heat and Mass Transfer, 32 (1). pp. 183-187.
Kumari, M and Nath, G (1989) Simultaneous heat and mass transfer in unsteady free convection from two-dimensional and axisymmetric bodies. In: Heat and Mass Transfer, 24 (6). pp. 329-336.
Kumari, M and Nath, G (1989) Unsteady boundary layer flow on a cylinder in a channel. In: International Communications in Heat and Mass Transfer, 16 (1). pp. 115-122.
Kumari, M and Nath, G (1989) Unsteady mixed convection flow of a thermomicropolar fluid on a long thin vertical cylinder. In: International Journal of Engineering Science, 27 (12). pp. 1507-1518.
Kumari, M and Nath, G (1989) Unsteady mixed convection with double diffusion over a horizontal cylinder and a sphere within a porous medium. In: Warme und Stoffubertragung-Thermo & Fluid Dynamics, 24 (2). pp. 103-109.
Kumari, M and Pop, I and Nath, G (1989) Mixed convection along a vertical cone. In: International Communications in Heat and Mass Transfer, 16 (2). pp. 247-255.
Kutty, TRN and Avudaithai, M (1989) Photocatalytic activity of tin-substituted TiO2 in visible light. In: Chemical Physics Letters, 163 (1). pp. 93-97.
Kutty, TRN and Raghu, N (1989) Varistors based on polycrystalline ZnO:Cu. In: Applied Physics Letters, 54 (18). pp. 1796-1798.
Lakshmi, Raj M (1989) Cellular automaton fluids - A review. In: Sadhana, 14 (3). pp. 133-172.
Lakshmi, MVS and Ramkumar, K (1989) I–U Characteristics of MOS Structures on Polycrystalline Silicon. In: Physica Status Solidi A, 111 (2). pp. 667-674.
Lakshmi, MVS and Ramkumar, K and Satyam, M (1989) Current controlled variable resistors through superconductors. In: Review of Scientific Instruments, 60 (7). pp. 1340-1341.
Lakshmi, MVS and Ramkumar, K and Satyam, M (1989) On superconducting Y-Ba-Cu-O films prepared by printing on alumina substrates. In: Journal of Physics D: Applied Physics, 22 (2). pp. 373-375.
Lalitha, HN and Ramakumar, S and Viswamitra, MA (1989) Structure of 5-methyl-2'-deoxycytidine 5'-monophosphate dihydrate. In: Acta Crystallographica Section C, 45 (Part 1). pp. 1652-1655.
Lalitha, R and George, Rajan and Ramasarma, T (1989) Mevalonate-metabolizing enzymes in Arachis hypogaea. In: Molecular and Cellular Biochemistry, 87 (2). pp. 161-170.
Madhusoodanan, KN and Jacob, Philip and Asokan, S and Parthasarathy, G and Gopal, ESR (1989) Photoacoustic investigation of the optical absorption and thermal diffusivity in $Si_xTe_{100-x}$ glasses. In: Journal of Non-Crystalline Solids, 109 (2-3). pp. 255-261.
Madhusoodanan, KN and Nandakumar, K and Philip, J and Titus, SSK and Asokan, S and Gopal, ESR (1989) Photoacoustic Investigation of Glass Transition in $As_xTe_{1-x}$ Glasses. In: Physica Status Solidi A, 114 (2). pp. 525-530.
Madina, Saheb S and Prakash, Desayi (1989) Ultimate Strength of RC Wall Panels in One-Way In-Plane Action. In: Journal of Structural Engineering, 115 (10). pp. 2617-2630.
Madyastha, KM and Moorthy, B (1989) Pulegone mediated hepatotoxicity: Evidence for covalent binding of R(+)-$[^{14}C]$pulegone to microsomal proteins in vitro. In: Chemico-Biological Interactions, 72 (3). pp. 325-33.
Mahapatra, PR and Poulose, MM (1989) Evaluating ILS and MLS Sites without Flight Tests. In: Journal of Navigation, 42 (2). pp. 278-290.
Mahapatra, PR and Shukla, US (1989) Accurate Solution of Proportional Navigation for Maneuvering Targets. In: IEEE Transactions on Aerospace and Electronic Systems, 25 (1). pp. 81-89.
Mahapatra, Pravas R (1989) New Strategies and Instruments for Enhancement of Aviation Safety. In: IEEE National Aerospace and Electronics Conference, 1989. NAECON 1989, 22-26 May, Dayton,OH, Vol.4, 1782-1789.
Mahapatra, Pravas R and Poulose, MM (1989) Accurate ILS And MLS Performance Evaluation In Presence Of Site Errors. In: IEEE 1989 National Aerospace and Electronics Conference. NAECON 1989, 22-26 May, Dayton,OH, Vol.1, 167-174.
Maiti, TK and Podder, SK (1989) Differential binding of peanut agglutinin with lipopolysaccharide of homologous and heterologous Rhizobium. In: FEMS Microbiology Letters, 65 (3). pp. 279-283.
Majumder, K and Brahmachari, SK (1989) Sequence specificity in spermine-induced structural changes in CG-oligomers. In: Biochemistry International, 18 (2). pp. 455-465.
Majumder, K and Mishra, RK and Bansal, Manju and Brahmachari, SK (1989) Sequence criteria for Z-DNA formation: studies on poly d(ACGT). In: Nucleic Acids Research, 17 (1). p. 450.
Mandal, BN and Chakrabarti, A (1989) A Note on Diffraction of Water Waves by a Nearly Vertical Barrier. In: IMA Journal of Applied Mathematics, 43 (2). pp. 157-165.
Mande, SC and Suguna, K (1989) A fast algorithm for macromolecular packing calculation. In: Journal of applied crystallography, 22 . pp. 627-629.
Mangamma, G and Bhat, SV (1989) NMR Studies of the Protonic Conductor $(NH_4)_4Fe(CN)_6 \cdot 1.5H_2O$. In: Solid State Ionics, 35 (1-2). pp. 123-125.
Marathe, Y and Ramaswamy, S (1989) Frequency-Dependent Viscosity of Membrane Solutions. In: EPL: Europhysics Letters, 8 (6). pp. 581-585.
Marathe, Yatin (1989) Dissipative quantum dynamics of a charged particle in a magnetic field. In: Physical Review A, 39 (11). pp. 5927-5931.
Markose, Elizabeth Rani and Rao, MRS (1989) Testis-specific histone H1t is truly a testis-specific variant and not a meiotic-specific variant. In: Experimental Cell Research, 182 (1). pp. 279-283.
Mathew, George and Narasimhan, SV and Shivaprasad, AP (1989) An ADPCM With An Exponential Power Estimator Based Predictor for Speech Coding. In: Fourth IEEE Region 10 International Conference, TENCON '89, 22-24 November, Bombay,India, pp. 682-685.
Mathialagan, M and Rao, Jagannadha A (1989) A role for calcium in gonadotrophin-releasing hormone (GnRH) stimulated secretion of chorionic gonadotrophin by first trimester human placental minces in vitro. In: Placenta, 10 (1). pp. 61-70.
Mathias, PC and Patnaik, LM and Ramesh, Sudha (1989) Systolic Architectures in Curve Generation. In: Computers & Graphics, 13 (4). pp. 561-567.
Matsuda, Tsukasa and Kabat, Elvin A and Surolia, Avadhesha (1989) Carbohydrate binding specificity of the basic lectin from winged bean (Psophocarpus tetragonolobus). In: Molecular Immunology, 26 (2). pp. 189-195.
Meera, G and Ramesh, N and Brahmachari, Samir K (1989) Zintrons in rat $\alpha$-lactalbumin gene. In: FEBS Letters, 251 (1-2). pp. 245-249.
Mishra, AK and Milner, DF and Weaver, MJ and Rangarajan, SK (1989) Many-body effects in electrosorption : Some numerical consequences for partial charge transfer. In: Journal of Electroanalytical Chemistry, 271 (1-2). pp. 351-358.
Misra, RDK and Jacob, KT and Nandy, TK and Saha, RL (1989) Evaluation of the reactivity of titanium with mould materials during casting. In: Bulletin of Materials Science, 12 (5). pp. 481-493.
Mitra, P and Rajaraman, R (1989) Gauge-invariant reformulation of an anomalous gauge theory. In: Physics Letters B, 225 (3). pp. 267-271.
Mohan, KS and Gopinathan, KP (1989) Characterization of viral proteins of Oryctes baculovirus and comparison between two geographical isolates. In: Archives of Virology, 109 (3-4). pp. 207-222.
Mohan, KS and Gopinathan, KP (1989) Quantitation of serological cross-reactivity between two geographical isolates of Oryctes baculovirus by a modified ELISA. In: Journal of Virological Methods, 24 (1-2). pp. 203-213.
Mohapatra, YN and Kumar, V (1989) Temperature Dependence of Photocurrent in Undoped Semi-insulating Gallium Arsenide. In: Physica Status Solidi A: Applied Research, 114 (2). pp. 659-663.
Moona, Rajat and Rajaraman, V (1989) A Software Environment for General Purpose MIMD Multiprocessors. In: Fourth IEEE Region 10 International Conference,TENCON '89, 22-24 November, Bombay,India, pp. 98-101.
Moorthy, B and Madyastha, P and Madyastha, KM (1989) Metabolism of a monoterpene ketone, R-(+)-pulegone—a hepatotoxin in rat. In: Xenobiotica, 19 (2). pp. 217-224.
Moretto, V and Crisma, M and Bonora, GM and Toniolo, C and Balaram, Hemalatha and Balaram, P (1989) Comparison of the Effect of Five Guest Residues on the $\beta$-Sheet Conformation of Host ${(L-Val)}_n$ Oligopeptides. In: Macromolecules, 22 (7). pp. 2939-2944.
Moudgal, NR and Sairam, MR and Dighe, RR (1989) Relative ability of ovine follicle stimulating hormone and its beta-subunit to generate antibodies having bioneutralization potential in nonhuman primates. In: Journal of Biosciences, 14 (2). pp. 91-100.
Mukhopadhyay, NK and Ranganathan, S and Chattopadhyay, K (1989) Evolution of superlattice order in Al-Mn quasicrystals and its relation to face-centred icosahedral quasicrystals. In: Philosophical Magazine Letters, 60 (5). pp. 207-211.
Mukhopadhyay, Chaitali and Rao, VSR (1989) Computer modelling approach to study the modes of binding of alpha- and beta-anomers of D-galactose, D-fucose and D-glucose to L-arabinose-binding protein. In: International Journal of Biological Macromolecules, 11 (4). pp. 194-200.
Mukhopadhyay, NK and Chattopadhyay, K and Ranganathan, S (1989) Synthesis and structural aspects of quasicrystals in Mg-Al-Ag system: Mg4Al6Ag. In: Metallurgical Transactions. Physical Metallurgy and Material, 20 (5). pp. 805-812.
Mukhopadhyay, Snehasis and Thathachar, MAL (1989) Associative Learning of Boolean functions. In: IEEE Transactions on Systems, Man and Cybernetics, 19 (5). pp. 1008-1015.
Mukunda, N (1989) Aspects of the interplay between physics and biology. In: Journal of Genetics, 68 (2). pp. 117-128.
Mukundan, T and Bhanu, VA and Kishore, K (1989) First report of a polymeric peroxide initiated room temperature radical polymerization of vinyl monomers. In: Journal of the Chemical Society, Chemical Communications (12). pp. 780-781.
Mukundan, T and Kishore, K (1989) Synthesis of Poly(1,4-Divinylbenzene Peroxide). In: Journal of Polymer Science - Part C: Polymer Letters, 27 (11). pp. 455-456.
Mukundan, T and Kishore, K (1989) Poly(vinylnaphthalene peroxide)s: Syntheses, Characterization, and Thermal Reactivity. In: Macromolecules, 22 (12). pp. 4430-4433.
Munichandraiah, N and Shivannanjaiah, HN and Iyengar, RR (1989) Voltammetric Studies Of Pd(Ii) Ammonia Complex - Solvent Effect. In: Indian Journal of Chemistry - Section A: Inorganic, Physical, Theoretical and Analytical Chemistry, 28 (7). pp. 561-564.
Munichandraiah, N (1989) Electrical double-layer studies of lead dioxide powder. In: Journal of Electroanalytical Chemistry, 226 (1). pp. 179-184.
Munjal, ML and Doige, AG (1989) On uniqueness, transfer and combination of acoustic sources in one-dimensional systems. In: Journal of Sound and Vibration, 128 (1). pp. 165-166.
Munjal, ML and Eriksson, LJ (1989) Analysis of a hybrid noise control system for a duct. In: Journal of the Acoustical Society of America, 86 (2). pp. 832-834.
Munjal, ML and Eriksson, LJ (1989) Analysis of a linear one-dimensional active noise control system by means of block diagrams and transfer functions. In: Journal of Sound and Vibration, 129 (3). pp. 443-455.
Muralidhar, GK and Nagaraju, J and Mohan, S (1989) Effectiveness of a Differential Temperature Controller on a Solar Water Heating System: An Experimental Study. In: Journal of Solar Energy Engineering, 111 (1). pp. 97-99.
Murthy, Siva Ram C and Rajaraman, V (1989) Task assignment in a multiprocessor system. In: Microprocessing and Microprogramming, 26 (1). pp. 63-71.
Murthy, BRS and Vatsala, A and Nagaraj, TS (1989) Can Cam-Clay Model Be Generalized - Closure. In: Journal of Geotechnical and Geoenvironmental Engineering, 115 (8). pp. 1200-1202.
Murthy, GS and Lakshmi, BS and Moudgal, NR (1989) Radioimmunoassay of polypeptide hormones using immunochemically coated plastic tubes. In: Journal of Biosciences, 14 (1). pp. 9-20.
Murthy, ISN and Reddy, MRS (1989) ECG Synthesis Via Discrete Cosine Transform. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 1989. Images of the Twenty-First Century, 9-12 November, Seattle,WA, Vol.2, 773-774.
Murthy, SSN (1989) The Nature of Glass Transition Process in Alcohols. In: Journal of Molecular Liquids, 40 (4). pp. 261-276.
Murthy, SSN (1989) The nature of freezing process in glasses. In: Journal of Molecular Liquids, 44 (1). pp. 51-61.
Murthy, SSN and Murthy, VRK and Sobhanadri, J (1989) Anomalous dielectric behavior of some ferrites. In: Journal of Applied Physics, 65 (5). pp. 2159-2161.
Murty, CVS and Richter, W and Murthy, MVK (1989) Modeling Of Thermal-Radiation In Fired Heaters. In: Chemical Engineering Research and Design, 67 (2). pp. 134-144.
Murty, Krishna AV and Kumar, Hari HK (1989) Modelling of symmetric laminates under extension. In: Composite Structures, 11 (1). pp. 15-32.
Naga, Raju J and Thomas, A (1989) Improvement in solar air collector performance with vaned diffuser. In: International Journal of Energy Research, 13 (6). pp. 639-642.
Naganagowda, GA and Suryaprakash, N and Khetrapal, CL (1989) Anisotropic contributions of indirect proton-phosphorus couplings in oriented phenylphosphonous dichloride and dibromide. In: Journal of Magnetic Resonance, 84 (1). pp. 166-169.
Nagaraj, TS and Iyengar, Sundara Raja KT and Shashiprakash, SG (1989) Soil-concrete analogy - principles and potentials. In: Cement and Concrete Research, 19 (4). pp. 534-546.
Nagarajan, R and Vijayaraghavan, R and Ganapathi, L and Ram, Mohan RA and Rao, CNR (1989) Evidence for two distinct orthorhombic structures associated with different Tc regimes in $LnBa_2Cu_3O_{7-\delta}$ (Ln = Nd, Eu, Gd and Dy): A study of the dependence of superconductivity on oxygen stoichiometry. In: Physica C: Superconductivity, 158 (3). pp. 453-457.
Nagarajan, VS and Rao, KJ (1989) Crystallization Studies of ZrO2-SiO2 Composite Gels. In: Journal of Materials Science, 24 (6). pp. 2140-2146.
Nagendra, CL and Mohan, S and Thutupalli, GKM (1989) High efficiency infrared antireflection coatings (ARCs) for space optics. In: Infrared Physics, 29 (2-4). pp. 195-198.
Nagpal, S and Shanthi, KN and Kori, R and Schroder, H and Metcalfe, DD and Rao, Subba PV (1989) Induction of allergen-specific IgE and IgG responses by anti-idiotypic antibodies. In: The Journal of Immunology, 142 (10). pp. 3411-3415.
Nagpal, Sunil and Rajappa, Lekha and Metcalfe, Dean D and Rao, Pillarisetti Subba V (1989) Isolation and characterization of heat-stable allergens from shrimp (Penaeus indicus). In: Journal of Allergy and Clinical Immunology, 83 (1). pp. 26-36.
Naidu, PS and Mohan, Krishna PG (1989) A study of the spectrum of an acoustic field in shallow water due to noise sources at the surface. In: Journal of the Acoustical Society of America, 85 (2). pp. 716-725.
Nandabalan, K and Padayatty, JD (1989) A minor 9.8 kb rDNA unit of rice variety IR-20. In: Indian Journal of Biochemistry & Biophysics, 26 (5). pp. 289-292.
Nandabalan, K and Padayatty, JD (1989) Initiation of transcription of rDNA in rice. In: Biochemical and Biophysical Research Communications, 160 (3). pp. 1117-1123.
Narahari, Y and Suryanarayanan, K and Reddy, Subba NV (1989) Discrete Event Simulation Of Distributed System Using Stochastic Petri Nets. In: Fourth IEEE Region 10 International Conference, TENCON '89., 22-24 November, Bombay,India, pp. 622-625.
Narasappa, Narasimhamurthy and Samuelson, Ashoka G and Hattikudur, Manohar (1989) Reaction of copper(I) phenoxide with phenyl isothiocyanate. Formation and x-ray structure of a novel copper(I) hexameric complex. In: Journal of the Chemical Society, 23 . pp. 1803-1804.
Narasimha, R and Dey, J (1989) Transition-zone models for 2-dimensional boundary layers: A review. In: Sadhana, 14 (2). pp. 93-120.
Narasimhamurthy, N and Samuelson, AG (1989) Synthetic Utility of Copper(I) Phenoxide Complexes. In: Proceedings of the Indian National Science Academy, Part A: Physical Sciences, 55 A (2). pp. 383-391.
Narayan, Ranjani and Rajaraman, V (1989) A Method to Evaluate the Performance of a Multiprocessor Machine based on Data Flow Principles. In: Fourth IEEE Region 10 International Conference,TENCON '89, 22-24 November, Bombay,India, pp. 209-212.
Narayanan, SR and Sathyanarayana, S (1989) Equivalent circuit parameters of the film-covered magnesium/electrolyte interface in Mg/MnO2 dry cells from transient and ac impedance measurements. In: Journal of Electroanalytical Chemistry, 265 (1-2). pp. 103-115.
Narayanan, SR and Sathyanarayana, S (1989) Voltage delay during constant-current or constant-resistance discharge of $Mg-MnO_2$ dry cells: a comparative study. In: Journal of Applied Electrochemistry, 19 (4). pp. 495-499.
Natarajan, K and Ramkumar, K and Satyam, M (1989) Breakdown in p‐n junction diodes made on polycrystalline silicon of large grain size. In: Journal of Applied Physics, 66 (5). pp. 2206-2208.
Natarajan, K and Ramkumar, K and Satyam, M (1989) Capacitance of p-n Junctions under Electrical Breakdown. In: Physica Status Solidi A, 111 (2). K269-K272.
Natarajan, K and Ramkumar, K and Satyam, M (1989) Reverse characteristics of phosphorous passivated polysilicon p-n junction diodes. In: Physica Status Solidi A, 115 (2). K265-K268.
Noor, Shahina Begum and Damodara, Poojary M and Hattikudur, Manohar (1989) X-Ray structure of a ternary complex, copper(II)–lnosine 5-monophosphate–benzimidazole. The first example for the absence of direct metal–nucleotide interaction. In: Dalton Transactions (8). pp. 1507-1512.
Padiyar, KR and Kothari, AG (1989) Analysis Of The Hvdc Turbine Generator Torsional Interactions. In: Electric Machines and Power Systems, 16 (5). pp. 303-317.
Padiyar, KR and Ghosh, KK (1989) Direct stability evaluation of power systems with detailed generator models using structure-preserving energy functions. In: International Journal of Electrical Power & Energy Systems, 11 (1). pp. 47-56.
Padiyar, KR and Ghosh, KK (1989) Dynamic security assessment of power systems using structure-preserving energy functions. In: International Journal of Electrical Power & Energy Systems, 11 (1). pp. 39-46.
Padiyar, KR and Sachchidanand, * and Kothari, AG and Bhattacharyya, S and Srivastava, A (1989) Study of HVDC Controls Through Efficient Dynamic Digital Simulation of Converters. In: IEEE Transactions on Power Delivery, 4 (4). pp. 2171-2178.
Padma, Doddaballapur K (1989) Determination of free elemental sulphur in some petroleum products. In: Talanta, 36 (4). pp. 525-526.
Paliyath, G and Rajagopal, I and Unnikrishnan, PO and Mahadevan, S (1989) Hormones and Cuscuta development: IAA uptake transport and metabolism in relation to growth in the absence and presence of applied cytokinin. In: Journal of Plant Growth Regulation, 8 (1). pp. 19-35.
Pandit, SS and Jacob, KT (1989) Experimental and Computational Characterization of Alloy--Spinel--Corundum Equilibrium in the System Fe--Co--Al--O at 1873K. In: Scandinavian Journal of Metallurgy, 18 (2). pp. 73-80.
Pappu, SV and Changkakoti, R (1989) Phase holograms recorded in methylene-blue sensitized dichromated gelatin using a helium-neon laser. In: Jena Review, 34 (4).
Pappu, SV (1989) Holographic optical elements — State-of-the-art review:Part I. In: Optics & Laser Technology, 21 (5). 315 -318.
Pappu, SV (1989) Holographic optical elements: State-of-the-art review: Part 2. In: Optics and Laser Technology, 21 (6). pp. 365-375.
Pappu, SV (1989) Science And Audit. In: Current Science (Bangalore), 58 (14). 782 -782.
Parhi, S and Nath, G (1989) Stability of viscous flow over a stretching sheet. In: Acta Technica CSAV, 34 (4). pp. 389-409.
Pech, Michael and Rao, C Durga and Robbins, Keith C and Aaronson, Stuart A (1989) Functional Identification of Regulatory Elements within the Promoter Region of Platelet-Derived Growth Factor 2. In: Molecular and Cellular Biology, 9 (2). pp. 396-405.
Perumal, Thirumalai P and Bhatt, Vivekananda M (1989) Quinone studies. Part III. Metal ion-catalysed oxidation of halophenols and halonaphthols by peroxidisulphate. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 101 (1). pp. 25-32.
Pinjare, SL and Balasubramanyam, N and Kumar, Vikram (1989) Photoluminescence at 0.944 eV from heat-treated n-type silicon. In: Physica Status Solidi A, 113 (2). K261-K264.
Pius, Kuruvilla and Chandrasekhar, Jayaraman (1989) Stability of silicon-based distonic radical cations. In: International Journal of Mass Spectrometry and Ion Processes, 87 (1). R15-R18.
Pop, I and Kumari, M and Nath, G (1989) Combined free and forced convection along a rotating vertical cylinder. In: International Journal of Engineering Science, 27 (3). pp. 193-202.
Pradeep, S and Shrivastava, SK (1989) Bounded-input/bounded-output stability of linear multidimensional time- varying systems. In: Journal of Guidance, Control, and Dynamics, 12 (5). pp. 753-756.
Pradeep, S and Shrivastava, SK (1989) On the L2 stability of linear multidimensional time varying systems. In: Journal of the Astronautical Sciences, 37 (2). pp. 145-158.
Pradeep, T and Rao, CNR (1989) Electronic structures of electron donor-acceptor complexes: results from ultraviolet photoelectron spectroscopy and molecular orbital calculations. In: THEOCHEM, 59 . pp. 339-352.
Pradeep, T and Rao, CNR (1989) Electronic structures of electron donor-acceptor complexes: results from ultraviolet photoelectron spectroscopy and molecular orbital calculations. In: Journal of Molecular Structure: THEOCHEM, 200 . pp. 339-352.
Pradeep, T and Sreekanth, CS and Hegde, MS and Rao, CNR (1989) Experimental electronic structures of sulfur dioxide complexes: an electron spectroscopic study. In: Journal of the American Chemical Society, 111 (14). pp. 5058-5063.
Pradeep, T and Sreekanth, CS and Hegde, MS and Rao, CNR (1989) A study of the electronic structures of n---v addition compounds of BH3 by a combined use of ups and eels. In: Journal of Molecular Structure, 194 . pp. 163-170.
Pradeep, T and Sreekanth, CS and Rao, CNR (1989) An ultraviolet photoelectron spectroscopic study of BF3–donor complexes. In: Journal of Chemical Physics, 90 (9). pp. 4704-4708.
Pradeep, Talappil and Rao, Ramachandra CN (1989) Electronic-Transitions In Hydrogen-Bonded Dimers Of Carboxylic-Acids In The Vapor-Phase - An Electron-Energy Loss Spectroscopic Study. In: Journal Of The Chemical Society-Chemical Communications (15). pp. 1019-1020.
Pradeepa, S and Shrivastava, SK (1989) Asymptotic behaviour and boundedness of linear systems with time varying coefficients. In: Acta Astronautica, 19 (10). pp. 787-795.
Prakash, KS and Padayatty, JD (1989) Transfer of saline tolerance from one strain of rice to another by injection of DNA. In: Current Science, 58 (17). pp. 991-993.
Prakash, Desayi and Balaji, Rao K (1989) Markov Chain Model for Cracking Behavior of Reinforced Concrete Beams. In: Journal of Structural Engineering, 115 (9). pp. 2129-2144.
Prasad, Rama (1989) Detection of moisture stress by indicator plots — A simulation. In: Journal of Hydrology, 109 (1-2). pp. 11-24.
Prasad, Rama and Mayya, SG (1989) Systems Analysis of Tank Irrigation: II. Delayed Start and Water Deficit. In: Journal of Irrigation and Drainage Engineering, 115 (3). pp. 406-420.
Radhakrishnan, R and Viswamitra, MA and Bhutani, KK and Ali, M (1989) Structure Of 3beta-Dimethylamino-21-Norcon-5-Enine-20-One Dihydrate. In: Acta Crystallographica Section C, 45 (Part 3). pp. 463-465.
Raghavana, MR and Jayachandran, R (1989) Analysis of the performance characteristics of a two-inertia power transmission system with a plate clutch. In: Mechanism and Machine Theory, 24 (6). pp. 499-503.
Raghavender, D and Naidu, MS (1989) Impulse Breakdown Characteristics and Cost/Benefit Analysis of $SF_6/CCL_2F_2/C0_2$ Mixtures. In: Conference on Electrical Insulation and Dielectric Phenomena, 1989. Annual Report, 29th October-2nd November, Leesburg,VA, pp. 489-494.
Raghoottama, PS and Soundararajan, S and Ramakrishna, J (1989) Chlorine-35 nuclear quadrupole resonance study of molecular torsional motion in some chloropyridine-N-oxides and chloroquinoline-N-oxides. In: Faraday Transactions 2, 85 . pp. 131-136.
Raghothama, S and Ramakrishnan, C and Balasubramanian, D and Balaram, P (1989) Conformational Analysis of Cyclolinopeptide A, a Cyclic Nonapeptide: Nuclear Overhauser Effect and Energy Minimization Studies. In: Biopolymers, 28 . pp. 573-588.
Raghu, M and Subramanyam, SV and Chatterjee, S (1989) Pressure induced incommensurate to commensurate transition in K3Cu8S6. In: Solid State Communications, 69 (10). pp. 949-952.
Raghu, Prasad BK and Jagadish, KS (1989) Inelastic Torsional Response of a Single-Story Framed Structure. In: Journal of Engineering Mechanics, 115 (8). pp. 1782-1797.
Rajanna, K and Mohan, S and Gopal, ESR (1989) Thin Film Strain Gauges--an Overview. In: Indian Journal of Pure & Applied Physics, 27 (7-8). pp. 453-460.
Rajanna, K and Mohan, S (1989) Strain-sensitive Properties of Vacuum Evaporated Manganese Films. In: Thin Solid Films, 172 (1). pp. 45-50.
Rajappa, S and Bhawal, BM and Rakeeb, A and Deshmukh, AS and Manjunatha, SG and Chandrasekhar, J (1989) Is The Nitro-Group Attracted Towards Sulfur. In: Journal of the Chemical Society - Series Chemical Communications (22). 1729 -1730.
Rajasekharan, R and Sastry, PS (1989) Effect of phenoxy acids and their derivatives on lipid metabolism in groundnut (Arachis hypogaea) leaves. In: Pesticide Biochemistry and Physiology, 33 (1). pp. 26-36.
Rajeswari, M and Raychaudhuri, AK (1989) Heat Release from a Supercooled Liquid near Glass Transition. In: Europhysics Letters, 10 (2). pp. 153-158.
Rajkumar, V (1989) An Adaptive Deadbeat Stabilizer for Power System Dynamic Stability. In: 28th IEEE Conference on Decision and Control, 1989, 13-15 December, Tampa,Florlda, Vol.3, 2186-2187.
Rajumon, MK and Sarma, DD and Vijayaraghavan, R and Rao, CNR (1989) A core-level photoemission spectroscopic study of the electron-doped superconductor, Nd2-xCexCuO4-δ. In: Solid State Communications, 70 (9). pp. 875-877.
Ramakrishna, BS (1989) C. V. Raman Centenary Symposium on Acoustics, 25-28 October 1988. In: Journal of the Acoustical Society of America, 85 (5). 2236 -2236.
Ramakrishnan, B and Viswamitra, MA (1989) Structure of l-arginyl-l-aspartic acid monohydrate. In: Acta Crystallographica Section C, 45 (Part 5). pp. 822-824.
Ramakrishnan, TV (1989) Superconductivity In Disordered Thin-Films. In: Physica Scripta, T127 . pp. 24-30.
Ramakrishnan, TV and Rao, CNR (1989) Physical chemistry of high-temperature oxide superconductors. In: Journal of Physical Chemistry, 93 (11). pp. 4414-4422.
Ramamurthy, B and Bhatt, MV (1989) Synthesis and antitubercular activity of N-(2-naphthyl)glycine hydrazide analogs. In: Journal of Medicinal Chemistry, 32 (11). pp. 2421-2426.
Ramamurthy , TS (1989) Recent studies on the behaviour of interference fit pins in composite plates. In: Composite Structures, 13 (2). pp. 81-99.
Ramanjaneyulu, CS and Sarma, VVS (1989) Modeling server-unreliability in closed queuing-networks. In: IEEE Transactions on Reliability, 38 (1). pp. 90-95.
Ramasesha, S (1989) Effects of electron correlations in conjugated organic molecules and solids. In: Journal of Molecular Structure, 194 . pp. 149-162.
Ramasesha, S and Albert, IDL (1989) Exact static polarizabilities of correlated finite model systems. In: Chemical Physics Letters, 154 (5). 501-504 .
Ramasesha, SK and Jacob, KT (1989) Studies on nonisothermal solid state galvanic cells — effect of gradients on EMF. In: Journal of Applied Electrochemistry, 19 (3). pp. 394-400.
Ramasesha, Sheela K and Jacob, KT (1989) EMF of a nonisothermal cell incorporating a mixed conductor. In: Journal of the Electrochemical Society, 136 (9). pp. 2720-2723.
Ramaswamy, S and Nethaji, M and Murthy, MRN (1989) Crystal-Structure Of Putrescine- Glutamic Acid Complex. In: Current Science (Bangalore), 58 (20). pp. 1160-1163.
Ramaswamy, Mythily and Srikanth, PN (1989) Multiplicity Result for an ODE via Morse Index. In: Houston Journal of Mathematics, 15 (4). pp. 595-599.
Ramdas, Jyoti and Mythili, E and Muniyappa, K (1989) RecA Protein Promoted Homologous Pairing in Vitro. Pairing Between Linear Duplex DNA Bound to HU Protein (Nucleosome Cores) and Nucleoprotein Filaments of RecA Protein-Single-Stranded DNA. In: Journal of Biological Chemistry, 264 (29). pp. 17395-17400.
Ramesh, N and Brahmachari, SK (1989) Structural alteration from non-B to B-form could reflect DNase I hypersensitivity. In: Journal of Biomolecular Structure & Dynamics, 6 (5). pp. 899-906.
Ramesh, BR and Srinivasa, N and Rajgopal, K (1989) An Algorithm for Computing the Discrete Radon Transform With Some Applications. In: Fourth IEEE Region 10 International Conference,TENCON '89, 22-24 November, Bombay,India, pp. 78-81.
Ramesh, Ganju K and Murthy, SK and Paul, Vithayathil J (1989) Purification and characterization of two cellobiohydrolases from Chaetomium thermophile var. coprophile. In: Biochimica et Biophysica Acta - General Subjects, 993 (2-3). pp. 266-274.
Ramkumar, K and Satyam, M (1989) Research note A novel BIMOS Schmitt trigger. In: International Journal of Electronics, 66 (2). pp. 267-271.
Ramprasad, BS and Radha, TS (1989) Thermal Stress in Thin Films using Real-time Holographic Interferometry. In: Second International Conference on Holographic Systems, Components and Applications, 11-13 September, Bath, pp. 29-32.
Ranade, VV and Joshi, JB and Marathe, AG (1989) Flow Generated By Pitched Blade Turbines Ii: Simulation Using Κ-Ε Model. In: Chemical Engineering Communications, 81 . pp. 225-248.
Ranganathan, S and Alok, Singh and Mayer, J and Urban, K (1989) Electron diffraction patterns from the Al-Mn decagonal phase. In: Philosophical Magazine Letters, 60 (6). pp. 261-267.
Ranganathan, S and Chattopadhyay, K (1989) Decagonal quasicrystals. In: informaworld, 16 (1-4). pp. 67-83.
Ranganathan, S and Prasad, R and Mukhopadhyay, NK (1989) Electron microscopy and diffraction of icosahedral twins in an aluminium—manganese alloy. In: Philosophical Magazine Letters, 59 (6). pp. 257-263.
Ranganathan, S and Singh, Alok and Mayer, J and Urban, K (1989) Electron diffraction patterns from the aluminum-manganese decagonal phase. In: Philosophical Magazine Letters, 60 (6). pp. 261-267.
Ranganathan, S and Hajra, JP (1989) Activities of manganese in Co-Mn-Cr alloys at 1323 K. In: Scripta Metallurgica, 23 (7). pp. 1049-1052.
Rangarajan, PN and Padmanaban, G (1989) Factors regulating the transcription of eukaryotic protein coding genes and their mechanism of action - a review. In: Journal of Biosciences, 14 (2). pp. 189-202.
Rangarajan, PN and Padmanaban, G (1989) Regulation of cytochrome P-450b/e gene expression by a heme- and phenobarbitone-modulated transcription factor. In: Proc Natl Acad Sci Unit States Am, 86 (11). pp. 3963-3967.
Rao, CNR (1989) Oxygen hole mechanism of superconductivity in cuprates and other metal oxides. [Book Chapter]
Rao, CNR (1989) Transition Metal Oxides. In: Annual Review of Physical Chemistry, 40 . pp. 291-326.
Rao, CNR and Sarma, DD and Ranga, Rao G (1989) Investigations of oxide superconductors by x-ray absorption, photoemission and cognate spectroscopies. In: Phase Transitions: A Multinational Journal, 19 (1-3). 69 -85.
Rao, CNR and Vijayaraghavana, R and Nagarajana, R (1989) Investigations of superconducting bismuth cuprates of the general formula Bi2-xPbx(Ca,Sr,Y)n 1 Cun O2n 4 δ. In: Phase Transitions: A Multinational Journal, 19 (4). pp. 201-211.
Rao, Gangavarapu Ranga and Rajumon, Meledathu Kurian and Sarma, Dipankar Das and Rao, Ramachandra CN (1989) Oxygen chemistry in copper oxides: evidence for O− species. In: Journal of the Chemical Society - Series Chemical Communications (20). pp. 1536-1538.
Rao, Jagannadha A and Kotagi, SG (1989) Effect of suppression of prolactin on gonadal function in immature male hamsters. In: Andrologia, 21 (5). pp. 498-501.
Rao, MR and Antony, A and Rajalakshmi, S and Sarma, DSR (1989) Studies on hypomethylation of liver DNA during early stages of chemical carcinogenesis in rat liver. In: Carcinogenesis, 10 (5). pp. 933-937.
Rao, CNR (1989) Certain Novel Aspects of Thallium and Bismuth Cuprate Superconductors. In: MRS Proceedings, 169 . p. 111.
Rao, CNR and Bhat, V and Nagarajan, R and Rao, Ranga G and Sankar, G (1989) Nature of copper in the new cuprate superconductors Pb2Sr2Ca1-xLxCu3O8+δ. In: Physical Review B: Condensed Matter, 39 (13). pp. 9621-9623.
Rao, CNR and Ganguli, AK and Nagarajan, R (1989) Superconductivity in layered nickel oxides. In: Pramana - Journal of Physics, 32 (2). L177-L179.
Rao, CNR and Ganguli, AK and Vijayaraghavan, R (1989) Superconducting (1:1:2:2)-type layered cuprates of the formula $TlCa_{1-x}L_xSr_2Cu_2O_y$ (L=Y or rare-earth element). In: Physical Review B: Condensed Matter and Materials Physics, 40 (4). pp. 2565-2567.
Rao, CNR and Raveau, B (1989) Structural aspects of high-temperature cuprate superconductors. In: Accounts of chemical research, 22 (3). pp. 106-113.
Rao, CNR and Vijayaraghavan, R and Ganapathi, L and Bhat, SV (1989) Bi2−xPbx(Ca, Sr)n+1CunO2n+4+δ (n = 1, 2, 3, and 4) family of superconductors. In: Journal of Solid State Chemistry, 79 (1). pp. 177-180.
Rao, Kameswara L (1989) Novel laser writing mechanism in void rich columnar thin films for optical storage applications. In: Optics Communications, 72 (3-4). pp. 163-168.
Rao, M and Krishnamurthy, HR and Pandit, R (1989) Hysteresis in model spin systems. In: Journal of Physics: Condensed Matter, 1 (45). pp. 9061-9066.
Rao, Narasimha K and Mohan, S (1989) Influence of substrate temperature and post-deposition heat treatment on the optical properties of SiO2 films. In: Thin Solid Films, 170 (2). pp. 179-184.
Rao, Narasimha K and Murthy, MA and Mohan, S (1989) Optical Properties of Electron-Beam-Evaporated $TiO_2$ Films. In: Thin Solid Films, 176 (2). pp. 181-186.
Rao, Rama P and Valiathan, MS (1989) Proceedings Of The Indo-Uk Symposium On Biomaterials Foreword. In: Bulletin Of Materials Science, 12 (1). p. 1.
Rao, Ranga G and Hegde, MS and Sarma, DD and Rao, CNR (1989) Evidence for holes on oxygen in some nickel oxides. In: Journal of Physics: Condensed Matter, 1 (11). pp. 2147-2150.
Rao, SM and Sridharan, A and Chandrakaran, S (1989) Influence of drying on the liquid limit behaviour of a marine clay. In: Geotechnique, 39 (4). pp. 715-719.
Rao, Sreenivasa D and Patnaik, LM (1989) Neural network based approach to standard cell placement. In: Electronics Letters, 25 (3). 208 -209.
Rao, VSR and Biswas, Margaret and Mukhopadhyay, Chaitali and Balaji, PV (1989) Computer simulation of protein—carbohydrate complexes: application to arabinose-binding protein and pea lectin. In: Journal of Molecular Structure, 194 . pp. 203-214.
Rao, VSR and Reddy, VS and Mukhopadhyay, C (1989) Computer Simulation of Protein—Carbohydrate Complexes. In: Abstracts of Papers - American Chemical Society, National Meeting, 197 . 61 -CARB.
Rao, Yelloji MK and Natarajan, KA (1989) Effect of Galvanic Interaction between Grinding Media and Minerals on Sphalerite Fotation. In: International Journal of Mineral Processing, 27 (1-2). pp. 95-109.
Rao, Yelloji MK and Natarajan, KA (1989) Electrochemical Effects of Mineral-Mineral Interactions on the Flotation of Chalcopyrite and Sphalerite. In: International Journal of Mineral Processing, 27 (3-4). pp. 279-293.
Ravichandran, KS and Dwarakadasa, ES (1989) Fatigue crack growth transitions in Ti-6Al-4V alloy. In: Scripta Metallurgica, 23 (10). pp. 1685-1690.
Ravikumar, CP and Sastry, S and Patnaik, LM (1989) Parallel circuit partitioning on a reduced array architecture. In: Computer-Aided Design, 21 (7). pp. 447-455.
Ravindranath, N and Rani, Sheela CS and Martin, F and Moudgal, NR (1989) Effect of FSH deprivation at specific times on follicular maturation in the bonnet monkey (Macaca radiata). In: Journal of Reproduction and Fertility, 87 (1). pp. 231-241.
Raychaudhuri, Arup K (1989) Origin of the plateau in the low-temperature thermal conductivity of silica. In: Physical Review B: Condensed Matter, 39 (3). pp. 1927-1931.
Reddy, KPJ (1989) Time-dependent analysis of an N2O gasdynamic laser. In: AIAA Journal, 27 (10). pp. 1387-1391.
Reddy, NK and Rao, GSK (1989) Studies In Terpenoids .89. Synthesis Of (+/-)-Marmelerin, A Tetrasubstituted Furanosequiterpene. In: Indian Journal of Chemistry - Section B: Organic and Medicinal Chemistry, 28 (5). pp. 372-375.
Reghu, M and Subramanyam, SV (1989) Tunnel transport in polypyrrole at low temperature. In: Solid State Communications, 72 (4). pp. 325-329.
Rogers, CP and Inam, A and Hegde, MS and Venkatesan, T and Wu, XD and Dutta, B (1989) Heteroepitaxial Yba2cu3o7-X-Prba2cu3o7-X-Yba2cu3o7-X Weak Links Grown By Laser Deposition. In: IEEE Transactions on Electron Devices, 36 (11). p. 2631.
Rogers, CT and Gregory, S and Venkatesan, T and Wilkens, BJ and Wu, XD and Inam, A and Dutta, B and Hegde, MS (1989) Reduction in magnetic field induced broadening of the resistive transition in laser‐deposited YBa2Cu3O7−x thin films on MgO. In: Applied Physics Letters, 54 (20). pp. 2038-2040.
Rukmani, K and Kumar, Anil (1989) Proton-proton nuclear Overhauser effect as a probe for symmetry-preserving relaxation in oriented molecules. In: Journal of Magnetic Resonance, 85 (3). pp. 448-456.
Sachdev, PL and Nair, KRC (1989) Evolution and decay of cylindrical and spherical nonlinear acoustic waves generated by a sinusoidal source. In: Journal of Fluid Mechanics, 204 . pp. 389-404.
Sagayamary, RV and Devanathan, R (1989) Steady flow of couple stress fluid through tubes of slowly varying cross-sections--application to blood flows. In: Biorheology, 26 (4). pp. 753-769.
Sangunni, KS and Bhat, HL and Narayanan, PS (1989) Effect of √-irradiation on the structure and dynamics of domains in triglycine selenate. In: Ferroelectrics Letters Section, 10 (3). pp. 87-96.
Sangunni, KS and Bhata, HL and Narayanan, PS (1989) Effect of √-irradiation on the structure and dynamics of domains in triglycine selenate. In: Ferroelectrics Letters Section, 10 (3). pp. 87-96.
Sangunni, KS and Shashikala, MN and Bhat, HL and Narayanan, PS (1989) Polarization reversal studies in ferroelectric taap. In: Ferroelectrics, 94 (1). pp. 439-440.
Sankar, G and Kulkarni, GU and Kannan, KR (1989) Cu K-absorption edge study of cuprate superconductors. In: Pramana - Journal of Physics, 32 (5). L717-L719.
Sankar, G and Kulkarni, GU and Rao, CNR (1989) EXAFS of catalytic materials. In: Progress in Crystal Growth and Characterization, 18 . pp. 67-92.
Sankara-Ramakrishnan, R and Vishveshwara, Saraswathi (1989) A hydrogen bonded chain in bactereorhodopsin by computer modeling approach. In: Journal of Biomolecular Structure & Dynamics, 7 (1). pp. 187-205.
Sarma, DD (1989) Electronic structure of high Tc cuprates by electron spectroscopies. In: NATO ASI Series, Series C: Mathematical and Physical Sciences, 276 . pp. 499-507.
Sarma, DD and Taraphder, A (1989) Electronic structure of high-Tc cuprates from core-level photoemission spectroscopy. In: Physical Review B: Condensed Matter, 39 (16). pp. 11570-11574.
Sarma, DD and Ramasesha, S and Taraphder, A (1989) Hole pairing within an extended Anderson impurity model applicable to the high-Tc cuprates. In: Physical Review B: Condensed Matter, 39 (16). 12286 -12289.
Sarma, DD and Rao, CNR (1989) Holes and Hole-Pairing in the Oxygen Band of the High-Temperature Cuprate Superconductors. In: Synthetic Metals, 33 (2). pp. 131-140.
Sasisekharan, V and Baranidharan, S and Balagurusamy, VSK and Srinivasan, A and Gopal, ESR (1989) Non-periodic tilings in 2-dimensions with 4, 6, 8, 10 and 12-fold symmetries. In: Pramana - Journal of Physics, 33 (3). pp. 405-420.
Sathyanarayana, S (1989) Storage batteries for terrestrial solar applications: selection and design. In: Journal of the Electrochemical Society of India, 38 (2). pp. 125-145.
Satyanarayana, Karuturi and Rao, MR (1989) Characterization of poly(ADP-ribosyl)ated domains of rat pachytene chromatin. In: Biochemical Journal, 261 (3). pp. 775-786.
Savithri, HS and Suryanarayana, S and Murthy, MNR (1989) Structure-function relationships of icosahedral plant. In: Archives of Virology, 109 (3-4). pp. 153-172.
Savithri, HS and Suryanarayana, S and Murthy, MRN (1989) Structure-function relationships of icosahedral plant viruses. In: Archives of Virology, 109 (3-4). pp. 153-172.
Selvaraj, Ulagaraj and Sundar, Kershava HG and Rao, KJ (1989) Characterization studies of potassium phosphotungstate glasses and a model of structural units. In: Faraday Transactions 1 (related title), 85 . 251 -267.
Sen, Diptiman (1989) Non-Abelian chiral gauge theories in two dimensions. In: Phys. Rev. D, 39 (10). pp. 3096-3100.
Sen, Diptiman (1989) Witten index of supersymmetric chiral theories. In: Physical Review D � Particles and Fields, 39 (6). pp. 1795-1797.
Senbhagaraman, S and Guru, Row TN and Umarji, AM (1989) Corrected spacegroup for M0.5Ti2P3O12 compounds. In: Solid State Communications, 71 (7). pp. 609-611.
Sengupta, DP and Sen, I (1989) A Tensor Approach to Understand Dynamic Instability in a Power System and Design a Stabilizer. In: Fourth IEEE Region 10 International Conference TENCON '89, 22-24 November, Bombay,India, pp. 786-789.
Senthil, Kumar A and Vasu, RM (1989) Multiple imaging with an aberration optimized hololens array. In: Optical Engineering, 28 (8). pp. 903-908.
Sequeira, A and Rajagopal, H and Nagarajan, R and Rao, CNR (1989) Neutron diffraction evidence for oxygen dimers in Bi-Ca-Sr-Cu-O superconductors. In: Physica C: Superconductivity, 159 (1-2). pp. 87-92.
Seshagiri, PB and Adiga, PR (1989) Comparison of biotin binding protein of pregnant rat serum with rat serum albumin. In: Journal of Biosciences, 14 (3). pp. 221-231.
Shaila, MS and Purushothaman, V and Bhavasar, D and Venugopal, K and Venkatesan, RA (1989) Peste des petits ruminants of sheep in India. In: Veterinary Record, 125 (24). p. 602.
Shamim, AA (1989) An Overview Of Isdn. In: Arabian Journal for Science and Engineering, 14 (4). pp. 551-561.
Sharada, Gullapalli and Vidya, Shivaswamy and Ramasarma, T and Ramakrishna, Kurup CK (1989) Redistribution of subcellular calcium in rat liver on administration of vanadate. In: Molecular and Cellular Biochemistry, 90 (2). pp. 155-164.
Sharma, BS and Malhotra, JK and Mitra, AK and Parthasarathy, K and Iyengar, Ramakrishna BS and Naghabhushna, GR and Chandrashekharaiah, HS and Thukaram, D and Kishore, * and Padiyar, KR and Khincha, HP (1989) Over voltage Studies for UPSEB 765 kV Anpara-Unnao line operated at 400 kV. In: Second Work Shop and Conference on EHV Technology, Aug 7-10, 1989, Bangalore.
Shashidhar, MS and Bhatt, MV (1989) Aspects of tautomerism. Part 16. Influence of the y-keto function on the reactions of sulphonic acids. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 101 (4). pp. 319-326.
Shashikala, MN and Chary, BR and Bhat, HL and Narayanan, PS (1989) Vibrational spectroscopic studies of phase transition in ferroelectric TAAP. In: Journal of Raman Spectroscopy, 20 (6). pp. 351-357.
Shastry, MCR and Rao, KJ (1989) High-Resolution Magic Angle Spinning Nmr-Studies Of P-31 And B-11 In Fast Ion Conducting Agi-Ag2o-P2o5 And Agi-Ag2o-B2o3 Glasses. In: Pramana - Journal of Physics, 32 (6). pp. 811-820.
Shastrya, MCR and Rao, KJ (1989) A chemical approach to an understanding of the fast ion conduction in silver iodide-silver oxysalt glasses. In: Solid State Ionics, 37 (1). pp. 17-29.
Shekar, B and Narasimha, Murty M and Krishna, G (1989) Structural aspects of semantic-directed clusters. In: Pattern Recognition, 22 (1). pp. 65-74.
Sheshadri, TS (1989) Acoustic field generated by unsteady droplet combustion. In: Acoustics Letters, 12 (11). pp. 194-197.
Sheshadri, TS and Jain, VK (1989) Propellant Gas Phase Chemical Kinetics. In: Propellants, Explosives, Pyrotechnics, 14 (5). pp. 193-198.
Sheshadri, TS (1989) Influence of propellant choice on MPD arcjet cathode surface current density distribution. In: Physics Letters A, 141 (3-4). pp. 169-171.
Shinde, UP and Guru, Row TN and Mawal, YR (1989) Export of proteins across membranes: The helix reversion hypothesis. In: Bioscience Reports, 9 (6). pp. 737-745.
Shukla, US and Mahapatra, PR (1989) Optimization of Biased Proportional Navigation. In: IEEE Transactions on Aerospace and Electronic Systems, 25 (1). pp. 73-79.
Shukla, Uday S and Mahapatra, Pravas R (1989) Efficient Atmospheric and Extra-Atmospheric Interception Through Optimally Biased Proportional Navigation. In: IEEE 1989 National Aerospace and Electronics Conference, 1989. NAECON 1989, 22-26 May, Dayton,OH, Vol.1, 159-166.
Shukla, Uday S and Mahapatra, Pravas R (1989) A Powerful Kinematic Model for Proportional Navigation of Guided Weapons Against Maneuvering Targets. In: IEEE National Aerospace and Electronics Conference, 1989. NAECON 1989, 22-26 May, Dayton,OH, Vol.1, 194-120.
Simon, R and Mukunda, N (1989) Universal SU(2) gadget for polarization optics. In: Physics Letters A, 138 (9). pp. 474-480.
Simon, R and Mukunda, N and Sudarshan, ECG (1989) Hamilton's theory of turns generalized to Sp(2,R). In: Physical Review Letters, 62 (12). pp. 1331-1334.
Simon, R and Mukunda, N and Sudarshan, ECG (1989) Hamilton's theory of turns and a new geometrical representation for polarization optics. In: Pramana - Journal of Physics, 32 (6). pp. 769-792.
Simon, R and Mukunda, N and Sudarshan, ECG (1989) The theory of screws: A new geometric representation for the group SU(1,1). In: Journal of Mathematical Physics, 30 (5). pp. 1000-1006.
Singh, Lakha and Tyagi, Tuhi Ram and Somayajulu, YV and Vijayakumar, PN and Dabas, RS and Loganadham, B and Ramakrishnan, S and Rao, Rama PVS and Dasgupta, A and Navneeth, G and Klobuchar, JA and Hartmann, GK (1989) A multi-station satellite radio beacon study of ionospheric variations during total solar eclipses. In: Journal of Atmospheric and Terrestrial Physics, 51 (4). pp. 271-278.
Singh, MP and Sinha, KP (1989) A possible mechanism for high-temperature superconductivity: Application to Bi and Tl compounds. In: Solid State Communications, 70 (2). pp. 149-151.
Sivaramakumar, GR and Rajgopal, K (1989) Adaptive Control with Optimal Model Following. In: IEEE International Conference on Systems Engineering, 1989, 24-26 August, Fairborn,OH, pp. 127-130.
Soman, J and Vijayan, M and Radhakrishnan, R and Rao, T (1989) X-Ray Studies On Crystalline Complexes Involving Amino-Acids And Peptides .20. Crystal-Structures Of Dl-Arginine Acetate Monohydrate And Dl-Lysine Acetate And A Comparison With The Corresponding L-Amino-Acid Complexes. In: Journal of Biomolecular Structure & Dynamics, 7 (2). pp. 269-277.
Soman, Jayashree and Vijayan, M (1989) X-ray studies on crystalline complexes involving amino acids and peptides: Part XVIII. Crystal structure of a new form of L-arginine D-glutamate and a comparative study of amino acid crystal structures containing molecules of the same and mixed chirality. In: Journal of Biosciences, 14 (2). pp. 111-125.
Sosale, Chandrasekhar and Veerapadayachi, Venkatesan (1989) Novel, possibly antiaromatic, fulvenes. In: Journal of Chemical Research, Synopses, 9 . p. 276.
Soumyadeb, Ghosha and Kalpagam, K (1989) Solvent effect on the electronic absorption spectra of polyaniline. In: Synthetic Metals, 33 (1). pp. 11-17.
Sourendu, Gupta and Murthy, MVN and Pasupathy, J (1989) Isoscalar axial-vector renormalization constant and polarized proton structure function. In: Physical Review D – Particles and Fields, 39 (9). 2547 -2549.
Sowdhamini, R and Srinivasan, N and Shoichet, Brian and Santi, Daniel V and Ramakrishnan, C and Balaram, P (1989) Stereochemical modeling of disulfide bridges. Criteria for introduction into proteins by site-directed mutagenesis. In: Protein Engineering, 3 (2). pp. 95-103.
Sreenivasa, Kumar P and Veni, Madhavan (1989) A new class of separators and planarity of chordal graphs. [Book Chapter]
Sreenivasulu, M and Rao, Krishna GS (1989) One-pot synthesis of 2-chloronicotinonitriles and fused 2-chloro-3-cyanopyridines. In: Journal of Chemistry, Section B: Organic Chemistry Including Medicinal Chemistry, 28B (7). pp. 584-586.
Sreenivasulu, M and Rao, Krishna GS (1989) Vilsmeier reaction studies on some $\alpha$ .$\beta$ -unsaturated alkenones. In: Indian Journal of Chemistry, Section B: Organic Chemistry Including Medicinal Chemistry, 28B (6). pp. 494-495.
Sreerama, N and Saraswathi, Vishveshwara (1989) Ab initio studies on proton transfer involving Schiff base and related nitrogen compounds. In: Journal of Molecular Structure, 212 . pp. 53-60.
Sreerama, N and Vishveshwara, Saraswathi (1989) An ab-initio study of the proton affinity of conjugated schiff-base and related nitrogen compounds: an analysis of the triggering site of bacteriorhodopsin. In: Journal of Molecular Structure, 194 . pp. 61-72.
Sridhar, S and Srinivasan, Malur N and Seshadri, MR (1989) Thermal behavior of cores during solidification of castings. In: Regional Journal of Energy, Heat and Mass Transfer, 11 (1). pp. 51-55.
Sridhar, S and Nithyananda, R (1989) Undamped oscillations of homogeneous collisionless stellar systems. In: Monthly Notices of the Royal Astronomical Society, 238 (3). pp. 1159-1163.
Sridhar, S and Nityananda, R (1989) Undamped Oscillations of Collisionless Stellar Systems: Spheres,Spheroids and Discs. In: Journal of Astrophysics & Astronomy, 10 (3). pp. 279-293.
Sridhar, V and Murty, Narasimha M and Krishna, G (1989) A Logical Model for Decision-Making. In: IEEE International Conference on Systems, Man and Cybernetics, 1989, 14-17 November, Cambridge,MA, Vol.3, 1142-1147.
Srikanth, S and Jacob, KT (1989) Thermodynamic properties of copper-nickel alloys: measurements and assessment. In: Materials Science and Technology, 5 (5). pp. 427-434.
Srikanth, S and Jacob, KT (1989) Discussion of thermodynamic consistency of the interaction parameter formalism. In: Metallurgical and Materials Transactions B, 20B (3). pp. 434-437.
Srikanth, S and Jacob, KT (1989) Extension of Darken's Quadratic Formalism to Dilute Multicomponent Solutions. In: ISIJ International, 29 (2). pp. 171-174.
Srikanth, S and Jacob, KT (1989) A modified least squares algorithm to treat associations in liquid alloys. In: Calphad, 13 (2). pp. 149-158.
Srikanth, S and Jacob, KT and Abraham, KP (1989) Representation of Thermodynamic Properties of Dilute Multi-Component Solutions--Options and Constraints. In: Steel Research, 60 (1). pp. 6-11.
Srikantha, S and Jacob, KT (1989) Activities in liquid Ga-Te alloys at 1120 k. In: Thermochimica Acta, 153 . pp. 27-35.
Srikrishna, A and Hemamalini, Parthasarathy (1989) Bridged Systems Via Radical Cyclizations - Synthesis Of Chiral Bicyclo-[3.3.1]Nonanes By Sequential Inter-Molecular And Intra-Molecular Radical Michael Addition. In: Perkin Transactions1 (12). pp. 2511-2513.
Srikrishna, A and Krishnan, K (1989) Stereospecific Synthesis of Thaps-7(15)-ene and thaps-6-ene, Probable Biogenetic Precursors of Thapsanes. In: Tetrahedron Letters, 30 (47). pp. 6577-6580.
Srikrishna, A and Krishnan, K (1989) Synthesis of (.+-.)-marmelo oxides by a radical cyclization reaction. In: Journal of Organic Chemistry, 54 (16). 3981 -3983.
Srikrishna, A and Sharma, Veera Raghava G (1989) A simple, one-pot, regiospecific 1,3-dicarboxybenzannulation of active acyl systems. In: Tetrahedron Letters, 30 (27). pp. 3579-3580.
Srikrishna, A and Sunderbabu, G (1989) Total synthesis of (±)-laurene and epilaurene by radical cyclisation reaction. In: Tetrahedron Letters, 30 (27). pp. 3561-3562.
Srinivas, MV and Alva, P and Biswas, SK (1989) Slip line field analysis of a simple plane strain closed‐die forging. In: Proceedings of the Institution of Mechanical Engineers - Part B: Journal of Engineering Manufacture, 203 (B2). pp. 91-99.
Srinivasan, K (1989) Diffusion of krypton in liquid argon at infinite dilution. In: Cryogenics, 29 (9). pp. 935-936.
Srinivasan, K (1989) Saturated liquid densities of cryogenic liquids and refrigerants. In: International Journal of Refrigeration, 12 (4). pp. 194-197.
Sriram, Ramaswamy (1989) Dislocations and grain boundaries in quasicrystals. In: Phase Transitions: A Multinational Journal, 16 . pp. 575-588.
Subbanna, GN and Rao, CNR (1989) Transformation of the metastable cubic form of ZrO2 to the monoclinic form in xerogels and metal-ZrO2 composites: a kinetic study. In: European Journal of Solid State and Inorganic Chemistry, 26 (1). pp. 7-14.
Subramanian, S and Lahiri, AK and Jain, AKNK (1989) Electroslag remelting studies on the simultaneous desulfurization and dephosphorization of En24 steel using sodium carbonate. In: Transactions of the Indian Institute of Metals, 42 (2). pp. 121-125.
Subramanian, S and Natarajan, KA and Sathyanarayana, DN (1989) FTIR spectroscopic studies on the adsorption of an oxidized starch on some oxide minerals. In: Minerals & Metallurgical Processing, 6 (3). pp. 152-158.
Subramanian, C and Subramanian, DK (1989) Performance Analysis of Voting Strategies for a Fly-by-Wire System of a Fighter Aircraft. In: IEEE Transactions on Automatic Control, 34 (9). pp. 1018-1021.
Subramanian, N and Raman, KV (1989) NQR study of complexes between p-dichlorobenzene and some aromatic π acceptors. In: Journal of Molecular Structure, 197 . pp. 53-57.
Subramanian, S and Natarajan, KA (1989) Adsorption behaviour of an oxidised starch onto haematite in the presence of calcium. In: Minerals Engineering, 2 (1). pp. 55-64.
Sudalai, A and Rao, GSK (1989) Studies In Terpenoids .83. A New Synthesis Of 4-(2,3,6-Trimethylphenyl)Butan-2-Ol, A C-13-Norisoprenoid Artifact From Vitis-Vinifera Linn And Its Conversion To Several Terpenic Natural-Products Of Plant And Marine Originva. In: Indian Journal of Chemistry - Section B: Organic and Medicinal Chemistry, 28 (2). pp. 110-112.
Sudalai, A and Rao, GSK (1989) Studies In Terpenoids .84. Synthesis Of 2-(2-Acetoxisopropyl)-5,8-Dimethyl-1,4-Dihydronaphthalene (Acetyldehydrorishitinol), A Sesquiterpenic Stress Metabolite From Potatoes. In: Indian Journal of Chemistry - Section B: Organic and Medicinal Chemistry, 28 (2). pp. 113-115.
Sudalai, A and Rao, GSK (1989) Studies In Terpenoids .85. Synthesis Of 1,1-Dimethyl-7-Isobutyryltetralin, And Its Conversion Into 7-Tert-Butyl-1,1-Dimethyltetralin, A Rearrangement Product Of 10-Methylenelongibornane. In: Indian Journal of Chemistry - Section B: Organic and Medicinal Chemistry, 28 (5). pp. 369-371.
Sudalai, A and Rao, GSK (1989) Studies In Terpenoids .87. Synthesis Of 2-Hydroxy-3,6,10-Trimethylbenzocyclooct-5-Ene (Isoparvifolin), A Sesquiterpenic Benzocyclooctene. In: Indian Journal of Chemistry - Section B: Organic and Medicinal Chemistry, 28 (3). 219 -222.
Sudalai, A and Rao, GSK (1989) Synthesis Of 2 Meroterpenic Acetophenones Occurring In Senecio And Helianthella Species. In: Indian Journal of Chemistry - Section B: Organic and Medicinal Chemistry, 28 (9). pp. 760-761.
Sudalai, A and Rao, GSK (1989) Synthesis Of Some Hydroxymethylbromo-Phenols Of Marine Origin. In: Indian Journal of Chemistry - Section B: Organic and Medicinal Chemistry, 28 (10). pp. 858-859.
Sudalai, A and Rao, Krishna GS (1989) Studies in terpenoids. LXXXVII. Synthesis of 1,1-dimethyl-7-isobutyryltetralin, and its conversion into 7-tert-butyl-1,1-dimethyltetralin, a rearrangement product of 10-methylenelongibornane. In: Journal of Chemistry, Section B: Organic Chemistry Including Medicinal Chemistry, 28B (5). pp. 369-371.
Sudalai, A and Rao, Krishna GS (1989) Studies on terpenoids. LXXXV. Synthesis of two monoterpenic hydroquinone dimethyl ethers occurring in Calea pilosa Baker. In: Journal of Chemistry, Section B: Organic Chemistry, 28 (6). 520-521 .
Sudhakar, Reddy B (1989) Energetics of firewood plantations. In: Energy Conversion and Management, 29 (3). pp. 199-206.
Sudhakara, Pamidighantam V and Chandrasekhar, Jayaraman (1989) Geometry changes induced by negative hyperconjugative interactions involving carbonyl and thiocarbonyl groups. In: Journal of Molecular Structure, 194 . pp. 135-147.
Sujatha, Devi P and Subba, Rao M (1989) Rare-earth chromium citrates as precursors for rare-earth chromites: lanthanum biscitrato chromium(III) dihydrate, La[Cr(C6H5O7)2]·2H. In: Thermochimica Acta, 153 (1). pp. 181-191.
Sukumar, R (1989) Ecology of the Asian Elephant in Southern India. I. Movement and Habitat Utilization Patterns. In: Journal of Tropical Ecology, 5 (1). pp. 1-18.
Sundar, Manoharan S and Patil, KC (1989) Synthesis, characterisation and thermal analysis of copper (II) and chromium (II, III) hydrazine carboxylates. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 101 (5). pp. 377-381.
Suresh, K and Mahesh, GV and Patil, KC (1989) Preparation Of Cobalt Doped Gamma-Fe2o3 And Mn-Zn Ferrites By The Thermal-Decomposition Of The Hydrazine Precursors. In: Journal of Thermal Analysis, 35 (4). pp. 1137-1143.
Suryanarayana, S and Rao, Appaji N and Murthy, MRN and Savithri, HS (1989) Primary structure of belladonna mottle virus coat protein. In: Journal of Biological Chemistry, 264 (11). pp. 6273-6279.
Swaminathan, K and Sinha, UC and Ramakumar, S and Bhatt, RK and Sabata, BK (1989) Structure of columbin, a diterpenoid furanolactone from Tinospora cordifolia Miers. In: Acta Crystallographica Section C, 45 . 300 -303.
Swaminathan, S and Jacob, KT (1989) A Comment on the Nomenclature of Invariant Three-Phase Equilibria. In: Bulletin of Alloy Phase Diagrams, 10 (4). pp. 329-331.
Swamy, KN and Sarma, IG (1989) Performance Of APN Guidance Law In A Two Dimensional Aerial Engagement Scenario. In: IEEE 1989 National Aerospace and Electronics Conference, 1989. NAECON 1989, 22-26 May, Dayton,OH, Vol.1, 209-215.
Swamy, Musti Joginadha and Surolia, Avadhesha (1989) Studies on the tryptophan residues of soybean agglutinin. Involvement in saccharide binding. In: Bioscience Reports, 9 (2). pp. 189-198.
Swamy, VT and Ranganathan, S and Chattopadhyay, K (1989) On the resolidification behavior of microsecond pulsed-laser-melted pool of bismuth. In: Journal of Crystal Growth, 96 (3). pp. 628-636.
Swamy, VT and Ranganathan, S and Chattopadhyay, K (1989) Rapidly solidified Al–Cr alloys: Crystalline and quasicrystalline phases. In: Journal of Materials Research, 4 (3). pp. 539-551.
Tendeloo, Van G and Menon, J and Singh, A and Ranganathan, S (1989) Electron diffraction studies of variable periodicity in decagonal quasicrystals in aluminium-cobalt alloys. In: Phase Transitions, 16 (1-4). pp. 59-65.
Thomas, JM (1989) A numerical survey of relativistic rotating neutron star structures using the Hartle-Thorne formalism. In: Astronomy & Astrophysics Supplement Series, 79 (2). pp. 189-203.
Tilak, BV and Venkatesh, S and Rangarajan, SK (1989) Polarization Characteristics of Porous Electrode Systems with Adsorbed Intermediates Participating in the Electrode Reaction. In: Journal of the Electrochemical Society, 136 (7). pp. 1977-1982.
Tiwari, RS and Claus, JC and Ranganathan, S (1989) Kinetics of two-stage crystallization in Metglas 2826A. In: Journal of Materials Science, 24 (12). pp. 4399-4402.
Toniolo, C and Crisma, M and Valle, G and Bonora, GM and Polinelli, S and Becker, EL and Freer, RJ (1989) Conformationally restricted formyl methionyl tripeptide chemoattractants: a three-dimensional structure-activity study of analogs incorporating a C alpha,alpha-dialkylated glycine at position 2. In: Pept Res, 2 (4). pp. 275-281.
Ueno, S and Jacob, KT and Waseda, Y (1989) Activities in liquid Na K and Al Mg alloys estimated using the pseudopotential theory of metals. In: High Temperature Materials and Processes, 8 (2). 89- 94.
Uma, K and Balaram, P (1989) Peptide Design - An Analysis Of Studies Using Alpha-Aminoisobutyric-Acid (Aib) And Z-Alpha,Beta Dehydrophenylalanine (Delta-Z-Phe). In: Indian Journal of Chemistry - Section B: Organic and Medicinal Chemistry, 28 (9). pp. 705-710.
Uma, K and Balaram, P and Kaur, Paramjeet and Kumar Sharma, Ashwani and Chauhan, VS (1989) Conformations of peptides containing Z-alpha,beta-dehydroleucine (delta ZLeu). A comparison of Boc-Pro-delta ZLeu-Gly-NHEt and Boc-Pro-delta ZPhe-Gly-NHEt. In: International Journal of Biological Macromolecules, 11 (3). pp. 169-171.
Vaishnav, YN and Antony, A (1989) Antibodies raised against denatured DNA bind to double-stranded DNA. In: Journal of Immunological Methods, 118 (1). pp. 25-30.
Varma, KBR and Rao, KJ and Rao, CNR (1989) Novel features of rapidly quenched melts of Bi2(Ca,Sr) 3Cu2O8+δ. In: Applied Physics Letters, 54 (1). pp. 69-71.
Varma, KBR and Raychaudhuri, AK (1989) Pyroelectric and dielectric properties of potassium hydrogen phthalate single crystals. In: Journal of Physics D: Applied Physics, 22 (6). pp. 809-811.
Varma, KBR and Subbanna, GN and Ramakrishnan, TV and Rao, CNR (1989) Dielectric properties of glasses prepared by quenching melts of superconducting Bi‐Ca‐Sr‐Cu‐O cuprates. In: Applied Physics Letters, 55 (1). pp. 75-77.
Varma, KBR and Subbanna, GN and Rao, CNR and Ramakrishnan, TV (1989) Novel dielectric properties of glasses prepared by quenching melts of Bisingle bondCasingle bondSrsingle bondCusingle bondO cuprates. In: Physica C: Superconductivity, 162-16 (PART 2). pp. 891-892.
Varma, Vijay and Fernandes, JR and Rao, CNR (1989) Raman and Infrared Spectroscopic Studies of the Low- and High-Temperature Forms of Octahalo Cyclic Phosphazene Tetramers, $P_4N_4Cl_8$ and $P_4N_4F_8$. In: Journal of Molecular Structure, 198 . pp. 403-412.
Vasantha, R and Nath, G (1989) Semi-similar solutions of the unsteady compressible second-order boundary layer flow at the stagnation point. In: International Journal of Heat and Mass Transfer, 32 (3). pp. 435-444.
Vasanthacharya, NY and Raychaudhuri, AK and Ganguly, P and Rao, CNR (1989) Evolution of magnetic order in the system LaNi1-xMnxO3 (x < 0.10). In: Journal of Magnetism and Magnetic Materials, 81 (1-2). pp. 133-137.
Vasudevan, P and Mann, Neelam and Santosh, * and Kannan, AM and Shukla, AK (1989) Electroreduction of oxygen on some novel cobalt phthalocyanine complexes. In: Journal of Power Sources, 28 (3). pp. 317-320.
Vathsala, * and Natarajan, KA (1989) Some electrochemical aspects of grinding media corrosion and sphalerite flotation. In: International Journal of Mineral Processing, 26 (3-4). pp. 193-203.
Venkatesan, T and England, P and Miceli, PF and Chase, EW and Chang, CC and Wilkens, B and Tarascon, JM and Wu, XD and Inam, A and Dutta, B and Hegde, MS and Wachtman, JB (1989) As-deposited near-single crystalline high Tc and Jc superconducting thin films by a pulsed lased deposition process. In: Solid State Ionics, 32-33 (Part 2). pp. 1043-1050.
Venkatesan, T and Wu, Xindi and Inam, Arun and Chang, Chuan C and Dutta, Banrundeb and Hegde, Manjanain S (1989) Laser processing of high-Tc superconducting thin films. In: IEEE Journal of Quantum Electronics, 25 (11). pp. 2388-2393.
Venkateswara, R and Narendra, N and Viswamitra, MA and Vaidyanathan, CS (1989) Cryptosin, a cardenolide from the leaves of Cryptolepis buchanani. In: Phytochemistry, 28 (4). pp. 1203-1205.
Venkateswara, Rao R and Dasgupta, Dipak and Vaidyanathan, CS (1989) Effect of cryptosin on Na+,K+-ATPase—A 31P NMR spectroscopic study. In: Biochemical Pharmacology, 38 (12). pp. 2039-2041.
Vijayadamodar, GV and Chandra, A and Bagchi, B (1989) Effects of Translational Diffusion on Dielectric Friction in a Dipolar Liquid. In: Chemical Physics Letters, 161 (4-5). pp. 413-419.
Vijayalakshmi, D and Rao, NA (1989) Identification of amino acid residues at the active site of human liver serine hydroxymethyltransferase. In: Biochemistry International, 19 (3). pp. 625-632.
Vijayamohanan, K (1989) Premature Failure In Ni-Cd Batteries - Reply. In: Current Science (Bangalore), 58 (17). pp. 944-945.
Vijayamohanan, K and Sathyanarayana, S and Joshi, SN (1989) A new lead/lead sulphate reference electrode for lead/acid battery research. In: Journal of Power Sources, 27 (2). pp. 167-176.
Vijayaraghavan, R and Ganguli, AK and Vasanthacharya, NY and Rajumon, MK and Kulkarni, GU and Sankar, G and Sarma, DD and Sood, AK and Chandrabhas, N and Rao, CNR (1989) Investigation of novel cuprates of the TlCa1-xLnxSr2Cu2O7- δ (Ln=rare earth) series showing electron- or hole-superconductivity depending on the composition. In: Superconductor Science and Technology, 2 (3). pp. 195-201.
Vijayaraghavan, R and Ram, Mohan RA and Rao, CNR (1989) A series of metallic oxides of the formula La3LnBaCu5O13+δ (Ln = rare earth or Y). In: Journal of Solid State Chemistry, 78 (2). pp. 316-318.
Vitta, S (1989) Structure and electron transport anomalies in InSe-In6Se 7 composite. In: Journal of Applied Physics, 66 (12). pp. 5885-5889.
Vivekanandan, R and Kutty, TRN (1989) Characterization of barium titanate fine powders formed from hydrothermal crystallization. In: Powder Technology, 57 (3). pp. 181-192.
Wang, Wu and Sitharama, Iyengar S and Patnaik, LM (1989) Memory-based reasoning approach for pattern recognition of binary images. In: Pattern Recognition, 22 (5). pp. 505-518.
Waseda, Y and Ueno, S and Jacob, KT (1989) Theoretical treatment of vapour pressures for liquid metals. In: Journal of Materials Science Letters, 8 (7). 857 -861.
Wenzel, Wolfgang and Jayaprakash, C and Pandit, Rahul and Ebner, C (1989) Critical micelle concentration from a lattice gas model. In: Journal of Physics: Condensed Matter, 1 (26). pp. 4245-4250.
Wolfe, A and Uberoi, C and Russell, CT and Lanzerotti, LJ and MacLennan, CG and Medford, LV (1989) Penetration of hydromagnetic energy deep into the magnetosphere. In: Planetary and Space Science, 37 (11). pp. 1317-1325.
Wu, XD and Hegde, MS and Xi, XX and Li, Q and Inam, A and Schwarz, SA and Martinez, JA and Wilkens, BJ and Barner, JB and Chang, CC and Nazar, L and Rogers, CT and Venkatesan, T (1989) Fabrication of Yy‐Pr1‐y ‐Ba‐Cu‐O Thin Films and Superlattices of ‐Ba‐Cu‐O/Yy‐Pr1‐y ‐Ba‐Cu‐O. In: MRS Proceedings, 169 .
Yoganandam, Y and Reddy, Umapathi V and Uma, C (1989) Modified linear prediction method for directions of arrival estimation of narrow-band plane waves. In: IEEE Transactions on Antennas and Propagation, 37 (4). 480 -488.
Yogesh, GP and Raghunandan, BN (1989) Flow structure and heat transfer characteristics behind a diaphragm in the presence of a diffusion flame. In: International Journal of Heat and Mass Transfer, 32 (1). 19 -28.
Yousuf, M and Kumar, A (1989) Isobaric electrical resistance along the critical line in nickel: An experimental test of universality. In: Physical Review B: Condensed Matter, 39 (10). pp. 7288-7291. | CommonCrawl |
Is there a maximal universe of sets?
When I asked a question on this site about the unprovability of the Continuum Hypothesis, many people explained to me that for a given set of axioms there are many different models satisfying that set of axioms, a concept that I am now very comfortable with.
Hence, there exists a model for ZFC set theory that satisfies CH, and one that doesn't. (Correct me if I'm wrong about anything)
However, for practical purposes, it seems like there is a model of set theory that is used almost universally, at least in branches of mathematics other than logic. It also seems like this model is "maximal" in the sense that if the existence of a set $A$ doesn't violate any of the axioms of ZFC, then $A$ exists in the universe of sets.
First off, am I even remotely correct in thinking this? Is there such a model? And if so, is it maximal in any sense?
If I am correct, then what's the big deal about the unprovability of the Continuum Hypothesis? It seems true of a lot of things. From reading the axioms, I don't see any reason why a set with the cardinality of the real numbers has to exist. And why be so squirrelly when talking about sets with cardinality between that of the reals and the natural numbers? If they exist within our maximal universe of sets, why not use them?
If I'm not correct, then does a maximal universe even exist? Or is there a universe of sets, $U$, that satisfies ZFC and such that all other models of ZFC are subsets of $U$?
If so, why wouldn't we use it? And my earlier questions still apply.
If not, why not? Is there a proof? And what model do we actually use, and why?
Sorry if this isn't a very well-asked question. It's kind of rambling and contains a lot of sub-questions, but I'm not sure how else to phrase this while still asking what I want to.
logic set-theory axioms provability
RothX
$\begingroup$ "From reading the axioms, I don't see any reason why a set with the cardinality of the real numbers has to exist." Did you really intend to write this? $\mathbb{R}$ itself is a set ... $\endgroup$ – Noah Schweber Sep 15 '18 at 19:33
$\begingroup$ @NoahSchweber Yes, of course it's a set. My point is that I don't think you can prove $\mathbb{R}$ exists just from the ZFC axioms. Hence, some models don't include it. $\endgroup$ – RothX Sep 15 '18 at 19:38
$\begingroup$ I'm not sure what you mean - ZFC definitely proves that $\mathbb{R}$ exists (incidentally, it might be simpler to think of $\mathcal{P}(\omega)$ instead of $\mathbb{R}$). Of course, different models might disagree about what it is ... $\endgroup$ – Noah Schweber Sep 15 '18 at 19:39
EDITED in response to comments:
I think the issue you're facing is that there is actually no universal agreement on what the "mathematical universe" looks like, or whether it even exists, or in what sense.
This is one of the things a "foundational theory" like ZFC is for: it frees us from the constraint of having to commit to a philosophical perspective. An ultrafinitist, a formalist, a Platonist, and a multiversist (see below) can all agree with the statement "ZFC proves that the continuum does not have cofinality $\omega$" (for example). In my opinion, ZFC exists precisely because of the problems inherent to your question.
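As a concrete instance of that example claim, here is a minimal sketch, assuming only the standard ZFC (with choice) facts about cardinal exponentiation and cofinality, of how König's theorem gives it:

$$\text{König's theorem: } \kappa^{\operatorname{cf}(\kappa)} > \kappa \text{ for every infinite cardinal } \kappa.$$

$$\text{If } \operatorname{cf}(2^{\aleph_0}) = \aleph_0 \text{ then } (2^{\aleph_0})^{\aleph_0} > 2^{\aleph_0}, \text{ yet } (2^{\aleph_0})^{\aleph_0} = 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0}; \text{ so } \operatorname{cf}(2^{\aleph_0}) > \aleph_0.$$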
Specifically, your question isolates the following philosophical points:
1. Is there in fact a model of ZFC used by mathematicians, or at least the majority of mathematicians ("the model of set theory used almost universally")?
2. If the answer is yes, then do we have some mechanism for figuring things out about it, which goes beyond just using the ZFC axioms (and so in particular could decide CH) ("If they exist within our maximal universe of sets, why not use them?")?
Both of these questions are extremely controversial (and in fact I would answer "no" to the first right off the bat). In fact, consider the following weak claim:
Most mathematicians work in ZFC.
Even that is, in my opinion, of dubious truth - if you pick a mathematician at random and ask them to state the ZFC axioms without looking them up, with high probability they won't be able to. The ZFC axioms simply don't play a role in most mathematicians' activities. So in what sense can the point above even be claimed?
(Well, a Platonist commitment can get around it - "You're living in a model of ZFC even though you don't know it" - but that just takes us back to points $1$ and $2$ above.)
And this takes us to the strange paradox of philosophy in practice: that mathematicians are on the one hand empirically inconsistent (not just amongst each other, but even with ourselves) about these philosophical issues, and on the other hand nonetheless able to do mathematics (and even sometimes benefit from this "flexibility"). I have many opinions on this, but the two relevant points I want to make - at the risk of repeating myself a bit - are:
This "flexibility" poses a fundamental obstacle to giving a clear, consistent description of mathematical practice, especially if you want to avoid formalism; and in particular makes the role of anyone looking for a "model used in practice" almost impossible.
One of the important things ZFC does is to provide us with a "bulwark" against incomprehensibility: at the end of the day, I can always walk back a Platonist claim ("$2^{\aleph_0}$ has uncountable cofinality") to a formalist one based on a specific formal theory $T$ ("the specific theory $T$ proves that $2^{\aleph_0}$ has uncountable cofinality"), and by making the sociological agreement to treat ZFC as our "default $T$" we ensure that this fallback does not lead to "mathematical disintegration."
One question looming in the background, then, is: why ZFC? This is really twofold: on what basis do we justify the ZFC axioms (as true, or useful, or used, or ...), and why can't we justify anything more - or can we?
Both parts get us into interesting territory. I recommend the following articles:
Akihiro Kanamori, In praise of replacement.
Penelope Maddy, Believing the axioms (part I and part II).
Solomon Feferman, Does mathematics need new axioms?
Incidentally we can go even deeper, and question logic itself:
Gregory Moore, The emergence of first-order logic.
Jose Ferreiros, The road to modern logic - an interpretation.
To clarify, I am not claiming agreement with them, but I do think they're good sources.
Noah Schweber
Right, I think what I meant by "the model of set theory used almost universally" is the model of set theory used in fields of math other than logic and set theory, which I feel is usually the same model, but I could be wrong. – RothX Sep 15 '18 at 19:41
@RothX You're making two assumptions - that such a model exists (which is easy to justify via Platonism, if you're willing to adopt that philosophy, but which pretty much any other philosophical perspective would at least demand justification for and in my opinion probably oppose strongly) and that we can tell what it looks like. Even assuming that there is an "actual model" used by mathematicians - in fact, a single one we all share - what mechanism do you propose we use to figure out if CH is "truly true?" – Noah Schweber Sep 15 '18 at 19:42
Right, well my problem is that I wasn't sure if such a model existed or if we could tell what it looked like, hence my question. Also, I'm fully on board with the unprovability of CH. I was wondering whether it was true or not in our "standard model" of ZFC, if such a model exists. – RothX Sep 15 '18 at 19:45
@RothX It does make sense to ask whether a set of axioms is true of a specific structure - if you're claiming that there is a "set-theoretic universe" used by mathematicians, how do you know ZFC is true of that universe? Why couldn't it be the case that the "set-theoretic universe" mathematicians use is actually a model of NF, instead? – Noah Schweber Sep 15 '18 at 19:50
@RothX I've edited my answer; I think this addresses (and represents) your question better, but if you prefer the old version feel free to rollback (or let me know how I could improve this one). – Noah Schweber Sep 15 '18 at 20:25
It's true that there is a ZFC model that satisfies CH and one that doesn't.
ZFC is not maximal in your sense. In fact, there can't be a theory of sets that is maximal in your sense, because of Gödel's incompleteness theorem.
In ZFC, there is a set with the cardinality of the reals; although it may not be obvious from the axioms, you can "construct" it from the axioms. That is, you can prove its existence by actually building it. The reals exist in every ZFC model you can think of. The same can't be done with CH; hence the big deal about its unprovability.
The model we use is the "standard interpretation" of ZFC, for most things. This (kinda) means that if you can build it in ZFC, then it "exists". I am not aware of any consensus on incorporating CH or its negation into the standard model.
Ryunaq
$\begingroup$ To point 2, I never asked anything about ZFC being maximal. I only asked about whether, given ZFC, there was a maximal model of it. $\endgroup$ – RothX Sep 15 '18 at 19:51
$\begingroup$ As for point 3, could there be a model of ZFC without any set that has the cardinality of the reals? Or does such a set exist in every model? $\endgroup$ – RothX Sep 15 '18 at 19:51
$\begingroup$ @RothX It depends what you mean - by "the reals," do you mean the "true" reals? In that case, consider countable models of ZFC. However, each model has a set it thinks is the reals, and so in the "internal" sense every model has a set of size $\mathbb{R}$ - namely, (the thing it thinks is) $\mathbb{R}$. $\endgroup$ – Noah Schweber Sep 15 '18 at 19:59
$\begingroup$ @RothX Incidentally, the particular case of LS for set theory has a special name: Skolem's paradox. Ironically, within logic the LS theorem has become viewed as a generally positive feature of ZFC, even though (IIRC) Skolem originally used it as an argument against FOL! $\endgroup$ – Noah Schweber Sep 15 '18 at 20:04
$\begingroup$ @Ryunaq I disagree - I think that "maximal" is meant in the sense of "containing all the sets" (which of course begs the question). But this could indeed be clarified. And your analysis of that (and ZFC*) is right. $\endgroup$ – Noah Schweber Sep 15 '18 at 22:56
I don't have anything to contribute on the philosophical questions here; instead, I'll look at this:
It also seems like this model is "maximal" in the sense that if the existence of a set $A$ doesn't violate any of the axioms of ZFC, then $A$ exists in the universe of sets.
Unfortunately, this idea doesn't work; we can see why with the Continuum Hypothesis.
In a model where CH is true, we have by definition a bijection $\phi_1 : 2^{\aleph_0} \leftrightarrow \aleph_1$. That's what it means for the two cardinals to be equinumerous. Also, remember that everything (including functions) is a set. In particular, $\phi_1$ is just a set of ordered pairs: it's a subset of $2^{\aleph_0}\times\aleph_1$ demonstrating a 1-to-1 correspondence between the two. In other models, ones where CH doesn't hold, $2^{\aleph_0}$ will be equal to some other aleph number.* E.g., there's a model where we have a bijection $\phi_2 : 2^{\aleph_0} \leftrightarrow \aleph_2$.
See the problem? $\phi_1$ is a set that exists in some models, and $\phi_2$ is a set that exists in others, but they can't both exist in the same model. Otherwise, we'd just compose the two to get a bijection proving $\aleph_1 = \aleph_2$, which is provably false (and therefore false in every model). So there can't be a "maximal model", because despite each set being valid in some model (and so each on their own is consistent with ZFC), no model can contain both.
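To spell that composition out: if $\phi_1 : 2^{\aleph_0} \leftrightarrow \aleph_1$ and $\phi_2 : 2^{\aleph_0} \leftrightarrow \aleph_2$ existed in the same model, then $$\phi_2 \circ \phi_1^{-1} : \aleph_1 \leftrightarrow \aleph_2$$ would be a bijection in that model, witnessing $\aleph_1 = \aleph_2$ - which is provably false.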
This is a case of a more general phenomenon: models can really screw with your intuition of what it means for a statement to be "true". My favorite example is the fact that the set of von-Neumann naturals $\omega = \{0,1,2,\ldots\}$ (where each ordinal is the set of those less than it) doesn't model ZFC (e.g., it doesn't satisfy the axiom of pairing), but it DOES satisfy the powerset axiom! This is despite the fact that the "real" powerset of $2 \in \omega$ is $\mathscr{P}(2) = \mathscr{P}(\{0,1\}) = \{\{\},\{0\},\{1\},\{0,1\}\} \notin \omega$. See the linked article for an explanation of how this possibly makes sense. Basically, inside the model, $\mathscr{P}(2) = \{\{\},\{0\},\{0,1\}\} = 3$. While in "reality", $\{1\} \subset \{0,1\}$, inside the model the set $\{1\}$ doesn't even exist (and so in particular asking whether it's a subset of anything isn't even a well-formed question).
The bottom line is that the perspective inside a model doesn't necessarily match the perspective inside another model or from the "outside". This is because statements like "$X$ and $Y$ are equinumerous" or "$X$ is the powerset of $Y$" are actually statements not just about $X$ and $Y$, but about other sets in the universe (namely, a bijection $\phi$ between $X$ and $Y$, and subsets $S$ of $X$), and those other sets may or may not exist in the model alongside $X$ and $Y$.
* Actually, showing that every cardinal number is an aleph number requires the axiom of choice. But that doesn't affect the argument, because every model of ZFC is also a model of ZF; as such, the same two contradictory models still exist without Choice.
greatBigDot
The Hamming distance between two strings of equal length, d_H(u, v), is defined as the number of positions at which the corresponding symbols are different. The symbols may be letters, bits, or decimal digits. The metric is named after Richard Hamming, an American mathematician who lived from 1915 to 1998 and whose tenure at Bell Laboratories was illustrious. In a more general context, the Hamming distance is one of several string metrics for measuring the edit distance between two sequences; however, for comparing strings of different lengths, or strings where not just substitutions but also insertions or deletions have to be expected, a more sophisticated metric like the Levenshtein distance is more appropriate.
For binary data strings, the Hamming distance equals the Hamming weight (the number of nonzero bits) of the bitwise exclusive or of the two inputs. For example, 11011001 ⊕ 10011101 = 01000100; since this result contains two 1s, d(11011001, 10011101) = 2. Likewise, the integers 1 (0001) and 4 (0100) have Hamming distance 2, and the strings "permanent" and "pernament" differ at positions 4 and 6, so their Hamming distance is 2. The Hamming weight of the XOR result can be counted with the algorithm of Wegner (1960), which repeatedly finds and clears the lowest-order nonzero bit, and some compilers such as GCC and Clang expose specialized processor hardware for this purpose via the intrinsic function __builtin_popcount. In Python, the Hamming distance between two equal-length sequences (it is undefined for sequences of unequal length) can be computed by summing the mismatches of the pairs produced by the zip() function, which merges two equal-length collections in pairs.
For a fixed length n, the Hamming distance is a metric on the set of words of length n (also known as a Hamming space): it is non-negative and symmetric, it is zero if and only if the two words are identical, and it satisfies the triangle inequality. Indeed, if we fix three words a, b and c, then whenever there is a difference between the i-th letter of a and the i-th letter of c, there must be a difference between the i-th letter of a and the i-th letter of b, or between the i-th letter of b and the i-th letter of c; hence the Hamming distance between a and c is not larger than the sum of the Hamming distances between a and b and between b and c. The Hamming distance between two words a and b can also be seen as the Hamming weight of a − b for an appropriate choice of the − operator, much as the difference between two integers can be seen as a distance from zero on the number line. One can also view a binary string of length n as a vector in {0,1}^n; with this embedding the strings form the vertices of an n-dimensional hypercube, and the Hamming distance of two strings equals the Manhattan distance between the corresponding vertices, i.e. the graph distance in the hypercube graph. The metric space of length-n binary strings with the Hamming distance is known as the Hamming cube; for n = 3, for example, the Hamming space consists of the 8 words 000, 001, 010, 011, 100, 101, 110 and 111.
The minimum Hamming distance (or minimum distance) d* of a block code is the smallest Hamming distance between all pairs of distinct codewords; for a set of multiple codewords it is simply the minimum over all pairs. For the useless code with only one codeword, d* = n + 1 (or d* = ∞), and d* ≤ n as soon as the code has two or more codewords. For example, the four words 010, 011, 101 and 111 have minimum Hamming distance 1 (e.g. between 010 and 011), and the code {00000, 01101, 10110, 11011} has minimum distance 3 (e.g. between 00000 and 01101). For a linear block code, the minimum distance is the same as the smallest Hamming weight of the difference between any pair of code vectors, which equals the smallest weight of a non-zero codeword.
The minimum Hamming distance is used to define some essential notions in coding theory, such as error detecting and error correcting codes. The key significance of the Hamming distance is that if two codewords have a Hamming distance of d between them, then it would take d single-bit errors to turn one of them into the other. Thus a code with minimum Hamming distance d between its codewords can detect at most d − 1 errors and can correct ⌊(d − 1)/2⌋ errors; the latter number is also called the packing radius or the error-correcting capability of the code [2, 4]. Equivalently, a code is k-error-detecting if, and only if, its minimum Hamming distance is at least k + 1, so to guarantee the detection of up to 5 errors in all cases the minimum Hamming distance of a block code must be at least 6, and to guarantee the correction of up to n errors the minimum distance must satisfy d_min = 2n + 1. Geometrically, this means that closed balls of radius k centered on distinct codewords are disjoint. As an example, consider the code consisting of the two codewords "000" and "111": their Hamming distance is 3, which satisfies 2k + 1 = 3 for k = 1, so the code can correct any single-bit error; codeword "111" and its single-bit error words "110", "101" and "011" are all within Hamming distance 1 of the original "111". Each binary Hamming code likewise has minimum weight and distance 3, since its parity-check matrix contains no zero column and no pair of identical columns (so no pair of columns is linearly dependent), while any two columns sum to a third column, giving a triple of linearly dependent columns. Hence the decoder of a Hamming code can detect and correct a single error, but it cannot distinguish a double-bit error of some codeword from a single-bit error of a different codeword; such double-bit errors will be incorrectly decoded as if they were single-bit errors and therefore go undetected, unless no correction is attempted. Upper bounds on the minimum Hamming distance have also been derived for specific code families, for instance for QC LDPC codes, where the bounds depend on graph structure parameters of the Tanner graph such as variable degrees, check node degrees and girth.
Hamming distance and Hamming weight analysis are used in several disciplines, including information theory, coding theory and cryptography [5]. In telecommunication, the Hamming distance counts the number of flipped bits in a fixed-length binary word as an estimate of error and is therefore sometimes called the signal distance; it is used for error detection and error correction when data is transmitted over computer networks. In processor interconnects with a level-signaling scheme, the dynamic energy consumption depends on the number of transitions, which is the Hamming distance between consecutively transmitted bus values, so reducing this distance reduces the data-movement energy. The Hamming distance is also used in systematics as a measure of genetic distance [8].
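As a small illustration of the XOR-and-popcount computation and of the minimum distance of a code (a sketch in C++; the function names are made up for this example, and the __builtin_popcount intrinsic is assumed to be available as in GCC and Clang):

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <limits>
#include <vector>

// Hamming distance of two binary words of equal length:
// XOR marks the differing bit positions, popcount counts them.
int hamming_distance(std::uint32_t a, std::uint32_t b)
{
    return __builtin_popcount(a ^ b); // GCC/Clang intrinsic
}

// Minimum Hamming distance of a block code, i.e. the smallest
// distance over all pairs of distinct codewords.
int minimum_distance(std::vector<std::uint32_t> const & code)
{
    int best = std::numeric_limits<int>::max();
    for (std::size_t i = 0; i < code.size(); ++i)
        for (std::size_t j = i + 1; j < code.size(); ++j)
            best = std::min(best, hamming_distance(code[i], code[j]));
    return best;
}

int main()
{
    std::cout << hamming_distance(0b11011001, 0b10011101) << '\n';              // prints 2
    std::cout << minimum_distance({0b00000, 0b01101, 0b10110, 0b11011}) << '\n'; // prints 3
}
```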
LaRA 2: parallel and vectorized program for sequence–structure alignment of RNA sequences
Jörg Winkler ORCID: orcid.org/0000-0003-1979-85531,2,
Gianvito Urgese3,
Elisa Ficarra4 &
Knut Reinert1,2
The function of non-coding RNA sequences is largely determined by their spatial conformation, namely the secondary structure of the molecule, formed by Watson–Crick interactions between nucleotides. Hence, modern RNA alignment algorithms routinely take structural information into account. In order to discover yet unknown RNA families and infer their possible functions, the structural alignment of RNAs is an essential task. This task demands a lot of computational resources, especially for aligning many long sequences, and it therefore requires efficient algorithms that utilize modern hardware when available. A subset of the secondary structures contains overlapping interactions (called pseudoknots), which add additional complexity to the problem and are often ignored in available software.
We present the SeqAn-based software LaRA 2 that is significantly faster than comparable software for accurate pairwise and multiple alignments of structured RNA sequences. In contrast to other programs our approach can handle arbitrary pseudoknots. As an improved re-implementation of the LaRA tool for structural alignments, LaRA 2 uses multi-threading and vectorization for parallel execution and a new heuristic for computing a lower boundary of the solution. Our algorithmic improvements yield a program that is up to 130 times faster than the previous version.
With LaRA 2 we provide a tool to analyse large sets of RNA secondary structures in relatively short time, based on structural alignment. The produced alignments can be used to derive structural motifs for the search in genomic databases.
Non-coding RNAs (ncRNAs) are RNA molecules that do not translate into proteins, but instead have various functions, e.g. they participate in splicing or gene regulation. Analysing ncRNA molecules by comparison to functionally related RNA molecules requires more than sequence information, because their function is primarily determined by their structure, which is often better conserved than the primary sequence. Hence, sequence–structure alignment rewards the conservation of structural interactions of the ncRNA molecules, which is a key property for many applications, e.g. finding homologous structures of known ncRNA families [1], phylogenetic fingerprinting as conducted for example for the ITS2 database [2], or the computation of a consensus structure of a set of related RNA molecules [3,4,5,6,7,8,9,10,11].
It is now well-established that ncRNA molecules introduce an additional layer in genetic information processing. They play a significant, active role in cell and developmental biology and carry out many tasks that were previously attributed exclusively to proteins. However, only a small fraction of ncRNA families have been identified so far and many more can still be discovered [12]. Structural RNA elements are also involved in the control of virus replication [13], transcription and translation, indicating that the usage of the RNA structure features will be exploited in the near future for designing novel antiviral strategies [14].
Owing to the importance of ncRNA molecules, there has been a steady stream of developments for analysing the molecules computationally. Specific rules govern RNA structure formation, therefore structured RNAs provide clear patterns of selection with base pairing patterns directly reflecting structural conservation [15]. In other words, two nucleotides that form a base pair may be changed by mutations but preserve the propensity to form a valid base pair (i.e. compensatory mutations). Having a good model of an RNA structure (or a secondary structure as proxy of the 3D structure) is therefore crucial to elucidate its function [16].
Considering structural information unfortunately adds complexity to the problem of aligning two or several sequences. The original algorithm for simultaneous alignment and folding by Sankoff [17] has the time complexity \({\mathcal {O}}(n^6)\) for the pairwise case with sequence length n. The tool LocARNA [18] reduces the time complexity to \({\mathcal {O}}(n^4)\) by limiting the computations to the thermodynamically probable base pairs. Also other tools like FoldAlignM [4] achieve this complexity for pairwise alignments. A quadratic complexity is reached by the programs SPARSE [19] and LaRA [5].
Fig. 1 The secondary structure of the 0419 Odontoglossum ringspot virus with 11 predicted pseudoknots. In the central part the linear representation of the RNA structure with all the predicted interaction edges (blue lines) is shown. Evidently, pseudoknots are non-nesting interactions, i.e. the interaction edges of a pseudoknot cross each other. The circular view on the perimeter shows the disposition of the sample pseudoknots on the RNA sequence by representing the interaction edges with pink lines
It is estimated that about \(12\%\) of known RNA structures contain pseudoknots [20], which are crossing interactions of loop regions. In Fig. 1 we show an example of pseudoknotted secondary structure from Shabash and Wiese [21] that has been predicted to have 11 pseudoknots and 13 hairpins [22]. In pseudoknotted structures the base pairing is not well nested, i.e. base pairs overlap each other with respect to their sequence position. Thus, pseudoknots are difficult to predict with standard methods that use dynamic programming or stochastic context-free grammars that rely on the nestedness property [23]. In fact the majority of today's software for structure prediction and alignment does not recognize pseudoknots, and the programs that do support them are more complex and are therefore more limited regarding the input size [24,25,26].
A short run time in relation to the problem size is an important aspect. Given the current rapid increase of the size of data sets it is essential to have efficient implementations available that solve the structural alignment problem in reasonable time, while securing a sufficient quality of the results. Some programs already allow to distribute the work on several cores for parallel execution through multi-threading. We go a step further and combine multi-threading and vectorization with SIMD instructions: By storing the data of 4 or 8 alignments in vectors we compute a vector of alignments simultaneously on each core. For example, with 16 cores and vector length 8, we process 128 alignments simultaneously.
Previous work has been done on the vectorization of the pairwise alignment computation using the wavefront approach [27, 28] and for the recognition of barcode and adapter sequences [29]. Our implementation is written in C++ and extends the work by Rahn et al. [28]. In order to use vectorization for our structural alignment approach, we implemented a version that can cope with position-specific scoring functions rather than a pure character comparison.
In 2007 and 2008 the tools LaRA and T-LaRA [5, 30] introduced a method based on an ILP formulation that was solved using Lagrangian relaxation. The method is still very competitive; however, the software is not maintained any more, depends on outdated libraries and lacks parallelization.
We present an improved and parallelized version of the LaRA program for RNA sequence–structure alignment, which is up to \(130{\times}\) faster than the previous version thanks to vectorized and multi-threaded C++ code, while maintaining the accuracy. In contrast to existing software it can handle arbitrary pseudoknots and shows better performance on both simulated and experimentally determined RNA structures.
For a complete overview of all tools related to sequence–structure alignment we refer to the review paper by Lalwani et al. [31]. A recent paper in this field introduced a new tool RNAmountAlign [32] which uses mountain distance for pairwise structural alignments and runs in \({\mathcal {O}}(n^3)\) time for sequences of length n. The paper demonstrates besides RNAmountAlign also good performance for LocARNA [6] and LaRA [5]. LocARNA implements a variant of Sankoff's algorithm and is based on computing pairwise local alignments that consider the pairing probabilities, which have been obtained by the algorithm of McCaskill. LocARNA has complexity \({\mathcal {O}}(n^4)\) in the pairwise case, which makes it computationally expensive. The tool Pankov [33], which has \({\mathcal {O}}(n^2)\) asymptotic time complexity, applies an energy model that derives its energies from conditional loop probabilities, such that the probability of a structure can be more accurately computed.
The general workflow for LaRA is as follows: For given \(s \ge 2\) RNA sequences with secondary structure annotation, LaRA computes sequence–structure alignments of all \(\frac{s(s-1)}{2}\) pairwise combinations. This process is depicted in Fig. 2.2 for one structured sequence pair. All pairwise structural alignments are then progressively merged into a multiple alignment that conserves the structural information. In LaRA this is done with the T-Coffee algorithm (therefore it is also referenced as T-LaRA), which takes the information of the pairwise aligned sequences to compute the multiple alignment.
Katoh and Toh [34] presented a MAFFT-based framework named X-INS-i that incorporates structural information in the progressive multiple alignment step. Based on the structural pairwise alignments, e.g. from LaRA, and the base pair probabilities from McCaskill's algorithm, it adds a so-called four-way-consistency score contribution to the progressive alignment, which favours base pair interactions of high probability in combination with a high pairwise similarity of the involved nucleotides.
The following paragraphs introduce how LaRA solves pairwise sequence–structure alignments, however for the mathematical background we refer to the LaRA paper [5].
As a first step, the LaRA algorithm computes the base pair probability matrix (BPPM) of the sequences to be aligned by using the RNAfold tool [35], a widely known implementation of the McCaskill algorithm. Then, LaRA constructs from the given sequences an alignment graph, which is shown in Fig. 2.3. Between the nodes, which correspond to the sequences' characters, the graph contains two sets of edges:
Vertical alignment edges exist between each combination of a node from the first sequence and a node from the second sequence. For distinction, we use the term line for this type of edge. If a line l is active, i.e. the connected nodes are aligned, the flag \(x_l\) is set to 1 and otherwise to 0. The weight of a line \(w_l\) is initialized with the sequence alignment score of aligning the two nodes according to a given scoring scheme.
Horizontal interaction edges represent structural alignment. Let \({\mathcal {S}}_1\) and \({\mathcal {S}}_2\) be the sets of structural interactions of the two sequences. For each combination of two interactions from \({\mathcal {S}}_1 \times {\mathcal {S}}_2\) we determine line l which is incident to the left interaction partners and line m which is incident to the right interaction partners. We draw two directed edges (l, m) and (m, l) between those two lines and assign weights \(\vec {w}_{lm} = \vec {w}_{ml} = \frac{p_1 + p_2}{2}\) to them, where \(p_1\) is the probability of the respective interaction of the first sequence, and \(p_2\) is the probability of the respective interaction of the second sequence. Like above, the flag \(\vec {y}_{(l,m)}\) equals 1 if the edge (l, m) is active and 0 otherwise.
In order to represent a valid alignment, the graph needs to satisfy the following constraints:
All active lines must be conflict-free, i.e. any two lines with \(x=1\) cannot cross or be incident to the same node.
Each line is incident to at most 1 interaction edge.
An interaction edge can only be active if the line at its origin is active.
For any active interaction edge (l, m) the reverse edge must also be active: \(\vec {y}_{(l,m)} = \vec {y}_{(m,l)}\).
As we want to find the best alignment with regard to both sequence and structure, the objective function is
$$\begin{aligned} \max \sum _{l} w_l \cdot x_l + \sum _{(l,m)} \vec {w}_{lm} \cdot \vec {y}_{(l,m)} \end{aligned}$$
such that the constraints above are satisfied.
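For concreteness, the constraints listed above can be written as linear (in)equalities, for instance as in the following sketch (the exact formulation in the original LaRA paper may differ in detail):

$$\begin{aligned} &x_l + x_m \le 1&&\text {for all pairs of lines } l, m \text { that cross or share a node,}\\ &\sum _{m} \vec {y}_{(l,m)} \le x_l&&\text {for every line } l,\\ &\vec {y}_{(l,m)} = \vec {y}_{(m,l)}&&\text {for all pairs of lines } l, m,\\ &x_l,\, \vec {y}_{(l,m)} \in \{0,1\}. \end{aligned}$$

The second family of inequalities covers both the restriction to at most one outgoing interaction edge per line and the requirement that an interaction edge can only be active if its origin line is active; the third family is the symmetry constraint that is subsequently relaxed.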
The problem can be solved by applying Lagrange relaxation on the last constraint: The maximum profit that a line can contribute is the weight of the maximum weighted outgoing edge plus the line weight itself, minus a penalty if the last constraint has been violated. The maximum score for each line is interpreted as a value in a position-specific score matrix, which is then used by a global alignment algorithm, e.g. Needleman and Wunsch [36]. As a result of the alignment algorithm, we have got a set of active, non-crossing lines where each nucleotide is incident to at most one line. The nucleotides which are not incident to an active line are aligned with a gap symbol to represent an insertion or deletion. Each active line has zero or one outgoing active interaction edge, which (if present) is the edge of maximum weight among all possible outgoing edges. We denote this alignment the relaxed solution, because it may violate the last constraint. Its score \(z_U\) is an upper bound for the optimal valid solution, because the computed alignment is optimal with respect to fewer constraints.
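A minimal sketch of such a global alignment with a position-specific score matrix is given below (the function and parameter names are hypothetical and a simple linear gap cost is assumed; LaRA 2 itself uses the alignment module of SeqAn rather than this code):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Global alignment (Needleman-Wunsch) with a position-specific score matrix:
// S[i][j] is the structure-augmented profit of aligning position i of the
// first sequence with position j of the second sequence (the line weight plus
// the best outgoing interaction, minus the Lagrange penalty).
double relaxed_alignment_score(std::vector<std::vector<double>> const & S, double gap)
{
    std::size_t const n = S.size();
    std::size_t const m = n > 0 ? S[0].size() : 0;
    std::vector<std::vector<double>> dp(n + 1, std::vector<double>(m + 1, 0.0));

    for (std::size_t i = 1; i <= n; ++i) dp[i][0] = dp[i - 1][0] + gap; // leading gaps
    for (std::size_t j = 1; j <= m; ++j) dp[0][j] = dp[0][j - 1] + gap;

    for (std::size_t i = 1; i <= n; ++i)
        for (std::size_t j = 1; j <= m; ++j)
            dp[i][j] = std::max({dp[i - 1][j - 1] + S[i - 1][j - 1], // activate line (i,j)
                                 dp[i - 1][j] + gap,                 // gap in second sequence
                                 dp[i][j - 1] + gap});               // gap in first sequence

    return dp[n][m]; // score of the relaxed solution (upper bound of this iteration)
}
```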
If for all pairs of lines l and m the equation \(\vec {y}_{(l,m)} = \vec {y}_{(m,l)}\) holds, then we have found the optimal valid solution to the original problem. Otherwise, some interaction edges contradict each other. Given the fixed set of active lines, we have to find a subset of interaction edges such that each nucleotide is paired with at most one other nucleotide and the interactions have the maximum weight. This is a maximum weighted matching problem that we solve with a greedy heuristic (see the following section). The result is a valid structural alignment and its score \(z_L\) is a lower bound for the solution of the original problem.
Overall, LaRA iteratively solves the relaxed problem, where the penalty for violating the constraint is incorporated in the scoring matrix. In each iteration after the alignment a new lower bound is computed by finding the best structural interactions of this alignment. The solutions get increasingly better through the iterations and the bounds \(z_U\) and \(z_L\) provide a quality guarantee after any number of iterations, as depicted in Fig. 2.4. When the bounds coincide, the optimal solution has been found.
The following subsections describe algorithmic and implementation details of the LaRA 2 program and point out the differences from the old version.
Fig. 2 The five steps of the LaRA 2 workflow. 1. Compute individual structure annotation based on base pair probabilities and create all the combinations of pairs to be computed in parallel. 2. Create an alignment graph that satisfies the problem constraints. 3. Formalize constraints as an integer linear program (ILP). 4. The upper boundary for the optimal solution is computed as an ILP that is solved with Lagrangian Relaxation. The lower boundary is obtained with Maximum Weighted Matching. 5. The pairwise sequence–structure alignments are combined with a multiple sequence aligner tool like T-Coffee or MAFFT
LaRA 2 consists of five main steps as can be seen in Fig. 2: In the first step (Fig. 2.1) we compute the structural interactions in the form of a base pair probability matrix (BPPM) for each individual sequence by using the RNAfold tool [35]. If sequences with structure annotation are provided as input for LaRA 2, this step is omitted. Subsequently, we create all the pairwise combinations to be aligned in parallel.
The second step computes and validates the alignment graph, which we introduced in the previous section, ensuring that the constraints are satisfied. The application of a sequence alignment algorithm ensures that aligned base pairs are mutually conflict-free: in Fig. 2.3 the red lines must not cross each other. We implemented a procedure that validates the alignment structure ensuring that at most one pair of interaction edges (blue lines in Fig. 2.2) is incident to each red alignment line.
Step three formalizes the constraints as an integer linear program (ILP) with an objective function designed to maximize the weighted sum of sequence and structure scores, as introduced in the previous section.
In the fourth step we solve the ILP with Lagrangian Relaxation. The solution to the relaxed ILP can be computed via a sequence alignment with position-specific scores, and it serves as an upper boundary of the optimal solution. It is based on the Needleman-Wunsch algorithm in which we allow choosing among three different gap scoring models: Linear [36], Dynamic [37], and Affine [38]. The lower boundary is the result of a maximum weighted matching routine, which we improved in LaRA 2 with a greedy approach, which is explained in a following subsection.
Step five combines the pairwise alignments progressively according to the pairwise similarity. We use the multiple sequence alignment program T-Coffee or the MAFFT X-INS-i framework to combine all pairwise alignments into one multiple alignment.
In the following subsections we describe in detail some of the most important optimizations implemented in LaRA 2 for improving both the performance and the quality of the produced alignments.
Generating the input
LaRA 2 works on a set of at least two RNA sequences with structure annotation. An RNA sequence is a string of n characters over the RNA alphabet \(\alpha = \{A,C,G,U,N\}\) where the characters represent the four nucleotides Adenine, Cytosine, Guanine, Uracil and the wildcard for an unknown nucleotide, respectively. The structure annotation of a sequence of length n is given as an \(n \times n\) matrix \({\mathcal {A}}\), where the entry \({\mathcal {A}}(i,j)\) denotes the probability \(p \in [0 \ldots 1]\) of nucleotide i and nucleotide j forming a pair in the secondary structure of the RNA molecule.
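For illustration, the input for a single sequence could be represented roughly as follows (a sketch with hypothetical names; a dense matrix is shown only for clarity, whereas an actual implementation would typically store only entries above a probability threshold):

```cpp
#include <string>
#include <vector>

// A structure-annotated RNA sequence as described above: the characters over
// the alphabet {A,C,G,U,N} plus the base pair probability matrix, where
// bpp[i][j] is the probability that nucleotides i and j form a pair.
struct StructuredSequence
{
    std::string sequence;                  // e.g. "GGGCAAUCC"
    std::vector<std::vector<double>> bpp;  // n x n, entries in [0, 1]
};
```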
If the structure information is not available, LaRA 2 can internally compute it with the RNAfold tool [35], which calculates the partition function in order to obtain the individual interaction probabilities between base pairs. For the purpose of a fair comparison with other tools we always include the time for folding the sequences in our benchmarks. However, if the user has the structure annotation at hand the folding step can be omitted.
Computing the alignments in parallel
The first implementation of the LaRA algorithm [5] sequentially computes a sequence–structure alignment for all pairs of sequences and then combines the pairwise alignments using T-Coffee [39]. The program is still competitive (see results section); however, it is not well maintained in the sense that it relies on old libraries (e.g. LEDA [40] for access to general matching algorithms) and the code is not parallelized. Hence, we present a re-implementation of the core algorithms based on the C++ library SeqAn [41], which offers fast implementations of vectorized and multi-threaded sequence alignment routines; in addition, we added efficient methods for maximum weighted matching to the library.
To make the LaRA algorithm amenable for acceleration via vectorization, we changed the internal logic. In the LaRA algorithm each individual, pairwise sequence–structure alignment is solved using a Lagrange relaxation approach which in essence computes a series of interleaved standard sequence alignments with position dependent scores and a matching routine to adapt the Lagrange multipliers. In Algorithm 1 the code in line 3 shows the inner loop which computes per iteration one alignment followed by a general matching to update weights for the alignment in the next iteration.
We have changed this execution flow in order to use our recently developed many-against-many alignment interface which allows us to compute many pairwise sequence alignments in parallel using multi-core and SIMD vectorization. Hence, we compute the first iteration of all pairwise sequence alignments followed by a parallelized version of the matchings. This can be seen in Algorithm 2. This parallelized computation of the first iteration step is then followed by the second iteration of sequence alignments and matchings. We measured that the sequence alignment step is about 200 times faster on a 16 core standard Xeon processor with 256 bit SIMD registers (see [42] for a similar computation benchmark).
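The restructured control flow can be sketched roughly as follows (the type and function names are placeholders, not the actual SeqAn interface):

```cpp
#include <cstddef>
#include <vector>

// Placeholder for the state of one pairwise sequence-structure alignment
// (sequences, current Lagrange multipliers, best lower bound so far, ...).
struct PairwiseProblem { /* ... */ };

// Stub: one batch of sequence alignments for all pairs at once,
// computed with SIMD vectors and multiple threads (upper bounds).
void align_all_simultaneously(std::vector<PairwiseProblem> &) {}

// Stub: matching and multiplier update for a single pair (lower bound).
void update_matching_and_multipliers(PairwiseProblem &) {}

// Interleaved execution: instead of finishing one pair at a time, each
// Lagrange iteration processes all pairwise alignments simultaneously.
void solve_all_pairs(std::vector<PairwiseProblem> & problems, int max_iterations)
{
    for (int iter = 0; iter < max_iterations; ++iter)
    {
        align_all_simultaneously(problems);               // step 1: batched alignments
        for (std::size_t p = 0; p < problems.size(); ++p) // step 2: per-pair matching
            update_matching_and_multipliers(problems[p]);
    }
}
```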
In our SIMD implementation the value types of the matrices and other data structures are SIMD vectors. A SIMD vector enables us to compute an operation on multiple data in a single step, and the amount of data—the vector length—is system-dependent. There exist different instruction sets, like SSE4, AVX2 and AVX512, which support 128, 256 and 512 bits per vector, respectively. For instance, given the AVX2 instruction set and an integer size of 32 bits, we can store and compute 8 integer values at once. We changed the data type of e.g. a score matrix cell from int to seqan::SimdVector<int>, which is a data structure in SeqAn for SIMD vectors. This data structure provides a system-independent interface for operations on SIMD vectors and uses internally the Intel compiler intrinsics [43], applying the vector size that is determined through compilation with one of the instruction sets. Currently, the alignments and boundary computation are implemented with SIMD instructions, whereas the matching step uses multi-threading (and updates the values inside the SIMD vectors for the next iteration).
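Conceptually, a score matrix cell then holds one value per alignment in the batch, and every arithmetic operation acts element-wise on the whole vector. The following toy illustration uses a plain fixed-size array instead of seqan::SimdVector to convey the idea; it is not the actual SeqAn code:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>

constexpr std::size_t VL = 8;      // vector length, e.g. 8 x 32-bit ints with AVX2
using Cell = std::array<int, VL>;  // one score per alignment in the batch

// Element-wise maximum of two cells: with a real SIMD type this is a single
// vector instruction instead of a loop.
Cell max_cell(Cell const & a, Cell const & b)
{
    Cell r{};
    for (std::size_t k = 0; k < VL; ++k)
        r[k] = std::max(a[k], b[k]);
    return r;
}

// One DP cell update for a whole batch of alignments at once: diag/up/left are
// the neighbouring cells, score holds the position-specific scores of the VL
// alignments, gap is the gap cost.
Cell update(Cell diag, Cell up, Cell left, Cell score, int gap)
{
    Cell from_diag{}, from_up{}, from_left{};
    for (std::size_t k = 0; k < VL; ++k)
    {
        from_diag[k] = diag[k] + score[k];
        from_up[k]   = up[k] + gap;
        from_left[k] = left[k] + gap;
    }
    return max_cell(from_diag, max_cell(from_up, from_left));
}
```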
Users of LaRA 2 do not need to care about enabling the SIMD functionality during run time. Instead, this decision is made with the compilation of LaRA 2, where the -march flag should be used to tell the compiler about the minimal hardware the code should run on. Details on the compiler configuration for Clang and GCC can be found in the installation instructions on our project website.
For the structural alignment problem we use Lagrangian Relaxation and solve the relaxed problem by feeding the structural information into a (vectorized) position-specific score matrix \({\mathcal {S}}\), which is then used as a parameter for the sequence alignment algorithm. This matrix is updated in each iteration and contains the scores for comparing the nucleotides as well as rewards or penalties for conserving or breaking structural interactions. The relaxed solution can still contain outgoing interactions that are not consistent with an incoming interaction and therefore the solution of the alignment is an upper boundary for the optimal score.
Maximum weighted matching for the lower bound
A valid solution (the Lagrangian primal) of the original alignment problem is computed by applying a maximum weighted matching (MWM) algorithm on the interaction graph that is depicted in Fig. 3. The algorithm ensures that no nucleotide is used twice for structural interactions and that they are consistent. The goal is to find the interactions of maximal weight that satisfy the conditions.
Fig. 3 Maximum weighted matching. The algorithm selects the best valid interactions in order to compute a lower boundary for the optimal score. The matching property in the displayed interaction graph is not yet satisfied, because the leftmost node is incident to two interactions
We have tested two different heuristics for MWM: The Blossom algorithm by Edmonds [44], which is implemented in the Lemon Graph Library [45], and a greedy approach with look-ahead strategy, which we implemented in the SeqAn library [41].
The main idea of the Blossom algorithm is to search for cycles consisting of an odd number of edges and contract each such cycle into a single node, which is called a blossom. The search is then continued in the reduced graph.
In our greedy approach we generate a list of all edges sorted by their weight. Then we consider the heaviest k edges from the beginning of this list and perform an exhaustive search on the maximum weighting combination. The selected edges become part of the resulting matching and all incident edges are excluded from the list. We repeat this process with the next k heaviest edges from the list until the list is empty.
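A minimal sketch of the greedy selection is shown below; for brevity it omits the look-ahead (the exhaustive search over the k heaviest remaining edges), and the names are chosen for this example only:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Edge
{
    std::size_t u, v;  // the two alignment lines joined by the interaction pair
    double weight;     // combined base pair probability of the interaction pair
};

// Greedy maximum weighted matching: repeatedly take the heaviest edge whose
// endpoints are still unmatched. (LaRA 2 additionally performs an exhaustive
// search over the k heaviest remaining edges before committing.)
std::vector<Edge> greedy_matching(std::vector<Edge> edges, std::size_t num_nodes)
{
    std::sort(edges.begin(), edges.end(),
              [](Edge const & a, Edge const & b) { return a.weight > b.weight; });

    std::vector<bool> matched(num_nodes, false);
    std::vector<Edge> matching;
    for (Edge const & e : edges)
    {
        if (!matched[e.u] && !matched[e.v])
        {
            matching.push_back(e);
            matched[e.u] = matched[e.v] = true;
        }
    }
    return matching; // the sum of its weights contributes to the lower bound
}
```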
We find that using the greedy approach with \(k=5\) in our application results in a lower total run time compared to the Blossom algorithm. Although the greedy heuristic produces fewer optimal matchings, LaRA 2 compensates for this with a few more alignment and matching iterations.
The score for the lower bound of the current LaRA 2 iteration is the sum of the weights of the edges that are part of the computed matching, plus the sequence alignment score. The highest score over all iterations together with the corresponding alignment is reported as the valid solution of the pairwise sequence–structure alignment problem.
Combining the pairwise alignments into a multiple one
LaRA 2 can produce two different output formats, which can be selected with a parameter: MSA library for T-Coffee [39] and pairwise alignments for MAFFT [34].
The MSA library is a data structure that stores the base pairings with their individual scores for each of the \(\frac{s(s-1)}{2}\) pairwise alignments. Its scores correspond to the sequence and structure conservation in the associated nucleotide pair. The MSA Library can be directly used by the T-Coffee [39] algorithm for progressive multiple sequence alignment (MSA). T-Coffee incorporates structural information by constructing an alignment graph that contains the structural weights of the pairwise alignments. As the library data structure consists of a weighted set of sequences with weighted character pairings, it is flexible enough to support also the incorporation of other constraints (from e.g. already computed alignments) or additional, experimentally gained structures (e.g. obtained by SHAPE experiments) by adjusting the library weights accordingly in the file or by using T-Coffee's input flags.
The pairwise alignment output is designed to be parsed by the MAFFT framework and contains three lines per pairwise alignment: The first line is a header line similar to the FastA format containing both sequence identifiers, and the other two lines consist of the first and the second aligned sequence. The aligned sequences possibly contain gap symbols and have equal length. Through the four-way-consistency score in MAFFT it incorporates the structural information not only through the (here unweighted) base pairs in the pairwise alignments but additionally from the initial base pair probabilities resulting from the McCaskill algorithm.
In case \(s = 2\) there is no need for generating a multiple alignment, because there is only one pairwise alignment. In addition to the output formats described above, we support aligned FastA output for two sequences. Then there are four lines recorded: the first identifier, the first sequence (with gap symbols), the second identifier and the second sequence (with gap symbols). This format is accepted by most existing tools that take an alignment as input, and also T-Coffee and MAFFT can produce this format for the multiple alignment.
We implemented the algorithm for pairwise structural alignments in a new C++ program named LaRA 2, which computes high-quality structural alignments, including structures with pseudoknots. It is capable of processing large data sets because of the considerable speed-up achieved by its implementation, which is optimized for multi-threading and vectorization. The name reflects that the underlying model is the one of LaRA [5], improved with the techniques described in this paper.
Alongside the program we develop an interactive iPython manual that serves as a template for getting started and provides practical use cases. Furthermore, the manual on https://seqan.github.io/lara/ provides assistance for using LaRA 2 with T-Coffee or MAFFT for multiple structural alignments and demonstrates the supported input and output formats.
Table 1 The program versions and parameters for the benchmark
In order to demonstrate the performance of LaRA 2 compared to relevant existing software, we evaluate three different benchmarks with a focus on multiple alignment with conserved structure, run time on a large data set, and the detection of pseudoknots. All benchmarks have been performed on a Linux server using an x86_64 architecture with Intel® Xeon® CPU E5-2650 v3 with 2.30 GHz and 126 GB RAM. We compiled with GCC version 9 and, where applicable, used up to 16 threads and AVX2 instructions. For the benchmarks we have used the program versions and parameters displayed in Table 1. The standalone MAFFT [46] tool was included as a sequence aligner in order to demonstrate the need for structural alignment.
Benchmark on general RNA families
In this benchmark we show the performance across several RNA families dependent on the sequence similarity. We took the BRAliBase 2.1 data set, which consists of 388 reference alignments of 5 sequences each. For RNAmountAlign we excluded 46 sequences that contain the character 'N', because the program does not accept wildcard symbols.
In order to evaluate the resulting multiple structural alignments we use two metrics: SPS and MCC. The sum-of-pairs (SPS) score is a measure of similarity between the test alignment and the curated reference alignment that is available in the Rfam database [1]. Values are in [0..1] where 1 means identity and value 0 represents maximal distance. While SPS considers solely the character matchings, the Matthews correlation coefficient (MCC) [47] evaluates the predicted secondary structure. It is a value in \([-1..1]\) where 1 denotes a perfect prediction, 0 is a random prediction according to the background distribution and \(-1\) denotes a total disagreement.
In order to compute the MCC (Eq. 1), we follow the publications of Murlet [48] and RNAmountAlign [32]. For future reference and reproducibility we provide the script in our LaRA 2 repository. In a first step we fold the test alignments with RNAalifold from the ViennaRNA package [35]. We have computed the consensus structures with PETfold [49] as well, which led to the same results. The reference alignments from BRAliBase do not contain the structure annotations, and therefore we assigned the respective structures from the Rfam 5.0 database, where the content of BRAliBase originates [50]. In the next step we assign the consensus structure to each sequence of the respective alignment. For all matching base pairs the sequence positions are extracted per sequence and stored in two sets: \(T_i\) contains the base pairs of sequence i in the test alignment and \(R_i\) contains the base pairs of sequence i in the reference. Based on these sets we define the confusion matrix and calculate the MCC. Note that the true negative (tn) value contains the number of all possible base pairs that are contained in neither the test nor the reference set.
$$\begin{aligned} tp&:= \sum _i \left| T_i \cap R_i \right| \\ fp&:= \sum _i \left| T_i \setminus R_i \right| \\ fn&:= \sum _i \left| R_i \setminus T_i \right| \\ tn&:= \sum _i \left| \left( T_i \cup R_i \right) ^c \right| \\ {\text {MCC}}&:= \frac{tp \cdot tn - fp \cdot fn}{\sqrt{(tp+fp)(tp+fn)(tn+fp)(tn+fn)}} \end{aligned}$$
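For reference, the coefficient can be computed from the four counts as sketched below (the evaluation script in our repository may differ in detail; returning 0 for a zero denominator follows the usual convention):

```cpp
#include <cmath>

// Matthews correlation coefficient from the confusion matrix counts.
double mcc(double tp, double fp, double fn, double tn)
{
    double const denom = std::sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn));
    if (denom == 0.0)
        return 0.0; // convention for degenerate cases
    return (tp * tn - fp * fn) / denom;
}
```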
In Fig. 4 we show the performance of the tested tools according to the SPS and MCC benchmarks. The curves in (a) and (b) are fitted through the data points with a lowess smoother (\(f=0.5\)). The statistical significance of the MCC benchmark is displayed in (c). As annotated in BRAliBase [50], we divided the alignments in three groups of low, medium and high sequence similarity. For each group and each tool we calculated the median and \(95\%\) confidence intervals after bootstrapping 1000 samples.
Fig. 4 a Sum-of-pairs score and b Matthews correlation coefficient are shown for different tools dependent on the sequence similarity. The tools were run on 388 alignments of the BRAliBase 2.1 data set (without SRP) and the curves were generated with a lowess function on the results. In order to show the MCC performance of MAFFT as a sequence alignment tool, as well as of the reference alignment from BRAliBase, we calculated the best secondary structure of the alignments with RNAalifold [35]. c \(95\%\) bootstrap percentile confidence intervals and medians for the MCC values. The first axis represents the sequence similarity in three groups: low (\(<55\%\)), medium (\(\ge 55\%\) and \(<75\%\)) and high (\(\ge 75\%\)), as annotated in BRAliBase [50]. For each group we bootstrapped 1000 samples of the MCC experiment
The results demonstrate that LaRA 2 performs as well as LocARNA and LaRA 1, and better than RNAmountAlign and MAFFT. In the alignments with more than \(70\%\) sequence similarity we observe the same performance for all tools in the SPS benchmark. This is expected, as the importance of the structure is low and even a sequence aligner like MAFFT is able to compute alignments that are close to the reference alignment. For lower sequence similarities we observe an almost linear decline in the SPS score of MAFFT, because the structure becomes more crucial. Here we observe that LaRA and LocARNA clearly perform the best among the tested tools.
Another question that has concerned us is the performance drop of all programs around the \(55\%\) sequence similarity region in the SPS benchmark. As Löwes et al. [51] have pointed out, this is the effect of an unbalanced representation of RNA families in the BRAliBase benchmark set.
For the structure evaluation with Matthews correlation coefficient we find again that LaRA 2 has the same performance as LocARNA and LaRA 1, while outperforming RNAmountAlign and MAFFT. An interesting observation is the decline of the reference curve for high sequence similarity, which is mainly represented by alignments of the tRNA family. For the reference curve we computed the optimal structures of the BRAliBase reference alignments with RNAalifold [35] (they do not provide reference structures), and compared them with the respective curated structures from Rfam with the MCC. We can see that the results of all the programs follow the same trend as the reference and for high sequence similarity the curves get closer to each other.
We were surprised to see that above \(55\%\) sequence similarity MAFFT has a better performance than RNAmountAlign in the MCC benchmark (see Fig. 4b, c). The comparably poor performance of RNAmountAlign for low sequence similarities is compliant with the results that have already been published [32]. Our assumption is that RNAmountAlign balances the weight too much on the sequence similarity.
The run time for the benchmark is displayed in Fig. 5. We summed up the run time for 481 executions of each tool, except RNAmountAlign, which we ran on the limited set as described above and scaled the run time accordingly.
The fastest result among the sequence–structure aligners is delivered by LaRA 2 with T-Coffee in less than 5.5 min. This is closely followed by RNAmountAlign (below 7 min), which is impressive in light of its non-parallel execution, but overshadowed by its performance in the quality benchmark. LaRA 2 with MAFFT runs in less than 13 min, LocARNA takes almost 35 min and the single-threaded LaRA requires more than 1 h to compute the test alignments. MAFFT is the fastest among all tested tools; however, this is expected because sequence alignment is a less complex problem. As there are only five sequences aligned at a time, parallel execution has just a minor effect compared to the benchmark in the following subsection.
Run time of the tested programs for 481 alignments of 5 sequences each from the BRAliBase 2.1 benchmark, including the SRP data. The calculation of the base pair probabilities is included in the run time. For RNAmountAlign we multiplied the time for computing 384 alignments that do not contain wildcards with factor \(\frac{481}{384}\) in order to compare it with the other tools
In addition, we examined the time and memory consumption of LaRA 2 with respect to the average sequence length. As the BRAliBase data set contains rather short sequences (up to 300 bases) we extended the set with two additional RNA families from the RNAStrAlign database [11]: Telomerase and 16S rRNA. Each alignment consists of five sequences, and we averaged the run time per alignment over 10 runs in order to gain more accurate results. Figure 6 shows the results for the run time (left) and the maximum allocated memory (right). In both cases we observe a monotonic increase with sequence length, and an alignment of average sequence length 1500 takes about one minute and occupies at most 2.5 GB memory.
Run time and memory of LaRA 2 in relation to the sequence length. We used the sequences from BRAliBase 2.1 including SRP as well as Telomerase RNA and Mollicutes' 16S rRNA from RNAStrAlign database [11]. Each of the 560 alignments consists of 5 sequences, of which the average length is denoted on the x-axis. The y-axis shows the run time or peak memory consumption respectively for each alignment computation, including the calculation of the base pair probabilities. We averaged over 10 runs per alignment in order to obtain more accurate measurements
Benchmark of the run time for deep alignments
In order to demonstrate the ability of LaRA 2 to process large data sets in reasonable time, we use the plastids data set from the 5SrRNAdb [52] database, which contains 838 sequences with average length 123. This results in 350703 pairwise structural alignments that are then combined into a single multiple alignment. As Table 2 demonstrates, LaRA 2 with MAFFT X-INS-i can compute this in 26.5 min due to its efficient and parallel implementation. The run time with T-Coffee is about 54 min, and we found that in both cases the common pairwise alignment part takes less than 7.5 min. MAFFT is significantly faster in this benchmark due to the fact that it is a pure sequence aligner, which is a less complex problem. Interestingly, the multi-threaded version is even disadvantageous for MAFFT, likely because of the larger memory allocation. As stated in the introduction, LocARNA has a worse run time complexity compared to LaRA 2, which leads to a significantly slower execution with this large alignment. Note that RNAmountAlign and LaRA support only single-threaded execution. We also computed the SPS scores for the results of this benchmark; they were high, between 0.95 and 0.98, for all programs.
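The number of pairwise alignments follows directly from the number of sequence pairs, \(\binom{838}{2} = 838 \cdot 837 / 2 = 350703\), which can be verified in one line of Python:

```python
from math import comb

print(comb(838, 2))  # 350703 pairwise structural alignments for 838 sequences
```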
The speed-up of LaRA 2 with MAFFT using 16 threads is about 9, which is much better than that of the other tools. With T-Coffee it reduces to a factor of 3 due to T-Coffee's non-parallel implementation. Still, this is the same speed-up as LocARNA achieves. Taking a look at the peak memory allocation in Table 2 reveals that even with so many sequences the calculations do not require an extensive amount of memory. We find the maximum allocation of around 4 GB when running T-Coffee (with LaRA 2 and LaRA 1) or RNAmountAlign. When we run LaRA 2 with MAFFT X-INS-i the peak memory is determined by LaRA 2, which is around 2 GB for 16 threads and 1.3 GB for single-threaded execution. As this is lower than any other program in multi-threaded mode, we recommend using LaRA 2 with MAFFT if memory is limited.
Table 2 Run time and memory consumption for the computation of a multiple alignment with 838 sequences of 5S rRNA Plastids, taken from the 5SrRNAdb database [52]
Run time of LaRA 2 for computing the base pair probabilities of 838 sequences of the Plastids data set and pairwise alignment of the 350703 combinations. Multiple alignment is not considered here. The matrix shows the run time for 1, 4, 8 and 16 threads with different SIMD instruction sets, which compute 1, 4 or 8 alignments per thread. In the bar-graph, we reported the time with minute:second [m:s] notation
In addition, we use the plastids data set to demonstrate how the run time of pairwise structural alignments with LaRA 2 scales with SIMD instruction sets and multi-threading. Figure 7 visualizes the results. Note that this benchmark includes calculating the base pair probabilities of the sequences, but not a multiple alignment, which is performed by the T-Coffee or MAFFT X-INS-i programs.
The effect of SIMD instructions is a speed-up of \(1.6{\times}\)–\(1.8{\times}\) with AVX2 and \(1.6{\times}\) with SSE4. Because the vectorization is implemented for the alignment step and not for the matching and folding, these factors are reasonable. In combination with multi-threading we gain a large improvement of the run time. With 16 threads we achieve a \(13{\times}\) speed-up compared to the single-threaded run of LaRA 2 in the SSE4 or non-SIMD case and \(11.5{\times}\) with AVX2. Our analysis showed that the remaining sequential part of the program is mainly the computation of the base pair probabilities with RNAfold, which takes a constant 25 seconds. An additional effect is the larger memory allocation: with AVX2 instructions and 16 threads, for example, the program needs to allocate 128 alignments at once.
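The interplay between a fixed sequential part (here, roughly the 25 seconds spent in RNAfold) and a parallelizable alignment phase is what Amdahl's law describes. The Python sketch below only illustrates that relationship with an assumed single-threaded total time; the measured values are the ones reported in Fig. 7:

```python
def amdahl_speedup(total_single, sequential, threads):
    """Estimated speed-up when only the parallel part of the work scales with the thread count."""
    parallel = total_single - sequential
    return total_single / (sequential + parallel / threads)

# assumed single-threaded total of 1600 s with a fixed 25 s sequential part (illustrative only)
for t in (1, 4, 8, 16):
    print(f"{t:2d} threads: {amdahl_speedup(1600.0, 25.0, t):4.1f}x")
```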
Benchmark on RNA structures with pseudoknots
Although it is estimated that \(12\%\) of RNA structures contain at least one pseudoknot [20], most structural alignment methods do not implement mechanisms to conserve pseudoknotted structures, because their detection is computationally more demanding. As many commonly used software tools do not detect pseudoknots, the figure of \(12\%\) may still be an underestimate. Generally, in alignments with high enough sequence conservation a pseudoknot can be aligned correctly by any method that aligns for sequence similarity, while for alignments with low sequence similarity the ability of the methods to represent crossing structures becomes more important.
We show, with SPS values of some pseudoknotted RNAs from Rfam and in a graphical example, that LaRA 2 actually detects pseudoknots. SPS scores express the similarity to the reference alignment, and therefore a high score indicates that the pseudoknot is aligned properly; a low score, however, can result from a shifted alignment and is not sufficient to prove the absence of the pseudoknot in the test alignment.
Table 3 SPS evaluation of the test programs on pseudoknotted structures from Rfam
The scores in Table 3 show that LaRA 2 performs the best according to the SPS criterion. This is expected, because LaRA 2 and LaRA receive their structural information from individual base pair probabilities and can model pseudoknots in their graph representation. The high scores of the structural interactions of the pseudoknot benefit the conservation of the respective columns of the multiple alignment as shown in the example above. A pure sequence aligner like MAFFT can only show good results with high sequence similarity like RF00499.
We have chosen a structure for the graphical example where the pseudoknot interactions are biologically essential: Athanasopoulos et al. [53] describe a pseudoknot in the regulatory region of the repBA gene, which consists of two complementary sequences of 8 bases. The base pairing between them forms a pseudoknot that is essential for translation. For this benchmark we downloaded the respective seven seed sequences (accession RF01089) from the Rfam database [1], as well as the corresponding reference alignment.
Double covariance plots of RF01089 with R-chie [54] after structure prediction with IPKnot [55]. The plots demonstrate how well the different programs align the pseudoknot with respect to the reference structure from Rfam
We computed the structural multiple alignment from the seed sequences with all the tools. Based on these alignments we ran IPknot [55] (mode: McCaskill model with refinement, allow pseudoknots) to produce a folding of the alignments in order to detect whether the alignments have the correct pseudoknot positions aligned. Figure 8 visualizes the foldings from IPknot in comparison to the Rfam reference structure. The plots were computed as double covariance plots with the R-chie tool [54].
The pseudoknot in question is the long-range interaction that is displayed in the reference part of all the plots. Comparing the plots reveals that almost all the tools correctly aligned the pseudoknot and placed it in the same position as in the reference; however, with LocARNA the left side of the pseudoknot is not correctly identified and is thus not represented in the alignment. We were surprised to see how well MAFFT aligns the pseudoknot in this example: apparently there is enough sequence similarity present in these pseudoknot sites that a sequence aligner is able to align them correctly.
We have presented LaRA 2, a fast program for sequence–structure alignment of RNA sequences. LaRA 2 benefits from its improvements in parallel execution and a new matching algorithm, such that it can solve the problem for large data sets in a relatively short time. The underlying graph model allows the representation of pseudoknotted structures, which we demonstrated in the previous section. Furthermore, we show that on the BRAliBase benchmark set we achieve a similar performance to LocARNA.
In the future, we plan to analyse non-coding RNA sequences, namely pre-miRNAs, with the new LaRA 2 tool, which will be integrated into an analysis pipeline developed for calculating miRNA and isomiR expression levels in small RNA-Seq datasets [56, 57].
We also plan to derive structural motifs from the resulting alignments of LaRA 2 in order to scan genomic sequences for occurrences of these motifs. This will allow us to analyse as-yet-unknown RNA families and to derive possible functions.
Project name: LaRA 2
Project home page: https://seqan.github.io/lara
Operating systems: Tested on GNU/Linux and MacOS
Programming language: C++
Other requirements: SeqAn 2.4, Lemon 1.3.1, ViennaRNA 2.0 or higher
Licence: BSD-License (3-clause)
Any restrictions to use by non-academics: None
The source code of LaRA 2 is freely available on GitHub: https://github.com/seqan/lara. Data and scripts for the BRAliBase 2 benchmark are available on http://projects.binf.ku.dk/pgardner/bralibase/bralibase2.html. The Plastids data for the deep alignment benchmark is available on http://combio.pl/rrna/download. Reference alignments for the RNA structures used in our study can be accessed with the identification numbers RF00001, RF00005, RF00020, RF00029, RF00165, RF00499, RF01084, RF01089 on https://rfam.xfam.org.
AVX: Advanced vector extensions
BPPM: Base pair probability matrix
ILP: Integer linear program
MFE: Minimum free energy
miRNA: Micro ribonucleic acid
MCC: Matthews correlation coefficient
MSA: Multiple sequence alignment
MWM: Maximum weighted matching
ncRNA: Non-coding ribonucleic acid
SIMD: Single instruction, multiple data
SSE: Streaming SIMD extensions
SPS: Sum-of-pairs score
Kalvari I, Nawrocki EP, Argasinska J, Quinones-Olvera N, Finn RD, Bateman A, et al. Non-coding RNA analysis using the Rfam database. Curr Protoc Bioinform. 2018;62(1):e51.
Wolf M, Achtziger M, Schultz J, Dandekar T, Müller T. Homology modeling revealed more than 20,000 rRNA internal transcribed spacer 2 (ITS2) secondary structures. RNA. 2005;11(11):1616–23.
Hofacker IL, Bernhart SHF, Stadler PF. Alignment of RNA base pairing probability matrices. Bioinformatics. 2004;20(14):2222–7.
Torarinsson E, Havgaard JH, Gorodkin J. Multiple structural alignment and clustering of RNA sequences. Bioinformatics. 2007;23(8):926–32.
Bauer M, Klau GW, Reinert K. Accurate multiple sequence–structure alignment of RNA sequences using combinatorial optimization. BMC Bioinform. 2007;8(1):1–18.
Will S, Reiche K, Hofacker IL, Stadler PF, Backofen R. Inferring noncoding RNA families and classes by means of genome-scale structure-based clustering. PLoS Comput Biol. 2007;3(4):e65.
Xu Z, Mathews DH. Multilign: an algorithm to predict secondary structures conserved in multiple RNA sequences. Bioinformatics. 2011;27(5):626–32.
Tabei Y, Kiryu H, Kin T, Asai K. A fast structural multiple alignment method for long RNA sequences. BMC Bioinform. 2008;9(1):33.
Wei D, Alpert LV, Lawrence CE. RNAG: a new Gibbs sampler for predicting RNA secondary structure for unaligned sequences. Bioinformatics. 2011;27(18):2486–93.
Meyer IM, Miklós I. SimulFold: simultaneously inferring RNA structures including pseudoknots, alignments, and trees using a Bayesian MCMC framework. PLoS Comput Biol. 2007;3(8):e149.
Tan Z, Fu Y, Sharma G, Mathews DH. TurboFold II: RNA structural alignment and secondary structure prediction informed by multiple homologs. Nucleic Acids Res. 2017;45(20):11570–81.
Mattick JS. The functional genomics of noncoding RNA. Science. 2005;309(5740):1527–8.
Viehweger A, Krautwurst S, Lamkiewicz K, Madhugiri R, Ziebuhr J, Hölzer M, et al. Direct RNA nanopore sequencing of full-length coronavirus genomes provides novel insights into structural variants and enables modification analysis. Genome Res. 2019;29(9):1545–54.
Lim CS, Brown CM. Know your enemy: successful bioinformatic approaches to predict functional RNA structures in viral RNAs. Front Microbiol. 2018;8:2582.
Rivas E, Clements J, Eddy SR. A statistical test for conserved RNA structure shows lack of evidence for structure in lncRNAs. Nat Methods. 2017;14(1):45–8.
Gutell RR, Power A, Hertz GZ, Putz EJ, Stormo GD. Identifying constraints on the higher-order structure of RNA: continued development and application of comparative sequence analysis methods. Nucleic Acids Res. 1992;20(21):5785–95.
Sankoff D. Simultaneous solution of the RNA folding, alignment and protosequence problems. SIAM J Appl Math. 1985;45(5):810–25.
Will S, Joshi T, Hofacker IL, Stadler PF, Backofen R. LocARNA-P: accurate boundary prediction and improved detection of structural RNAs. RNA. 2012;18(5):900–14.
Will S, Otto C, Miladi M, Möhl M, Backofen R. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics. Bioinformatics. 2015;31(15):2489–96.
Danaee P, Rouches M, Wiley M, Deng D, Huang L, Hendrix D. bpRNA: large-scale automated annotation and analysis of RNA secondary structure. Nucleic Acids Res. 2018;46(11):5381–94.
Shabash B, Wiese KC. jViz. RNA 4.0—visualizing pseudoknots and RNA editing employing compressed tree graphs. PLoS ONE. 2019;14(5):e0210281.
Kucharik M, Hofacker IL, Stadler PF, Qin J. Pseudoknots in RNA folding landscapes. Bioinformatics. 2016;32(2):187–94.
Jabbari H, Wark I, Montemagno C, Will S. Knotty: efficient and accurate prediction of complex RNA pseudoknot structures. Bioinformatics. 2018;34(22):3849–56.
Rivas E, Eddy SR. A dynamic programming algorithm for RNA structure prediction including pseudoknots. J Mol Biol. 1999;285(5):2053–68.
Dirks RM, Pierce NA. An algorithm for computing nucleic acid base-pairing probabilities including pseudoknots. J Comput Chem. 2004;25(10):1295–304.
Möhl M, Will S, Backofen R. Lifting prediction to alignment of RNA pseudoknots. J Comput Biol. 2010;17(3):429–42.
Daily J. Parasail: SIMD C library for global, semi-global, and local pairwise sequence alignments. BMC Bioinform. 2016;17:81.
Rahn R, Budach S, Costanza P, Ehrhardt M, Hancox J, Reinert K. Generic accelerated sequence alignment in SeqAn using vectorization and multi-threading. Bioinformatics. 2018;34(20):3437–45.
Roehr JT, Dieterich C, Reinert K. Flexbar 3.0—SIMD and multicore parallelization. Bioinformatics. 2017;33(18):2941–2.
Bauer M, Klau GW, Reinert K. An exact mathematical programming approach to multiple RNA sequence–structure alignment. Algor Oper Res. 2008;3:130–46.
Lalwani S, Kumar R, Gupta N. Sequence–structure alignment techniques for RNA: a comprehensive survey. Adv Life Sci. 2014;4(1):21–35.
Bayegan AH, Clote P. RNAmountAlign: efficient software for local, global, semiglobal pairwise and multiple RNA sequence/structure alignment. PLoS ONE. 2020;15(1):e0227177.
Miladi M, Raden M, Will S, Backofen R. Fast and accurate structure probability estimation for simultaneous alignment and folding of RNAs with Markov chains. Algor Mol Biol. 2020;15(1):19.
Katoh K, Toh H. Improved accuracy of multiple ncRNA alignment by incorporating structural information into a MAFFT-based framework. BMC Bioinform. 2008;9(1):212.
Lorenz R, Bernhart SH, Zu Siederdissen CH, Tafer H, Flamm C, Stadler PF, et al. ViennaRNA package 2.0. Algor Mol Biol. 2011;6(1):26.
Needleman SB, Wunsch CD. A general method applicable to the search for similarities in the amino acid sequence of two proteins. J Mol Biol. 1970;48(3):443–53.
Urgese G, Paciello G, Acquaviva A, Ficarra E, Graziano M, Zamboni M. Dynamic gap selector: a Smith Waterman sequence alignment algorithm with affine gap model optimisation. In: 2nd International work-conference on bioinformatics and biomedical engineering (IWBBIO), 7–9 April 2014; Granada. Copicentro Granada SL; 2014. p. 1347–1358.
Gotoh O. Consistency of optimal sequence alignments. Bull Math Biol. 1990;52:509–25.
Notredame C, Higgins DG, Heringa J. T-Coffee: a novel method for fast and accurate multiple sequence alignment. J Mol Biol. 2000;302(1):205–17.
Mehlhorn K, Näher S, Uhrig C. The LEDA platform for combinatorial and geometric computing. In: Palamidessi LM, Yung M, editors. Automata, languages and programming. Berlin: Springer; 1997. p. 7–16.
Reinert K, Dadi TH, Ehrhardt M, Hauswedell H, Mehringer S, Rahn R, et al. The SeqAn C++ template library for efficient sequence analysis: a resource for programmers. J Biotechnol. 2017;261:157–68.
Budach S. Generic SIMD extension of dynamic programming algorithms in SeqAn. Freie Universität Berlin; 2015. Master's thesis.
Intel Corporation. Intel® intrinsics guide. Accessed on 18th December; 2020. Available from https://software.intel.com/sites/landingpage/IntrinsicsGuide.
Edmonds J. Paths, trees, and flowers. Can J Math. 1965;17:449–67.
Dezső B, Jüttner A, Kovács P. LEMON—an open source C++ graph template library. Electron Notes Theor Comput Sci. 2011;264(5):23–45.
Katoh K, Standley DM. MAFFT multiple sequence alignment software version 7: improvements in performance and usability. Mol Biol Evol. 2013;30(4):772–80.
Matthews BW. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim Biophys Acta. 1975;405(2):442–51.
Kiryu H, Tabei Y, Kin T, Asai K. Murlet: a practical multiple alignment tool for structural RNA sequences. Bioinformatics. 2007;23(13):1588–98.
Seemann SE, Menzel P, Backofen R, Gorodkin J. The PETfold and PETcofold web servers for intra- and intermolecular structures of multiple RNA sequences. Nucleic Acids Res. 2011;39(Web Server issue):W107–11.
Gardner PP, Wilm A, Washietl S. A benchmark of multiple sequence alignment programs upon structural RNAs. Nucleic Acids Res. 2005;33(8):2433–9.
Löwes B, Chauve C, Ponty Y, Giegerich R. The BRaliBase dent—a tale of benchmark design and interpretation. Brief Bioinform. 2016;18(2):306–11.
Szymanski M, Barciszewska MZ, Erdmann VA, Barciszewski J. 5S ribosomal RNA database. Nucleic Acids Res. 2002;30(1):176–8.
Athanasopoulos V, Praszkier J, Pittard AJ. Analysis of elements involved in pseudoknot-dependent expression and regulation of the repA gene of an IncL/M plasmid. J Bacteriol. 1999;181(6):1811–9.
Lai D, Proctor JR, Zhu JYA, Meyer IM. R-chie: a web server and R package for visualizing RNA secondary structures. Nucleic Acids Res. 2012;40(12):e95.
Sato K, Kato Y, Hamada M, Akutsu T, Asai K. IPknot: fast and accurate prediction of RNA secondary structures with pseudoknots using integer programming. Bioinformatics. 2011;27(13):i85–93.
Urgese G, Paciello G, Acquaviva A, Ficarra E. isomiR-SEA: an RNA-Seq analysis tool for miRNAs/isomiRs expression level profiling and miRNA–mRNA interaction sites evaluation. BMC Bioinform. 2016;17(1):1–13.
Urgese G, Parisi E, Scicolone O, Di Cataldo S, Ficarra E. BioSeqZip: a collapser of NGS redundant reads for the optimization of sequence analysis. Bioinformatics. 2020;36(9):2705–11.
The authors thank the SeqAn team for support regarding the SeqAn interface, as well as anonymous referees for valuable comments.
Open Access funding enabled and organized by Projekt DEAL. JW has been supported by the Deutsche Forschungsgemeinschaft [RE 1712/10-1] and the International Max Planck Research School for Biology and Computation. The vectorization efforts in SeqAn have been supported by Intel in an IPCC at FU Berlin. The funding body did not play any role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
Department of Mathematics and Computer Science, Free University Berlin, Takustraße 9, 14195, Berlin, Germany
Jörg Winkler & Knut Reinert
Max Planck Institute for Molecular Genetics, Ihnestraße 63-73, 14195, Berlin, Germany
Interuniversity Department of Regional and Urban Studies and Planning, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129, Turin, Italy
Gianvito Urgese
Department of Control and Computer Science, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129, Turin, Italy
Elisa Ficarra
Jörg Winkler
Knut Reinert
JW implemented LaRA 2 with major contributions from GU; JW and GU performed the benchmarks and wrote the publication; KR and EF designed and supervised the project. All authors read and approved the final manuscript.
Correspondence to Jörg Winkler.
Winkler, J., Urgese, G., Ficarra, E. et al. LaRA 2: parallel and vectorized program for sequence–structure alignment of RNA sequences. BMC Bioinformatics 23, 18 (2022). https://doi.org/10.1186/s12859-021-04532-7
RNA secondary structure
Structural alignment
Iluminal is an example of an over-the-counter serotonergic drug used by people looking for performance enhancement, memory improvements, and mood-brightening. Also noteworthy, a wide class of prescription antidepressant drugs is based on serotonin reuptake inhibitors, which slow the reabsorption of serotonin by the presynaptic cell, increasing the effect of the neurotransmitter on the receptor neuron – essentially facilitating the free flow of serotonin throughout the brain.
I stayed up late writing some poems and about how […] kills, and decided to make a night of it. I took the armodafinil at 1 AM; the interesting bit is that this was the morning/evening after what turned out to be an Adderall (as opposed to placebo) trial, so perhaps I will see how well or ill they go together. A set of normal scores from a previous day was 32%/43%/51%/48%. At 11 PM, I scored 39% on DNB; at 1 AM, I scored 50%/43%; 5:15 AM, 39%/37%; 4:10 PM, 42%/40%; 11 PM, 55%/21%/38%. (▂▄▆▅ vs ▃▅▄▃▃▄▃▇▁▃)
along with the previous bit of globalization is an important factor: shipping is ridiculously cheap. The most expensive S&H in my modafinil price table is ~$15 (and most are international). To put this in perspective, I remember in the 90s you could easily pay $15 for domestic S&H when you ordered online - but it's 2013, and the dollar has lost at least half its value, so in real terms, ordering from abroad may be like a quarter of what it used to cost, which makes a big difference to people dipping their toes in and contemplating a small order to try out this 'nootropics' thing they've heard about.
How much of the nonmedical use of prescription stimulants documented by these studies was for cognitive enhancement? Prescription stimulants could be used for purposes other than cognitive enhancement, including for feelings of euphoria or energy, to stay awake, or to curb appetite. Were they being used by students as smart pills or as "fun pills," "awake pills," or "diet pills"? Of course, some of these categories are not entirely distinct. For example, by increasing the wakefulness of a sleep-deprived person or by lifting the mood or boosting the motivation of an apathetic person, stimulants are likely to have the secondary effect of improving cognitive performance. Whether and when such effects should be classified as cognitive enhancement is a question to which different answers are possible, and none of the studies reviewed here presupposed an answer. Instead, they show how the respondents themselves classified their reasons for nonmedical stimulant use.
Racetams are often used as a smart drug by finance workers, students, and individuals in high-pressure jobs as a way to help them get into a mental flow state and work for long periods of time. Additionally, the habits and skills that an individual acquires while using a racetam can still be accessed when not taking racetams, because they have become habitual.
I have a needle phobia, so injections are right out; but from the images I have found, it looks like testosterone enanthate gels using DMSO resemble other gels like Vaseline. This suggests an easy experimental procedure: spoon an appropriate dose of testosterone gel into one opaque jar, spoon some Vaseline gel into another, and pick one randomly to apply while not looking. If one gel evaporates but the other doesn't, or they have some other difference in behavior, the procedure can be expanded to something like and then half an hour later, take a shower to remove all visible traces of the gel. Testosterone itself has a fairly short half-life of 2-4 hours, but the gel or effects might linger. (Injections apparently operate on a time-scale of weeks; I'm not clear on whether this is because the oil takes that long to be absorbed by surrounding materials or something else.) Experimental design will depend on the specifics of the obtained substance. As a controlled substance (Schedule III in the US), supplies will be hard to obtain; I may have to resort to the Silk Road.
Popular among computer programmers, oxiracetam, another racetam, has been shown to be effective in recovery from neurological trauma and improvement to long-term memory. It is believed to be effective in improving attention span, memory, learning capacity, focus, sensory perception, and logical thinking. It also acts as a stimulant, increasing mental energy, alertness, and motivation.
Take at 10 AM; seem a bit more active but that could just be the pressure of the holiday season combined with my nice clean desk. I do the chores without too much issue and make progress on other things, but nothing major; I survive going to The Sitter without too much tiredness, so ultimately I decide to give the palm to it being active, but only with 60% confidence. I check the next day, and it was placebo. Oops.
Four of the studies focused on middle and high school students, with varied results. Boyd, McCabe, Cranford, and Young (2006) found a 2.3% lifetime prevalence of nonmedical stimulant use in their sample, and McCabe, Teter, and Boyd (2004) found a 4.1% lifetime prevalence in public school students from a single American public school district. Poulin (2001) found an 8.5% past-year prevalence in public school students from four provinces in the Atlantic region of Canada. A more recent study of the same provinces found a 6.6% and 8.7% past-year prevalence for MPH and AMP use, respectively (Poulin, 2007).
Taurine (Examine.com) was another gamble on my part, based mostly on its inclusion in energy drinks. I didn't do as much research as I should have: it came as a shock to me when I read in Wikipedia that taurine has been shown to prevent oxidative stress induced by exercise and was an antioxidant - oxidative stress is a key part of how exercise creates health benefits and antioxidants inhibit those benefits.
In 3, you're considering adding a new supplement, not stopping a supplement you already use. The I don't try Adderall case has value $0, the Adderall fails case is worth -$40 (assuming you only bought 10 pills, and this number should be increased by your analysis time and a weighted cost for potential permanent side effects), and the Adderall succeeds case is worth $X-40-4099, where $X is the discounted lifetime value of the increased productivity due to Adderall, minus any discounted long-term side effect costs. If you estimate Adderall will work with p=.5, then you should try out Adderall if you estimate that 0.5 × (X - 4179) > 0, i.e. X > 4179. (Adderall working or not isn't binary, and so you might be more comfortable breaking down the various how effective Adderall is cases when eliciting X, by coming up with different levels it could work at, their values, and then using a weighted sum to get X. This can also give you a better target with your experiment- this needs to show a benefit of at least Y from Adderall for it to be worth the cost, and I've designed it so it has a reasonable chance of showing that.)
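The break-even arithmetic can be written out explicitly; the sketch below simply plugs in the numbers from the paragraph above (the function name and structure are mine, not the author's):

```python
def expected_value_of_trial(p_works, value_if_works, pill_cost=40, other_costs=4099):
    """Expected value of trying the drug: the pill cost is lost either way,
    the productivity value and remaining costs apply only if it works."""
    ev_success = value_if_works - pill_cost - other_costs
    ev_failure = -pill_cost
    return p_works * ev_success + (1 - p_works) * ev_failure

# break-even point at p = 0.5: EV = 0 exactly when the lifetime value X equals 4179
print(expected_value_of_trial(0.5, 4179))  # 0.0
```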
Jesper Noehr, 30, reels off the ingredients in the chemical cocktail he's been taking every day before work for the past six months. It's a mixture of exotic dietary supplements and research chemicals that he says gives him an edge in his job without ill effects: better memory, more clarity and focus and enhanced problem-solving abilities. "I can keep a lot of things on my mind at once," says Noehr, who is chief technology officer for a San Francisco startup.
We included studies of the effects of these drugs on cognitive processes including learning, memory, and a variety of executive functions, including working memory and cognitive control. These studies are listed in Table 2, along with each study's sample size, gender, age and tasks administered. Given our focus on cognition enhancement, we excluded studies whose measures were confined to perceptual or motor abilities. Studies of attention are included when the term attention refers to an executive function but not when it refers to the kind of perceptual process taxed by, for example, visual search or dichotic listening or when it refers to a simple vigilance task. Vigilance may affect cognitive performance, especially under conditions of fatigue or boredom, but a more vigilant person is not generally thought of as a smarter person, and therefore, vigilance is outside of the focus of the present review. The search and selection process is summarized in Figure 2.
Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage.
"They're not regulated by the FDA like other drugs, so safety testing isn't required," Kerl says. What's more, you can't always be sure that what's on the ingredient label is actually in the product. Keep in mind, too, that those that contain water-soluble vitamins like B and C, she adds, aren't going to help you if you're already getting enough of those vitamins through diet. "If your body is getting more than you need, you're just going to pee out the excess," she says. "You're paying a lot of money for these supplements; maybe just have orange juice."
So, I have started a randomized experiment; should take 2 months, given the size of the correlation. If that turns out to be successful too, I'll have to look into methods of blinding - for example, some sort of electronic doohickey which turns on randomly half the time and which records whether it's on somewhere one can't see. (Then for the experiment, one hooks up the LED, turns the doohickey on, and applies directly to forehead, checking the next morning to see whether it was really on or off).
Between midnight and 1:36 AM, I do four rounds of n-back: 50/39/30/55%. I then take 1/4th of the pill and have some tea. At roughly 1:30 AM, AngryParsley linked a SF anthology/novel, Fine Structure, which sucked me in for the next 3-4 hours until I finally finished the whole thing. At 5:20 AM, circumstances forced me to go to bed, still having only taken 1/4th of the pill and that determines this particular experiment of sleep; I quickly do some n-back: 29/20/20/54/42. I fall asleep in 13 minutes and sleep for 2:48, for a ZQ of 28 (a full night being ~100). I did not notice anything from that possible modafinil+caffeine interaction. Subjectively upon awakening: I don't feel great, but I don't feel like 2-3 hours of sleep either. N-back at 10 AM after breakfast: 25/54/44/38/33. These are not very impressive, but seem normal despite taking the last armodafinil ~9 hours ago; perhaps the 3 hours were enough. Later that day, at 11:30 PM (just before bed): 26/56/47.
After I ran out of creatine, I noticed the increased difficulty, and resolved to buy it again at some point; many months later, there was a Smart Powders sale so bought it in my batch order, $12 for 1000g. As before, it made Taekwondo classes a bit easier. I paid closer attention this second time around and noticed that as one would expect, it only helped with muscular fatigue and did nothing for my aerobic issues. (I hate aerobic exercise, so it's always been a weak point.) I eventually capped it as part of a sulbutiamine-DMAE-creatine-theanine mix. This ran out 1 May 2013. In March 2014, I spent $19 for 1kg of micronized creatine monohydrate to resume creatine use and also to use it as a placebo in a honey-sleep experiment testing Seth Roberts's claim that a few grams of honey before bedtime would improve sleep quality: my usual flour placebo being unusable because the mechanism might be through simple sugars, which flour would digest into. (I did not do the experiment: it was going to be a fair amount of messy work capping the honey and creatine, and I didn't believe Roberts's claims for a second - my only reason to do it would be to prove the claim wrong but he'd just ignore me and no one else cares.) I didn't try measuring out exact doses but just put a spoonful in my tea each morning (creatine is tasteless). The 1kg lasted from 25 March to 18 September or 178 days, so ~5.6g & $0.11 per day.
Additionally, this protein also controls the life and death of brain cells, which aids in enhancing synaptic adaptability. Synapses are important for creating new memories, forming new connections, or combining existing connections. All of these components are important for mood regulation, maintenance of clarity, laser focus, and learning new life skills.
Null results are generally less likely to be published. Consistent with the operation of such a bias in the present literature, the null results found in our survey were invariably included in articles reporting the results of multiple tasks or multiple measures of a single task; published single-task studies with exclusively behavioral measures all found enhancement. This suggests that some single-task studies with null results have gone unreported. The present mixed results are consistent with those of other recent reviews that included data from normal subjects, using more limited sets of tasks or medications (Advokat, 2010; Chamberlain et al., 2010; Repantis, Schlattmann, Laisney, & Heuser, 2010).
From its online reputation and product presentation to our own product run, Synagen IQ smacks of mediocre performance. A complete list of ingredients could have been convincing and decent, but the lack of information paired with the potential for side effects are enough for beginners to old-timers in nootropic use to shy away and opt for more trusted and reputable brands. There is plenty that needs to be done to uplift the brand and improve its overall ranking in the widely competitive industry.
If you want to try a nootropic in supplement form, check the label to weed out products you may be allergic to and vet the company as best you can by scouring its website and research basis, and talking to other customers, Kerl recommends. "Find one that isn't just giving you some temporary mental boost or some quick fix – that's not what a nootropic is intended to do," Cyr says.
For obvious reasons, it's difficult for researchers to know just how common the "smart drug" or "neuro-enhancing" lifestyle is. However, a few recent studies suggest cognition hacking is appealing to a growing number of people. A survey conducted in 2016 found that 15% of University of Oxford students were popping pills to stay competitive, a rate that mirrored findings from other national surveys of UK university students. In the US, a 2014 study found that 18% of sophomores, juniors, and seniors at Ivy League colleges had knowingly used a stimulant at least once during their academic career, and among those who had ever used uppers, 24% said they had popped a little helper on eight or more occasions. Anecdotal evidence suggests that pharmacological enhancement is also on the rise within the workplace, where modafinil, which treats sleep disorders, has become particularly popular.
It is not because of the few thousand francs which would have to be spent to put a roof [!] over the third-class carriages or to upholster the third-class seats that some company or other has open carriages with wooden benches. What the company is trying to do is to prevent the passengers who can pay the second class fare from traveling third class; it hits the poor, not because it wants to hurt them, but to frighten the rich. And it is again for the same reason that the companies, having proved almost cruel to the third-class passengers and mean to the second-class ones, become lavish in dealing with first-class passengers. Having refused the poor what is necessary, they give the rich what is superfluous.
If the entire workforce were to start doping with prescription stimulants, it seems likely that they would have two major effects. Firstly, people would stop avoiding unpleasant tasks, and weary office workers who had perfected the art of not-working-at-work would start tackling the office filing system, keeping spreadsheets up to date, and enthusiastically attending dull meetings.
Can brain enhancing pills actually improve memory? This is a common question and the answer varies, depending on the product you are considering. The top 25 brain enhancement supplements appear to produce results for many users. Research and scientific studies have demonstrated the brain boosting effects of nootropic ingredients in the best quality supplements. At Smart Pill Guide, you can read nootropics reviews and discover how to improve memory for better performance in school or at work.
For Malcolm Gladwell, "the thing with doping is that it allows you to train harder than you would have done otherwise." He argues that we cannot easily call someone a cheater on the basis of having used a drug for this purpose. The equivalent, he explains, would be a student who steals an exam paper from the teacher, and then instead of going home and not studying at all, goes to a library and studies five times harder.
Serotonin, or 5-hydroxytryptamine (5-HT), is another primary neurotransmitter and controls major features of the mental landscape including mood, sleep and appetite. Serotonin is produced within the body upon exposure to sunlight, which is one reason that the folk-remedy of "getting some sun" to fight depression is scientifically credible. Many foods contain natural serotonergic (serotonin-promoting or releasing) compounds, including the well-known chemical L-Tryptophan found in turkey, which can promote sleep after big Thanksgiving dinners.
Section 3.3: The $\operatorname{Ex}^{\infty }$ Functor
Subsection 3.3.1: Digression: Braced Simplicial Sets
Subsection 3.3.2: The Subdivision of a Simplex
Subsection 3.3.3: The Subdivision of a Simplicial Set
Subsection 3.3.4: The Last Vertex Map
Subsection 3.3.5: Comparison of $X$ with $\operatorname{Ex}(X)$
Subsection 3.3.6: The $\operatorname{Ex}^{\infty }$ Functor
Subsection 3.3.7: Application: Characterizations of Weak Homotopy Equivalences
Subsection 3.3.8: Application: Extending Kan Fibrations
3.3 The $\operatorname{Ex}^{\infty }$ Functor
Let $f: X \rightarrow S$ be a Kan fibration of simplicial sets. If $S$ is a Kan complex, then $X$ is also a Kan complex. Moreover, for every vertex $x \in X$ having image $s = f(x) \in S$, Theorem 3.2.5.1 supplies an exact sequence of homotopy groups
\[ \cdots \rightarrow \pi _{2}(S,s) \xrightarrow {\partial } \pi _{1}(X_ s, x) \rightarrow \pi _{1}( X, x) \rightarrow \pi _1(S,s) \xrightarrow {\partial } \pi _{0}(X_ s, x) \rightarrow \pi _0( X,x) \rightarrow \pi _0(S,s). \]
If $S$ is not a Kan complex, then the results of §3.2.5 do not apply directly. However, one can obtain similar information by replacing $f$ by a Kan fibration $f': X' \rightarrow S'$ between Kan complexes, using the following result:
Theorem 3.3.0.1. Let $f: X \rightarrow S$ be a Kan fibration of simplicial sets. Then there exists a commutative diagram of simplicial sets
\[ \xymatrix { X \ar [r]^-{g'} \ar [d]^{f} & X' \ar [d]^{f'} \\ S \ar [r]^-{g} & S' } \]

with the following properties:
$(a)$
The simplicial sets $S'$ and $X'$ are Kan complexes.
$(b)$
The morphisms $g$ and $g'$ are weak homotopy equivalences.
$(c)$
The morphism $f'$ is a Kan fibration.
$(d)$
For every vertex $s \in S$, the induced map $g'_{s}: X_{s} \rightarrow X'_{ g(s)}$ is a homotopy equivalence of Kan complexes.
Note that we can almost deduce Theorem 3.3.0.1 formally from the results of §3.1.6. Given a Kan fibration $f: X \rightarrow S$, we can always choose an anodyne map $g: S \rightarrow S'$, where $S'$ is a Kan complex (Corollary 3.1.6.2). Applying Proposition 3.1.6.1, we deduce that $g \circ f$ factors as a composition $X \xrightarrow {g'} X' \xrightarrow {f'} S'$, where $f'$ is a Kan fibration and $g'$ is anodyne. The resulting commutative diagram

\[ \xymatrix { X \ar [r]^-{g'} \ar [d]^{f} & X' \ar [d]^{f'} \\ S \ar [r]^-{g} & S' } \]
then satisfies conditions $(a)$, $(b)$, and $(c)$ of Theorem 3.3.0.1. However, it is not so obvious that this diagram also satisfies condition $(d)$. To guarantee this, it is convenient to adopt a different approach to the results of §3.1.6. Following Kan ([MR90047]), we will introduce a functor $\operatorname{Ex}^{\infty }: \operatorname{Set_{\Delta }}\rightarrow \operatorname{Set_{\Delta }}$ and a natural transformation of functors $\rho ^{\infty }: \operatorname{id}_{\operatorname{Set_{\Delta }}} \rightarrow \operatorname{Ex}^{\infty }$ with the following properties:
$(a')$
For every simplicial set $S$, the simplicial set $\operatorname{Ex}^{\infty }(S)$ is a Kan complex (Proposition 3.3.6.9).
$(b')$
For every simplicial set $S$, the morphism $\rho _{S}^{\infty }: S \rightarrow \operatorname{Ex}^{\infty }(S)$ is a weak homotopy equivalence (Proposition 3.3.6.7).
$(c')$
For every Kan fibration of simplicial sets $f: X \rightarrow S$, the induced map $\operatorname{Ex}^{\infty }(f): \operatorname{Ex}^{\infty }(X) \rightarrow \operatorname{Ex}^{\infty }(S)$ is a Kan fibration (Proposition 3.3.6.6).
$(d')$
The functor $\operatorname{Ex}^{\infty }: \operatorname{Set_{\Delta }}\rightarrow \operatorname{Set_{\Delta }}$ commutes with finite limits (Proposition 3.3.6.4). In particular, for every morphism of simplicial sets $f: X \rightarrow S$ and every vertex $s \in S$, the canonical map $\operatorname{Ex}^{\infty }( X_{s} ) \rightarrow \{ s\} \times _{ \operatorname{Ex}^{\infty }(S) } \operatorname{Ex}^{\infty }(X)$ is an isomorphism (Corollary 3.3.6.5).
It follows from these assertions that for any Kan fibration $f: X \rightarrow S$, the diagram of simplicial sets
\[ \xymatrix { X \ar [d]^{f} \ar [r]^-{ \rho ^{\infty }_{X} } & \operatorname{Ex}^{\infty }(X) \ar [d]^{ \operatorname{Ex}^{\infty }(f) } \\ S \ar [r]^-{ \rho ^{\infty }_{S} } & \operatorname{Ex}^{\infty }(S) } \]
satisfies the requirements of Theorem 3.3.0.1.
Most of this section is devoted to the definition of the functor $\operatorname{Ex}^{\infty }$ (and the natural transformation $\rho ^{\infty }$) and the verification of assertions $(a')$ through $(d')$. The construction is rooted in classical geometric ideas. Let $n$ be a nonnegative integer, let

\[ | \Delta ^{n} | = \{ (t_0, t_1, \ldots , t_ n) \in [0,1]^{n+1} : t_0 + t_1 + \cdots + t_ n = 1 \} \]
denote the topological simplex of dimension $n$. This topological space admits a triangulation whose vertices are the barycenters of its faces. More precisely, there is a canonical homeomorphism of topological spaces $| \operatorname{Sd}( \Delta ^{n} ) | \xrightarrow {\sim } | \Delta ^{n} |$, where $\operatorname{Sd}( \Delta ^{n} )$ denotes the nerve of the partially ordered set of faces of $\Delta ^{n}$ (Proposition 3.3.2.3). For every topological space $Y$, composition with this homeomorphism induces a bijection
\[ \varphi _{n}: \operatorname{Sing}_{n}(Y) \xrightarrow {\sim } \operatorname{Hom}_{\operatorname{Set_{\Delta }}}( \operatorname{Sd}( \Delta ^{n} ), \operatorname{Sing}_{\bullet }(Y) ). \]
Motivated by this observation, we define a functor $X \mapsto \operatorname{Ex}(X) = \operatorname{Ex}_{\bullet }(X)$ from the category of simplicial sets to itself by the formula $\operatorname{Ex}_ n(X) = \operatorname{Hom}_{\operatorname{Set_{\Delta }}}( \operatorname{Sd}( \Delta ^{n} ), X)$ (Construction 3.3.2.5). The preceding discussion can then be summarized by noting that, when $X = \operatorname{Sing}_{\bullet }(Y)$ is the singular simplicial set of a topological space $Y$, the bijections $\{ \varphi _ n \} _{n \geq 0}$ determine an isomorphism of semisimplicial sets $\varphi : X \rightarrow \operatorname{Ex}( X )$ (Example 3.3.2.9). Beware that $\varphi $ is generally not an isomorphism of simplicial sets: that is, it need not be compatible with degeneracy operators.
In §3.3.3, we show that the functor $\operatorname{Ex}: \operatorname{Set_{\Delta }}\rightarrow \operatorname{Set_{\Delta }}$ admits a left adjoint (Corollary 3.3.3.4). We denote the value of this left adjoint on a simplicial set $X$ by $\operatorname{Sd}(X)$, and refer to it as the subdivision of $X$. It is essentially immediate from the definition that, in the special case where $X = \Delta ^{n}$ is a standard simplex, we recover the simplicial set $\operatorname{Sd}( \Delta ^{n})$ defined above. More generally, let us say that a simplicial set $X$ is braced if the collection of nondegenerate simplices of $X$ is closed under face operators (Definition 3.3.1.1). If this condition is satisfied, then the subdivision $\operatorname{Sd}(X)$ can be identified with the nerve of the category $\operatorname{{\bf \Delta }}_{X}^{\mathrm{nd}}$ of nondegenerate simplices of $X$ (Proposition 3.3.3.15). Moreover, we also have a canonical homeomorphism of topological spaces $| \operatorname{Sd}(X) | \rightarrow |X|$, which carries each vertex of $\operatorname{N}_{\bullet }( \operatorname{{\bf \Delta }}_{X}^{\mathrm{nd}} )$ to the barycenter of the corresponding simplex of $|X|$ (Proposition 3.3.3.6).
In §3.3.4, we associate to every simplicial set $X$ a pair of comparison maps
\[ \lambda _{X}: \operatorname{Sd}(X) \rightarrow X \quad \quad \rho _{X}: X \rightarrow \operatorname{Ex}(X); \]
we refer to $\lambda _{X}$ as the last vertex map of $X$ (Construction 3.3.4.3). In the special case $X = \Delta ^ n$, the source and target of $\lambda _{X}$ are both weakly contractible, so $\lambda _{X}$ is automatically a weak homotopy equivalence. From this observation, it follows from a simple formal argument that $\lambda _{X}$ is a weak homotopy equivalence for every simplicial set $X$ (Proposition 3.3.4.8). In §3.3.5, we exploit this to show that the functor $\operatorname{Ex}$ carries Kan fibrations to Kan fibrations (Corollary 3.3.5.4), and that the comparison map $\rho _{X}: X \rightarrow \operatorname{Ex}(X)$ is a weak homotopy equivalence for every simplicial set $X$ (Theorem 3.3.5.1). Consequently, the functor $\operatorname{Ex}: \operatorname{Set_{\Delta }}\rightarrow \operatorname{Set_{\Delta }}$ satisfies analogues of properties $(b')$, $(c')$, and $(d')$ above.
Unfortunately, the functor $\operatorname{Ex}: \operatorname{Set_{\Delta }}\rightarrow \operatorname{Set_{\Delta }}$ does not satisfy the analogue of condition $(a')$: in general, a simplicial set of the form $\operatorname{Ex}(X)$ need not satisfy the Kan extension condition. However, one can show that it satisfies a slightly weaker condition: for any morphism of simplicial sets $f_0: \Lambda ^{n}_{i} \rightarrow \operatorname{Ex}(X)$, the composite map $\Lambda ^{n}_{i} \xrightarrow {f_0} \operatorname{Ex}(X) \xrightarrow { \rho _{ \operatorname{Ex}(X)} } \operatorname{Ex}^2(X)$ can be extended to an $n$-simplex of the simplicial set $\operatorname{Ex}^2(X) = \operatorname{Ex}( \operatorname{Ex}(X) )$. We apply this observation in §3.3.6 to deduce that the direct limit
\[ \operatorname{Ex}^{\infty }(X) = \varinjlim ( X \xrightarrow { \rho _{X} } \operatorname{Ex}(X) \xrightarrow { \rho _{ \operatorname{Ex}(X) } } \operatorname{Ex}^2(X) \xrightarrow { \rho _{ \operatorname{Ex}^2(X)} } \operatorname{Ex}^3(X) \rightarrow \cdots ) \]
is a Kan complex (Proposition 3.3.6.9). Moreover, properties $(b')$, $(c')$, and $(d')$ for the functor $X \mapsto \operatorname{Ex}^{\infty }(X)$ are immediate consequences of the analogous properties of the functor $X \mapsto \operatorname{Ex}(X)$.
We close this section by outlining some applications of the functor $\operatorname{Ex}^{\infty }$. In §3.3.7 we prove that, in the situation of Theorem 3.3.0.1, assertion $(d)$ is a formal consequence of $(b)$ and $(c)$ (Proposition 3.3.7.1). Using this, we show that a Kan fibration of simplicial sets $f: X \rightarrow S$ is a weak homotopy equivalence if and only if it is a trivial Kan fibration (Proposition 3.3.7.4), and that a monomorphism of simplicial sets $g: X \hookrightarrow Y$ is a weak homotopy equivalence if and only if it is anodyne (Corollary 3.3.7.5). In §3.3.8 we prove a refinement of Theorem 3.3.0.1, which guarantees that every Kan fibration $f: X \rightarrow S$ is actually isomorphic to the pullback of a Kan fibration $f': X' \rightarrow S'$ between Kan complexes (Theorem 3.3.8.1).
Predicting the Number of Payments in Peer Lending
Deciding which loans to invest in, or gauging the ongoing performance of a portfolio, requires investors to be able to predict how much a given loan will return before reaching maturity. The present paper describes a model predicting such returns.
The mantra 'Past performance is no guarantee of future results' should always be kept in mind, in finance as in other subjects. That being said, it is better to infer from past data than to rely on out-of-touch assumptions. Hence our idea is to analyze the fate of previous loans and build a model that anticipates how future loans will behave.
Since the loan amount is known from inception, we 'only' need to predict the total amount paid back to investors to calculate the financial return. Furthermore, as the amount paid monthly (also called the installment) is constant over time, estimating the total amount paid back to investors only requires estimating the number of payments made over a loan's life. A loan will generate payments as long as it 'survives', and a loan's 'death' means it stops paying.
Available Data
Lending Club provides historical data allowing us to analyze when loans stop paying. Unfortunately, most of the loans are still on-going, since Lending Club has grown spectacularly in recent years.
Analyzing only mature loans is the simplest option. However, that causes two problems: first, the amount of data is significantly smaller. Out of 270,119 loans issued by Lending Club as of March 2014, only 20,233 are old enough to have reached maturity. Second, restricting a model to loans issued before 2011 can introduce a bias since borrowers' characteristics may have changed in the past 3 years.
Therefore we need to choose a model that is able to factor in current loans as well.
Loans may have different terms, and therefore different time horizons. We ensure consistency by normalizing their age on the interval [0,1]. For instance, the age of a 60-month loan in its 39th month is 39/60 = 0.65. An age of 0 corresponds to the issuance, an age of 1 to maturity. In some rare cases, a loan can go beyond its maturity, for instance, when it was late in payment. For the sake of simplicity, we'll set the upper bound at 1 in all cases.
Likewise, the number of payments made by a loan can be normalized as the ratio of the total number of payments to the term of the loan. A Payment Ratio of 0 means the loan hasn't generated any payments yet. A Payment Ratio of 1 means all the installments were paid. For the sake of simplicity, we will also treat loans that were paid back early as having been fully paid at maturity.
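As a concrete illustration, both normalizations can be written as two small functions (a sketch only, not the code behind the analysis):

```python
def normalized_age(months_since_issue, term_months):
    """Age on [0, 1]: 0 at issuance, capped at 1 at (or beyond) maturity."""
    return min(months_since_issue / term_months, 1.0)

def payment_ratio(payments_made, term_months):
    """Fraction of scheduled installments actually paid, capped at 1."""
    return min(payments_made / term_months, 1.0)

print(normalized_age(39, 60))  # 0.65 – a 60-month loan in its 39th month
print(payment_ratio(36, 36))   # 1.0  – all installments paid
```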
Survival Function
Let S(t) be the survival rate of a loan at age t. If a loan is fully paid, then we have S(1) = 1. If it stopped paying at mid-life then S(0.5) = S(1) = 0.5. The complement of the survival function S(t) corresponds to the probability of a loan being dead at age t, which is the cumulative probability of dying by age t.
The problem of estimating survival is quite a common task, and a whole part of statistics called Survival Analysis is devoted to it. Two common usages are estimating how treatment influence life expectancy of patients in medicine, and predicting product reliability in engineering.
In both cases, statisticians also have to rely upon incomplete data, as we do with Peer Lending loans. For instance, medical researchers normally have to analyze the data while some subjects are still alive. Some subjects may also have moved away, and be lost for follow-ups (so whether they died or not is unknown). In both cases, one doesn't know how much longer they might survive. Such survival times are termed 'censored' to indicate that the period of observation was cut off before the monitored event occurred.
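In practice, each loan is reduced to a pair (observed duration, event indicator), exactly as for a patient in a clinical trial. The following pandas sketch uses hypothetical column names (they are not Lending Club's field names) to show how censoring is encoded:

```python
import pandas as pd

loans = pd.DataFrame({
    "term_months":  [36, 60, 36],
    "last_payment": [36, 30, 14],           # month of the last observed payment
    "defaulted":    [False, False, True],   # the third loan stopped paying and is a "death"
})

# normalized duration: when the loan died, or how long it has been observed if censored
loans["duration"] = (loans["last_payment"] / loans["term_months"]).clip(upper=1.0)
loans["event"] = loans["defaulted"].astype(int)  # 1 = death observed, 0 = censored
print(loans[["duration", "event"]])
```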
Let us first compare the probability of death over time for defaulting loans with different terms. We graph the proportion of defaulting loans that stop paying over time, for 6,097 36-month loans and for 2,783 60-month loans:
Cumulative Death Events for Defaulting Loans
In the graph above, the 36-month loans are crosses, the 60-month loans are dots. The horizontal axis corresponds to the maturity of the loans (0 = just issued, 1 = reached maturity), while the vertical axis corresponds to the proportion of loans that have stopped paying. In statistical analysis, the function giving the probability of failure over time is called the hazard function. We observe that the shapes are similar, and will therefore consider that the hazard function does not depend on the term of a loan (once again, the age of a loan is expressed on the interval [0,1]).
Lifetime Distribution Function
Since the hazard function does not depend on the term of the loan, we can mix loans of different terms to plot the Lifetime Distribution Function.
This curve shows the probability of default over time for loans that will default. The risk of default increases sharply over time, especially after a few months, then decreases once a loan has passed half-maturity.
Default Rate
The previous Lifetime Distribution Function applies only to loans that are certain to default; it doesn't tell us the impact of different default rates on the survival curve. In other words: is a loan with a 0.25 probability of default likely to default at a different time than a loan with a probability of 1.0?
To understand such an impact, we need to analyze the number of missed payments as a function of the default rate. To do so, we run a series of Monte-Carlo simulations. A Monte-Carlo simulation is a method for obtaining numerical estimates through repeated random sampling. In the present case, we pick a pre-defined average default rate d at random in the interval [0,1]. Then we select 1,000 loans such that the proportion of defaulting loans matches our pre-defined average d, and measure how much they missed their payments by. We repeat the process thousands of times to obtain reliable results:
Missing Payments by Default rate
Note: there are still missing payments when default=0 because of loans that haven't reached maturity yet.
This shows that the probability of default has a linear effect on missed payments. In other words, the default rate 'flexes' the survival curve down, but does not change its shape. Therefore we can apply a Cox Proportional Hazards Model to create a survival estimator.
Cox Proportional Hazards
The Cox model is a well-recognized statistical technique for modeling survival data that simultaneously explores the effects of several variables. It is often used to analyze the survival of patients in a clinical trial, where it allows statisticians to isolate the effects of treatment from the effects of other variables. A strength of the Cox model is that it allows the inclusion of 'censored' data, i.e. loans that are not mature yet.
The method of proportional hazards splits the survival function into two components: an underlying, baseline hazard function F'(t), and an effect parameter g(z) describing the effect of a vector z of explanatory variables of the loan.
The baseline hazard function is the cumulative probability of death of a hypothetical 'completely average' loan. Since we know the default rate does not modify the shape of the curve F(t), we can obtain it by multiplying the previous lifetime distribution function F(t) by a constant B. We end up with a parametric proportional hazards model where the probability of survival at time t for a loan with the vector of covariates z is:
$$ S(t | z) = 1 - g(z) \cdot B \cdot F(t)$$
The effect parameter g(z) is a proportionally constant function of the vector of covariates z. These covariates z are characteristics of the loan that have an impact on the probability of default. Some hypothetical examples are the loan grade, purpose or how many times the borrowers defaulted before.
It is typically assumed that the log-hazard responds linearly to continuous covariates. Categorical covariates are split into multiple dummy variables. For instance, LoanPurpose = 'car' is changed into purpose_car = 1, purpose_other = 0. The hazard at time t for a loan with characteristics z can then be written as:
$$ \lambda (t |z) = \lambda_0(t) \cdot e^{\beta z}$$
Where $ \lambda_0(t)$ is the baseline hazard, and $ \beta$ is a vector of parameters corresponding to the effect of each covariate.
The regression method introduced by Cox allows us to estimate both $ \beta$ and $ \lambda_0(t)$ by maximizing the partial likelihood of the survival curve. In practice, a Newton-Raphson optimization algorithm is used with the Hessian of the partial log-likelihood to converge to the correct parameters.
If you're lost: this is just a fancy way of saying there exists a clever mathematical method to find both an estimate of the average survival curve and the weight of each of a loan's characteristics, so we can predict when it will default with the best possible accuracy.
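For readers who want to reproduce this kind of fit, the sketch below uses the Python lifelines library; the file and column names are hypothetical and this is not the exact LendingRobot pipeline.

```python
# Minimal sketch: fitting a Cox proportional hazards model on loan data.
import pandas as pd
from lifelines import CoxPHFitter

loans = pd.read_csv("loans.csv")  # hypothetical historical data
loans = pd.get_dummies(loans, columns=["purpose", "home_ownership"])  # dummy variables

cph = CoxPHFitter()
cph.fit(loans, duration_col="age", event_col="defaulted")  # age normalized to [0, 1]
cph.print_summary()                     # parameter estimates with confidence intervals
baseline = cph.baseline_survival_       # estimate of the baseline survival curve
```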
Baseline Survival
When applying the Cox proportional hazards method to Lending Club's historical data, we obtain the following baseline survival table:
This says, for instance, that a perfectly average, 36-month loan has a probability of default of 0.4% (1 - 0.996) when it is 2 months old (0.06 × 36 ≈ 2).
The baseline survival at maturity is 0.895. Therefore S(1) = 0.895. Since S(1) = 1 - F'(1) = 1 - B · F(1) and F(1) = 1, we have B = 0.105. In other words, a perfectly average loan has a 10.5% risk of defaulting before maturity.
This means:
$$ S(1 | z) = 1 - 0.105 \cdot g(z) $$
Effect Parameters
The Cox proportional hazards analysis gives the following parameter estimates:
| Covariate | Estimate | Std. Error | Lower 95% | Upper 95% |
| --- | --- | --- | --- | --- |
| FICO score (lower bound) | -0.0010293 | 0.0004616 | -0.001936 | -0.000127 |
| Sub Grade | 0.05280805 | 0.0019866 | 0.0489075 | 0.0566947 |
| Loan Amount | -5.0413e-6 | 3.3655e-6 | -1.167e-5 | 1.5238e-6 |
| Debt-to-Income ratio | 0.0027784 | 0.0015405 | -0.000243 | 0.0057959 |
| Open Credit Lines | -0.0177774 | 0.0031108 | -0.023885 | -0.011691 |
| Total Credit Lines | 0.00447601 | 0.0013291 | 0.0018616 | 0.0070715 |
| Number of Delinquencies | -0.0541081 | 0.0175649 | -0.089104 | -0.020258 |
| Number of Inquiries | 0.11607386 | 0.0044991 | 0.1070607 | 0.1247001 |
| Length of Employment | -0.0077227 | 0.0029357 | -0.013478 | -0.00197 |
| Home Ownership: 'mortgage' | -0.2285397 | 0.1079474 | -0.416494 | 0.0169571 |
| Home Ownership: 'none' | 0.46302684 | 0.4019667 | -0.475457 | 1.1425968 |
| Home Ownership: 'other' | 0.04318891 | 0.1797057 | -0.315094 | 0.3986464 |
| Home Ownership: 'own' | -0.1689286 | 0.1106693 | -0.363594 | 0.0805187 |
| Purpose: 'Car' | -0.3688268 | 0.0751338 | -0.519288 | -0.22457 |
| Purpose: 'Credit card' | -0.5831873 | 0.0376587 | -0.656992 | -0.509326 |
| Purpose: 'Debt consolidation' | -0.2201599 | 0.0282871 | -0.275089 | -0.164145 |
| Purpose: 'Educational' | 0.23164711 | 0.1040292 | 0.0209673 | 0.4292712 |
| Purpose: 'Home improvement' | -0.1633505 | 0.0469559 | -0.255983 | -0.07186 |
| Purpose: 'Major purchase' | -0.2128853 | 0.0578053 | -0.327724 | -0.101034 |
| Purpose: 'Medical' | 0.1980045 | 0.0730734 | 0.0517766 | 0.3384039 |
| Purpose: 'Small business' | 0.59293817 | 0.0431258 | 0.5080413 | 0.677145 |
| Purpose: 'Vacation' | 0.07459664 | 0.1035141 | -0.135136 | 0.2711493 |
A negative parameter estimate means it reduces the risk of default by 'flattening' the hazard curve; hence the negative values for the FICO score, length of employment or the annual income. A positive parameter 'flexes' the hazard function up, meaning higher risk; hence the positive values for sub-grade or number of inquiries. Surprisingly, the number of delinquencies and the number of open credit lines both have negative parameters, which means they're correlated with lower default rates. The most likely explanation is that they're already factored in, excessively, in the Sub Grade.
Several characteristics turned out to be non-significant and will be discarded. For instance, the debt-to-income ratio has an estimate close to zero. The estimate even changes sign within the 95% confidence interval, meaning we don't even know for sure whether its influence is positive or negative.
Covariates Selection
Loans have more than a hundred different properties. Building a model with so many covariates is both impractical and dangerous, since the risk of over-fitting grows with the number of parameters.
We run the statistical data of mature loans through a stepwise regression to obtain the best predictive variables. Stepwise regression is an iterative process that adds (or removes) covariates one at a time, keeping only the most predictive ones. Although controversial, this method is suited to the present case due to the relatively low number of covariates in our model.
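A bare-bones forward-stepwise loop looks roughly like the following sketch; score_model is a placeholder for whatever goodness-of-fit criterion is used (for instance partial log-likelihood or AIC).

```python
# Minimal sketch of forward stepwise covariate selection.
# score_model is a hypothetical callable returning a score to maximize.
def forward_stepwise(candidates, score_model, max_vars=10):
    selected, best_score = [], float("-inf")
    while candidates and len(selected) < max_vars:
        score, best = max((score_model(selected + [c]), c) for c in candidates)
        if score <= best_score:   # no remaining candidate improves the model
            break
        best_score = score
        selected.append(best)
        candidates = [c for c in candidates if c != best]
    return selected
```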
Amongst the covariates selected are FICO score, Sub Grade, Debt-to-Income, Earliest Credit Line, Length of Employment and Number of Public Records.
Estimating Returns
Being able to forecast the number of payments for a loan based on its characteristics allows us to calculate its expected return. S(1|z), the probability of survival at maturity, can also be viewed as the payment ratio. Multiplying it by the loan term N gives the expected total number of payments:
$$ n = S(1|z) \cdot N $$
A series of identical payments over time is called an Annuity. With r being the monthly discount rate, the Net Present Value of a loan of amount A paying n annuities of amount p is:
$$ NPV = \frac {-A} {(1+r)^0} + \frac {p} {(1+r)^1} + \frac {p} {(1+r)^2} + … + \frac {p} {(1+r)^{n}} $$
Which gives:
$$ NPV = p \cdot \frac{1 - (1+r)^{-n}}{r} - A$$
When the NPV is 0, it means the discount rate r is such that the sum of the discounted payments equals the loan amount. This rate is the Internal Rate of Return (IRR).
Unfortunately the IRR cannot be directly calculated. A computer program can, however, approximate it using successive iterations until the NPV is close enough to zero. A simple algorithm that speeds up the calculation, called the secant method, is:
$$ { r }_{ n+1 }={ r }_{ n }-{ NPV }_{ n }\left( \frac { r_{ n }-r_{ { n-1 } } }{ NPV_{ n }-NPV_{ n-1 } } \right) $$
Once the monthly return r has been determined, obtaining the annual rate of return simply requires us to annualize it:
$$ R = (1 + r)^{12} - 1$$
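Putting the last three formulas together, a minimal sketch of the computation is shown below; the loan figures are hypothetical.

```python
# NPV of n equal monthly payments, IRR via the secant method, and annualization.
def npv(rate, amount, payment, n):
    return payment * (1 - (1 + rate) ** (-n)) / rate - amount

def irr(amount, payment, n, r0=0.001, r1=0.02, tol=1e-9, max_iter=100):
    for _ in range(max_iter):
        f0, f1 = npv(r0, amount, payment, n), npv(r1, amount, payment, n)
        if abs(f1) < tol:
            break
        r0, r1 = r1, r1 - f1 * (r1 - r0) / (f1 - f0)   # secant update
    return r1

monthly = irr(amount=10_000, payment=330, n=33)        # hypothetical loan
annual = (1 + monthly) ** 12 - 1
print(f"monthly IRR: {monthly:.4%}, annualized: {annual:.4%}")
```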
Such a return estimate gives us a directly usable scoring mechanism.
To assess the validity of our prediction algorithm, we need to cross-validate it, which means obtaining the model parameters from one set of data, then applying them to a different set and measuring how the model performs.
To do so, we divide the historical Lending Club data into 2 distinct sets. To ensure a random but consistent distribution, the first set is made up of loans with an even ID number, the second one of loans with an odd ID number. We take the first set as 'in-sample' and fit the Cox Proportional Hazards model to estimate the effect parameters. Then we select the loans past maturity in the second, 'out of sample' data set, score them with the aforementioned Expected Return procedure, sort them by decreasing score and compute the average performance for each decile:
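The split and the decile summary can be sketched as follows; the file and column names are hypothetical, and the 'score' column is assumed to have been produced by the expected-return procedure above.

```python
# Minimal sketch: even/odd ID split and per-decile performance summary.
import pandas as pd

loans = pd.read_csv("scored_loans.csv")                  # hypothetical scored data
in_sample = loans[loans["id"] % 2 == 0]                  # even IDs: used to fit the model
out_sample = loans[(loans["id"] % 2 == 1) & (loans["mature"] == 1)]  # odd IDs, past maturity

deciles = pd.qcut(out_sample["score"].rank(method="first"), 10, labels=False)
print(out_sample.groupby(deciles)[["score", "observed_return"]].mean())
```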
| Loans | Default Rate | Expected Return | Observed Return |
| --- | --- | --- | --- |
| 1,047 | 9.28% | 9.50% | 12.42% |
| 1,047 | 4.50% | 4.91% | 9.46% |
This data shows top scored loans are the ones providing the best return on average. Financial performance degrades as the score gets lower, empirically validating the efficiency of our model.
The average return for loans in the top expected quartile is 6.44%, a very significant 2.44% above the average of 4.00% for the same data set.
It also shows that the default rate does not necessarily increase with lower scores. Since the goal of the method is to increase returns, it sometimes means favoring riskier loans because their high interest rate compensates for the risk.
The LendingRobot Expected Return model relies on an improved and more sophisticated version of the model described above. Some of the improvements are:
Adding composite variables to the model. For instance, while 'Loan Amount' is not by itself a good predictor, the ratio of 'Loan Amount' divided by 'Monthly Income', which indicates how much someone is borrowing compared to what they can afford, can be more strongly correlated with the probability of default. The stepwise algorithm can be used again to measure the significance of these composite variables and keep the most relevant ones.
Siloing the data according to a given variable, such as the loan grade, credit score or income, calculating the Cox parameters for each silo, and then applying a weighted average of the parameters. The underlying idea is that loans may have different behavioral characteristics based on their type. For instance, the loan purpose may be more important for a high-risk loan than for a low-risk one. The optimal weights for each silo are obtained through a machine learning process called a greedy algorithm.
Our analysis shows that these improvements alone allow us to increase the returns of high score loans by 1.56%.
An improved version of the Cox proportional hazards model allows us to predict future returns with a significant edge over pure random samples.
Moreover, once the model parameters have been determined, calculating expected returns is straightforward and can be done in real-time when new investment opportunities arise.
Tags: peer lending
I'm a bit dubious about the normalising of time. Do you have any data to back it up? It seems more natural to assume that there is a constant hazard rate each month, than to say that 'time speeds up' for a 10 year loan….vs a 3 year loan. I understand the normalisation argument too ( eg if you have paid back 90% of your loan you are not going to default). I guess it is better to split data in maturities?
Emmanuel Marot says:
Loan terms are only 3 or 5 years. At the time of our analysis, we could only see limited discrepancies between the two, once the duration has been normalized.
That being said, we now have more data and are refining our analysis to take into account differences between platforms and terms. Stay tuned!
sorry I hadn't seen that that's exactly what you did (comparing maturities) in the next paragraph/graph: "We graph the proportion of defaulting loans that stop paying over time, for 6,097 36-months loans and for 2,783 60-months loans"
For training and test data set, you picked odd and even number loans. Ideally, I would train my algorithm on lets say 2007-2011 and then test it on 2011-2015 loans. This would validate if the economic factors influence the correlation with parameters such as Income, home ownership etc. What do you think?
Justin Hsi says:
We have reason to believe that economic factors are influencing the parameter's correlation with default (as work on our next model suggests), but we think that splitting to train on old loans and test on new loans would have other confounding variables that could mask the economic factor's influence (e.g. a platform's constantly changing credit scoring model). Nevertheless, much work remains to be investigated in this area!
I think it would be good if you validated the model the way it will be used: ie to predict new loan performance from old. so do something like using first x years data predict next year loans …
How do you model early full repayments when calculating expected returns?
At the individual note/loan level, early full repayment does not change our expected return (since it is the Internal Rate of Return of the predicted series of cashflows). For more details see our post on calculating returns. Partial prepayment does change our expected return insofar as relocating where we are on the hazard curve.
Interesting approach but your model includes "sub_grade" which is essentially LC model output. So that is why you get funny regression output. Also LC is updating its model so "sub_grade" changes which makes the model you propose unstable in some sense. Any thought on this?
Also what is the advantage of using Cox over a logistic model? I understand that if you want to be buy and sell on the 2ndary market Cox is beneficial but if it is to buy and hold the notes, logistic models could be easier to built.
We do update our model from time to time, in order to keep it relevant with changes in economic conditions and LC's updates to their model.
As a matter of fact, Cox was our original approach, but since then we split the survival and default models, so nowadays our model relies more on multivariate logistic regressions.
An improved secure designated server public key searchable encryption scheme with multi-ciphertext indistinguishability
Junling Guo1,
Lidong Han ORCID: orcid.org/0000-0003-2094-56291,2,
Guang Yang1,
Xuejiao Liu1,2 &
Chengliang Tian3
Journal of Cloud Computing volume 11, Article number: 14 (2022) Cite this article
In the cloud, users prefer to store their sensitive data in encrypted form. Searching keywords over encrypted data without loss of data confidentiality is an important issue. In 2004, Boneh et al. proposed the first public-key searchable encryption scheme, which allows users to search by means of a private key. However, most existing public-key searchable encryption schemes are vulnerable to keyword guessing attacks and cannot satisfy multi-ciphertext indistinguishability. In this paper, we construct a secure designated-server public-key searchable encryption scheme based on the Diffie-Hellman problem. Our security analysis shows that the proposed scheme can resist keyword guessing attacks and provides multi-ciphertext indistinguishability against any adversary. Furthermore, the proposed scheme achieves multi-trapdoor privacy for external attackers. Moreover, the simulation results comparing our scheme with previous schemes demonstrate that our new scheme is suitable for practical application.
With the rapid development of cloud computing, a growing number of users and companies prefer to store data in the cloud. In such cases, they encrypt the data before uploading in order to ensure data privacy. However, it is extremely difficult to retrieve keywords over encrypted data using traditional search mechanisms. Searchable encryption has become a promising solution to ensure both the security and the availability of data.
In 2004, Boneh et al. [1] proposed the concept of Public-key Encryption with Keyword Search (PEKS) and gave a concrete scheme. However, in 2006, Byun et al. [2] put forward an offline keyword guessing attack (KGA) against Boneh et al.'s scheme. Later, Baek et al. [3] presented a PEKS scheme without a secure channel in 2008. Then, Rhee et al. [4] introduced a new security notion for PEKS, trapdoor indistinguishability, and put forward a PEKS scheme with a designated test server (dPEKS) which satisfies trapdoor indistinguishability.
Wang et al. [5] showed that even though [4] satisfies trapdoor indistinguishability, their dPEKS scheme cannot resist inside KGA. Since the keyword encryption algorithm is public in previous schemes, an internal attacker can generate the ciphertext of a candidate keyword by himself. That is, a malicious server can efficiently test whether a trapdoor was generated from the candidate keyword or not.
To resist keyword guessing attacks initiated by malicious servers, many researchers have proposed variants of PEKS schemes. Tang et al. [6] introduced the concept of keyword registration, which requires the sender to register keywords with the receiver in advance, and proposed public-key encryption with registered keyword search (PERKS). Chen et al. [7] put forward a solution using two servers that do not collude with each other, but this assumption is rather idealized. Later, Huang et al. [8] presented the concept of public-key authenticated encryption with keyword search (PAEKS) to resist inside KGA. In their scheme, the data owner needs to use the secret key to authenticate the ciphertext of the keyword. A malicious cloud server cannot generate keyword ciphertexts for testing without the owner's private key. Therefore, KGA does not succeed against their scheme.
Qin et al. [9] in 2020 introduced a new security notion called multi-ciphertext indistinguishability (MCI): from two or more ciphertexts, an adversary cannot determine whether they were generated from the same keyword. They constructed a new PAEKS scheme that guarantees MCI security but does not provide multi-trapdoor privacy (MTP), i.e. an attacker is able to check whether two or more trapdoors contain the same keyword. In 2021, Pan and Li [10] put forward a new PAEKS scheme with MCI and MTP security. Later, Cheng and Meng [11] proved that Pan and Li's scheme does not satisfy MTP security.
Motivations and contributions
In searchable encryption, the security goal is that the ciphertexts and trapdoors leak no information about keywords. So far, few public-key searchable encryption schemes achieve both MCI and MTP together with security against KGA. In this paper, our goal is to construct an enhanced secure designated-server public-key searchable encryption scheme with MCI and MTP. The contributions of our paper are summarized as follows:
We give a security analysis of Li et al.'s scheme [12] and show that their scheme does not satisfy multi-trapdoor privacy.
We propose a secure scheme that satisfies the designated-tester requirement; that is to say, no one other than the designated server can run the test algorithm. Moreover, we prove that our scheme satisfies MCI security, MTP security against external adversaries, and designated testability.
We analyze our scheme's computation and communication costs by comparing it with previous schemes. The results show that our scheme has clear advantages in the keyword ciphertext and trapdoor algorithms, and that its test algorithm is not inferior to those of other schemes. Moreover, our scheme provides stronger security for keyword privacy.
In 2004, Boneh et al. [1] first proposed public key encryption with keyword search, which started the research on public-key searchable encryption. Later, Abdalla et al. [13] presented an identity-based searchable encryption scheme. Byun et al. [2] put forward an offline KGA against Abdalla et al.'s scheme. Baek et al. [3] suggested that a tester should be appointed to perform the test algorithm in order to hide the user's search pattern, ensuring that only those who have the tester's private key can conduct the test. Rhee et al. [4, 14] put forward a dPEKS model to resist outside KGA and constructed a general dPEKS structure based on a designated tester. Fang et al. [15] presented a dPEKS scheme that resists outside KGA without relying on random oracles. Rhee et al. [16] constructed an identity-based PEKS scheme with a designated tester. Emura et al. [17] presented a general construction of SCF-PEKS based on anonymous identity-based encryption (IBE) and one-time signatures. After that, many schemes [18–20] have made efforts to resist offline guessing attacks, but these schemes cannot resist inside KGA.
To resist inside KGA, Xu et al. [21] proposed a PEKS scheme with fuzzy keywords, mitigating inside KGA by ensuring that each trapdoor corresponds to multiple keywords. Wang et al. [22] gave a PEKS scheme with dual servers. In 2017, Huang et al. [8] proposed the concept of public-key authenticated searchable encryption. After that, Huang et al.'s scheme was extended to certificateless PAEKS [23–25] and identity-based PAEKS [12]. In the field of the Internet of Things, many PAEKS variants [26–28] have been proposed. In 2019, Lu et al. [29] presented a PEKS scheme without random oracles. Later, Noroozi et al. [30] showed that Huang et al.'s scheme is insecure in the case of multiple receivers.
In 2020, Qin et al. [9] presented a new PAEKS scheme that is claimed to provide multi-ciphertext indistinguishability but not multi-trapdoor privacy. Recently, Li et al. [12] proposed a new PAEKS scheme with a designated server which still cannot guarantee MTP. Furthermore, almost all PAEKS schemes [8, 12] and their variants [9, 25, 31, 32] cannot provide MTP security or hide the search pattern of the user. Later, Qin et al. [33] proposed an improved security model and gave a specific scheme. Recently, lattice-based searchable encryption schemes [34, 35] have been proposed which are claimed to guarantee stronger security.
The rest of this paper is organized as follows. In section 2, we introduce some preliminary knowledge. Then we review Li et al.'s scheme and give a security analysis of it in section 3. The fourth section defines the enhanced scheme and its security model. Section 5 gives a concrete construction and proves that it satisfies designated testability, MTP security and MCI security. Then in section 6, we compare and analyze our scheme against others. In the last section, we give a summary and an outlook on future work.
Bilinear pairing
We briefly describe the definition of bilinear mapping. (See more details in [36]). Let \(\hat {e}:\mathbb {G}_{1} \times \mathbb {G}_{1} \rightarrow \mathbb {G}_{2}\) be a computable bilinear pairing, where \(\mathbb {G}_{1}\) and \(\mathbb {G}_{2}\) are two cyclic groups of prime order p. The map \(\hat {e}\) has the following properties.
For any \(x,y \in \mathbb {Z}_{p}^{*}\), \(g,g_{1} \in \mathbb {G}_{1}\), the equation \(\hat {e}\left (g^{x}, g_{1}^{y}\right) = \hat {e}\left (g, g_{1}\right)^{xy}\) holds.
For any generator \(g \in \mathbb {G}_{1}\), \(\hat {e}(g,g)\) is a generator of \(\mathbb {G}_{2}\).
For any \(g,g_{1} \in \mathbb {G}_{1}\), there exists a PPT algorithm to compute \(\hat {e}(g_{1},g)\).
Complexity assumptions
In this subsection, \(\mathbb {G}_{1}\) and \(\mathbb {G}_{2}\) are two cyclic groups of prime order p, g is a generator of \(\mathbb {G}_{1}\) and \(\hat {e}:\mathbb {G}_{1} \times \mathbb {G}_{1} \rightarrow \mathbb {G}_{2}\) is a bilinear map. Decisional Diffie–Hellman assumption and Decisional bilinear Diffie–Hellman assumption are introduced as follows.
(Decisional Diffie–Hellman (DDH) assumption): Given \(g, g^{x}, g^{y} \in \mathbb {G}_{1}\), where \(x,y \in \mathbb {Z}_{p}^{*}\), there exists no polynomial-time algorithm to distinguish \((g,g^{x},g^{y},g^{xy})\) from \((g,g^{x},g^{y},Z)\), where \(Z \in _{R} \mathbb {G}_{1}\). The advantage of an adversary \(\mathcal {A}\) is
$$Adv^{DDH}_{\mathcal{A}}\!(\kappa)\! =\! \vert Pr[\mathcal{A}(g,g^{x},g^{y},g^{xy})]\! - \! Pr[\mathcal{A}(g,g^{x},g^{y},Z)] \vert$$
DDH assumption holds if the advantage is negligible.
(Decisional bilinear Diffie–Hellman (DBDH) assumption): Given \(g, g^{x}, g^{y}, g^{z} \in \mathbb {G}_{1}\), where \(x,y,z \in \mathbb {Z}_{p}^{*}\). The advantage of an adversary \(\mathcal {A}\) is \(Adv^{DBDH}_{\mathcal {A}}(\kappa) = \vert Pr\left [\mathcal {A}\left (g,g^{x},g^{y},g^{z},\hat{e}(g,g)^{xyz}\right)\right ] - Pr\left [\mathcal {A}\left (g,g^{x},g^{y},g^{z},Z\right)\right ] \vert \), where \(x,y,z \in _{R} \mathbb {Z}_{p}^{*}\) and \(Z \in _{R} \mathbb {G}_{2}\). The DBDH assumption holds if this advantage is negligible.
Our system framework is shown in Fig. 1. The system contains three entities: a cloud server, a data owner and a receiver. The data owner wants to send confidential files to the cloud while allowing only the assigned receiver to access the data. The exact procedure is as follows. First, the data owner extracts a group of keywords from the documents and builds a secure index containing the keyword ciphertexts. Second, the data owner encrypts the files using symmetric encryption and uploads the encrypted files together with the keyword ciphertext index to the server. Third, the receiver generates a trapdoor for a query keyword and sends it to the server. Finally, after receiving the trapdoor, the cloud server runs the test algorithm and outputs the search results. In Table 1, we summarize the notations used in this paper.
System Framework
Table 1 Notations
Cryptanalysis of Li et al.'s scheme
In this section, we review the identity-based searchable authenticated encryption scheme with a designated server proposed by Li et al. [12]. After analyzing their scheme, we show that it cannot guarantee multi-trapdoor privacy.
Review of Li et al.'s scheme
Li et al.'s scheme consists of the following polynomial algorithms:
Setup(κ): From the security parameter κ, it outputs a public parameter para = \((\mathbb {G}_{1},\mathbb {G}_{2},\hat {e}, p, g, g_{1}, H, H_{1}, mpk)\) and msk, where \(\mathbb {G}_{1}\) and \(\mathbb {G}_{2}\) are cyclic groups of prime order p, g and g1 are generators of \(\mathbb {G}_{1}\), \(\hat {e}:\mathbb {G}_{1} \times \mathbb {G}_{1} \rightarrow \mathbb {G}_{2}\) is an efficient bilinear map, \(H:\mathbb {G}_{2} \times \{0,1\}^{*} \rightarrow \mathbb {G}_{1}\), \(H_{1}: \{0,1\}^{*} \rightarrow \mathbb {G}_{1}\), \(msk=\alpha \in \mathbb {Z}_{p}\) and \(mpk=g^{\alpha}\).
KGens(para): With the parameter para, it outputs the server's public/secret key pair \((Pk_{S},Sk_{S})=(g^{z},z)\), where \(z \in _{R} \mathbb {Z}_{p}\).
KGenusr(para,msk,ID): Inputting (para,msk,ID), it returns \(Sk_{ID}=H_{1}(ID)^{\alpha}\).
\(dIBAEKS(para,w,Pk_{S},Sk_{ID_{O}},ID_{O},ID_{R})\): With para, a keyword w, \(Pk_{S}\), the data owner's secret key \(Sk_{ID_{O}}\) and identity \(ID_{O}\), and the receiver's identity \(ID_{R}\), it returns a keyword ciphertext \(C_{w}=(C_{1},C_{2},C_{3})\), where \(C_{1} = \hat {e}\left (H(k,w),Pk_{S}^{s}\right)\), \(C_{2} = g^{s}\), \(C_{3}= g_{1}^{s}\), \(s \in _{R} \mathbb {Z}_{p}\) and \(k = \hat {e}(Sk_{ID_{O}}, H_{1}(ID_{R}))\).
\(Trapdoor(para,w,Pk_{S},Sk_{ID_{R}},ID_{O},ID_{R})\): It outputs a trapdoor \(T_{w} = (H(k,w) \cdot g_{1}^{r},g^{r})\), where \(r \in _{R} \mathbb {Z}_{p}\) and \(k = \hat {e}(H_{1}(ID_{O}),Sk_{ID_{R}})\).
Test(para,SkS,IDO,IDR,CW,TW): It outputs 1 if
$$C_{1} \cdot \hat{e}\left(T_{2}^{Sk_{S}},C_{3}\right)=\hat{e}\left(T_{1}^{Sk_{S}},C_{2}\right),$$
and 0 otherwise.
Cryptanalysis of their scheme
In [12], Li et al. claimed that their dIBAEKS scheme satisfies trapdoor indistinguishability in the random oracle model. Although each trapdoor in dIBAEKS contains a random number, there is an efficient algorithm to ascertain whether two trapdoors encrypt the identical keyword or not. In fact, for any two trapdoors \(T_{w}=(T_{1},T_{2})\) and \(T_{w^{\prime }} = \left (T_{1}^{\prime },T_{2}^{\prime }\right)\) containing unknown keywords w and w′, respectively, the decision algorithm run by the adversary is as follows:
$$\begin{array}{*{20}l} &\quad \hat e (T_{1},g)\cdot \hat e\left(g_{1},T_{2}\right)^{-1} \\ &= \hat e\left(g,H(k,w)\cdot g_{1}^{r}\right)\cdot \hat e\left(g_{1},g^{r}\right)^{-1}\\ &=\hat e(g,H(k,w))\,\hat e\left(g_{1},g^{r}\right)\,\hat e\left(g_{1},g^{r}\right)^{-1}\\ &=\hat e(H(k,w),g) \end{array} $$
where k and g are both fixed for the same owner and receiver, so \(\hat e(H(k,w),g)\) depends only on the keyword. An attacker captures some tuples \(T_{w}=(T_{1},T_{2})\) and \(T_{w^{\prime }} = (T_{1}^{\prime },T_{2}^{\prime })\). The distinguishing attack works as follows: if
$$\hat e(T_{1},g) \cdot \hat e(g_{1},T_{2})^{-1} = \hat e(T_{1}^{\prime},g) \cdot \hat e(g_{1},T_{2}^{\prime})^{-1} $$
then w=w′, and w≠w′ otherwise. Thus, the dIBAEKS scheme in [12] is insecure with respect to multi-trapdoor privacy. This means that, for a data owner sharing files with the receiver, an external attacker can effectively determine whether multiple trapdoors generated by the receiver correspond to the same keyword.
In addition, the scheme dIBAEKS-3 proposed in [12] has a similar vulnerability. The decision algorithm is as follows: \(\hat {e}(T_{1},g_{1})\cdot \hat {e}(T_{2},g)^{-1} = \hat {e}(H(k,w),g_{1}) \). From two trapdoors \(T_{w}=(T_{1},T_{2})\) and \(T_{w^{\prime }}=(T_{1}^{\prime },T_{2}^{\prime })\), an attacker checks whether \(\hat {e}(T_{1},g_{1})\cdot \hat {e}(T_{2},g)^{-1} = \hat {e}(T_{1}^{\prime },g_{1})\cdot \hat {e}(T_{2}^{\prime },g)^{-1}\) holds. If it holds, these two trapdoors were generated from the same keyword. Thus, the dIBAEKS-3 scheme is not able to provide multi-trapdoor privacy.
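Because the check is purely algebraic, it can be verified with a toy simulation that tracks exponents instead of using a real pairing library: elements of \(\mathbb{G}_1\) are represented by their discrete logarithms base g, and \(\hat{e}(g^{a},g^{b})\) by the product ab mod q. The sketch below is only a sanity check of the attack's algebra, not an implementation of the scheme.

```python
# Toy exponent-tracking check of the trapdoor-linkability attack on dIBAEKS.
import secrets

q = (1 << 127) - 1                       # a prime standing in for the group order
rand = lambda: secrets.randbelow(q - 1) + 1

d1 = rand()                              # g1 = g^d1
h_kw1, h_kw2 = rand(), rand()            # discrete logs of H(k, w) for two keywords

def trapdoor(h_kw):
    r = rand()
    return ((h_kw + d1 * r) % q, r)      # T1 = H(k,w) * g1^r,  T2 = g^r  (as exponents)

def link_value(T):
    T1, T2 = T
    # e(T1, g) * e(g1, T2)^(-1)  ->  exponent T1 - d1*T2 in the target group
    return (T1 - d1 * T2) % q

assert link_value(trapdoor(h_kw1)) == link_value(trapdoor(h_kw1))   # same keyword: equal
assert link_value(trapdoor(h_kw1)) != link_value(trapdoor(h_kw2))   # different keyword: distinct
print("trapdoor linkability reproduced in the toy model")
```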
Definitions and security model
Our scheme consists of seven (probabilistic) polynomial-time(PPT) algorithms as follows.
Setup(κ)→pp: Given a security parameter κ, it returns the global parameter pp.
KeyGenO(pp)→(PkO,SkO): Given the parameter pp, it returns the public/secret key pairs (PkO,SkO) of the data owner.
KeyGenR(pp)→(PkR,SkR): With the parameter pp, it outputs the public/secret key pairs (PkR,SkR) of the receiver.
KeyGenS(pp)→(PkS,SkS): Inputting pp, it calculates the public key and private key pairs (PkS,SkS) of the server.
PEKS(pp,PkS,PkR,SkO,w)→Cw: Given the parameter pp, PkS of server, PkR of receiver, SkO of data owner and a keyword w, it outputs the ciphertext Cw.
\(Trapdoor(pp, Pk_{O}, Sk_{R}, w^{\prime }) \rightarrow T_{w^{\prime }}\): With the parameter pp, the data owner's \(Pk_{O}\), the receiver's \(Sk_{R}\) and a keyword w′, it computes the trapdoor \(T_{w^{\prime }}\) of w′.
\(Test\left (pp, Sk_{S}, C_{w}, T_{w^{\prime }}\right) \rightarrow \beta \): With pp, \(Sk_{S}\), \(C_{w}\) and \(T_{w^{\prime }}\), it outputs 1 if \(C_{w}\) and \(T_{w^{\prime }}\) contain the same keyword, and 0 otherwise.
Security model
In order to prevent an adversary from obtaining any useful information about keywords, we define three games between a challenger \(\mathcal {C}\) and an adversary \(\mathcal {A}\), namely multi-ciphertext indistinguishability, multi-trapdoor privacy and designated testability.
Game 1: Multi-ciphertext indistinguishability.
Setup: The challenger \(\mathcal {C}\) runs KeyGenS, KeyGenO and KeyGenR algorithms with pp to generate (PkS, SkS), (PkO,SkO) and (PkR,SkR). It returns the tuple (pp,(PkS,SkS)) to \(\mathcal {A}\).
Phase 1: \(\mathcal {A}\) can issue the following two oracles for polynomial number times.
Ciphertext Oracle \(\mathcal {O}_{C}\): With (PkO,PkR,PkS,w), \(\mathcal {C}\) computes the ciphertext Cw and sends it to \(\mathcal {A}\).
Trapdoor Oracle \(\mathcal {O}_{T}\): With (PkO,PkR,w), \(\mathcal {C}\) computes a trapdoor Tw of a keyword w and returns it to \(\mathcal {A}\).
Challenge: \(\mathcal {A}\) sends two tuples of challenge keywords \(\vec { w}_{0} = \left (w_{0,1}, \dots, w_{0,n}\right), \vec {w}_{1} = \left (w_{1,1}, \dots, w_{1,n}\right)\) to \(\mathcal {C}\). However, the attacker cannot query the challenge keyword in tuple \(\vec w_{0}\) or \(\vec w_{1}\) in advance. \(\mathcal {C}\) selects a random bit b∈{0,1}, computes Cb,i←PEKS(pp,PkS,PkR,SkO,wb,i), and returns a ciphertext set \(\vec C_{b} = \left (C_{b,1},\dots,C_{b,n}\right)\) to the adversary \(\mathcal {A}\).
Phase 2: The adversary \(\mathcal {A}\) can continue to query \(\mathcal {O}_{C}\) and/or \(\mathcal {O}_{T}\) for any keyword w except \(w \in \vec w_{0} \cup \vec w_{1} \).
Guess: The adversary \(\mathcal {A}\) sends its guess bit \(\hat {b^{\prime }}\) to \(\mathcal {C}\). Therefore, the condition that \(\mathcal {A}\) wins the game is b′=b. The advantage of any PPT attacker \(\mathcal {A}\) who wins this game is defined as \(Adv_{\mathcal {A}}^{MCI}(\kappa) = \vert Pr[b^{\prime } = b] - \frac {1}{2} \vert \).
Game 2: Multi-trapdoor privacy.
Setup: Same as Game 1, \(\mathcal {C}\) generates (PkS,SkS), (PkO,SkO) and (PkR,SkR) and gives (pp,(PkS,SkS)) to \(\mathcal {A}\).
Phase 1: As in Game 1, an adversary can adaptively query the ciphertext oracle \(\mathcal {O}_{C}\) and trapdoor oracle \(\mathcal {O}_{T}\) in polynomial time.
Challenge: \(\mathcal {A}\) sends two challenge keywords tuples \(\vec w_{0} = \left (w_{0,1},\dots,w_{0,n}\right)\), \(\vec w_{1} = \left (w_{1,1},\dots, w_{1,n}\right)\) to \(\mathcal {C}\). However, the attacker cannot query the challenge key in tuple \(\vec w_{0}\) or \(\vec w_{1}\) in advance. \(\mathcal {C}\) computes and returns a trapdoor set \(\vec T_{b} = \left (T_{b,1},\dots,T_{b,n}\right)\) of a random bit b∈{0,1}.
Phase 2: As in Phase 1, \(\mathcal {A}\) can continue to query \(\mathcal {O}_{C}\) and/or \(\mathcal {O}_{T}\) for any keyword w except \(w \in \vec w_{0} \cup \vec w_{1} \).
Guess: \(\mathcal {A}\) sends its guess bit \(\hat {b^{\prime }}\) to \(\mathcal {C}\). Therefore, \(\mathcal {A}\) will win the game if b′=b. The advantage of all PPT adversaries \(\mathcal {A}\) who win the game is defined as \(Adv_{\mathcal {A}}^{MTP}(\kappa) = \vert Pr[b^{\prime } = b] - \frac {1}{2} \vert \).
Game 3: Designated testability.
\(\mathcal {A}\) is an external adversary who can obtain keyword ciphertexts and trapdoors by monitoring the public channel. However, \(\mathcal {A}\) cannot get the secret key of the server. Designated testability ensures that only the designated server, who owns the private key, can search a keyword over ciphertexts.
Setup: \(\mathcal {C}\) runs KeyGenS, KeyGenO and KeyGenR algorithms with pp to generate the public and private key pairs (PkS,SkS), (PkO,SkO) and (PkR,SkR). It then sends the tuple (pp,PkS) to \(\mathcal {A}\).
Phase 1: There are two oracles as follows, which allow \(\mathcal {A}\) to query in polynomial time.
Ciphertext Oracle \(\mathcal {O}_{C}\): With (PkO,PkR,PkS,w), \(\mathcal {C}\) computes and returns the ciphertext Cw.
Trapdoor Oracle \(\mathcal {O}_{T}\): Input a tuple (PkO,PkR,w), \(\mathcal {C}\) computes and outputs trapdoor Tw.
Challenge: \(\mathcal {A}\) sends two challenge keywords w0, w1 to \(\mathcal {C}\), then \(\mathcal {C}\) calculates and outputs Cb of a random bit b∈{0,1}.
Phase 2: As in Phase 1, \(\mathcal {A}\) can carry on querying for any keyword wi except wi∈(w0,w1).
Guess: The adversary \(\mathcal {A}\) sends its guess bit \(\hat {b^{\prime }}\) to \(\mathcal {C}\). Therefore, \(\mathcal {A}\) wins the game if b′=b. The advantage for all PPT attackers who win the game is defined as \(Adv_{\mathcal {A}}^{DT}(\kappa) = \vert Pr[b^{\prime } = b] - \frac {1}{2} \vert \).
Proposed scheme
In this section, we propose a concrete construction of our scheme that provides multi-ciphertext indistinguishability, multi-trapdoor privacy and security against keyword guessing attacks. The details of the proposed scheme are described as follows.
Setup(κ): From κ, it chooses a bilinear pairing \(\hat {e}:{\mathbb {G}_{1} \times \mathbb {G}_{1} \rightarrow \mathbb {G}_{2}}\), where \(\mathbb {G}_{1},\mathbb {G}_{2}\) are cyclic groups of prime order q, and selects two random generators \(g,h \in \mathbb {G}_{1}\) and two cryptographic hash functions H1: \(\mathbb {G}_{1} \rightarrow \mathbb {Z}_{q}^{*}\), H2: \(\{0,1\}^{*} \rightarrow \mathbb {Z}_{q}^{*}\). It returns the public parameter \(pp = \left (\mathbb {G}_{1},\mathbb {G}_{2},q,g,h, \hat {e},H_{1},H_{2}\right)\).
KeyGenO(pp): It takes the global public parameter pp as input, selects x←Zq randomly and defines PkO=gx and SkO=x. It outputs the data owner's public/secret key pair (PkO,SkO).
KeyGenR(pp): From pp, it chooses a random y←Zq, sets PkR=gy and SkR=y, and then returns the receiver's public/secret key pair (PkR,SkR).
KeyGenS(pp): Given the global public parameter pp, it randomly selects z←Zq and defines PkS=hz and SkS=z. Finally, it returns the server's public/secret key pair (PkS,SkS).
PEKS(pp,PkS,PkR,SkO,w): Given the public parameter pp, PkS, PkR, SkO and a keyword w, a data owner performs the following steps:
Select a number \(r \in _{R} \mathbb {Z}_{q}^{*}\).
Calculate C1=hr, \(C_{2}=\hat {e}\left (Pk_{R},Pk_{S}\right)^{rSk_{O}H_{2}(w)}\) and C3=grk, where \(\phantom {\dot {i}\!}k = H_{1}\left (Pk_{R}^{Sk_{O}}\right)\).
Output the ciphertext Cw=(C1,C2,C3) of w.
Trapdoor(pp,PkO,SkR,w′): From pp, PkO of data owner, SkR of receiver and a keyword w′, a receiver executes the following steps:
Choose a number \(s \in _{R} \mathbb {Z}_{q}^{*}\).
Compute T1=PkSs and \(\phantom {\dot {i}\!}T_{2} = {{Pk_{O}}^{Sk_{R}H_{2}(w^{\prime })}} \cdot g^{sk}\), where \(\phantom {\dot {i}\!}k = H_{1}\left (Pk_{O}^{Sk_{R}}\right)\).
Return the trapdoor \(T_{w^{\prime }} = (T_{1},T_{2})\)
Test(pp,SkS,Cw,Tw): After receiving \(T_{w^{\prime }}\), the server searches over the keyword ciphertexts {Cw} by testing \(\phantom {\dot {i}\!}\hat {e}(T_{2},C_{1}^{Sk_{S}}) = \hat {e}(T_{1},C_{3}) \cdot C_{2}\) using its private key SkS. If the equation holds, it outputs 1; otherwise, it outputs 0.
Correctness: Let (PkO,SkO), (PkR,SkR) and (PkS,SkS) be the public/secret key pairs of the data owner, the receiver and the server, respectively. Cw=(C1,C2,C3) is the ciphertext of a keyword w generated by the owner, and \(T_{w^{\prime }} = (T_{1},T_{2})\) is a trapdoor of a keyword w′ generated by the receiver. It follows that:
$$\begin{array}{*{20}l} \hat{e}\left(T_{2},C_{1}^{Sk_{S}}\right) &= \hat{e}\left({Pk_{O}}^{Sk_{R}H_{2}(w^{\prime})} \cdot g^{sk}, h^{{Sk_{S}}r} \right) \\ &= \hat{e}(g,h)^{rxyzH_{2}(w^{\prime})} \cdot \hat{e}(g,h)^{rszk}.\\ \hat{e}(T_{1},C_{3}) \cdot C_{2} &= \hat{e}\left(h^{zs},g^{rk}\right) \cdot \hat{e}\left(g^{y},h^{z}\right)^{rxH_{2}(w)}\\ &= \hat{e}(g,h)^{rxyzH_{2}(w)} \cdot \hat{e}(g,h)^{rszk}. \end{array} $$
Thus, if w=w′, then \(\phantom {\dot {i}\!}\hat {e}\left (T_{2},C_{1}^{Sk_{S}}\right) = \hat {e}\left (T_{1},C_{3}\right) \cdot C_{2}\) holds with probability 1; otherwise, it fails with overwhelming probability by the collision resistance of the hash function H2.
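To illustrate the algebra behind the Test equation, the following minimal Python sketch represents group elements by their discrete logarithms modulo a toy prime, so that the symmetric pairing becomes multiplication of exponents and multiplication in \(\mathbb {G}_{2}\) becomes addition of exponents. It is only a numerical check of the correctness argument above, not a secure or faithful implementation; a real instantiation would use a pairing-friendly elliptic curve library, and the toy modulus and helper names below are assumptions.

import hashlib
import secrets

q = (1 << 61) - 1  # toy prime standing in for the group order (assumption)

def h2(w):
    # keyword hash into Z_q^* (toy stand-in for H2)
    return int.from_bytes(hashlib.sha256(w.encode()).digest(), "big") % (q - 1) + 1

def h1(elem):
    # hash of a (toy) group element, given as its discrete log, into Z_q^* (toy H1)
    return int.from_bytes(hashlib.sha256(str(elem).encode()).digest(), "big") % (q - 1) + 1

def rand_exp():
    return secrets.randbelow(q - 1) + 1

alpha = rand_exp()                              # h = g^alpha
x, y, z = rand_exp(), rand_exp(), rand_exp()    # SkO, SkR, SkS
pk_o, pk_r, pk_s = x, y, (alpha * z) % q        # public keys as discrete logs of g

def peks(w):
    r = rand_exp()
    k = h1((y * x) % q)                         # k = H1(PkR^SkO)
    c1 = (alpha * r) % q                        # h^r
    c2 = (y * alpha * z * r * x * h2(w)) % q    # e(PkR, PkS)^(r*SkO*H2(w)) as a GT exponent
    c3 = (r * k) % q                            # g^(rk)
    return c1, c2, c3

def trapdoor(w):
    s = rand_exp()
    k = h1((x * y) % q)                         # k = H1(PkO^SkR)
    t1 = (alpha * z * s) % q                    # PkS^s
    t2 = (x * y * h2(w) + s * k) % q            # PkO^(SkR*H2(w)) * g^(sk)
    return t1, t2

def test(c, t):
    c1, c2, c3 = c
    t1, t2 = t
    lhs = (t2 * ((c1 * z) % q)) % q             # e(T2, C1^SkS)
    rhs = ((t1 * c3) % q + c2) % q              # e(T1, C3) * C2
    return lhs == rhs

cw = peks("invoice")
print(test(cw, trapdoor("invoice")))   # True
print(test(cw, trapdoor("salary")))    # False, except with negligible probability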
Security proof
In this subsection, we prove that our scheme achieves the security of MCI, MTP and designated testability. Formally, we have the following theorems.
Theorem 1. Under the DBDH assumption, our scheme satisfies multi-ciphertext indistinguishability.
Proof: Assume that \(\mathcal {A}\) is an adversary who tries to break MCI security, and let \(\mathcal {C}\) be an algorithm for solving the DBDH problem. Given an instance of this problem \(Y = \left (\mathbb {G}_{1},\mathbb {G}_{2},\hat {e},q,g,g^{x},g^{y},g^{z},Z_{1}\right)\), the algorithm \(\mathcal {C}\) works as follows.
Setup: \(\mathcal {C}\) randomly selects two hash functions \(H_{1}:\mathbb {G}_{1} \rightarrow \mathbb {Z}_{q}^{*}\), \(H_{2}:\{0,1\}^{*} \rightarrow \mathbb {Z}_{q}^{*}\) and sets \(pp = \left (\mathbb {G}_{1}, \mathbb {G}_{2}, q, g, \hat {e}, h =g^{\alpha }, H_{1}, H_{2}\right)\), PkO=gx, PkR=gy and (PkS,SkS)=(ht,t). It then sends pp and (PkS,SkS) to \(\mathcal {A}\).
Phase 1: We define several oracles as follows, which \(\mathcal {A}\) may query polynomially many times. We assume that \(\mathcal {A}\) does not submit the same query to an oracle more than once.
Hash Oracle \(\phantom {\dot {i}\!}\mathcal {O}_{H_{1}}\): In response to the H1 query, the oracle maintains a tuple list \(\phantom {\dot {i}\!}L_{H_{1}}= \left \{< m_{i},a_{i}>\right \}\). We assume that \(\phantom {\dot {i}\!}\mathcal {O}_{H_{1}}\) can be asked by attackers for \(\phantom {\dot {i}\!}q_{H_{1}}\) times at most. For querying mi to the oracle, it will perform the following operations: At first, if \(\hat {e}(g,m_{i}) = \hat {e}\left (g^{x},g^{y}\right)\), \(\mathcal {C}\) randomly returns a bit b′ and halts. Otherwise \(\mathcal {C}\) checks whether mi exists in the tuple list. If so, \(\mathcal {C}\) takes out the corresponding tuple and returns ai to \(\mathcal {A}\). Otherwise, it randomly chooses a new exponent ai∈{0,1}κ, stores <mi,ai> in \(\phantom {\dot {i}\!}L_{H_{1}}\) and returns ai to \(\mathcal {A}\).
Hash Oracle \(\phantom {\dot {i}\!}\mathcal {O}_{H_{2}}\): In response to the H2 query, the oracle maintains a tuple list \(\phantom {\dot {i}\!}L_{H_{2}}= \left \{< w_{i},b_{i}>\right \}\). We assume that \(\phantom {\dot {i}\!}\mathcal {O}_{H_{2}}\) can be asked by attackers for \(\phantom {\dot {i}\!}q_{H_{2}}\) times at most. When submitting the keywords wi to the Oracle for query, it will perform the following operations: At first, it checks whether wi exists in the tuple list. if it exists, \(\mathcal {C}\) will take out the corresponding tuple and return bi to \(\mathcal {A}\). Otherwise, it randomly selects a new exponent bi∈{0,1}κ, stores <wi,bi> in \(\phantom {\dot {i}\!}L_{H_{2}}\)and returns bi to \(\mathcal {A}\).
Oracle \(\mathcal {O}_{E}\): It takes a public key Pki as input. To respond to the queries, the oracle maintains a tuple list LE={<Pki,ci,Vi>}, and it is assumed that \(\mathcal {O}_{E}\) can be asked by attackers for qE times at most. When Pki is submitted to the oracle, it will perform the following operations: At first, if Pki=gx or Pki=gy, \(\mathcal {C}\) randomly returns a bit b′ and halts. Otherwise \(\mathcal {C}\) checks whether Pki exists in the tuple list. If so, \(\mathcal {C}\) takes out the corresponding tuple and returns ci to \(\mathcal {A}\). Otherwise, it randomly selects a new exponent ci∈{0,1}κ, and computes \(\phantom {\dot {i}\!}V_{i} = {Pk_{i}}^{c_{i}}\). Finally, it stores <Pki,ci,Vi> in LE and outputs ci.
Ciphertext Oracle \(\mathcal {O}_{Ciphertext}\): Given a tuple (PkO,PkR,wi), where wi∈{0,1}∗, it randomly chooses \(r_{i} \in \mathbb {Z}_{q}^{*}\) and computes \(C_{w_{i}} = \left (C_{1w_{i}},C_{2w_{i}},C_{3w_{i}}\right)\) as follows.
If (PkO,PkR)=(gx,gy) or (PkO,PkR)=(gy,gx), then it sets \(\phantom {\dot {i}\!}g^{z} = g^{\alpha r_{i}}\), and computes \(C_{1w_{i}} = g^{z}\), \(\phantom {\dot {i}\!}C_{2w_{i}} = Z_{1}^{tb_{i}}\), \(C_{3w_{i}} = g^{r_{i}a_{i}} \).
Otherwise, at least one Pki in (PkO,PkR) is not equal to gx or gy. It computes H2(wi)=bi, k=ai, and returns to \(\mathcal {A}\) with \(C_{w_{i}} = \left (C_{1w_{i}},C_{2w_{i}},C_{3w_{i}}\right)\), where \(C_{1w_{i}} = h^{r_{i}}\), \(C_{2w_{i}}=\hat {e}(g^{y},h^{r_{i}})^{tc_{o}b_{i}}\) and \(C_{3w_{i}} = g^{r_{i}{a_{i}}}\).
Trapdoor Oracle \(\mathcal {O}_{Trapdoor}\): Input (PkO,PkR,wi), where wi∈{0,1}∗, it randomly chooses \(s_{i} \in \mathbb {Z}_{q}^{*}\), and computes \(T_{w_{i}} = (T_{1w_{i}},T_{2w_{i}})\) as follows.
If (PkO,PkR)=(gx,gy) or (PkO,PkR)=(gy,gx), then it computes \(T_{1w_{i}} = g^{ts_{i}}\) and \(T_{2w_{i}} = Z_{2}^{b_{i}}\cdot g^{{s_{i}a_{i}}}\).
Otherwise, at least one Pki in (PkO,PkR) is not equal to gx or gy. It calculates H2(wi)=bi, k=ai, and returns to \(\mathcal {A}\) with \(T_{w_{i}} = \left (T_{1w_{i}},T_{2w_{i}}\right)\), where \(T_{1w_{i}} = g^{ts_{i}} \), \(\phantom {\dot {i}\!}T_{2w_{i}}=(g^{x})^{c_{o}b_{i}}\cdot g^{{s_{i}a_{i}}}\).
Challenge: \(\mathcal {A}\) completes multiple queries on the above oracles. It selects two challenge keyword tuples \(\vec w_{0}^{*}\) and \(\vec w_{1}^{*}\) and sends them to \(\mathcal {C}\) together with \(Pk_{O}^{*}\) and \(Pk_{R}^{*}\). \(\mathcal {C}\) randomly selects a number ri and a bit \(\hat {b} \in \{0,1\}\). Then \(\mathcal {C}\) outputs a ciphertext tuple \(\vec C_{{{w_{\hat {b}}}}^{*}} = \left (C_{{w_{\hat {b},1}}^{*}},\dots,C_{{w_{\hat {b},n}}^{*}}\right)\) where \(C_{{1w_{\hat {b},i}}^{*}} = g^{z}\), \(\phantom {\dot {i}\!}C_{{2w_{\hat {b},i}}^{*}} = Z_{1}^{tb_{i}}\), \(\phantom {\dot {i}\!}C_{{3w_{\hat {b},i}}^{*}} = g^{za_{i}} \).
Phase 2: As in Phase 1, \(\mathcal {A}\) continues to query \(\mathcal {O}_{Ciphertext}\) and/or \(\mathcal {O}_{Trapdoor}\) for any keyword wi except \(w_{i} \in \vec w_{0} \cup \vec w_{1} \).
Guess: The adversary \(\mathcal {A}\) sends its guess bit \(\hat {b^{\prime }}\) to \(\mathcal {C}\). \(\mathcal {C}\) returns b′=0 if \(\hat {b^{\prime }} = \hat {b}\), and b′=1 otherwise.
If the guess of the challenge public keys is incorrect, \(\mathcal {C}\) aborts; we denote this event by E. If \(\mathcal {C}\) aborts, it outputs a random bit. The probability that \(\mathcal {C}\) does not abort is \(Pr[\overline {E}] = \frac {1}{q_{E}(q_{E} -1)}\).
Assume now that algorithm \(\mathcal {C}\) does not abort. If the simulation provided by \(\mathcal {C}\) is the same as the scenario of \(\mathcal {A}\) in a real attack and \(Z_{1} = \hat {e}(g,g)^{xyz}\), the adversary \(\mathcal {A}\) wins with probability \(Adv_{\mathcal {A}}^{MCI}(\kappa) + \frac {1}{2}\). If Z1 is randomly chosen from the group \(\mathbb {G}_{2}\), then \(\phantom {\dot {i}\!}C_{{2w_{\hat {b},i}}^{*}} = Z_{1}^{Sk_{S}H_{2}(w)}\) is a random element of \(\mathbb {G}_{2}\), so the challenge ciphertext tuple is independent of \(\hat {b}\) and \(\mathcal {A}\) wins Game 1 with probability 1/2. Thus, the advantage of \(\mathcal {C}\) in solving the DBDH problem is
$$\begin{array}{*{20}l} Adv_{\mathcal{C}}^{DBDH}(\kappa) &= \vert Pr[b^{\prime} = b \mid E] \cdot Pr[E] + Pr[b^{\prime} = b \mid \overline{E}] \cdot Pr[\overline{E}] - \frac{1}{2} \vert \\ &= \vert \frac{1}{2} \cdot \left(1-Pr[\overline{E}]\right) + \left(Pr[b^{\prime} = 0 \mid \overline{E}\cap b =0] \cdot Pr[b=0] + Pr[b^{\prime} = 1 \mid \overline{E} \cap b = 1] \cdot Pr[b=1]\right) \cdot Pr[\overline{E}] - \frac{1}{2} \vert\\ &\geq \vert \frac{1}{2} \cdot \left(1-Pr[\overline{E}]\right) + Pr[\overline{E}] \cdot \left(\left(Adv_{\mathcal{A}}^{MCI}(\kappa) + \frac{1}{2}\right)\cdot \frac{1}{2} + \frac{1}{2} \cdot \frac{1}{2}\right) - \frac{1}{2} \vert \\ &= \frac{1}{2}Pr[\overline{E}]\,Adv_{\mathcal{A}}^{MCI}(\kappa)\\ &= \frac{1}{2q_{E}(q_{E} - 1)} \cdot Adv_{\mathcal{A}}^{MCI}(\kappa). \end{array} $$
Theorem 2. Under the DDH assumption, our scheme satisfies multi-trapdoor privacy.
Proof: Assume that \(\mathcal {A}\) is an adversary who tries to break multi-trapdoor privacy, and let \(\mathcal {C}\) be an algorithm for solving the DDH problem. Given an instance of this problem \(Y = \left (\mathbb {G}_{1},q,g,g^{x},g^{y},Z_{2}\right)\), the algorithm \(\mathcal {C}\) works as follows.
Setup: \(\mathcal {C}\) randomly selects two hash functions \(H_{1}:\mathbb {G}_{1} \rightarrow \mathbb {Z}_{q}^{*}\), \(H_{2}:\{0,1\}^{*} \rightarrow \mathbb {Z}_{q}^{*}\) and sets \(pp = \left (\mathbb {G}_{1},\mathbb {G}_{2},q,g,h =g^{\alpha },H_{1},H_{2}\right)\), PkO=gx, PkR=gy and (PkS,SkS)=(ht,t). It then sends pp to \(\mathcal {A}\).
Phase 1: Same as in Theorem 1.
Challenge: \(\mathcal {A}\) completes multiple queries on the above oracles. It selects two challenge keyword tuples \(\vec w_{0}^{*}\) and \(\vec w_{1}^{*}\) and sends them to \(\mathcal {C}\) with \(Pk_{O}^{*}\) and \(Pk_{R}^{*}\). \(\mathcal {C}\) randomly selects a number si and a bit \(\hat {b} \in \{0,1\}\). Then, \(\mathcal {C}\) returns a trapdoor set \(\vec T_{{w_{\hat {b}}}^{*}}= \left (T_{{w_{\hat {b},1}}^{*}},\dots, T_{{w_{\hat {b},n}}^{*}}\right)\) where \(T_{{1w_{\hat {b},i}}^{*}} = h^{ts_{i}}\), \(T_{{2w_{\hat {b},i}}^{*}} = Z_{2}^{b_{i}} \cdot g^{s_{i}a_{i}}\).
Phase 2: \(\mathcal {A}\) continues to query \(\mathcal {O}_{C}\) and/or \(\mathcal {O}_{T}\) for any keyword wi except \(w_{i} \in \vec w_{0} \cup \vec w_{1} \).
Guess: The adversary \(\mathcal {A}\) sends its guess bit \(\hat {b^{\prime }}\) to \(\mathcal {C}\). \(\mathcal {C}\) returns b′=0 if \(\hat {b^{\prime }} = \hat {b}\), and b′=1 otherwise.
If the guess of the challenge public keys is incorrect, \(\mathcal {C}\) aborts; we denote this event by E. If \(\mathcal {C}\) aborts, it outputs a random bit, which equals b with probability 1/2. The probability that \(\mathcal {C}\) does not abort is \(Pr[\overline {E}] = \frac {1}{q_{E}\left (q_{E} -1 \right)}\).
Otherwise, \(\mathcal {C}\) does not abort. If the simulation provided by algorithm \(\mathcal {C}\) is the same as the scenario of \(\mathcal {A}\) in a real attack and Z2=gxy, the adversary \(\mathcal {A}\) wins the game with probability \(Adv_{\mathcal {A}}^{MTP}(\kappa) + \frac {1}{2}\). If Z2 is randomly chosen from the group \(\mathbb {G}_{1}\), then \(T_{{2w_{\hat {b},i}}^{*}} = Z_{2}^{b_{i}} \cdot g^{s_{i}a_{i}}\) is a random element of \(\mathbb {G}_{1}\), so the challenge trapdoor tuple hides \(\hat {b}\) completely and \(\mathcal {A}\) wins the game with probability 1/2. Thus, the advantage of \(\mathcal {C}\) in solving the DDH problem is the same as the advantage in Theorem 1.
Theorem 3. Under the DBDH assumption, our scheme satisfies designated testability.
Proof: Assume that \(\mathcal {A}\) is an attacker who tries to break designated testability and that the challenger \(\mathcal {C}\) wants to solve the DBDH problem. Given an instance of this problem \(Y = \left (\mathbb {G}_{1},\mathbb {G}_{2},\hat {e},q,h,h^{x},h^{y},h^{z},Z_{3}\right)\), the algorithm \(\mathcal {C}\) works as follows.
Setup: \(\mathcal {C}\) randomly selects two hash functions \(H_{1}:\mathbb {G}_{1} \rightarrow \mathbb {Z}_{q}^{*}\), \(H_{2}:\{0,1\}^{*} \rightarrow \mathbb {Z}_{q}^{*}\) and sets \(pp = (\mathbb {G}_{1}, \mathbb {G}_{2}, q\), \(\hat {e}\), h, H1, H2, g=hx), (PkO,SkO)=(gs,s), (PkR,SkR)=(gt,t) and PkS=hz. It then sends pp to \(\mathcal {A}\).
Phase 1: The Hash Oracle \(\phantom {\dot {i}\!}\mathcal {O}_{H_{1}}\) and Hash Oracle \(\phantom {\dot {i}\!}\mathcal {O}_{H_{2}}\) are the same as in Theorem 1. We only define the Exact Oracle \(\mathcal {O}_{E}\) as follows.
Exact Oracle \(\mathcal {O}_{E}\): Given any Pk except PkS, the algorithm \(\mathcal {C}\) returns the corresponding Sk.
Challenge: First, \(\mathcal {A}\) performs multiple queries on the above oracles. It selects two challenge keywords \( w_{0}^{*}\) and \(w_{1}^{*}\). Then \(\mathcal {A}\) returns \((w_{0}^{*}, w_{1}^{*})\) to \(\mathcal {C}\) together with \(Pk_{O}^{*}\) and \(Pk_{R}^{*}\). \(\mathcal {C}\) selects a number \(y \in _{R} \mathbb {Z}_{q}\) and a bit \(\hat {b} \in _{R} \{0,1\}\), and outputs \(\phantom {\dot {i}\!}C_{{w_{\hat {b}}}^{*} }= \left (C_{{1w_{\hat {b}}}^{*}}, C_{{2w_{\hat {b}}}^{*}}, C_{{3w_{\hat {b}}}^{*}}\right)\) where \(C_{{1w_{\hat {b}}}^{*}} = h^{y}\), \(\phantom {\dot {i}\!}C_{{2w_{\hat {b}}}^{*}} = Z_{3}^{stb_{i}}\), \(\phantom {\dot {i}\!}C_{{3w_{\hat {b}}}^{*}} = g^{yk}\).
Phase 2: \(\mathcal {A}\) continues to query \(\mathcal {O}_{C}\) and/or \(\mathcal {O}_{T}\) for any keyword wi except wi∈{w0,w1}.
Guess: The adversary \(\mathcal {A}\) transmits its guess bit \(\hat {b^{\prime }}\) to \(\mathcal {C}\). \(\mathcal {C}\) returns b′=0 if \(\hat {b^{\prime }} = \hat {b}\), and b′=1 otherwise. If the simulation provided by algorithm \(\mathcal {C}\) is the same as the scenario of \(\mathcal {A}\) in a real attack and \(Z_{3} = \hat {e}(h,h)^{xyz}\), \(\mathcal {A}\) wins the game with probability \(Adv_{\mathcal {A}}^{DT}(\kappa) + \frac {1}{2}\). If Z3 is randomly chosen from the group \(\mathbb {G}_{2}\), \(\phantom {\dot {i}\!}C_{{2w_{\hat {b},i}}^{*}} = Z_{3}^{Sk_{S}H_{2}(w)}\) is a well-distributed challenge ciphertext component and \(\mathcal {A}\) has a 1/2 probability of winning the game. Thus, \(\mathcal {C}\)'s advantage in solving the DBDH problem is
$$\begin{array}{*{20}l} Adv_{\mathcal{C}}^{DBDH}(\kappa) &= \vert Pr[b^{\prime} = 0 \mid b = 0] \cdot Pr[b= 0] + Pr[b^{\prime} = 1 \mid b = 1] \cdot Pr[b= 1] - \frac{1}{2} \vert \\ &= \vert \frac{1}{2} \cdot \frac{1}{2} + \frac{1}{2}\cdot\left(Adv_{\mathcal{A}}^{DT}(\kappa) + \frac{1}{2}\right) - \frac{1}{2} \vert \\ &= \frac{1}{2}Adv_{\mathcal{A}}^{DT}(\kappa). \end{array} $$
Performance analysis
In this section, we analyze the performance of our scheme and the existing schemes (the PAKES scheme [8], the dIBAEKS scheme [12], the SCF-PEKS scheme [3], the dPEKS scheme [4] and Pan and Li's scheme [10]) in terms of computational and communication overheads. Moreover, we compare the security of these PEKS schemes with respect to MCI, MTP and security against KGA.
To evaluate the efficiency, we implemented the operations in our scheme using the MIRACL library [37] on a personal notebook computer with an Intel Core i7-8750H 2.20 GHz processor, 16 GB of memory, and the Windows 10 operating system.
First, Table 2 gives the elapsed times of the main operations used in searchable encryption schemes: the pairing operation P, the hash-to-point operation Hp, modular exponentiation E and multiplication M in G1, where P≈Hp>M>E≫H. The general hash operation H takes far less time than the other operations in Table 2 and is therefore ignored in our computation analysis.
Table 2 Symbols and execution times (ms)
Table 3 gives a theoretical comparison of the computation time and communication complexity of the PEKS, Trapdoor and Test algorithms of our scheme and previous schemes [3, 4, 8, 10, 12]. In terms of computational efficiency, our PEKS and Trapdoor algorithms are more efficient than those of the other schemes because they avoid hash-to-point operations. Among the Test algorithms, ours is slightly slower than the others because it adds designated testability to ensure that only the specified server can perform search operations. In terms of communication overhead, our scheme is essentially on par with the other schemes. Figures 2, 3 and 4 show the practical performance of the PEKS, Trapdoor and Test algorithms, respectively.
Running Time of PEKS Algorithm
Running Time of Trapdoor Algorithm
Running Time of Test Algorithm
Table 3 Computation and communication efficiency comparison
As shown in Fig. 2, the computation cost to encrypt the keywords is lower than that of the three schemes [3, 4, 12] and is similar to that of Huang et al.'s scheme [8] and Pan and Li's scheme [10]. For the efficiency of the trapdoor algorithm, Fig. 3 illustrates that the trapdoor algorithm in our scheme runs much faster than that in all other schemes [3, 4, 8, 10, 12] because it performs no pairing or hash-to-point operations. In Fig. 4, the computation complexity of our Test algorithm is higher than that of Baek et al.'s scheme [3] and no worse than that of the other four schemes. To ensure that the user-side algorithms (the Trapdoor and PEKS algorithms) have higher security and efficiency, we add the server's private key to the Test algorithm for stronger security; thus, the efficiency of the server-side Test algorithm is somewhat compromised.
Moreover, Table 4 gives a security comparison between our scheme and the existing schemes, covering MCI security, MTP security, inside KGA, and the requirement for a secure channel. As shown in Table 4, Huang et al.'s scheme [8] can resist inside KGA, but it needs a secure channel and cannot provide MCI security. Li et al.'s scheme [12], Baek et al.'s scheme [3] and Rhee et al.'s scheme [4] provide MCI security but do not guarantee MTP or security against KGA. Pan and Li's scheme [10] is able to resist inside KGA and has MTP security but cannot satisfy MCI security. Our scheme satisfies MCI, MTP and security against inside KGA.
Table 4 Security comparison
In this paper, we first analyze the security of Li et al.'s scheme and propose a multi-trapdoor attack against it. Next, we construct a secure public-key searchable encryption scheme with a designated server based on the Diffie-Hellman problem. It is proved that our scheme provides multi-ciphertext indistinguishability, multi-trapdoor privacy and designated testability. We then compare our scheme with others in terms of communication cost and computational cost; the results show that our scheme is more efficient in the keyword ciphertext and trapdoor algorithms. However, our scheme cannot prevent the server itself from executing the multi-trapdoor attack, since the server can construct a certain equation with its private key to obtain the relationship between multiple trapdoors. As future work, we will explore achieving multi-trapdoor privacy of keywords against inside servers.
The datasets used during the current study are available from the corresponding author on reasonable request.
Boneh D, Di Crescenzo G, Ostrovsky R, Persiano G (2004) Public Key Encryption with Keyword Search. Springer, Heidelberg.
Byun JW, Rhee HS, Park H-A, Lee DH (2006) Off-line Keyword Guessing Attacks on Recent Keyword Search Schemes over Encrypted Data. Springer, Heidelberg.
Baek J, Safavi-Naini R, Susilo W (2008) Public Key Encryption with Keyword Search Revisited. Springer, Heidelberg.
Rhee HS, Park JH, Susilo W, Lee DH (2010) Trapdoor security in a searchable public-key encryption scheme with a designated tester. J Syst Softw 83(5):763–771. https://doi.org/10.1016/j.jss.2009.11.726.
BingJian W, TzungHer C, FuhGwo J (2011) Security improvement against malicious server's attack for a dpeks scheme. Int J Inf Educ Technol 1(4):350–353.
Tang Q, Chen L (2009) Public-key Encryption with Registered Keyword Search. Springer, Heidelberg.
Chen R, Mu Y, Yang G, Guo F, Wang X (2015) Dual-server public-key encryption with keyword search for secure cloud storage. IEEE Trans Inf Forensic Secur 11(4):789–798. https://doi.org/10.1109/TIFS.2015.2510822.
Huang Q, Li H (2017) An efficient public-key searchable encryption scheme secure against inside keyword guessing attacks. Inf Sci 403:1–14. https://doi.org/10.1016/j.ins.2017.03.038.
Qin B, Chen Y, Huang Q, Liu X, Zheng D (2020) Public-key authenticated encryption with keyword search revisited: Security model and constructions. Inf Sci 516:515–528. https://doi.org/10.1016/j.ins.2019.12.063.
Pan X, Li F (2021) Public-key authenticated encryption with keyword search achieving both multi-ciphertext and multi-trapdoor indistinguishability. J Syst Archit 115:102075. https://doi.org/10.1016/j.sysarc.2021.102075.
Cheng L, Meng F (2021) Security analysis of pan et al.'s "public-key authenticated encryption with keyword search achieving both multi-ciphertext and multi-trapdoor indistinguishability". J Syst Archit 119:102248. https://doi.org/10.1016/j.sysarc.2021.102248.
Li H, Huang Q, Shen J, Yang G, Susilo W (2019) Designated-server identity-based authenticated encryption with keyword search for encrypted emails. Inf Sci 481:330–343. https://doi.org/10.1016/j.ins.2019.01.004.
Abdalla M, Bellare M, Catalano D, Kiltz E, Kohno T, Lange T, Malone-Lee J, Neven G, Paillier P, Shi H (2005) Searchable encryption revisited: Consistency properties, relation to anonymous ibe, and extensions In: Annual International Cryptology Conference, 205–222.. Springer, Berlin, Heidelberg.
Rhee HS, Susilo W, Kim H-J (2009) Secure searchable public key encryption scheme against keyword guessing attacks. IEICE Electron Express 6(5):237–243. https://doi.org/10.1587/elex.6.237.
Fang L, Susilo W, Ge C, Wang J (2013) Public key encryption with keyword search secure against keyword guessing attacks without random oracle. Inf Sci 238:221–241. https://doi.org/10.1016/j.ins.2013.03.008.
Rhee HS, Park JH, Lee DH (2012) Generic construction of designated tester public-key encryption with keyword search. Inf Sci 205:93–109. https://doi.org/10.1016/j.ins.2012.03.020.
Emura K, Miyaji A, Rahman MS, Omote K (2015) Generic constructions of secure-channel free searchable encryption with adaptive security. Secur Commun Netw 8(8):1547–1560. https://doi.org/10.1002/sec.1103.
Chen R, Mu Y, Yang G, Guo F, Huang X, Wang X, Wang Y (2016) Server-aided public key encryption with keyword search. IEEE Trans Inf Forensic Secur 11(12):2833–2842. https://doi.org/10.1109/TIFS.2016.2599293.
Zhou Y, Xu G, Wang Y, Wang X (2016) Chaotic map-based time-aware multi-keyword search scheme with designated server. Wirel Commun Mob Comput 16(13):1851–1858. https://doi.org/10.1002/wcm.2656.
Chen Y-C (2015) Speks: Secure server-designation public key encryption with keyword search against keyword guessing attacks. Comput J 58(4):922–933.
Xu P, Jin H, Wu Q, Wang W (2012) Public-key encryption with fuzzy keyword search: A provably secure scheme under keyword guessing attack. IEEE Trans Comput 62(11):2266–2277. https://doi.org/10.1109/TC.2012.215.
Wang C-h, Tu T-y (2014) Keyword search encryption scheme resistant against keyword-guessing attack by the untrusted server. J Shanghai Jiaotong Univ (Sci) 19(4):440–442. https://doi.org/10.1007/s12204-014-1522-6.
He D, Ma M, Zeadally S, Kumar N, Liang K (2017) Certificateless public key authenticated encryption with keyword search for industrial internet of things. IEEE Trans Ind Inform 14(8):3618–3627. https://doi.org/10.1109/TII.2017.2771382.
Wu L, Zhang Y, Ma M, Kumar N, He D (2019) Certificateless searchable public key authenticated encryption with designated tester for cloud-assisted medical internet of things. Ann Telecommun 74(7):423–434. https://doi.org/10.1007/s12243-018-00701-7.
Chen X (2020) Public-key authenticate encryption with keyword search revised: probabilistic trapgen algorithm. IACR Cryptol ePrint Arch 2020:1211.
Pakniat N, Shiraly D, Eslami Z (2020) Certificateless authenticated encryption with keyword search: Enhanced security model and a concrete construction for industrial iot. J Inf Secur Appl 53:102525. https://doi.org/10.1016/j.jisa.2020.102525.
Lu Y, Wang G, Li J (2019) Keyword guessing attacks on a public key encryption with keyword search scheme without random oracle and its improvement. Inf Sci 479:270–276. https://doi.org/10.1016/j.ins.2018.12.004.
Noroozi M, Eslami Z (2019) Public key authenticated encryption with keyword search: revisited. IET Inf Secur 13(4):336–342. https://doi.org/10.1049/iet-ifs.2018.5315.
Wu L, Chen B, Zeadally S, He D (2018) An efficient and secure searchable public key encryption scheme with privacy protection for cloud storage. Soft Comput 22(23):7685–7696. https://doi.org/10.1007/s00500-018-3224-8.
Chen X (2020) Certificateless public-key authenticate encryption with keyword search revised: Mci and mtp. IACR Cryptol ePrint Arch 2020:1230.
Qin B, Cui H, Zheng X, Zheng D (2021) Improved security model for public-key authenticated encryption with keyword search In: International Conference on Provable Security, 19–38.. Springer, Cham.
Wang P, Xiang T, Li X, Xiang H (2020) Public key encryption with conjunctive keyword search on lattice. J Inf Secur Appl 51:102433. https://doi.org/10.1016/j.jisa.2019.102433.
Zhang X, Tang Y, Wang H, Xu C, Miao Y, Cheng H (2019) Lattice-based proxy-oriented identity-based encryption with keyword search for cloud storage. Inf Sci 494:193–207. https://doi.org/10.1016/j.ins.2019.04.051.
Blake IF, Seroussi G, Smart NP (2005) Advances in Elliptic Curve Cryptography vol. 317. Cambridge University Press, Cambridge.
MIRACL (2021) The MIRACL Core Cryptographic Library. https://github.com/miracl/core/. Accessed 28 Nov 2021.
The authors would like to thank the anonymous reviewers for their insightful comments and suggestions on improving this paper.
This work is supported by the National Natural Science Foundation of China (No. 61702153, No. 61972124) and the Natural Science Foundation of Zhejiang Province (No. LY19F020021).
School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
Junling Guo, Lidong Han, Guang Yang & Xuejiao Liu
Key Laboratory of Cryptography Technology of Zhejiang Province, Hangzhou Normal University, Hangzhou, China
Lidong Han & Xuejiao Liu
School of Computer Science and Technology, Qingdao University, Qingdao, China
Chengliang Tian
Junling Guo
Lidong Han
Guang Yang
Xuejiao Liu
This research paper was completed through the joint efforts of five authors, and every author participated in every part of the paper. The basic roles of each author are summarized as follows: J.G. is the designer of the proposed model and method. L.H. is the corresponding author and the coordinator of the group, assisting J.G. in model design. G.Y. is the implementer and tester of the algorithm. X.L. is the main reviewer of this paper. C.T. is responsible for the experiments on the proposed method. All authors have read and agreed to the published version of the manuscript.
Junling Guo is a postgraduate in Cyberspace Security at School of Information Science and Technology, Hangzhou Normal University, China. His research interests include searchable encryption and public key cryptography.
Lidong Han received his Ph.D. degree from school of mathematics in Shandong university in 2010. Currently, he is working at Key Laboratory of Cryptography Technology of Zhejiang Province, and School of Information Science and Technology, Hangzhou Normal University. His research interests include cryptography, cloud computing, and remote user authentication.
Guang Yang is a postgraduate in Cyberspace Security at School of Information Science and Technology, Hangzhou Normal University, China. His research interests include data integrity, searchable encryption and public-key cryptography.
Xuejiao Liu received the BS, MS and PhD degrees in computer science from Huazhong Normal University, Wuhan, China. She is now an associate professor at Hangzhou Normal University, Hangzhou, China. Her research interests cover network security, cloud security, security of the Internet of Vehicles, etc.
Chengliang Tian received the B.S. and M.S. degrees in mathematics from Northwest University, Xi'an, China, in 2006 and 2009, respectively, and the Ph.D. degree in information security from Shandong University, Jinan, China, in 2013. He held a postdoctoral position with the State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing. He is currently with the College of Computer Science and Technology, Qingdao University, as an Associate Professor. His research interests include lattice-based cryptography, cloud computing security and privacy-preserving technology.
Correspondence to Lidong Han.
Guo, J., Han, L., Yang, G. et al. An improved secure designated server public key searchable encryption scheme with multi-ciphertext indistinguishability. J Cloud Comp 11, 14 (2022). https://doi.org/10.1186/s13677-022-00287-5
Searchable encryption
Keyword guessing attack
Multi-ciphertext indistinguishability
Diffie-Hellman problem
Multi-trapdoor privacy | CommonCrawl |
netnate
Check out my GitHub or subscribe to the RSS feed below.
Laplace Table
Most Laplace tables I have found online have either too little information or do not meet my standards for typesetting, so I made my own. Document Source Code \documentclass[12pt]{article} \usepackage[margin=0....
January 28, 2023 · 2 min
N-Body Simulation
Theory Step 1: Particle Characteristics Each particle is treated as a point mass, and each point mass has three intrinsic characteristics: a mass \(m_i\), a position vector \(\vec{r}_i=\left[\begin{array}{ccc}x_i & y_i & z_i\end{array}\right]\), and a velocity vector \(\vec{v}_i=\left[\begin{array}{ccc}u_i & v_i & w_i\end{array}\right]\). Step 2: Force Calculation According to Newton's law of universal gravitation, given two distinct point masses in space, the gravitational force acting on the objects takes the form: $$F=G\frac{m_1m_2}{r^2}$$...
December 21, 2022 · 2 min
Method of Characteristics
Theory Inline code \(y_t\). Numerical Implementation Normal: $$y_t = \beta_0 + \beta_1 x_t + \epsilon_t$$ see above.
If the work has LOC data in its front matter or is listed in the official Library of Congress catalog, that classification is used here. Otherwise, I have used my best judgement to sort some of the more obscure entries. B: Philosophy, Psychology, and Religion B: General Philosophy Aurelius, Marcus (tr. Hammond, Martin) - Meditations Nietzsche, Friedrich (tr. Hollingdale, R. J.) - Thus Spoke Zarathustra: A Book for Everyone and No One Plato (tr....
November 22, 2022 · 4 min
Manual Install Create an EFI, swap, and root partition. Mount the EFI and root partitions and enable the swap drive with swapon. Settings for archinstall Archinstall language: English Keyboard layout: us Mirror region: United States Locale language: en_US Locale encoding: UTF-8 Drives: 8.0 GB Disk layout: Wipe all drives, ext4 Encryption password: None Bootloader: grub-install Swap: True Hostname: linux-arch-i3wm Root password: arch User account: username:nathan | password:arch | sudo:true Profile: desktop, i3-gaps, VMware/VirtualBox Audio: pipewire Kernels: linux Additional packages: firefox, alacritty Network configuration: Copy ISO configuration Timezone: US/Central NTP: True Optional repositories: None Settings for i3-gaps Creating xinitrc config file:...
© 2023 netnate Powered by Hugo & PaperMod | CommonCrawl |
The structure and thermochemistry of K2CO3–MgCO3 glass
Journal of Materials Research, Volume 34, Issue 19 (Focus Issue: Thermodynamics of Complex Solids)
14 October 2019, pp. 3377–3388
Martin C. Wilding (a1), Brian L. Phillips (a2), Mark Wilson (a3), Geetu Sharma (a4), Alexandra Navrotsky (a4), Paul A. Bingham (a5), Richard Brooker (a6) and John B. Parise (a2)
1Materials and Engineering Research Institute, Sheffield Hallam University, Sheffield S1 1WB, U.K.; and Department of Geosciences, Stony Brook University, Stony Brook, New York 11794, USA
2Department of Geosciences, Stony Brook University, Stony Brook, New York 11794, USA
3Department of Chemistry, Physical and Theoretical Chemistry Laboratory, University of Oxford, Oxford OX1 3QZ, U.K.
4Peter A. Rock Thermochemistry Laboratory, University of California, Davis, California 95616, USA
5Materials and Engineering Research Institute, Sheffield Hallam University, Sheffield S1 1WB, U.K.
6School of Earth Sciences, University of Bristol, Bristol BS8 1RJ, U.K.
DOI: https://doi.org/10.1557/jmr.2019.250
Published online by Cambridge University Press: 16 September 2019
Figure 1: (a) FTIR absorbance spectra taken using micro-ATR. The 12C sample has two peaks assigned to ν1 symmetric stretching at 1049 and 1088 cm−1 and two peaks assigned to ν2 out-of-plane bending at 806 and 870 cm−1. Only one ν4 in-plane bend peak is seen at 691 cm−1, but the detector response may have cut off a second peak at a lower wavenumber. The ν3 asymmetric stretch is more complex, but can be fitted with 4 peaks representing two doublets. ν3 are approximately at 1340, 1377, 1395, and 1431 cm−1. In the upper spectra, the approximate magnitude of the isotopic shift related to 13C is indicated by vertical ticks. The 13C peaks are marked with an asterisk. (b) The glasses do not change their water content in the first 24 h. However, it should be noted that they eventually turn to a white powder within 3–4 weeks. The peak at 3200 cm−1 is assumed to represent O–H stretching, but has a lower frequency than observed in silicate glasses.
TABLE I: Enthalpy of drop solution of 0.55K2CO3–0.45MgCO3 glass at 975 K.
Figure 2: 13C MAS-NMR spectra of 13C-labeled 0.55K2CO3·0.45MgCO3 glass acquired at 100.56 MHz (9.4 T) and spinning rates of (a) 10.3 kHz and (b and c) 1.5 kHz. Spectra in (a and b) were taken by direct 13C excitation (DE) with 4.5 μs pulses separated by a 30-s relaxation delay for (a) 800 and (b) 280 acquisitions, and in (c) by standard 1H → 13C cross-polarization (CP) with a 2-ms contact time and 2-s relaxation delay, for 800 scans. Inset shows center bands at an expanded scale.
Figure 3: Energy dispersive (collected at sector 16 APS) for 55% K2CO3–45% MgCO3 glass compared with the summed partial contributions for the simulated liquids of the same composition (top); the lower panel shows the individual partial contributions with X-ray weighting (Ashcroft–Langreth) for K–K, K–O, and O–O which dominate the X-ray scattering pattern. There is a considerable mismatch between the experiment and simulation. This could result from the difference between the fully relaxed liquid and the vitreous form of this carbonate.
Figure 4: Molecular dynamics snapshots of the 55% K2CO3–45% MgCO3 liquid at 1800 K showing the distribution of the carbonate anions. C–O bonds are shown in black (a) with some longer C–O bond lengths (shown in gray) and formation of the CO3+1 configuration apparent even at ambient pressure. This is compared with the rhombohedral crystal structure of K2Mg(CO3)2 [41] (b) with the unit cell illustrated; the rigid, triangular carbonate anions form planes perpendicular to the c-axis. In the lower figure, potassium and magnesium atoms are also shown (c); the structure is dominated by the strong interaction between the large, blue potassium cations and the distorted carbonate. The coordinate polyhedra of potassium and magnesium are shown for the ambient pressure crystal structure.
Figure 5: Six of the partial radial distribution functions to illustrate the response of the K2CO3–MgCO3 liquids to pressure. The curves are generated by the simulations at six different densities (pressures) with the lowest densities the topmost curves. The emergence of the second C–O length scale seen at high pressure in the C–O (a) and C–C (b) partial contributions. The changes in K–K (c), K–O (d) and O–O (e) partial contribution represent the changes in the potassium sub-density and strong interaction between the oxygen atoms in the carbonate anion and the potassium cations. The Mg–O partial contribution (f) changes little with pressure.
Carbonate glasses can be formed routinely in the system K2CO3–MgCO3. The enthalpy of formation for one such 0.55K2CO3–0.45MgCO3 glass was determined at 298 K to be 115.00 ± 1.21 kJ/mol by drop solution calorimetry in molten sodium molybdate (3Na2O·MoO3) at 975 K. The corresponding heat of formation from oxides at 298 K was −261.12 ± 3.02 kJ/mol. This ternary glass is shown to be slightly metastable with respect to binary crystalline components (K2CO3 and MgCO3) and may be further stabilized by entropy terms arising from cation disorder and carbonate group distortions. This high degree of disorder is confirmed by 13C MAS NMR measurement of the average chemical shift tensor values, which show asymmetry of the carbonate anion to be significantly larger than previously reported values. Molecular dynamics simulations show that the structure of this carbonate glass reflects the strong interaction between the oxygen atoms in distorted carbonate anions and potassium cations.
Present Address: University of Manchester at Harwell, Diamond Light Source, Harwell Campus, Didcot, Oxfordshire, OX11 0DE, U.K.
This author was an editor of this journal during the review and decision stage. For the JMR policy on review and publication of manuscripts authored by editors, please refer to http://www.mrs.org/editor-manuscripts/.
A variety of experimental and simulation techniques have been used to investigate the structure of glasses and glass-forming liquids. Such studies have aimed to make a connection between the atomic-scale glass or liquid structure and macroscopic properties such as viscosity, fragility, and optical characteristics. Much of this research has concentrated on commercially important systems, such as silicates and borosilicates, while less conventional glass systems have received considerably less attention. One such class of glasses form over narrow compositional ranges in nitrate, sulfate, and carbonate systems, where the process of vitrification itself remains poorly understood. Certain compositions in these dominantly ionic systems can form so-called fragile liquids even though the glass-forming ability is generally assumed to be poor because of a supercooled liquid structure that is temperature-dependent and is expected to change during the process of glass formation. The absence of covalent network structures may also play a role [1].
In a series of related studies on the structure of molten salts, including nitrates, sulfates, and carbonates, combined high-energy X-ray diffraction experiments and molecular dynamics simulations indicate that the structure and dynamics of molten salt systems are influenced by the emergence of secondary length scales, implying some linkage between the anionic groups [2, 3, 4]. In the case of carbonates, networks and other complex structures are formed from the molecular anion, the extent of which depends on temperature [2, 4]. Properties such as diffusion and, by extension, viscosity are dependent on the length of these carbonate network chains. These studies have shown how the structures within carbonates and other molten salt liquids differ from those assumed from ionic liquid behavior. Significantly, it has been shown that the experimentally observed configurations for carbonates are best reproduced by simulations that allow molecular anions to be treated as flexible rather than rigid structures with the carbon atom moving out of the planar trigonal geometry, allowing a second C–O bond length to emerge as the carbon atom forms a weak linkage to oxygen atoms of other carbonate groups, i.e., a network. Unfortunately, the ionic melt systems studied so far are not glass-forming under practically accessible cooling rates, so there is currently no direct link between the emergence of the secondary length scale and the formation of a glass-forming mechanism. While the rapid crystallization kinetics in most carbonate liquids limit the degree of supercooling of carbonate melts and largely prevent vitrification, there are a few carbonate compositions that are known to form glass [5] which can be used to evaluate the development and extent of the proposed low-dimensional structures. In addition, a biogenic form of amorphous calcium carbonate (ACC) also provides a useful structural comparison [6]. In this contribution, we present combined thermochemical and NMR spectroscopic studies of one such carbonate glass composition in the K2CO3–MgCO3 system to determine which mechanism might be responsible for its glass-forming ability.
The formation of glass in the system K2CO3–MgCO3 was reported by Eitel and Skaliks in 1929 [7]; although this was a passing observation and has, with a few exceptions [8, 9, 10], received little attention since. Glass in this system can be quenched successfully above a deep eutectic region at a molar K2CO3/MgCO3 ratio of ∼55:45 and at a pressure of ∼50 MPa [11, 12]. The elevated pressure is believed to prevent the carbonate from decomposing. Carbonates, along with other ionic glass formers, such as sulfates [13, 14] and nitrates [15], lack conventional network-formers such as silicate tetrahedra, and there is considerable speculation about how these exotic glasses form. In theory, the ionic nature of the carbonate anion should be dictated by the electronic structure in which all the "bonding" oxygen orbitals are incorporated into C–O pπ and sσ bonds, leaving none for covalent interactions. As such, they should not form the covalently bonded polymerized network usually associated with glass formation [16]. Spectroscopic studies of the K2CO3–MgCO3 glass [8, 9, 17] indicate the presence of two structurally distinct populations of carbonate anions. Genge et al. [8] suggest that the more symmetrical units form a flexible network that comprises carbonate anions with bridging, strongly interacting metal cations (here Mg2+), while nonbridging species (here K+) modify the network and are associated with distorted carbonate groups. It has also been suggested that glass formation in sulfate and nitrate systems requires the presence of two different cations with different field strengths and different degrees of polarization [13, 14, 15]. Our proposed structure of K2CO3–MgCO3 glass is significantly more complicated than simple ionic molten salt models would predict and is also defined by the flexibility of the molecular anion.
As has been demonstrated for alkali carbonate and nitrate liquids, molecular dynamics simulations using the flexible anion approach provide a means for exploring parameter space; for example, in the case of Na2CO3, we are able to identify the development of complicated, low dimensional structures that have temperature dependence. Since carbonate liquids are important agents of geochemical transport and have a substantive role in deep Earth processes, pressure is an important state variable and in part motivates this study. High pressure and high temperature studies of carbonate and indeed other liquids are challenging, and it is common to use glasses to evaluate the influence of pressure. Carbonate glass is therefore suitable for such an ambient temperature, high pressure study, and the 55K2CO3–45MgCO3 glass is the focus of a related ultra-high–pressure study using energy-dispersive diffraction techniques and requires an understanding of the structure and structure-dependent properties of the glass at ambient pressure. There are significant changes within the glass structure with increasing pressure, and based on the success of the combined HEXRD and MD modeling of carbonate liquids and other molten salts using a flexible anion approach [2, 3, 4], we can identify the structural trends by simulating the liquid structures at different densities. The trigonal geometry of the carbonate anion is imposed using harmonic springs that act between O–O and C–O pairs. This flexibility is clearly important in determining the observed 'liquid' structure and reproduces the pressure-dependent changes; and although the changes in the diffraction pattern are dominated by K–K, K–O, and O–O atom pairs, these changes are themselves indicative of associated changes in the underlying structure of the carbonate anions. The relationship between the K+ cations and the carbonate anions is characterized by structures that result from the strong electrostatic interactions between the oxygen ions in the carbonates and K+ cations, which result in preferential formation of close CO3 pairs and the emergence of a second C–O length-scale at 2.4 Å with increasing pressure. This represents the development of the CO3+1 structure which is heading toward CO4, but the flexible carbon forms a weaker, longer bond with oxygen of another group. The pressure-induced linkage in 55K2CO3–45MgCO3 glass represents the development of a carbonate 'network' similar to that observed in Na2CO3 melts at ambient pressure and has profound implications for melt properties such as viscosity. The simulations suggest relatively little change in the partial contributions from the Mg2+ cations in either the reciprocal or real space: the Mg–O partial functions remain effectively independent of pressure. This is inconsistent with the model of Genge et al. (1995) [8] which has a modified polymerized network with Mg2+ acting as a "bridging species" and K+ ions acting as a "modifier species".
The formation of glass in this system is not well understood, and the underlying structural information and accompanying simulation indicate that the flexibility of the carbonate anion is a key in controlling the glass-forming ability. In this study, we examine further the energetic and structural basis for glass formation in this system by determining the heat of formation of the glass using solution calorimetry, accompanied by 13C NMR and infrared spectroscopy studies that reveal the local environment of carbon and the degree of distortion of the carbonate anion. These experimental data are combined with the molecular dynamics simulations.
Attenuated total reflectance (ATR) infrared spectroscopy data were collected on two glass samples synthesized using high pressure techniques. One sample was made using reagent grade K2CO3 and magnesite as starting materials, and the second sample was 13C enriched and made using the K213CO3 starting material (5 wt% of the sample is 13C). These spectra are shown in Fig. 1. The use of ATR allows better resolved and more complete absorbance spectra for the fundamental internal vibrational mode of the carbonate groups, compared with that of Genge et al. [5], allowing a better assignment of peaks. There are clearly two distinct ν1 symmetric stretching and two ν2 out-of-plane bending peaks. This strongly suggests that there are two distinct carbonate groups present. The degeneracy of the two ν3a and ν3b asymmetric stretch modes appears to be lifted to produce doublets, with the two carbonate types then producing 4 peaks.
The lifting of degeneracy implies some distortion with the 3 oxygen atoms of the carbonate groups experiencing different environments for both the carbonates. The ν4 in-plane bend region is cut off by the detector, but the Raman spectrum for this glass shows a complex peak envelope in the 680- to 720-cm−1 region.
Deterioration of glass due to the presence of water introduced during high pressure synthesis is a persistent problem; these glasses are known to be hygroscopic, and the presence of O–H at high frequencies has been reported by Genge et al. In Fig. 1(b), the progressive hydration of the glass is shown for both 12C- and 13C-enriched samples. Note that for the 13C sample (for NMR), there is more water present, possibly because the starting materials may not have been perfectly dried. While water is undoubtedly present, there is no evidence that the water content changes over the course of 24 h. Over the course of several weeks, the samples deteriorated, even when stored in a desiccator, and the appearance changed from that of the transparent pristine glass to an opaque white powder. However, we assume for the calorimetry, NMR, and diffraction experiments that although there is water introduced during the high pressure synthesis, the degree of hydration does not change over the course of measurement.
The calorimetric data are summarized in Table I. The average heat of solution for seven pellets of the 0.55K2CO3·0.45MgCO3 glass is 115.00 ± 1.21 kJ/mol. To calculate the enthalpy of formation of the glass, not only is the heat of drop solution for the glass samples required but also the heat of drop solution of the component oxides. These values are obtained from literature, and the following thermochemical cycle is used to calculate the enthalpy of formation of the 0.55K2CO3–0.45MgCO3 glass. The measured heat of drop solution (kJ/mol) obtained from the calorimetry experiment is associated with the following reaction:
$$\eqalign{0.55{{\rm{K}}_2}{\rm{C}}{{\rm{O}}_3} \cdot 0.45{\rm{MgC}}{{\rm{O}}_3}_{\left( {{\rm{s}},298\;{\rm{K}}} \right)} \to 0.55{{\rm{K}}_{\rm{2}}}{{\rm{O}}_{\left( {{\rm{sol}},975\;{\rm{K}}} \right)}} \cr+ 0.45{\rm{Mg}}{{\rm{O}}_{\left( {{\rm{sol}},975\;{\rm{K}}} \right)}} + {\rm{C}}{{\rm{O}}_2}_{\left( {{\rm{g}},975\;{\rm{K}}} \right)}\quad ,$$
$$\Delta {H_{{\rm{drop}}\;{\rm{solution}},975\;{\rm{K}}}} = \Delta {H_1} = 115.00\; \pm \;1.21\;\left( {{{{\rm{kJ}}} / {{\rm{mol}}}}} \right)\quad .$$
The values for the heat of drop solution of K2O and MgO [18, 19] are
$$\eqalign{{{\rm{K}}_{\rm{2}}}{{\rm{O}}_{\left( {{\rm{s}},298\;{\rm{K}}} \right)}} \to {{\rm{K}}_{\rm{2}}}{{\rm{O}}_{\left( {{\rm{sol}},975\;{\rm{K}}} \right)}} = - 319.60\; \pm \;4.70 \cr= \Delta {H_2}\;\left( {{{{\rm{kJ}}} / {{\rm{mol}}}}} \right)\quad ,$$
$$\eqalign{{\rm{Mg}}{{\rm{O}}_{\left( {{\rm{s}},298\;{\rm{K}}} \right)}} \to {\rm{Mg}}{{\rm{O}}_{\left( {{\rm{sol}},975\;{\rm{K}}} \right)}} = - 5.34\; \pm \;0.26 \cr= \Delta {H_3}\;\left( {{{{\rm{kJ}}} / {{\rm{mol}}}}} \right)\quad .$$
The value for CO2 gas is obtained from tabulated heat capacity values [20],
$${\rm{C}}{{\rm{O}}_2}_{\left( {{\rm{g}},298\;{\rm{K}}} \right)} \to {\rm{C}}{{\rm{O}}_2}_{\left( {{\rm{g}},975\;{\rm{K}}} \right)} = 32.07 = \Delta {H_4}\;\left( {{{{\rm{kJ}}} / {{\rm{mol}}}}} \right)\quad .$$
So the enthalpy of formation of the 0.55K2CO3·0.45MgCO3 glass from oxides is calculated from
$$\eqalign{\Delta {H_{{\rm{f,298}}\;{\rm{K}}}} = - \Delta {H_1} + \left( {0.55\Delta {H_2} + 0.45\Delta {H_3} + \Delta {H_4}} \right) \cr= - 261.12\; \pm \;3.02\;\left( {{{{\rm{kJ}}} / {{\rm{mol}}}}} \right)\quad .$$
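As a quick arithmetic check of this cycle (not part of the original analysis), the short Python snippet below recombines the ΔH1–ΔH4 values quoted above; it reproduces the enthalpy of formation from oxides to within rounding.

# Recompute dHf,298K of the glass from oxides using the cycle above.
# All values are in kJ/mol and are taken from the text.
dH1 = 115.00    # drop solution of the glass at 975 K
dH2 = -319.60   # drop solution of K2O
dH3 = -5.34     # drop solution of MgO
dH4 = 32.07     # heat content of CO2(g), 298 K -> 975 K

dHf_oxides = -dH1 + (0.55 * dH2 + 0.45 * dH3 + dH4)
print(f"dHf,298K (glass, from oxides) = {dHf_oxides:.2f} kJ/mol")  # approx. -261.1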
This value can be compared with the enthalpy of formation of the end-member carbonates determined from the following thermochemical cycle using the literature values for the enthalpy of formation of carbonates [21, 22, 23].
$$\eqalign{{{\rm{K}}_{\rm{2}}}{\rm{O}} + {\rm{C}}{{\rm{O}}_2}_{\left( {{\rm{g,298}}\;{\rm{K}}} \right)} \to {{\rm{K}}_{\rm{2}}}{\rm{C}}{{\rm{O}}_3}_{\left( {{\rm{s,298}}\;{\rm{K}}} \right)} = - 396.02\; \pm \;1.56 \cr= \Delta {H_5}\;\left( {{{{\rm{kJ}}} / {{\rm{mol}}}}} \right)\quad ,$$
$$\eqalign{{\rm{MgO}} + {\rm{C}}{{\rm{O}}_2}_{\left( {{\rm{g,298}}\;{\rm{K}}} \right)} \to {\rm{MgC}}{{\rm{O}}_3}_{\left( {{\rm{s,298}}\;{\rm{K}}} \right)} = - 117.04\; \pm \;0.94 \cr= \Delta {H_6}\;\left( {{{{\rm{kJ}}} / {{\rm{mol}}}}} \right)\quad ,$$
$$\eqalign{\Delta {H_{\left( {{\rm{f}},298\;{\rm{K}}} \right)}} = 0.55\Delta {H_5} + 0.45\Delta {H_6} \cr= - 270.06\; \pm \;1.81\;\left( {{{{\rm{kJ}}} / {{\rm{mol}}}}} \right)\quad .$$
These results indicate that the glass is only slightly metastable in enthalpy by +9.36 ± 1.95 kJ/mol relative to the end-members. This destabilization, which can be considered an effective vitrification enthalpy, is rather small compared with typical heats of vitrification of other materials, although no data are available for potassium and magnesium carbonates, which are not glass-forming. This small enthalpy is consistent with the relative ease of glass formation in this composition and the absence of vitrification in the end-members. The configurational entropy arising from the mixing of K+ and Mg2+ ions in the glass may be sufficient to stabilize the glass under these synthesis conditions. For one mole of carbonate there are 1.55 moles of cations, and assuming random mixing, the configurational entropy is given as follows.
$$\eqalign{{S_{{\rm{conf}}}} = - 1.55R\left( {x\;\ln \;x + \left( {1 - x} \right)\ln \left( {1 - x} \right)} \right) \cr= 7.76\left( {{{\rm{J}} / {{\rm{mol}}\;{\rm{K}}}}} \right) \quad,$$
where x is the mole fraction of potassium ions, equal to 1.10/1.55 = 0.71. At the synthesis temperature of 1053 K, the TΔS term (TS conf) would be 8.2 kJ/mol, which is enough to compensate for the enthalpic instability obtained by calorimetry. Thus we conclude that the glass may indeed be stable in free energy compared with a mixture of the crystalline end-members. We note that the calculated configurational entropy represents a maximum value for random mixing, so the actual configurational entropy contribution may be somewhat smaller.
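For completeness, the configurational entropy and the corresponding TΔS term can be checked numerically in the same way; the snippet below uses only the values given in the text and is an illustrative re-calculation, not part of the original paper.

# Recompute the ideal configurational entropy of K+/Mg2+ mixing and the
# corresponding T*S term at the synthesis temperature.
from math import log

R = 8.314                 # gas constant, J/(mol K)
x = 1.10 / 1.55           # mole fraction of K+ among the cations

S_conf = -1.55 * R * (x * log(x) + (1 - x) * log(1 - x))
T = 1053                  # synthesis temperature, K

print(f"S_conf     = {S_conf:.2f} J/(mol K)")          # approx. 7.76
print(f"T * S_conf = {T * S_conf / 1000:.1f} kJ/mol")  # approx. 8.2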
However, the distortions of the carbonate polyhedra may also contribute to the enthalpy and entropy terms. The main point of the aforementioned argument is that the energetic destabilization of the ternary carbonate glass relative to binary crystalline carbonate end-members is very small and can be compensated by entropy terms arising from several types of disorder. Such thermodynamic behavior is not uncommon in ternary oxide systems showing strong acid–base interactions. For example, a glass of diopside (CaMgSi2O6) composition is stable in both enthalpy and free energy relative to a mixture of crystalline CaO, MgO, and 2SiO2, but crystalline diopside is even more stable [24, 25, 26]. A crystalline ternary carbonate phase K2Mg(CO3)2 and its hydrated forms have been reported in early studies [27, 28, 29, 30], but there appear to be only limited thermodynamic data for such materials [31]. Thus, we cannot make a direct comparison of the thermodynamics of the ternary glass and corresponding crystalline ternary carbonates.
It is apparent on the basis of the ATR and also the NMR measurements (see below) that some water is present in the glass sample. This water is structurally bound and introduced during the synthesis. The heat of solution measurements could therefore be influenced by water. The influence of ∼10 mol% water on the thermodynamic cycle outlined earlier, also determined from heat capacity measurements, would make the glass even less energetically stable and would increase the effective heat of vitrification term from +9.36 to +30.87 ± 1.95 kJ/mol and would increase the magnitude of the TΔS term.
The 13C MAS-NMR spectra for the carbonate glass [Fig. 2(a)] show a broad, nearly symmetrical center band at 168.7 ppm, with a FWHM of 4.7 ppm. This chemical shift is typical for alkali and alkaline earth carbonates [32, 33, 34] and is slightly less than the values typically published for end-member crystalline K2CO3 (170.7 ppm) or MgCO3 (169.9 ppm); the 4.7 ppm FWHM is significantly greater than that of either crystalline phase (0.3 and 0.5 ppm, respectively) [35, 36]. The large peak width indicates the presence of a significant degree of disorder, consistent with an absence of any crystalline phase, and is greater than that for additive-free ACC [33, 34, 37]. This peak breadth corresponds to a broad distribution of chemical shifts in the glass that spans the chemical shift range for carbonate groups. The width is similar to that of distorted 'network carbonate' dissolved in fully polymerized silicate glasses on the SiO2–NaAlO2 join (4–6 ppm) [38, 39]. The 55K2CO3–45MgCO3 spectrum is otherwise consistent with that expected from triangular carbonate anions, but it does not provide any further detail on the degree of distortion of the carbonate anions anticipated from the molecular dynamics simulation of this composition.
More information on the distortion of the carbonate anions can potentially be extracted by determining the anisotropy and asymmetry of the chemical shift tensor. In a conventional solid state MAS-NMR experiment, the chemical shift tensor is intentionally averaged by rapid sample rotation to yield the high resolution isotropic line shape shown in Fig. 2(a). Slower rotation MAS techniques yield a complex spinning sideband pattern [Fig. 2(b)], from which the principal axis values of the chemical shift tensor δxx = 214 ± 2 ppm, δyy = 175 ± 1 ppm, and δzz = 117 ± 1 ppm were determined. The mean of the principal tensor values corresponds to the isotropic chemical shift δiso = (δxx + δyy + δzz)/3 and the position of the peak in the MAS-NMR spectrum at high spinning rate [Fig. 2(a)]. From these principal values can be calculated the anisotropy Δ = |δzz − δiso| = 52 ± 1 ppm, which provides a measure of the departure from cubic symmetry, and the asymmetry η = (δyy − δxx)/(δzz − δiso) = 0.75 ± 0.04 as the departure from axial symmetry (0 ≤ η ≤ 1).
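The relationships between the principal tensor components and the derived CSA parameters quoted above can be verified directly; the short sketch below (illustrative only, not the fitting procedure itself) reproduces δiso, Δ, and η from the fitted principal values.

```python
# Principal components of the 13C chemical shift tensor (ppm), from the
# slow-spinning sideband analysis described in the text
d_xx, d_yy, d_zz = 214.0, 175.0, 117.0

# Isotropic shift: mean of the principal values
d_iso = (d_xx + d_yy + d_zz) / 3.0

# Anisotropy: departure from cubic symmetry
delta = abs(d_zz - d_iso)

# Asymmetry: departure from axial symmetry (0 <= eta <= 1)
eta = (d_yy - d_xx) / (d_zz - d_iso)

print(f"delta_iso  = {d_iso:.1f} ppm")   # ~168.7 ppm
print(f"anisotropy = {delta:.0f} ppm")   # ~52 ppm
print(f"asymmetry  = {eta:.2f}")         # ~0.75
```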
Owing to the hygroscopic nature of the starting materials, we used standard CP-MAS and rotational-echo double-resonance (REDOR) methods to ascertain whether hydration of the glass exhibited any direct effect on the 13C chemical shift tensor values. Observation of reasonably strong CP-MAS intensity [Fig. 2(c)] and a slow build-up of REDOR dephasing, with a REDOR fraction asymptotically reaching 0.95 by about 20 ms (Fig. S1; Supplementary material), indicate that essentially all carbonate groups occur in proximity to H with mainly moderate to weak C–H dipolar coupling. However, the spinning sideband patterns obtained by CP-MAS, by REDOR difference at short dephasing time (1 ms; representing C proximal to H), and with REDOR dephasing at long dephasing time (6 ms; representing C distal to H) showed no systematic differences [Fig. 2(c); Fig. S2, Supplementary material]. The CSA parameters obtained from these spectra are within uncertainty of the values given earlier from spectra obtained by direct-excitation. This result indicates that although the glass contains H, the average chemical shift tensor for the carbonate groups is unrelated to proximity to H. The 13C-detected 1H NMR spectrum (Fig. S3; Supplementary material) is dominated by the signal from rigid structural water, with a smaller peak for hydroxyl groups, and a minor signal for hydrogen carbonate. The presence of minor hydrogen carbonate is also evident in the 13C CP/MAS spectrum as a small shoulder near 162 ppm (Fig. 2, inset).
The CSA parameters have previously been used to investigate the degree of distortion of the carbonate anion in ACC and to compare it with crystalline carbonates [6]; ACC is never anhydrous, and water may play a role in controlling the degree of carbonate distortion. It was found that values for the asymmetry parameter, η, show a strong relationship with the axial distortion of the carbonate group, expressed as the difference between the longest and shortest C–O bond lengths (σ). In contrast, very little variation in the anisotropy is found for carbonate groups in crystalline phases and ACC, which fall in the range 45–55 ppm. The anisotropy for 0.55K2CO3·0.45MgCO3 glass (52 ppm) falls well within this range and is similar to that observed for ACC (49.3 ppm). These observations suggest limited departure of the carbonate group from planar geometry. The value of the asymmetry determined for the 0.55K2CO3·0.45MgCO3 glass (0.75) is significantly larger than any previously reported values, including those for hydromagnesite (0.55) and ACC (0.50), which are influenced by significant hydrogen bonding interactions. Extrapolation of the relationship between η and σ suggested by Sen et al. [6] yields a value of σ ≈ 0.054 Å for the 0.55K2CO3·0.45MgCO3 glass. This significant value of σ, inferred from the large observed value of η, supports the simulation approach we have used in allowing the anion to be flexible. We note, however, that this estimate arises from an average CSA that represents the sum of what is likely to be a wide spread in the distribution of values among carbonate anions in the glass, as indicated by the large range of chemical shifts represented by the breadth of the isotropic peak shape.
The structure of the ambient pressure glass has been studied as part of a broader investigation into the behavior of carbonate liquids at high pressure. Simulation of the ambient pressure liquid has been carried out using the flexible anion approach outlined earlier, which has successfully modeled the changes in the carbonate liquid structure with high pressure. Figure 3 shows the X-ray total structure factor obtained from EDXRD compared with that obtained from computer simulation. The figure also shows the X-ray weighted contributions from selected partial structure factors which sum to form the total scattering function. The agreement between the experimental and simulated total scattering functions is reasonable, both at low and high Q, the latter being dominated by the well-defined C–O and O–O nearest-neighbor length-scales arising from the carbonate anions. The agreement at intermediate Q is poor, with the oscillations over this Q-range arising from a complex superposition of all ten partial structure factors. As a result, the detailed structure in this Q-range is highly sensitive to subtle variations either in the structure factors or in the weightings used to recombine them to form the total functions. For example, relatively small changes in the X-ray form factors [40] (which weight the partial structure factor contributions to the total scattering function) lead to large changes in S(Q) in this intermediate Q-range. In the present work we make standard corrections in considering the presence of oxygen anions and apply standard tabulated form factors for carbon and potassium and magnesium cations. It is possible that the highly correlated nature of the carbon and oxygen atoms (in forming the carbonate anions) requires a detailed re-evaluation of the appropriate form factors. In addition, our modeling focus has, to date, been on the liquid state (as part of a broader study of a range of molecular systems combined with high temperature levitation experiments). It is entirely possible that (even relatively small) changes in the structure on vitrification may alter the balance of weightings of the partial structure factors and improve agreement with experiment. However, generating effective glass structures from simulation models is a well-known and significant problem in condensed matter simulation and will be the focus of future work.
The ambient pressure liquid structure is shown in a molecular dynamics snapshot in Fig. 4. The figure illustrates the distortions of the carbonate anions away from the ideal trigonal planar configuration (4a). Note that there is no evidence that the carbonates form chains or the other complex structures as observed in simulation models for Na2CO3. There is, however, clear evidence that the additional C–O length scale is beginning to develop even at ambient pressure. As a result, the ambient pressure structure shows isolated, distorted carbonate anions with different C–O bond lengths consistent with the in-plane distortion identified by the NMR measurements. The flexibility of the carbonate anion, which allows the carbon atom to move out of the plane of the three oxygen atoms, appears limited at ambient pressure, consistent with the NMR data that suggests this deviation from the planar geometry is within the range of that for crystalline carbonates.
The ambient pressure crystal structure of synthetic K2Mg(CO3)2, determined by Hesse and Simons [41] is shown for comparison in Fig. 4. The ambient structure is rhombohedral $\left( {R\bar{3}m} \right)$ with planar, CO3 units arranged in the a–b plane. Magnesium octahedra are corner-shared with the carbonate and edge-shared with alternate KO9 coordinate polyhedra. This structure contrasts with that obtained from the simulation of the liquid, wherein the carbonate is more distorted, and both magnesium and potassium cations occur in irregular channels or percolation domains.
In previous interpretations of this carbonate glass structure, magnesium is assumed to act as a bridging species and to form a network with the carbonate anions. A motivation for that interpretation was the combination of larger formal ion charge and smaller ionic radius which make the magnesium cation significantly more polarizing than the potassium cation. Although the ambient pressure structure does show magnesium adopting this role, both magnesium and potassium could be argued to adopt a bridging role, at the very least in a sense that they form strong Coulombic interactions with the carbonate anions. Although magnesium clearly has an important role in stabilizing the glass, the greatest changes in response to the application of pressure are observed in the relationship between the potassium and oxygen ions. To highlight these changes, Fig. 5 shows the (unweighted) partial pair distribution functions for six selected ion pairs. On application of pressure, the Mg–O pair distribution function appears relatively invariant with the nearest-neighbor length-scale shortening slightly (consistent with a simple increase in density). The C–C, O–O, and C–O pair distribution functions show significant changes consistent with the greater emergence of the second C–O near-neighbor lengths-scale indicated earlier. The most dramatic changes are observed in the K–O pair distribution function. At ambient pressure, the nearest-neighbor distribution is relatively broad and at a relatively long range, indicative of the potassium cation acting more as a network modifier than a network-former. As the pressure increases the nearest-neighbor length-scale shifts to a significantly lower range (from r KO ∼ 3.4 Å to ∼2 Å) indicative of the change in the role of the potassium cation to act as a network-former, moderating the change in structure observed for the carbonate anions. More important, however, is that these contributions reflect underlying changes in the carbonate anion, and it is the flexibility of this anion that determines the response of this system to pressure [19]. The flexibility in the anion results in the changes to the O–O and K–O contributions that result in the dramatic, pressure-induced changes in the diffraction pattern which accompany the emergence of a second C–O length scale. The complex relationship between the K+ cations and the oxygen atoms in the carbonate determines the high pressure response, while changes in the magnesium atom pair contributions are limited.
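For readers unfamiliar with how the partial pair distribution functions shown in Fig. 5 are obtained from the simulation, the following is a minimal sketch of a partial g(r) calculation for one ion pair (e.g., K–O) from a single periodic configuration; the cubic-box handling, array names, and random stand-in coordinates are simplifying assumptions and do not reproduce the production analysis used here.

```python
import numpy as np

def partial_gr(pos_a, pos_b, box, r_max, n_bins=200):
    """Partial pair distribution function g_ab(r) for one configuration
    in a cubic periodic box (minimum-image convention)."""
    dr = r_max / n_bins
    hist = np.zeros(n_bins)
    for ra in pos_a:
        d = pos_b - ra
        d -= box * np.round(d / box)            # minimum image
        r = np.linalg.norm(d, axis=1)
        r = r[(r > 1e-6) & (r < r_max)]         # drop self-distance, cap at r_max
        np.add.at(hist, (r / dr).astype(int), 1.0)
    edges = np.arange(n_bins + 1) * dr
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    rho_b = len(pos_b) / box ** 3               # number density of species b
    g = hist / (len(pos_a) * rho_b * shell_vol)
    r_mid = (np.arange(n_bins) + 0.5) * dr
    return r_mid, g

# Example call with random coordinates standing in for K and O positions
rng = np.random.default_rng(0)
box = 20.0                                      # box length in Angstrom (assumed)
r, g_KO = partial_gr(rng.uniform(0, box, (50, 3)),
                     rng.uniform(0, box, (150, 3)), box, r_max=8.0)
```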
The simulated liquid structure can be compared with those of double carbonates [27] at elevated pressures. The K2Mg(CO3)2 crystal transforms from a rhombohedral $\left( {R\bar{3}m} \right)$ structure into a monoclinic polymorph with pressure [27, 30]: in this case, the carbonate anion remains rigid with limited change in the C–O distances, in contrast to the glass. The response to pressure of the crystalline double carbonates reflects the anisotropy of compression along a- and c-axes since the polyhedron of the larger cation is more compressible than that of magnesium. In liquid, it is the distortion of the carbonate anion that influences the response to pressure and also the formation of the glass. The structure of this glass is therefore more similar to that proposed for aqueous calcium carbonate where there is strong interaction between H2O molecules and the distorted carbonate units in close proximity, stabilizing the structure against crystallization [5]. In the K–Mg carbonate glass, it appears that the magnesium cations form strong interactions with the carbonate anions across the whole pressure range (consistent with the expected strong electrostatic interactions), in a manner which can be considered network-forming. However, the role of the potassium cations is more interesting as this appears to change from being largely network-modifying at low pressure to network-forming at high pressure.
In this study, we have determined the enthalpy of formation of a rare carbonate glass by high temperature oxide calorimetry. This completely glassy sample has an enthalpy of formation from the oxides of −261.12 ± 3.02 kJ/mol. The glass is energetically metastable with respect to a mixture of the crystalline end-members by only +9.36 ± 1.95 kJ/mol, and this small contribution to the free energy is balanced by the configurational entropy of K–Mg mixing, which may render the ternary glass stable with respect to the binary crystalline end-members. Further entropic stabilization may arise from carbonate group distortions and disorder, as confirmed by analysis of the 13C MAS NMR spectra collected for a similarly prepared glass sample. The results show an asymmetry parameter (η), a measure of departure from axial symmetry, that is larger than for any alkali or alkaline earth carbonate, indicating that the carbonate anion is distorted, with an average difference in the C–O bond length of about 0.06 Å. However, the anisotropy shows that the deviation from planar geometry falls within the range of crystalline carbonates. Molecular dynamics simulations for the equivalent liquids using the flexible anion approach also show distorted carbonate units with the carbon atom moving slightly out of plane, but not forming the higher coordinate CO3+1 configurations proposed for high pressure versions of this glass. The ambient pressure glass sample has distorted carbonate anions that are isolated and do not form networks but show a complex relationship with the potassium cations. The interaction between the potassium cations and the oxygen atoms from the carbonate defines the glass structure at low pressure, and the same flexibility of the carbonate anion dictates the high pressure response. The ambient pressure glass is superficially similar in structure to amorphous hydrated calcium carbonate, where the structure is defined by distorted carbonate anions in close proximity to a hydrogen-bonded network.
Glasses were prepared using a starting mixture of 55 mol% K2CO3 and 45 mol% MgCO3, which is the ratio at the eutectic region (at ∼730 K) on the binary join [11]. Reagent grade K2CO3, or for ATR and NMR measurements, 98% 13C labeled K213CO3 (MSD isotopes) were dried at 673 K. The MgCO3 was in the form of a natural, optically clear magnesite crystal from Brumado, Brazil (virtually water-free as confirmed by FTIR) which was ground into fine powder immediately prior to the experiment. The mixture was loaded into 3.8-mm length Au capsules which were welded shut and placed into a Tuttle-type cold-seal bomb with a rapid quench rod extension [42]. Experiments were conducted at 1050 K, 0.1 GPa for ∼10–15 h and then quenched (>200 °C/s). The resulting glass was removed from the Au capsule, mostly as a single solid slug, representing the central part of the quenched melt.
IR spectra were collected using a ThermoNicolet i10 FTIR (Thermo Nicolet Corporation, Madison, Wisconsin) spectrometer fitted with a Ge-tipped ATR head. 128 scans were collected at a resolution of 8 cm−1 using a MCT detector and a KBr beamsplitter in the absorbance mode. The sample was polished using dry alumina grit coated sheets (to a 1 μm finish), and the collection area was approximately a 30-μm square.
The enthalpy of drop solution of the carbonate glass was determined by high temperature oxide melt solution calorimetry. These experiments used a custom-built Tian–Calvet isoperibol calorimeter at the Peter A. Rock Thermochemistry Laboratory at UC Davis and followed the standard practice reviewed in several recent publications [18, 43, 44]. In these experiments, a series of small pellets of the carbonate glass (∼5 mg) are dropped from room temperature into a molten oxide solvent. In this case, the solvent was sodium molybdate (3Na2O–4MoO3) contained in a large platinum crucible, which is itself contained in a silica glass crucible and an outer silica glass liner. For these experiments, the entire glass assembly is flushed with oxygen at 43 mL/min. Oxygen is bubbled, at 4.5 mL/min, through the solvent using a platinum-tipped silica tube; this not only mixes the solvent to ensure complete dissolution of the sample and evolution of the CO2 but can also be used to control the oxidation state.
The individual pellets are dropped from room temperature into the solvent at 975 K with the heat flow resulting from the drop measured as a change in voltage in the calorimeter thermopile. Each measurement comprises a 10 min-collection of the initial calorimeter baseline and the return to the sample baseline following the drop solution measurement. The overall reaction time was 28–30 min. The integral of the voltage change is converted to the reaction enthalpy by using a calibration factor determined by dropping pellets of known heat contents (α-Al2O3).
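As described above, the integrated thermopile signal is converted to an enthalpy using a calibration factor obtained from α-Al2O3 drops of known heat content. The sketch below illustrates that conversion for a single glass pellet; every numerical value is a placeholder chosen only to make the arithmetic concrete, not a measurement from this study.

```python
# Illustrative conversion of an integrated thermopile signal into a drop
# solution enthalpy; all numbers below are placeholders, not data.

# --- Calibration with alpha-Al2O3 pellets of known heat content (298 -> 975 K)
alumina_heat_content_j_per_g = 755.0   # approximate heat content of corundum, J/g
alumina_pellet_mass_g = 0.005          # ~5 mg calibration pellet
alumina_signal_area_uVs = 1500.0       # placeholder integrated signal, uV*s
calibration_j_per_uVs = (alumina_heat_content_j_per_g * alumina_pellet_mass_g
                         / alumina_signal_area_uVs)

# --- One carbonate-glass pellet
glass_pellet_mass_g = 0.005            # ~5 mg, as stated in the text
glass_signal_area_uVs = 2000.0         # placeholder integrated signal, uV*s
molar_mass_g = 113.9                   # 0.55 K2CO3 + 0.45 MgCO3 per mole of carbonate

joules_per_gram = glass_signal_area_uVs * calibration_j_per_uVs / glass_pellet_mass_g
dH_drop_kj_per_mol = joules_per_gram * molar_mass_g / 1000.0
print(f"drop solution enthalpy ~ {dH_drop_kj_per_mol:.0f} kJ/mol (illustrative)")
```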
Solid-state NMR spectra were acquired at Stony Brook University; a 400 MHz (9.4 T) Varian Inova spectrometer operating at 100.56 MHz and a 500 MHz Varian Infinity plus spectrometer operating at 125.68 MHz were used. A set of small pellets of the 13C-labeled 0.55K2CO3:0.45MgCO3 glass was loaded directly into a 3.2 mm rotor assembly and sealed with an air-tight press-fit cap. The direct-excitation 13C MAS-NMR (Magic Angle Spinning) spectra were acquired with standard single-pulse methods with 4.5 μs 80° pulses and a 30-s relaxation delay at spinning rates that varied from 1.5 to 10.3 kHz. The T1 spin-lattice relaxation time was estimated to be 10 s by bracketing the null point in a series of inversion-recovery experiments. Additional abbreviated acquisition spectra at a longer (50 s) relaxation delay were acquired before and after each spectrum but showed no evidence of spectral changes that could indicate the onset of crystallization during the NMR experiments. The average 13C chemical shift tensor values were estimated from analysis of the spinning sideband intensities for a spectrum acquired at a 1.5-kHz spinning rate according to the method of Herzfeld and Berger [45] as implemented in "HBA" software [46]. To assess potential hydration of the glass, additional spectra were acquired by 1H → 13C cross-polarization MAS (CP-MAS) methods at contact times ranging from 0.2 to 10 ms and spinning rates of 1.5 and 8.0 kHz, and by 13C[1H] REDOR methods. A two-dimensional 13C[1H] CP heteronuclear correlation spectrum was acquired at a spinning rate of 8.0 kHz and a contact time of 2 ms, as 64 hypercomplex points in t1 at a 10-μs interval (100 kHz 1H spectral width). Linear prediction methods were used to complete the signal decay in t1 to mitigate truncation artifacts. From these data, 13C-detected 1H NMR spectra were obtained as one-dimensional cross-sections extracted at the 13C peak position. The chemical shifts were referenced to those of tetramethylsilane via secondary referencing to adamantane (38.6 ppm for the methylene 13C, 2.0 ppm for 1H).
In the previous work, molecular dynamics simulations have been performed on carbonate liquids using a potential developed by Tissen and Janssen, of the Born–Huggins–Mayer form [47]. The trigonal geometry of the carbonate anion is imposed by employing harmonic springs that act between C–O and O–O pairs. In later studies [3, 4], we developed an approach that allows both flexibility of the molecular anion and fluctuation of the internal charge distribution [48, 49, 50, 51]. In the simulations carried out here, we fix the charge distribution on the anion. The simulations have been carried out at a fixed temperature (T = 1800 K) and constant volume. $F_X(Q)$ was generated by combining the partial (Ashcroft–Langreth) structure factors (of which there are ten for the four-component system). These were calculated directly from the Fourier components of the ion densities, $S_{\alpha\beta}(Q) = \left\langle A_\alpha^*(Q)\,A_\beta(Q) \right\rangle$, where $A_\alpha(Q) = \frac{1}{\sqrt{N_\alpha}}\sum_{j=1}^{N_\alpha} \exp\left(i\mathbf{Q}\cdot\mathbf{r}_j\right)$. Total X-ray structure factors were constructed from weighted sums of these partial structure factors using X-ray form factors taken from standard sources [52].
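The construction of the Ashcroft–Langreth partials described above can be sketched in a few lines; the coordinate arrays, species counts, and Q grid below are illustrative stand-ins for the actual simulation output, and the configurational averaging and form-factor weighting used to build the full $F_X(Q)$ are only indicated in the comments.

```python
import numpy as np

def partial_structure_factors(positions, q_vectors):
    """Ashcroft-Langreth partials S_ab(Q) = <A_a*(Q) A_b(Q)> for a single
    configuration; `positions` maps species name -> (N_a, 3) coordinate array."""
    # A_a(Q) = N_a^{-1/2} * sum_j exp(i Q . r_j)
    A = {s: np.exp(1j * (r @ q_vectors.T)).sum(axis=0) / np.sqrt(len(r))
         for s, r in positions.items()}
    return {(a, b): np.real(np.conj(A[a]) * A[b])
            for a in positions for b in positions}

# Illustrative call with random stand-in coordinates for the four species;
# in practice the partials are averaged over many MD configurations and then
# combined with tabulated X-ray form factors to give the total F_X(Q).
rng = np.random.default_rng(1)
pos = {s: rng.uniform(0.0, 20.0, (n, 3))
       for s, n in [("K", 110), ("Mg", 45), ("C", 100), ("O", 300)]}
Q = rng.normal(scale=2.0, size=(64, 3))    # stand-in scattering vectors
S_ab = partial_structure_factors(pos, Q)   # ten independent pairs for 4 species
```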
The energy-dispersive X-ray diffraction data were collected at HPCAT, sector 16, at the Advanced Photon Source (APS), Argonne National Laboratory. The total X-ray structure factor was obtained using the multi-angle energy-dispersive technique, which uses a focused white X-ray beam with a 7 × 7 μm spot size. Scattering data were collected on a Ge solid state detector (Canberra) at 2θ angles of 3.14°, 4.14°, 5.14°, 7.14°, 9.14°, 12.15°, 16.15°, 22.15°, 28.14°, and 31.32°. This detector was calibrated using gold peaks at ambient pressure conditions. The total exposure for each pressure point was obtained by normalizing each detector pattern to the white X-ray beam, with further corrections using the optimization techniques described by Shen et al. [53] and Kono et al. [54]. The energy-dispersive patterns for each detector were rescaled and merged to form a Faber–Ziman type total structure factor. In this study, we eliminated the data from the 3.14° detector bank since these clearly showed crystalline peaks from the sample assembly. The scattering intensity in the 31.32° detector bank was very low, and these latter data were also eliminated from the subsequent normalization. The individual segments were smoothed by an error-weighted spline and scaled to the energy of the primary X-ray beam in the highest angle segment.
To view supplementary material for this article, please visit https://doi.org/10.1557/jmr.2019.250.
M.C.W. and P.A.B. would like to acknowledge the funding support from the EPSRC under grant EP/R036225/1. M.W. is grateful for the support from the EPSRC Centre for Doctoral Training, Theory and Modeling in Chemical Sciences, under grant EP/L015722/1. R.A.B. was funded by the NERC Thematic Grant consortium NE/M000419/1. The diffraction study was performed at HPCAT (Sector 16) of the Advanced Photon Source (APS). The Advanced Photon Source is a US DOE Office of Science User facility, operated for the DOE Office of Science by Argonne National Laboratory under contract DE-AC02-06CH11357. Calorimetry at UC Davis was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under Award DE-FG02ER1474.
1.Angell, C.A.: Formation of glasses from liquids and biopolymers. Science 267, 1924–1935 (1995).
2.Wilding, M.C., Wilson, M., Alderman, O.L.G., Benmore, C., Weber, J.K.R., Parise, J.B., Tamalonis, A., and Skinner, L.: Low-dimensional network formation in molten sodium carbonate. Sci. Rep. 6, 1–7 (2016).
3.Wilding, M.C., Wilson, M., Ribeiro, M.C.C., Benmore, C.J., Weber, J.K.R., Alderman, O.L.G., Tamalonis, A., and Parise, J.B.: The structure of liquid alkali nitrates and nitrites. Phys. Chem. Chem. Phys. 19, 21625–21638 (2017).
4.Wilson, M., Ribeiro, M.C.C., Wilding, M.C., Benmore, C., Weber, J.K.R., Alderman, O., Tamalonis, A., and Parise, J.B.: Structure and liquid fragility in sodium carbonate. J. Phys. Chem. A 122, 1071–1076 (2018).
5.Genge, M.J., Jones, A.P., and Price, G.D.: An infrared and Raman study of carbonate glasses-implications for carbonatite magmas. Geochim. Cosmochim. Acta 59, 927–937 (1995).
6.Sen, S., Kaseman, D.C., Colas, B., Jacob, D.E., and Clark, S.M.: Hydrogen bonding induced distortion of CO3 units and kinetic stabilization of amorphous calcium carbonate: Results from 2D C-13 NMR spectroscopy. Phys. Chem. Chem. Phys. 18, 20330–20337 (2016).
7.Eitel, W. and Skaliks, W.: Double carbonates of alkalis and alkaline earths. Z. Anorg. Allg. Chem. 183, 263–286 (1929).
8.Genge, M.J., Jones, A.P., and Price, G.D.: An infrared and Raman study of carbonate glasses-implications for the structure of carbonatite magmas. Geochim. Cosmochim. Acta 59, 927–937 (1995).
9.Genge, M.J., Price, G.D., and Jones, A.P.: Molecular dynamics simulations of CaCO3 melts to mantle pressures and temperatures—Implications for carbonatite magmas. Earth Planet. Sci. Lett. 131, 225–238 (1995).
10.Dobson, D.P., Jones, A.P., Rabe, R., Sekine, T., Kurita, K., Taniguchi, T., Kondo, T., Kato, T., Shimomura, O., and Urakawa, S.: In situ measurement of viscosity and density of carbonate melts at high pressure. Earth Planet. Sci. Lett. 143, 207–215 (1996).
11.Ragone, S.E., Datta, R.K., Roy, D.M., and Tuttle, O.F.: The system potassium carbonate-magnesium carbonate. J. Phys. Chem. 70, 3360–3361 (1966).
12.Datta, R.K., Roy, D.M., Faile, S.P., and Tuttle, O.F.: Glass formation in carbonate systems. J. Am. Ceram. Soc. 47, 153 (1964).
13.Forland, T. and Weyl, W.A.: Formation of a sulfate glass. J. Am. Ceram. Soc. 33, 186–187 (1950).
14.MacFarlane, D.R.: Attempted glass formation in pure KHSO4. J. Am. Ceram. Soc. 67, C–28 (1984).
15.van Uitert, L.G. and Grodkiewicz, W.H.: Nitrate glasses. Mater. Res. Bull. 6, 283–292 (1971).
16.Jones, A.P., Genge, M., and Carmody, L.: Carbonate melts and carbonatites. In Carbon in Earth, Hazen, R.M., Jones, A.P., and Baross, J.A., eds. (The Mineralogical Society of America, Chantilly, Virginia, 2013); pp. 289–322.
17.Sharma, S.K. and Simons, B.: Raman study of K2CO3–MgCO3 glasses. In Carnegie Institute of Washington Yearbook, Vol. 79, H.S. Yoder, ed. (The Carnegie Institution of Washington, Washington DC, 1980); pp. 322–326.
18.Navrotsky, A.: Progress and new directions in calorimetry: A 2014 perspective. J. Am. Ceram. Soc. 97, 3349–3359 (2014).
19.Sahu, S.K., Boatner, L.A., and Navrotsky, A.: Formation and dehydration enthalpy of potassium hexaniobate. J. Am. Ceram. Soc. 100, 304–311 (2017).
20.Shivaramaiah, R. and Navrotsky, A.: Energetics of order-disorder in layered magnesium aluminum double hydroxides with inter layer carbonate. Inorg. Chem. 54, 3253–3259 (2015).
21.Chai, L.A. and Navrotsky, A.: Thermochemistry of carbonate-pyroxene equilibria. Contrib. Mineral. Petrol. 114, 139–147 (1993).
22.Kiseleva, I., Navrotsky, A., Belitsky, I.A., and Fursenko, B.A.: Thermochemistry of natural potassium sodium calcium leonhardite and its cation-exchanged forms. Am. Mineral. 81, 668–675 (1996).
23.Navrotsky, A., Putnam, R.L., Winbo, C., and Rosen, E.: Thermochemistry of double carbonates in the K2CO3–CaCO3 system. Am. Mineral. 82, 546–548 (1997).
24.Tarina, I., Navrotsky, A., and Gan, H.: Direct calorimetric measurement of enthalpies in diopside-anorthite-wollastonite melts at 1773 K. Geochim. Cosmochim. Acta 58, 3665–3673 (1994).
25.Navrotsky, A., Maniar, P., and Oestrike, R.: Energetics of glasses in the system diopside-anorthite-forsterite. Contrib. Mineral. Petrol. 105, 81–86 (1990).
26.Hon, R., Weill, D.F., Kasper, R.B., and Navrotsky, A.: Enthalpies of mixing of glasses in the system albite-anorthite-diopside. Trans., Am. Geophys. Union 58, 1243 (1977).
27.Golubkova, A., Merlini, M., and Schmidt, M.W.: Crystal structure, high-pressure, and high-temperature behavior of carbonates in the K2Mg(CO3)2–Na2Mg(CO3)2 join. Am. Mineral. 100, 2458–2467 (2015).
28.Shatskiy, A., Litasov, K.D., Palyanov, Y.N., and Ohtani, E.: Phase relations on the K2CO3–CaCO3–MgCO3 join at 6 GPa and 900–1400 °C: Implications for incipient melting in carbonated mantle domains. Am. Mineral. 101, 437–447 (2016).
29.Shatskiy, A., Borzdov, Y.M., Litasov, K.D., Sharygin, I.S., Palyanov, Y.N., and Ohtani, E.: Phase relationships in the system K2CO3–CaCO3 at 6 GPa and 900–1450 °C. Am. Mineral. 100, 223–232 (2015).
30.Shatskiy, A., Sharygin, I.S., Gavryushkin, P.N., Litasov, K.D., Borzdov, Y.M., Shcherbakova, A.V., Higo, Y., Funakoshi, K-i., Palyanov, Y.N., and Ohtani, E.: The system K2CO3–MgCO3 at 6 GPa and 900–1450 °C. Am. Mineral. 98, 1593–1603 (2013).
31.Alekseev, A.I., Barinova, L.D., Rogacheva, N.P., and Kulinich, O.V.: Thermodynamic values of binary carbonate salts K2CO3·MgCO3·nH2O. J. Appl. Chem. USSR 57, 1168–1172 (1984).
32.Papenguth, H.W., Kirkpatrick, R.J., Montez, B., and Sandberg, P.A.: C-13 MAS NMR-spectroscopy of inorganic and biogenic carbonates. Am. Mineral. 74, 1152–1158 (1989).
33.Marc Michel, F., MacDonald, J., Feng, J., Phillips, B.L., Ehm, L., Tarabrella, C., Parise, J.B., Reeder, R.J.: Structural characteristics of synthetic amorphous calcium carbonate. Chem. Mater. 20, 4720–4728 (2008).
34.Michel, F.M., McDonald, J., Feng, J., Phillips, B.L., Ehm, L., Tarabrella, C., Parise, J.B., and Reeder, R.J.: Structural characteristics of synthetic amorphous calcium carbonate. Geochim. Cosmochim. Acta 72, A626 (2008).
35.Sevelsted, T.F., Herfort, D., and Skibsted, J.: C-13 chemical shift anisotropies for carbonate ions in cement minerals and the use of C-13, Al-27 and Si-29 MAS NMR in studies of Portland cement including limestone additions. Cem. Concr. Res. 52, 100–111 (2013).
36.Moore, J.K., Surface, J.A., Brenner, A., Wang, L.S., Skemer, P., Conradi, M.S., and Hayes, S.E.: Quantitative identification of metastable magnesium carbonate minerals by solid-state C-13 NMR spectroscopy (vol 49, pg 657, 2015) . Environ. Sci. Technol. 49, 1986 (2015).
37.Nebel, H., Neumann, M., Mayer, C., and Epple, M.: On the structure of amorphous calcium carbonate—A detailed study by solid-state NMR spectroscopy. Inorg. Chem. 47, 7874–7879 (2008).
38.Kohn, S.C., Brooker, R.A., and Dupree, R.: C-13 MAS NMR—A method for studying CO2 speciation in glasses. Geochim. Cosmochim. Acta 55, 3879–3884 (1991).
39.Brooker, R.A., Kohn, S.C., Holloway, J.R., McMillan, P.F., and Carroll, M.R.: Solubility, speciation and dissolution mechanisms for CO2 in melts on the NaAlO2–SiO2 join. Geochim. Cosmochim. Acta 63, 3549–3565 (1999).
40.Su, Z.W. and Coppens, P.: Relativistic X-ray elastic scattering factors for neutral atoms Z = 1–54 from multiconfiguration Dirac–Fock wavefunctions in the 0–12 Å−1 sin θ/λ range, and six-Gaussian analytical expressions in the 0–6 Å−1 range (vol A53, pg 749, 1997). Acta Crystallogr., Sect. A: Found. Crystallogr. 54, 357 (1998).
41.Hesse, K.F. and Simons, B.: Crystal structure of synthetic K2Mg(CO3)2. Z. Kristallogr. 161, 289–292 (1982).
42.Ihinger, P.D.: An experimental study of the interaction of water with granitic melt. Ph.D. thesis, California Institute of Technology, Pasedena, California, 1991.
43.Navrotsky, A.: High temperature reaction calorimetry applied to metastable and nanophase materials. J. Therm. Anal. Calorim. 57, 653–658 (1999).
44.Navrotsky, A.: High-temperature oxide melt calorimetry of oxides and nitrides. J. Chem. Thermodyn. 33, 859–871 (2001).
45.Herzfeld, J. and Berger, A.E.: Sideband intensities in NMR-spectra of samples spinning at the magic angle. J. Chem. Phys. 73, 6021–6030 (1980).
46.Eichele, K.: HBA. Ph.D. thesis, Universitaet Tuebingen, Tuebingen, Germany, 2015.
47.Tissen, J., Janssen, G.J.M., and Vandereerden, J.P.: Molecular dynamics simulation of binary mixtures of molten alkali carbonates. Mol. Phys. 82, 101–111 (1994).
48.Costa, M.F. and Ribeiro, M.C.C.: Molecular dynamics of molten Li2CO3–K2CO3 (vol 138, pg 61, 2008). J. Mol. Liq. 142, 161 (2008).
49.Ribeiro, M.C.C.: First sharp diffraction peak in the fragile liquid Ca0.4K0.6(NO3)1.4. Phys. Rev. B 61, 3297–3302 (2000).
50.Ribeiro, M.C.C.: Ionic dynamics in the glass-forming liquid Ca0.4K0.6(NO3)1.4: A molecular dynamics study with a polarizable model. Phys. Rev. B 63, 094205 (2001).
51.Ribeiro, M.C.C.: Molecular dynamics study on the glass transition in Ca0.4K0.6(NO3)1.4. J. Phys. Chem. B 107, 9520–9527 (2003).
52.Cromer, D.T. and Mann, J.B.: X-ray scattering factors computed from numerical Hartree–Fock wave functions. Acta Crystallogr. A 24, 321–324 (1968).
53.Shen, G., Prakapenka, V.B., Rivers, M.L., and Sutton, S.R.: Structural investigation of amorphous materials at high pressures using the diamond anvil cell. Rev. Sci. Instrum. 74, 3021–3026 (2003).
54.Kono, Y., Kenney-Benson, C., Hummer, D., Ohfuji, H., Park, C., Shen, G., Wang, Y., Kavner, A., and Manning, C.E.: Ultralow viscosity of carbonate melts at high pressures. Nat. Commun. 5, 5091 (2014).
Evaluate: $\left\{\begin{array} { l } 3x-8y=-6 \\ -5x+4y=10\end{array} \right.$
Expression: $\left\{\begin{array} { l } 3x-8y=-6 \\ -5x+4y=10\end{array} \right.$
Multiply both sides of the second equation by $2$
$\left\{\begin{array} { l } 3x-8y=-6 \\ -10x+8y=20\end{array} \right.$
Sum the equations vertically to eliminate at least one variable
$-7x=14$
Divide both sides of the equation by $-7$
$x=-2$
Substitute the given value of $x$ into the equation $3x-8y=-6$
$3 \times \left( -2 \right)-8y=-6$
Solve the equation for $y$
$y=0$
The possible solution of the system is the ordered pair $\left( x, y\right)$
$\left( x, y\right)=\left( -2, 0\right)$
Check if the given ordered pair is the solution of the system of equations
$\left\{\begin{array} { l } 3 \times \left( -2 \right)-8 \times 0=-6 \\ -5 \times \left( -2 \right)+4 \times 0=10\end{array} \right.$
Simplify the equalities
$\left\{\begin{array} { l } -6=-6 \\ 10=10\end{array} \right.$
Since all of the equalities are true, the ordered pair is the solution of the system
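The same elimination result can be checked programmatically; below is a minimal sketch assuming NumPy is available (the library call is standard, the check itself is just for illustration).

```python
import numpy as np

# Coefficient matrix and right-hand side of the system
#   3x - 8y = -6
#  -5x + 4y = 10
A = np.array([[3.0, -8.0],
              [-5.0, 4.0]])
b = np.array([-6.0, 10.0])

solution = np.linalg.solve(A, b)
print(solution)                      # [-2.  0.]  ->  (x, y) = (-2, 0)
print(np.allclose(A @ solution, b))  # True: the pair satisfies both equations
```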
November 2019, 24(11): 5903-5926. doi: 10.3934/dcdsb.2019112
Analysis of minimizers of the Lawrence-Doniach energy for superconductors in applied fields
Patricia Bauman 1 and Guanying Peng 2,*
Department of Mathematics, Purdue University, West Lafayette, IN 47907, USA
Department of Mathematics, University of Arizona, Tucson, AZ 85721, USA
* Corresponding author: Guanying Peng
Received: September 2018; Published: June 2019
Fund Project: The authors were supported in part by NSF Grant DMS-1109459.
We analyze minimizers of the Lawrence-Doniach energy for layered superconductors with Josephson constant $ \lambda $ and Ginzburg-Landau parameter $ 1/\epsilon $ in a bounded generalized cylinder $ D = \Omega\times[0, L] $ in $ \mathbb{R}^3 $, where $ \Omega $ is a bounded simply connected Lipschitz domain in $ \mathbb{R}^2 $. Our main result is that in an applied magnetic field $ \vec{H}_{ex} = h_{ex}\vec{e}_{3} $ which is perpendicular to the layers with $ \left|\ln\epsilon\right|\ll h_{ex}\ll\epsilon^{-2} $, the minimum Lawrence-Doniach energy is given by $ \frac{|D|}{2}h_{ex}\ln\frac{1}{\epsilon\sqrt{h_{ex}}}(1+o_{\epsilon, s}(1)) $ as $ \epsilon $ and the interlayer distance $ s $ tend to zero. We also prove estimates on the behavior of the order parameters, induced magnetic field, and vorticity in this regime. Finally, we observe that as a consequence of our results, the same asymptotic formula holds for the minimum anisotropic three-dimensional Ginzburg-Landau energy in $ D $ with anisotropic parameter $ \lambda $ and $ o_{\epsilon, s}(1) $ replaced by $ o_{\epsilon}(1) $.
Keywords: Superconductor, Lawrence-Doniach energy, layered superconductor, Ginzburg-Landau energy, minimizer.
Mathematics Subject Classification: Primary: 35J60, 35J65, 35Q40.
Citation: Patricia Bauman, Guanying Peng. Analysis of minimizers of the Lawrence-Doniach energy for superconductors in applied fields. Discrete & Continuous Dynamical Systems - B, 2019, 24 (11) : 5903-5926. doi: 10.3934/dcdsb.2019112
CoMFA studies require that the 3D structures of the molecules to be analyzed be aligned according to a suitable conformational template, which is assumed to be a "bioactive" conformation. Molecular alignment was carried out using the SYBYL "fit-atom" alignment function (Tripos Inc. 2002). The crystal structure of compound 4 was used as the alignment template. Figure 1 shows the 3D alignment of the 27 molecules according to the alignment scheme in Fig. 2 (Fig. 1: the 3D alignment of the 27 molecules shown by capped sticks without hydrogens; Fig. 2: molecule 4, with the atoms used for superimposition named 1 to 7).

CoMFA study: The CoMFA descriptors were used as independent variables, and pEC50 values were used as dependent variables, in partial least squares (PLS) (Wold et al., 1984) regression analysis to derive 3D QSAR models. The steric (Lennard-Jones) and electrostatic (Coulomb) CoMFA fields were calculated using an sp3 carbon as the steric probe atom and a +1 charge for the electrostatic probe. A grid spacing of 2 Å and a distance-dependent dielectric constant were chosen. The cutoff value for both steric and electrostatic interactions was set to 30 kcal/mol.

Partial least squares analysis: PLS regression analyses were performed using cross-validation to evaluate the predictive ability of the CoMFA models. Initial PLS regression analyses were performed in conjunction with the cross-validation (leave-one-out method) option to obtain the optimal number of components to be used in the subsequent analysis of the dataset. All the leave-one-out cross-validated PLS analyses were performed with a column filter value of 2.0 kcal/mol to improve the signal-to-noise ratio by omitting those lattice points whose energy variation was below this threshold value. The final PLS regression analysis, with 10 bootstrap groups and the optimal number of components, was performed on the complete dataset. The optimal number of components was determined by selecting the smallest PRESS value. Usually this value corresponds to the highest cross-validated \( r^2 \) (\( r^2_{\mathrm{cv}} \)) value. The \( r^2_{\mathrm{cv}} \) was calculated using the formula
$$ r^2_{\mathrm{cv}} = 1 - \frac{\sum \left(Y_{\mathrm{predicted}} - Y_{\mathrm{observed}}\right)^2}{\sum \left(Y_{\mathrm{observed}} - Y_{\mathrm{mean}}\right)^2}, $$
where \( Y_{\mathrm{predicted}} \), \( Y_{\mathrm{observed}} \), and \( Y_{\mathrm{mean}} \) are the predicted, actual, and mean values of the target property (pEC50), respectively. The number of components obtained from the cross-validated analysis was subsequently used to derive the final QSAR models. In addition to \( r^2_{\mathrm{cv}} \), the corresponding PRESS [PRESS = \( \sum (Y_{\mathrm{predicted}} - Y_{\mathrm{observed}})^2 \)], the number of components, the nonconventional correlation coefficient \( r^2_{\mathrm{ncv}} \), and its standard errors were also computed.
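A minimal sketch of the cross-validated statistics defined above (PRESS and \( r^2_{\mathrm{cv}} \), often written \( q^2 \)), assuming arrays of observed and leave-one-out-predicted pEC50 values are already available; the numbers shown are placeholders, not the study data.

```python
import numpy as np

def press_and_q2(y_obs, y_pred):
    """PRESS = sum((y_pred - y_obs)^2); r2_cv = 1 - PRESS / sum((y_obs - mean)^2)."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    press = np.sum((y_pred - y_obs) ** 2)
    ss_total = np.sum((y_obs - y_obs.mean()) ** 2)
    return press, 1.0 - press / ss_total

# Placeholder pEC50 values (observed vs. leave-one-out predictions)
observed  = [7.2, 6.8, 5.9, 6.4, 7.5, 5.5]
predicted = [7.0, 6.9, 6.1, 6.2, 7.3, 5.8]
press, q2 = press_and_q2(observed, predicted)
print(f"PRESS = {press:.3f}, r2_cv = {q2:.3f}")
```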
In this study, a comprehensive phenotypic and genotypic characterization of the novel isolate Ivo14T was performed that allowed a detailed comparison to other bacteriochlorophyll (BChl) a-containing members of the OM60/NOR5 clade, so that a profound knowledge of the metabolic plasticity and taxonomic relationships encountered in this ecologically important group of marine gammaproteobacteria could be obtained. Results and discussion. Isolation and identification of mixotrophic representatives of the OM60/NOR5 clade: An isolation strategy originally designed for the retrieval of strains belonging to the genus Rhodopirellula within the Planctomycetales
resulted in the isolation of numerous representatives of the OM60/NOR5 clade of marine gammaproteobacteria [13, 25]. The isolation strategy included the use of antibiotics and a screening of red-pigmented strains,
so that all retrieved OM60/NOR5 isolates were pigmented. Strains belonging to this phylogenetic group represented about 10% of all red-pigmented colonies and could be affiliated either to the NOR5-3 or NOR5-1 lineage within this clade based on analyses of their 16S rRNA gene sequences [13]. Strains belonging to the OM60/NOR5 clade were further examined for the presence of pufL and pufM genes encoding proteins of the photosynthetic reaction center. From 18 out of 22 isolated strains, fragments of pufLM genes could be amplified by PCR using specific primers. Probably, the strategy of Winkelmann and Harder [25] was such an effective method for the isolation of mixotrophic members of the OM60/NOR5 clade because it selected for pigmented and slowly growing
bacteria adapted to oligotrophic habitats. Two of the isolated strains, Rap1red (= NOR5-3) and Ivo14T (= NOR5-1BT), representing two different lineages of the OM60/NOR5 clade, were selected for further analysis using genome sequencing. Strain Ivo14T, representing the highly diverse and environmentally important NOR5-1 lineage, was chosen for an additional detailed phenotypic characterization. Notably, Haliea rubra (H. rubra), which is closely related to C. litoralis, was also reported to form red-pigmented colonies on Marine Agar 2216 [18], but in the original species description the formation of photosynthetic pigments was not reported. To exclude the possibility that a phototrophic phenotype has escaped attention in described strains of the genus Haliea, type strains belonging to this genus were cultured in SYPHC medium, which allowed expression of pigments in all photoheterotrophic strains belonging to the OM60/NOR5 clade tested so far. In fact, photosynthetic pigments could be extracted from cells of H.
This effect was similar regardless of Gail score, whereas the eff
This effect was similar regardless of Gail score, whereas the effects were markedly stronger for women with higher baseline estradiol
levels [206]. SERMs and menopausal symptoms: In breast cancer patients, it has been well documented that tamoxifen increases both the severity and frequency of hot flushes. The situation is likely less severe when using raloxifene. Some RCTs did not report an increased frequency or severity of vasomotor symptoms in women discontinuing oestrogen–progestin as compared with placebo [207, 208]. Nevertheless, other studies reported an increase in hot flushes when using raloxifene [209], which led to the suggestion of a gradual conversion to raloxifene from low-dose oestrogen, with a progression from 60 mg every alternate day to 60 mg/day. It has been shown in short-duration studies that it is possible to avoid SERM-associated hot flushes and menopausal symptoms, using
a combination of a SERM (bazedoxifene) and estrogens (conjugated estrogens) [210]. Some non-skeletal side effects are favourable (breast cancer protection); others, on the other hand, are unfavourable (stroke risk, thromboembolism and endometrial cancer). The presence and the magnitude of these side effects vary between SERMs; it has been reported that women with breast cancer treated with tamoxifen have an 82% increased risk of ischemic stroke and a 29% increased risk of any stroke, although the absolute risk remains small. Strontium ranelate: Strontium ranelate is a first-line treatment for the management of postmenopausal osteoporosis. Its dual mode of action simultaneously reduces bone resorption and increases bone formation [211]. Strontium ranelate has a limited number of non-skeletal effects, for which most of
the evidence comes from post hoc analyses of these two trials. Strontium and cartilage: Osteoarthritis involves the degeneration of joint cartilage and the adjacent bone, which leads to joint pain and stiffness. There is some preclinical evidence for an effect of strontium ranelate on cartilage degradation. Strontium ranelate has been demonstrated to stimulate the production of proteoglycans in isolated human chondrocytes, leading to cartilage formation without affecting cartilage resorption [212]. There is also evidence for an impact on biomarkers of cartilage degradation. Treatment with strontium ranelate was associated with significantly lower levels of urinary excretion of a marker of cartilage degradation (CTX-II) (p < 0.0001) [213, 214]. Evaluation of the potential for a clinical effect of strontium ranelate in osteoarthritis indicated that 3 years' treatment with strontium ranelate was associated with a 42% lower overall osteoarthritis score (p = 0.0005 versus placebo) and a 33% reduction in disc space narrowing score (p = 0.03 versus placebo). These changes were concomitant with a 34% increase in the number of patients free of back pain (p = 0.03 versus placebo) [215].
[26] and Spencer et al. [27]. Radiographic vertebral deformities were defined as vertebral heights more than 3 SDs below the vertebra-specific population mean on the radiograph; vertebrae that met this posterior height criterion were classified as crush. The remaining vertebrae that had an anterior height reduction were called wedge. The remaining vertebrae that only had a central height reduction were called endplate. The timing of deformities could not be determined in this cross-sectional study. Vertebral osteoarthritis: Radiographs were scored by a single reader (HK) for osteoarthritis of the thoracic spine in T4–T12 or lumbar
spine in L1–L4 using the Kellgren–Lawrence (KL) grade as follows: KL0, normal; KL1, slight osteophytes; KL2, definite osteophytes; KL3, disc space narrowing with large osteophytes; and KL4, bone sclerosis, disc space narrowing, and large osteophytes [28]. In the present
study, we defined the spine with disc space narrowing with and without osteophytes as KL3 [19]. KL grade was determined at intervertebral spaces, and the highest scores among thoracic or lumbar intervertebral spaces were then identified as the KL grade for that individual. Osteoarthritis was defined as KL grade 2 or higher. To evaluate the intrarater reliability of the KL grading, randomly selected radiographs of the thoracic and lumbar spine were scored by the same reader more than 1 month after the first reading for 40 individuals. The intrarater reliabilities were evaluated by kappa analysis. The reliability in KL grading of the thoracic or lumbar radiographs was found to be sufficient, with kappa scores of 0.76 and 0.85, respectively. Radiographic readers (KA and HK) were blind to the subjects' ages and other characteristics. Statistical analysis: For reasons of poor technical quality, the radiographs of two women did not allow reliable measurements of vertebral heights, leaving 584 women for the analyses. The Cochran–Armitage trend test was
used to evaluate differences in the prevalence of back pain among age groups, and the chi-square test was used to evaluate differences among categories of number of vertebral deformities. Logistic regression analysis was used to explore the associations of type and number of vertebral deformity with back pain in the previous month; results are presented as odds ratios (ORs) with 95 % confidence intervals (CIs). Data analyses were performed with commercially available software (SAS Institute, Cary, NC). Results The mean (SD) of age and BMI were 64.4 (9.6) years and 23.4 (3.5) kg/m2, respectively (Table 1). Fifteen percent of women had at least one vertebral deformity and 74 % had vertebral osteoarthritis. Forty-nine percent of women reported at least one painful joint at nonspine sites and 91 % were postmenopausal. The prevalence of upper back pain and low back pain were 19.2 % and 19.4 %, respectively (Table 2).
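The intrarater agreement reported above (kappa scores of 0.76 and 0.85) can be computed from the two reading sessions with a standard Cohen's kappa; the minimal sketch below uses placeholder KL gradings, not the study data.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Unweighted Cohen's kappa for two sets of categorical ratings."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_expected = sum((c1[c] / n) * (c2[c] / n) for c in categories)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Placeholder KL grades (0-4) from a first and second reading of 10 films
first_reading  = [0, 1, 2, 2, 3, 0, 1, 2, 4, 3]
second_reading = [0, 1, 2, 3, 3, 0, 1, 2, 4, 2]
print(f"kappa = {cohens_kappa(first_reading, second_reading):.2f}")
```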
generally felt that a severity response format would be more appropriate. Following completion of the first-stage cognitive debriefing interviews, the research team decided to focus the content of OPAQ-PF on physical function as a measure of the impact of osteoporosis, concentrating on the domains of mobility (walking, carrying, and climbing), physical positions (bending, reaching, picking up, standing, and sitting), and transfers (getting in and out of bed, chairs, and vehicles, and on and off the toilet). This led to the removal of items addressing fear of falling, independence, and symptoms. As a result, the instrument generated at the end of the first stage of phase 2 had 16 items in three domains (mobility, physical positions, and transfers) and included a five-point scale that was used throughout the questionnaire: 'no difficulty'; 'a little difficulty'; 'some difficulty'; 'a lot of difficulty'; and 'severe difficulty'. This instrument was used in the second stage of phase 2. Second stage: patient demographics. Demographic data for the 18 participants (eight in diversity group 1, five in group 2, and five in group 3) recruited for this stage of the study are shown in Table 1. As in the first stage, this cohort was predominantly white (83 %), with a mean (±SD) age of 70.0 ± 9.2 years and a mean disease duration of 6.0 ± 4.1 years.
Twelve of the 18 patients had sustained a total of 16 fractures. The predominant fracture site in this cohort was the hip (n = 5). The remaining fractures were distributed among spine (n = 3), wrist (n = 1), ankle (n = 1), distal forearm (n = 1), humerus (n = 2), ribs (n = 1), pelvis (n = 1), and foot/toe (n = 1). Comorbid conditions included osteoarthritis, inflammatory arthritis, rheumatoid arthritis, diabetes, hypercholesterolemia, asthma, chronic obstructive pulmonary disease, hypertension, and restless legs syndrome. Second stage: concept elicitation In the second stage of phase 2, saturation was achieved after the 13th concept elicitation interview. Concept elicitation data supporting the final version of OPAQ-PF are summarized in Table 2. First- and second-stage interview data are presented
together. The results demonstrate widespread support for all items in the domains of mobility, physical positions, and transfers. Second stage: cognitive debriefing Cognitive debriefing results obtained in the first stage of phase 2 reflect participants' thoughts regarding the design of the questionnaire, the language used, its applicability, the ease with which the instructions could be interpreted, response options, and the recall period. The questionnaire underwent further iterative modifications during the second stage of phase 2 as a result of participants' feedback. These modifications included removing one item, re-wording of items, and the addition of examples for clarification. As in the first stage of phase 2, all modifications were tracked in an item-tracking matrix.
It is estimated that 50% of all patients with a primary colorectal tumour will in due course develop hepatic metastases [2]. Once a primary malignancy has spread to the liver, the prognosis of many of these patients deteriorates significantly. Potentially curative treatment
options for hepatic metastases consist of subtotal hepatectomy or, in certain cases, radiofrequency ablation. Unfortunately, only 20-30% of patients are eligible for these potentially curative treatment options, mainly because hepatic metastases are often multiple and in an advanced stage at the time of presentation [3]. The majority of patients are therefore left with palliative treatment options. Palliative therapy consists primarily of systemic chemotherapy. In spite of the many promising developments on cytostatic and targeted biological agents over the last ten years, there are still certain tumour types that do not respond adequately and the long-term survival rate for patients with unresectable metastatic liver disease remains low [4–8]. Moreover, systemic chemotherapy can be associated with substantial side effects that lie in the non-specific nature of this treatment. Cytostatic agents are distributed over the entire body, destroying cells that divide rapidly, both tumour cells and healthy cells. For these reasons, a significant need for new treatment options is recognized. A relatively recently developed therapy for primary and secondary
liver cancer is radioembolization with yttrium-90 microspheres (90Y-RE). 90Y-RE is a minimally invasive procedure during which radioactive microspheres are instilled selectively into the hepatic artery using a catheter. The high-energy beta-radiation emitting microspheres subsequently strand in the arterioles (mainly) of
the tumour, and a tumoricidal radiation absorbed dose is delivered. The clinical results of this form of internal radiation therapy are promising [9, 10]. The only currently clinically available microspheres for radioembolization loaded with 90Y are made of either glass (TheraSphere®, MDS Nordion Inc., Kanata, Ontario, Canada) or resin (SIR-Spheres®, SIRTeX Medical Ltd., Sydney, New South Wales, Australia). Although 90Y-RE is ever more used and considered a safe and effective treatment, 90Y-MS have a drawback: following administration the actual biodistribution cannot be accurately visualized. For this reason, holmium-166 loaded poly(L-lactic acid) microspheres (166Ho-PLLA-MS) have been developed at our centre [11, 12]. Like 90Y, 166Ho emits high-energy beta particles to eradicate tumour cells but 166Ho also emits low-energy (81 keV) gamma photons which allows for nuclear imaging. As a consequence, visualization of the microspheres is feasible. This is very useful for three main reasons. Firstly, prior to administration of the treatment dose, a small scout dose of 166Ho-PLLA-MS can be administered for prediction of the distribution of the treatment dose.
Reaction mixtures were then held at 10°C. 8 μL of the PCR amplification mixture was analyzed by gel electrophoresis in a 0.8% agarose gel stained with ethidium bromide (1.0 μg/mL) and photographed under U.V.
transillumination. Purification and sequencing of PCR mip products PCR mip products were analyzed by gel electrophoresis in a 0.8% agarose gel (50 mL) stained with 3 μL SYBR Safe DNA gel stain (Invitrogen). DNA products were visualized under blue U.V. transillumination and excised as a band of agarose gel. Then PCR products were purified using the GeneClean® Turbo Kit (MP Biomedicals) according to the manufacturer's instructions. Finally, the purified PCR products were suspended in 10 μL sterile water and then stored at −20°C. Sequencing was performed by GATC Biotech SARL (Mulhouse, France). PFGE subtyping Legionella isolates
were subtyped by the pulsed-field gel electrophoresis (PFGE) method as described previously [26]. Briefly, legionellae were treated with proteinase K (50 mg/mL) in TE buffer (10 mM Tris–HCl and 1 mM EDTA, pH 8) for 24 h at 55°C, and DNA was digested with 20 IU of SfiI restriction enzyme (Boehringer Mannheim, Meylan, France) for 16 h at 50°C. Fragments of DNA were separated in a 0.8% agarose gel prepared and run in 0.5× Tris-borate-EDTA buffer (pH 8.3) in a contour-clamped homogeneous field apparatus (CHEF DRII system; Bio-Rad, Ivry sur Seine, France) with a constant voltage of 150 V. Runs were carried out with increasing pulse times (2 to 25 s) at 10°C for 11 h and increasing pulse times (35 to 60 s) at 10°C for 9 h. Then, the gels were stained for 30 min with an ethidium bromide solution and PFGE patterns were analyzed with GelComparII software (Applied Maths, Saint-Martens-Latem, Belgium). Quantification of Legionella virulence towards the amoeba Acanthamoeba castellanii Legionellae
were grown on BCYE agar and A. castellanii cells in PYG medium (Moffat and Tompkins, 1992) for five days at 30°C prior to infection. A. castellanii cells were first seeded in 24-well plates to a final concentration of 5 × 10^6 cells per ml in PY medium (PYG without glucose). Plates were incubated for two hours at 30°C to allow amoeba adhesion. Then, Legionellae were added at an MOI ("multiplicity of infection") of 5 (in duplicate). In order to induce the adhesion of bacterial cells to the monolayer of amoeba cells, plates were spun at 2000 × g for 10 min and incubated for 1 h at 30°C. Non-adherent bacteria were removed by four successive washings with PY medium. This point was considered as the initial point of infection (T0) and the plates were incubated at 30°C. Extracellular cultivable bacteria released from amoebae were quantified at 1 day and 2 days post-infection as follows. Aliquots (100 μL) of the supernatants were taken and diluted in sterile water to a final 10^-6 dilution.
These patients should undergo CT scanning with IV contrast of the abdomen and pelvis with the exception of pregnant women where ultrasound is recommended [50]. CT scanning has a high sensitivity and specificity in confirming the diagnosis and identifying patients who are candidates for therapeutic PCD[51, 52]. CT scanning also excludes other causes of left lower quadrant abdominal pain (e.g. leaking abdominal aortic aneurism
or an ovarian abscess), but is not reliable in differentiating acute diverticulitis from colon malignancy [53]. Patients who require an emergency operation This decision mostly pertains to patients with stage III and stage IV diverticulitis who present with signs of sepsis and need an emergency operation for source control.
The timing and type of source control is unclear. Traditionally, all of these patients were taken expediently to the OR. However, there has been a shift in this paradigm with the recognition that operating in the setting of septic shock sets the stage for postoperative AKI, MOF, prolonged ICU stays and dismal long-term outcomes [40, 44, 45]. Specifically, we believe patients in septic shock benefit from pre-operative optimization. This takes 2–3 hours [54, 55]. It starts with obtaining two large bore IV lines through which broad spectrum antibiotics and a bolus of isotonic crystalloids (20 ml/kg) are administered. A central line (via the internal jugular vein placed under ultrasound guidance) and an arterial line are concurrently placed. With ongoing volume loading, CVP is increased to above 10 cmH2O. At this point the patient is intubated and ventilation optimized. Norepinephrine is titrated to maintain MAP >65 mm Hg and if high doses are required, stress dose steroids and low dose vasopressin are administered. Electrolyte abnormalities are corrected and blood products are administered based on institutional guidelines. Lactate and mixed venous hemoglobin saturations are measured and trended to assess the adequacy of the resuscitative
efforts. Once the patient is stable enough to tolerate OR transport and general anesthesia, he/she should be transported to the OR for a source control operation. After the patient is in the OR and under general anesthesia, the surgeon needs to reassess whether the patient is still in septic shock. If so, the OR team should be informed that a DCL is going to be performed (described above). They should anticipate a short operation (roughly 30–45 minutes) and get the supplies necessary for a TAC. While the role of DCL in this setting is controversial, it should not be confused with the concept of a planned relaparotomy (described above) [32]. At the second operation, we believe that the decision to perform a delayed anastomosis should be individualized based on the current physiology, the condition of bowel, patient co-morbidities, and surgeon experience.
(A) Western blot analysis of BMPR-IB expression in parental glioma cells, control vector–AAV and AAV-BMPR-IB-infected cells. (B) Cell cycle distribution analysis histogram. (Values are expressed as the mean±SD, n = 3. *, P < 0.05). Effects of BMPR-IB overexpression and knock-down on the growth of glioblastoma cells in vitro After 5 days of BMPR-IB overexpression or knock-down,
the anchorage-independent growth of BMPR-IB-overexpressing glioblastoma cells was drastically inhibited, as shown by a decrease in the number and volume of colonies on soft agar compared with control cells, and the anchorage-independent growth of SF763 cells treated with siBMPR-IB was 2 times as high as that of the si-control-treated cells. BMPR-IB overexpression decreased the colony numbers of U251 and U87 by 55%
and 66%, and BMPR-IB knock-down caused an approximate 94% increase in colony numbers compared with controls (Figure 3A, B). Figure 3 Determination of anchorage-independent growth of human glioma cells with altered BMPR-IB expression using a soft-agar colony formation assay. (A) Microphotographs of colonies. (B) Columns, the mean of the colony numbers on triplicate plates from a representative experiment (conducted twice); bars, SD. *, P < 0.001, as determined using Student's t-test. Effects of BMPR-IB overexpression and knock-down on the differentiation of glioblastoma cells in vitro The contrast photomicrographs showed that the glioblastoma cell lines U87 and U251 were prone to differentiate after 2 days of rAAV-BMPR-IB infection. Conversely, BMPR-IB knock-down inhibited the outgrowth of neurites in SF763 cells (Figure 4A). Immunofluorescence analysis showed that BMPR-IB infection increased the expression of GFAP protein, which is a recognized
marker of astrocytic differentiation, whereas BMPR-IB knock-down decreased the expression of GFAP protein (Figure 4A). Further investigation using western blot analysis showed that BMPR-IB overexpression increased the expression of GFAP protein and inhibited the expression of Nestin, which is a marker of CNS precursor cells. In addition, BMPR-IB knock-down decreased the expression of GFAP protein and increased the expression of Nestin protein (Figure 4B). Figure 4 Induction of differentiation by BMPR-IB in human glioma cell lines. (A) After infection and transfection with rAAV-BMPR-IB and si-BMPR-IB, the expression of GFAP of glioblastoma cells was detected by immunofluorescence (left), and the morphological alterations were examined by phase contrast microscope (right). (B) WB analysis showed that BMPR-IB infection induced the expression of endogenous GFAP and inhibited the expression of Nestin, whereas BMPR-IB knock-down decreased the expression of GFAP and increased the expression of Nestin.
In an in vivo situation, we can expect such dead cells to be cleared rapidly by the host immune system.
A non-replicating, genetically modified filamentous phage that kills cells efficiently with minimal release of endotoxin has been reported [13]. A higher survival rate correlated with a reduced inflammatory response in infected mice treated with genetically modified phage [14]. A phage genetically engineered to produce an enzyme that degrades extracellular polymeric substances and disperses biofilms has also been reported [15]. Although temperate phages present the problem of lysogeny and the associated risk of transfer of virulence factors through bacterial DNA transduction, we have used a temperate phage as a model for this study because the prophage status simplifies genetic manipulation. Because S. aureus strains are known to harbor multiple prophages, which could potentially interfere with recombination and engineering events, we elected to lysogenize
phage P954 in a prophage-free host, S. aureus RN4220. Our strategy was to identify lysogens that harbored the recombinant endolysin-deficient phages, based on detection of phage P954 genes and the cat marker gene by PCR analysis (Figure 1). In the recombination experiment, the 96 chloramphenicol-resistant colonies obtained represented recombinant endolysin-inactivated prophages, some of which lysed upon Mitomycin C induction. We suspected that the parent phage could also have lysogenized along with the recombinant phage. We overcame the problem by repeating
the induction of chloramphenicol resistant lysogens and lysogenization of the phages produced. When we assessed the prophage induction pattern and phage progeny release of parent and endolysin-deficient phage P954 lysogens, we found that the absorbance of the culture remained unaltered and the extracellular phage titer was minimal with the recombinant phage lysogen. We observed a low phage titer 3 to 4 hours after induction, presumably due to natural disintegration and lysis of a small percentage of the cell population. In contrast, we observed lysis of the culture by the parent phage with increasing phage titer in the lysate, as expected (Figure 2). Complementation of the lysis-deficient phenotype was achieved using a heterologous phage P926 from our collection. Supplying the endolysin gene in trans allowed the recombinant phage to form plaques (Figure 3b, d). This was used to determine titers of the endolysin-deficient phage throughout our study, and provided an excellent method for efficient phage enrichment. Use of a heterologous phage endolysin enabled the recombinant phage to exhibit the lysis-deficient phenotype even after several rounds of multiplication. In vitro activity of the endolysin-deficient phage against MSSA and MRSA was comparable to that of the parent phage (Figure 4). Further, the recombinant phage was able to rescue mice from fatal MRSA infection (Figure 5), similar to the parent phage (data not shown).
Iron status and associations with physical performance during basic combat training in female New Zealand Army recruits
Nicola M. Martin, Cathryn A. Conlon, Rebecca J. M. Smeele, Owen A. R. Mugridge, Pamela R. von Hurst, James P. McClung, Kathryn L. Beck
Journal: British Journal of Nutrition / Volume 121 / Issue 8 / 28 April 2019
Print publication: 28 April 2019
Decreases in Fe status have been reported in military women during initial training periods of 8–10 weeks. The present study aimed to characterise Fe status and associations with physical performance in female New Zealand Army recruits during a 16-week basic combat training (BCT) course. Fe status indicators – Hb, serum ferritin (sFer), soluble transferrin receptor (sTfR), transferrin saturation (TS) and erythrocyte distribution width (RDW) – were assessed at the beginning (baseline) and end of BCT in seventy-six volunteers without Fe-deficiency non-anaemia (sFer <12 µg/l; Hb ≥120 g/l) or Fe-deficiency anaemia (sFer <12 µg/l; Hb <120 g/l) at baseline or a C-reactive protein >10 mg/l at baseline or end. A timed 2·4 km run followed by maximum press-ups were performed at baseline and midpoint (week 8) to assess physical performance. Changes in Fe status were investigated using paired t tests and associations between Fe status and physical performance evaluated using Pearson correlation coefficients. sFer (56·6 (sd 33·7) v. 38·4 (sd 23·8) µg/l) and TS (38·8 (sd 13·9) v. 34·4 (sd 11·5) %) decreased (P<0·001 and P=0·014, respectively), while sTfR (1·21 (sd 0·27) v. 1·39 (sd 0·35) mg/l) and RDW (12·8 (sd 0·6) v. 13·2 (sd 0·7) %) increased (P<0·001) from baseline to end. Hb (140·6 (sd 7·5) v. 142·9 (sd 7·9) g/l) increased (P=0·009) during BCT. At end, sTfR was positively (r 0·29, P=0·012) and TS inversely associated (r –0·32, P=0·005) with midpoint run time. There were no significant correlations between Fe status and press-ups. Storage and functional Fe parameters indicated a decline in Fe status in female recruits during BCT. Correlations between tissue-Fe indicators and run times suggest impaired aerobic fitness. Optimal Fe status appears paramount for enabling success in female recruits during military training.
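As a rough illustration of the statistics described in this abstract (paired t tests for the change in Fe status indicators and Pearson correlations between Fe status and run time), a minimal Python/SciPy sketch is given below. The data are simulated and the variable names are invented; none of this comes from the study itself.

```python
import numpy as np
from scipy import stats

# Simulated stand-in data (n = 76 recruits); values are invented, not the study's.
rng = np.random.default_rng(0)
n = 76
sfer_baseline = rng.normal(56.6, 33.7, n).clip(5, None)           # serum ferritin, week 0
sfer_end = (sfer_baseline - rng.normal(18, 15, n)).clip(5, None)   # serum ferritin, week 16
stfr_end = rng.normal(1.39, 0.35, n)                                # sTfR, week 16
run_time_mid = 700 + 60 * (stfr_end - stfr_end.mean()) + rng.normal(0, 40, n)  # 2.4 km run, s

t, p = stats.ttest_rel(sfer_baseline, sfer_end)    # paired t test for the change over BCT
r, p_r = stats.pearsonr(stfr_end, run_time_mid)    # association with midpoint run time
print(f"paired t = {t:.2f} (p = {p:.3g}); Pearson r = {r:.2f} (p = {p_r:.3g})")
```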
Subjective memory complaints predict baseline but not future cognitive function over three years: results from the Western Australia Memory Study
Hamid R. Sohrabi, Michael Weinborn, Christoph Laske, Kristyn A. Bates, Daniel Christensen, Kevin Taddei, Stephanie R. Rainey-Smith, Belinda M. Brown, Samantha L. Gardener, Simon M. Laws, Georgia Martins, Samantha C. Burnham, Romola S. Bucks, Barry Reisberg, Nicola T. Lautenschlager, Jonathan Foster, Ralph N. Martins
Journal: International Psychogeriatrics / Volume 31 / Issue 4 / April 2019
Published online by Cambridge University Press: 02 October 2018, pp. 513-525
Print publication: April 2019
This study investigated the characteristics of subjective memory complaints (SMCs) and their association with current and future cognitive functions.
A cohort of 209 community-dwelling individuals without dementia aged 47–90 years old was recruited for this 3-year study. Participants underwent neuropsychological and clinical assessments annually. Participants were divided into SMCs and non-memory complainers (NMCs) using a single question at baseline and a memory complaints questionnaire following baseline, to evaluate differential patterns of complaints. In addition, comprehensive assessment of memory complaints was undertaken to evaluate whether severity and consistency of complaints differentially predicted cognitive function.
SMC and NMC individuals were significantly different on various features of SMCs. Greater overall severity (but not consistency) of complaints was significantly associated with current and future cognitive functioning.
SMC individuals present distinctive features of memory complaints as compared to NMCs. Further, the severity of complaints was a significant predictor of future cognition. However, SMC did not significantly predict change over time in this sample. These findings warrant further research into the specific features of SMCs that may portend subsequent neuropathological and cognitive changes when screening individuals at increased future risk of dementia.
Persistence of anxiety symptoms after elective caesarean delivery
Anna B. Janssen, Katrina A. Savory, Samantha M. Garay, Lorna Sumption, William Watkins, Isabel Garcia-Martin, Nicola A. Savory, Anouk Ridgway, Anthony R. Isles, Richard Penketh, Ian R. Jones, Rosalind M. John
Journal: BJPsych Open / Volume 4 / Issue 5 / September 2018
Published online by Cambridge University Press: 17 August 2018, pp. 354-360
In the UK, 11.8% of expectant mothers undergo an elective caesarean section (ELCS) representing 92 000 births per annum. It is not known to what extent this procedure has an impact on mental well-being in the longer term.
To determine the prevalence and postpartum progression of anxiety and depression symptoms in women undergoing ELCS in Wales.
Prevalence of depression and anxiety were determined in women at University Hospital Wales (2015–16; n = 308) through completion of the Edinburgh Postnatal Depression Scale (EPDS; ≥13) and State-Trait Anxiety Inventory (STAI; ≥40) questionnaires 1 day prior to ELCS, and three postpartum time points for 1 year. Maternal characteristics were determined from questionnaires and, where possible, confirmed from National Health Service maternity records.
Using these criteria the prevalence of reported depression symptoms was 14.3% (95% CI 10.9–18.3) 1 day prior to ELCS, 8.0% (95% CI 4.2–12.5) within 1 week, 8.7% (95% CI 4.2–13.8) at 10 weeks and 12.4% (95% CI 6.4–18.4) 1 year postpartum. Prevalence of reported anxiety symptoms was 27.3% (95% CI 22.5–32.4), 21.7% (95% CI 15.8–28.0), 25.3% (95% CI 18.5–32.7) and 35.1% (95% CI 26.3–44.2) at these same stages. Prenatal anxiety was not resolved after ELCS more than 1 year after delivery.
Women undergoing ELCS experience prolonged anxiety postpartum that merits focused clinical attention.
Declaration of interest
Alterations in dorsal and ventral posterior cingulate connectivity in APOE ε4 carriers at risk of Alzheimer's disease
Rebecca Kerestes, Pramit M. Phal, Chris Steward, Bradford A. Moffat, Simon Salinas, Kay L. Cox, Kathryn A. Ellis, Elizabeth V. Cyarto, David Ames, Ralph N. Martins, Colin L. Masters, Christopher C. Rowe, Matthew J. Sharman, Olivier Salvado, Cassandra Szoeke, Michelle Lai, Nicola T. Lautenschlager, Patricia M. Desmond
Journal: BJPsych Open / Volume 1 / Issue 2 / October 2015
Published online by Cambridge University Press: 02 January 2018, pp. 139-148
Recent evidence suggests that exercise plays a role in cognition and that the posterior cingulate cortex (PCC) can be divided into dorsal and ventral subregions based on distinct connectivity patterns.
To examine the effect of physical activity and division of the PCC on brain functional connectivity measures in subjective memory complainers (SMC) carrying the epsilon 4 allele of apolipoprotein E (APOE ε4).
Participants were 22 SMC carrying the APOE ɛ4 allele (ɛ4+; mean age 72.18 years) and 58 SMC non-carriers (ɛ4–; mean age 72.79 years). Connectivity of four dorsal and ventral seeds was examined. Relationships between PCC connectivity and physical activity measures were explored.
ɛ4+ individuals showed increased connectivity between the dorsal PCC and dorsolateral prefrontal cortex, and the ventral PCC and supplementary motor area (SMA). Greater levels of physical activity correlated with the magnitude of ventral PCC–SMA connectivity.
The results provide the first evidence that ɛ4+ individuals at increased risk of cognitive decline show distinct alterations in dorsal and ventral PCC functional connectivity.
The Euclid Data Processing Challenges
Pierre Dubath, Nikolaos Apostolakos, Andrea Bonchi, Andrey Belikov, Massimo Brescia, Stefano Cavuoti, Peter Capak, Jean Coupon, Christophe Dabin, Hubert Degaudenzi, Shantanu Desai, Florian Dubath, Adriano Fontana, Sotiria Fotopoulou, Marco Frailis, Audrey Galametz, John Hoar, Mark Holliman, Ben Hoyle, Patrick Hudelot, Olivier Ilbert, Martin Kuemmel, Martin Melchior, Yannick Mellier, Joe Mohr, Nicolas Morisset, Stéphane Paltani, Roser Pello, Stefano Pilo, Gianluca Polenta, Maurice Poncet, Roberto Saglia, Mara Salvato, Marc Sauvage, Marc Schefer, Santiago Serrano, Marco Soldati, Andrea Tramacere, Rees Williams, Andrea Zacchei
Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S325 / October 2016
Published online by Cambridge University Press: 30 May 2017, pp. 73-82
Euclid is a Europe-led cosmology space mission dedicated to a visible and near infrared survey of the entire extra-galactic sky. Its purpose is to deepen our knowledge of the dark content of our Universe. After an overview of the Euclid mission and science, this contribution describes how the community is getting organized to face the data analysis challenges, both in software development and in operational data processing matters. It ends with a more specific account of some of the main contributions of the Swiss Science Data Center (SDC-CH).
Influence of laser polarization on collective electron dynamics in ultraintense laser–foil interactions
HEDP and HPL 2016
Bruno Gonzalez-Izquierdo, Ross J. Gray, Martin King, Robbie Wilson, Rachel J. Dance, Haydn Powell, David A. MacLellan, John McCreadie, Nicholas M. H. Butler, Steve Hawkes, James S. Green, Chris D. Murphy, Luca C. Stockhausen, David C. Carroll, Nicola Booth, Graeme G. Scott, Marco Borghesi, David Neely, Paul McKenna
Journal: High Power Laser Science and Engineering / Volume 4 / 2016
Published online by Cambridge University Press: 27 September 2016, e33
The collective response of electrons in an ultrathin foil target irradiated by an ultraintense ( ${\sim}6\times 10^{20}~\text{W}~\text{cm}^{-2}$ ) laser pulse is investigated experimentally and via 3D particle-in-cell simulations. It is shown that if the target is sufficiently thin that the laser induces significant radiation pressure, but not thin enough to become relativistically transparent to the laser light, the resulting relativistic electron beam is elliptical, with the major axis of the ellipse directed along the laser polarization axis. When the target thickness is decreased such that it becomes relativistically transparent early in the interaction with the laser pulse, diffraction of the transmitted laser light occurs through a so called 'relativistic plasma aperture', inducing structure in the spatial-intensity profile of the beam of energetic electrons. It is shown that the electron beam profile can be modified by variation of the target thickness and degree of ellipticity in the laser polarization.
The dynamics of Andromeda's dwarf galaxies and stellar streams
Michelle L. M. Collins, R. Michael Rich, Rodrigo Ibata, Nicolas Martin, Janet Preston, The PAndAS collaboration
Journal: Proceedings of the International Astronomical Union / Volume 11 / Issue S321 / March 2016
Published online by Cambridge University Press: 21 March 2017, pp. 16-18
As part of the Z-PAndAS Keck II DEIMOS survey of resolved stars in our neighboring galaxy, Andromeda (M31), we have built up a unique data set of measured velocities and chemistries for thousands of stars in the Andromeda stellar halo, particularly probing its rich and complex substructure. In this contribution, we will discuss the structural, dynamical and chemical properties of Andromeda's dwarf spheroidal galaxies, and how there is no observational evidence for a difference in the evolutionary histories of those found on and off M31's vast plane of satellites. We will also discuss a possible extension to the most significant merger event in M31 - the Giant Southern Stream - and how we can use this feature to refine our understanding of M31's mass profile, and its complex evolution.
Autobiographical narratives relate to Alzheimer's disease biomarkers in older adults
Rachel F. Buckley, Michael M. Saling, Muireann Irish, David Ames, Christopher C. Rowe, Victor L. Villemagne, Nicola T. Lautenschlager, Paul Maruff, S. Lance Macaulay, Ralph N. Martins, Cassandra Szoeke, Colin L. Masters, Stephanie R. Rainey-Smith, Alan Rembach, Greg Savage, Kathryn A. Ellis
Journal: International Psychogeriatrics / Volume 26 / Issue 10 / October 2014
Published online by Cambridge University Press: 27 June 2014, pp. 1737-1746
Autobiographical memory (ABM), personal semantic memory (PSM), and autonoetic consciousness are affected in individuals with mild cognitive impairment (MCI) but their relationship with Alzheimer's disease (AD) biomarkers are unclear.
Forty-five participants (healthy controls (HC) = 31, MCI = 14) completed the Episodic ABM Interview and a battery of memory tests. Thirty-one (HC = 22, MCI = 9) underwent β-amyloid positron emission tomography (PET) and magnetic resonance (MR) imaging. Fourteen participants (HC = 9, MCI = 5) underwent one imaging modality.
Unlike PSM, ABM differentiated between diagnostic categories but did not relate to AD biomarkers. Personal semantic memory was related to neocortical β-amyloid burden after adjusting for age and apolipoprotein E (APOE) ɛ4. Autonoetic consciousness was not associated with AD biomarkers, and was not impaired in MCI.
Autobiographical memory was impaired in MCI participants but was not related to neocortical amyloid burden, suggesting that personal memory systems are impacted by differing disease mechanisms, rather than being uniformly underpinned by β-amyloid. Episodic and semantic ABM impairment represent an important AD prodrome.
PART TWO - POWER, POLITICS AND PARTICIPATION
Clare Ballard, Ahmed Bawa, Aninka Claassens, John G.I. Clarke, Scarlett Cornelissen, Miriam Paola, Keith Gottschalk, Ran Greenstein, Bridget Kenny, Gilbert M. Khadiagala, Ian Macun, Xolela Mangcu, Zethu Matebeni, Boitumelo Matlala, Dale T. McKinley, Mopeli L. Moshoeshoe, Sarah Mosoetsa, Prishani Naidoo, Devan Pillay, Nicolas Pons-Vignon, Martin Prew, Roger Southall, Justin Merwe, Jeremy Wakeford
Book: New South African Review
Published by: Wits University Press
Print publication: 31 March 2014, pp 109-109
PART THREE - PUBLIC POLICY AND SOCIAL PRACTICE
New South African Review
A fragile democracy – Twenty years on
Print publication: 31 March 2014
The death of Nelson Mandela on 5 December 2013 was in a sense a wake-up call for South Africans, and a time to reflect on what has been achieved since 'those magnificent days in late April 1994' (as the editors of this volume put it) 'when South Africans of all colours voted for the first time in a democratic election'. In a time of recall and reflection it is important to take account, not only of the dramatic events that grip the headlines, but also of other signposts that indicate the shape and characteristics of a society. The New South African Review looks, every year, at some of these signposts, and the essays in this fourth volume of the series again examine and analyse a broad spectrum of issues affecting the country. They tackle topics as diverse as the state of organised labour; food retailing; electricity generation; access to information; civil courage; the school system; and – looking outside the country to its place in the world – South Africa's relationships with north-east Asia, with Israel and with its neighbours in the southern African region. Taken together, these essays give a multidimensional perspective on South Africa's democracy as it turns twenty, and will be of interest to general readers while being particularly useful to students and researchers.
PART FOUR - SOUTH AFRICA AT LARGE
Print publication: 31 March 2014, pp iv-vii
Frontmatter
Print publication: 31 March 2014, pp i-iii
PART ONE - ECOLOGY, ECONOMY AND LABOUR
Print publication: 31 March 2014, pp 17-17
Print publication: 31 March 2014, pp viii-ix
A Fragile Democracy – Twenty Years On, the fourth New South African Review, is one of doubtless numerous attempts to characterise the state of South Africa some two decades after those magnificent days in late April 1994 when South Africans of all colours voted for the first time in a democratic election. As we write this, we are approaching the country's fourth such election, a significant indicator of the overall success of our democratic transition – for although there may prove to be wrinkles there is every expectation that the forthcoming contest will again be 'free and fair'. Nonetheless, there are likely to be changes in the electoral landscape, there being significant prospect at time of writing that the ruling African National Congress's (ANC's) proportion of the vote will fall below 60 per cent, the level of electoral dominance it has consistently achieved hitherto. While the ANC can claim many triumphs, and can convincingly claim to have transformed South Africa for the better (materially and spiritually), there is nonetheless widespread discontent abroad. The ANC itself displays many divisions. The Tripartite Alliance (which links it to the South African Communist Party (SACP) and the Congress of South African Trade Unions (Cosatu)), is creaking; it is threatened by new opposition parties which appeal to disaffection – especially among the poor and those who feel excluded from the benefits of democracy – and even the established opposition party, the Democratic Alliance (DA) today seeks to cloak itself in the mantle of Mandela. Even while the ANC boasts about steady growth, more jobs, improved service delivery and better standards of living for the majority, critics point out that the economy is stagnating, unemployment remains stubbornly high, corruption flourishes, popular protest abounds, and government and many public services (notably the intelligence agencies and the police) have earned an alarming reputation for unaccountability. So we could go on – but we won't, as we would rather encourage our readers to engage with the wide-ranging set of original essays provided by our authors.
Is entanglement *not* intrinsic to state, but dependent on division into subsystems? (Susskind QM)
I'm working through Susskind's "Quantum Mechanics" book (TTM series), which I quite like.
In Lecture 7 (Chapter 7), he studies a 2-spin system. A single spin has eigenvectors:
$$|u\rangle=\begin{pmatrix}1\\0\end{pmatrix},~~ |d\rangle=\begin{pmatrix}0\\1\end{pmatrix}$$
and then a 2-spin state has eigenvectors:
$$|uu\rangle=\begin{pmatrix}1\\0\\0\\0\end{pmatrix},~~ |ud\rangle=\begin{pmatrix}0\\1\\0\\0\end{pmatrix},~~ |du\rangle=\begin{pmatrix}0\\0\\1\\0\end{pmatrix},~~ |dd\rangle=\begin{pmatrix}0\\0\\0\\1\end{pmatrix}$$
Alice studies the first with an operator $\sigma$ and Bob the second with an operator $\tau$ (these are really product operators of single-spin $\sigma_z$ with the identity $I$: $\sigma_z\bigotimes I$ and $I\bigotimes\sigma_z$):
$$\sigma = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} ~~~ \tau = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$$
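As a quick numerical check of these two composite operators (a sketch, not part of the book), the tensor products can be built explicitly with NumPy's Kronecker product:

```python
import numpy as np

I = np.eye(2)
sigma_z = np.array([[1, 0],
                    [0, -1]])

sigma = np.kron(sigma_z, I)   # acts on Alice's (first) spin
tau = np.kron(I, sigma_z)     # acts on Bob's (second) spin

print(np.diag(sigma))   # diag(1, 1, -1, -1), matching the matrix above
print(np.diag(tau))     # diag(1, -1, 1, -1)
```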
Now for the interesting stuff.
We can have a product state where the two spins ("subsystems") are independent (no entanglement):
$$\psi ~=~ (a_1|u\rangle+a_2|d\rangle)\bigotimes(b_1|u\rangle+b_2|d\rangle)$$
$$~~~=~a_1b_1|uu\rangle+a_1b_2|ud\rangle+a_2b_1|du\rangle+a_2b_2|dd\rangle~~~(1)$$
where the $a_i$ and $b_i$ are separately normalized to $1$ so that if we calculate the expectation for either spin the other does not factor in at all. For example $\langle\psi|\sigma|\psi\rangle=a_1^2-a_2^2$ with no appearance of the $b_i$.
Then Susskind says that most randomly chosen coefficients of the $|uu\rangle...$ (normalized) will not factorize as in $(1)$. Then they are entangled. And an example of a maximally entangled state is the singlet state:
$$|S\rangle=\frac{1}{\sqrt 2}(|ud\rangle-|du\rangle)$$
Now $\langle S|\sigma|S\rangle=0$ so you have zero information about the individual spins. However, you have information about correlated measurements, because $\langle S|\tau\sigma|S\rangle=-1$ where by matrix multiplication
$$ \tau_z\sigma_z = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} $$
Susskind then discusses how you can test whether a state is entangled or not (and how entangled it is) by computing the correlation of operators $A$ and $B$, or checking the eigenvalues of single-state density matrices ($\rho_{2\times 2}$), which should be $\{1,0,0,0...\}$, or checking if the state coefficients $\{0,\frac{1}{\sqrt 2},-\frac{1}{\sqrt 2},0\}$ can factorize as in $(1)$ (they can't).
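For concreteness, the density-matrix test can be checked numerically; the following minimal NumPy sketch (illustrative only) computes the eigenvalues of the reduced density matrix for the singlet and for a product state:

```python
import numpy as np

def reduced_rho_A(psi):
    """Partial trace over the second spin of a two-spin pure state."""
    m = psi.reshape(2, 2)          # index order: (spin A, spin B)
    return m @ m.conj().T

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)            # (|ud> - |du>)/sqrt(2)
product = np.kron([1, 0], [1/np.sqrt(2), 1/np.sqrt(2)])    # |u> x (|u>+|d>)/sqrt(2)

print(np.linalg.eigvalsh(reduced_rho_A(singlet)))   # [0.5, 0.5] -> entangled
print(np.linalg.eigvalsh(reduced_rho_A(product)))   # [0.,  1.]  -> product state
```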
Question (rewritten after helpful answers by tparker and Emilio Pisanty)
Aren't these entanglement tests all relative to the chosen 4x4 operators, $\sigma_z$ and $\tau_z$, which reflect a particular choice of dividing the state into subsystems?
Instead of a subdivision based on the two spins, we can subdivide based on $|S\rangle$ and the triplet states $|T_1\rangle=\frac{1}{\sqrt 2}(|ud\rangle+|du\rangle),~~|T_2\rangle=\frac{1}{\sqrt 2}(|uu\rangle+|dd\rangle)$ and $|T_3\rangle=\frac{1}{\sqrt 2}(|uu\rangle-|dd\rangle)$. Let's change basis with a similarity matrix $P=(|T_3\rangle~|T_2\rangle~|T_1\rangle~|S\rangle)$. In this new basis, $|S\rangle...|T_3\rangle$ are basis vectors and
$$ A=\tau_z\sigma_{z,new basis} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} =\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} \bigotimes \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} $$
$$ B=\tau_y\sigma_{y,new basis} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} =\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \bigotimes \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} $$
We consider the new basis vectors as product vectors isomorphic to single spins which can each be in states labeled $|+\rangle$ and $|-\rangle$ (so as not to confuse with $|u\rangle$ and $|d\rangle$) and we get that
$$ |S\rangle = |{--}\rangle,~~~~ |T_1\rangle = |{-+}\rangle,~~~~ |T_2\rangle = |{+-}\rangle,~~~~ |T_3\rangle = |{++}\rangle $$
Since $A$ and $B$ are of the form of product operators, we can let them define a new subdivision of the full system. Each new subsystem no longer corresponds to an electron at a specific location, as in the original division. $A$ and $B$ can be thought of as operating on one label each ($A$ on the first + or -, $B$ on the second).
With this new subdivision, each of $|S\rangle...|T_3\rangle$ are not entangled.
Entanglement is in the eye of the beholder (4x4 operator, or subsystem division). Yes?
quantum-mechanics quantum-spin quantum-entanglement spinors density-operator
johndecker
So you are proposing expanding the arbitrary state as a superposition of the singlet and triplet states instead of the 2-spin states? I guess you would need some measuring device that only determines the singlet/triplet state without measuring any individual spins?
– BioPhysicist
Yes. I'm sure there are practical issues building equipment. But at an abstract level, QM allows us to create any observables, and define operators in their eigenvector basis. Entanglement is not an intrinsic property of a state. Entanglement usually is discussed with the "subsystems" being chosen as two spins that are physically removed, because that's what happens in the interesting EPR/Bell case. I'm thinking: a state is just a state, and whether or not it appears entangled depends on how you interact with it (observe it, operate on it).
– johndecker
So to word it a little differently, you are proposing it should be possible to take a state where one superposition does not have sets of coefficients whose squares sum to $1$, but if we were to express the same state as a superposition using a different basis, we would find that a partition of the coefficients exists such that the sum of squares within each partition does sum to $1$. If this is what you are saying, I think you are correct, but this is not my central area of study. Hopefully someone with more experience in this can weigh in.
@AaronStevens, essentially yes. (Though I'm not sure I'd call the parts superpositions; perhaps "factor substates.")
You have this phrase ... "then neither |S⟩ nor |T⟩ is entangled, but pure states." You seem to imply that an entangled state cannot be a pure state. This is WRONG.
– wcc
I think I understand your question, but I don't understand Aaron Stevens's comments at all, which you claim to be a valid rephrasing, so it's possible that I'm not actually understanding your question correctly. With that caveat:
Your basic idea is right, but your statements aren't quite mathematically precise enough to be completely correct. (For one thing, you're using the words "entangled" and "pure" as if they were mutually exclusive, but they're not - the maximally entangled state that you describe is both entangled and pure.) Yes, whether a state has internal entanglement does indeed depend on how you factor the Hilbert space into subsystems.
But you're missing a key point, which is that the Hilbert spaces for a composite system is a tensor product of the individual systems' Hilbert spaces, not a direct sum. The Hilbert space $\mathcal{H}_{AB} = \{ |uu\rangle, |ud\rangle, |du\rangle, |dd\rangle \}$ for a two-spin system is the tensor product $\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B$, where $\mathcal{H}_A$ and $\mathcal{H}_B$ are both isomorphic to the Hilbert space $\{ u, d \}$ for a single spin. So we can meaningfully talk about operator that only act on one subsystem. But the set of linear combinations of the $|S\rangle$ and $|T\rangle$ states forms the direct sum $\{|S\rangle\} \oplus \{|T\rangle\}$, so we can't think of the $|S\rangle$ and $|T\rangle$ states as subsystems that operators can act on independently.
Sometimes, a composite system's Hilbert space can be written as a tensor product in two inequivalent ways. This really does correspond to two different valid ways to divide the complete system into subsystems, and whether or not the subsystems are entangled can indeed depend on that division. (But this is not quite the same thing as basis dependence, because it turns out that the entanglement is independent of the basis one chooses for each subsystem. Once one chooses a division of the complete system into physical subsystems, then any change of basis within one subsystem will not affect the entanglement.)
We can't see this with your two-spin example, but we can see it if we consider a system of three spins $A$, $B$, and $C$, whose Hilbert space is $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C = \{ uuu, uud, udu, udd, duu, dud, ddu, ddd \}$. Consider the state $$\frac{1}{\sqrt{2}} (|u_A d_B\rangle - |d_A u_B\rangle) \otimes |u_C\rangle = \frac{1}{\sqrt{2}}(|udu\rangle - |duu\rangle).$$ In this state the spins A and B are maximally entangled, but the spin C is not entangled with either of them. One person might only have experimental access to operators that act on either (a) the A and B spins or (b) the C spin. This person would naturally consider the A and B spins together as comprising a single subsystem, and the C spin as comprising a separate subsystem. They would therefore naturally factor the Hilbert space as $\mathcal{H} = \mathcal{H}_{AB} \otimes \mathcal{H}_C$, and say that the state is not entangled. They would not observe any unusual correlations between spins in "separate subsystems".
But someone else might have experimental access to a different set of operators, which can only act on either (a) the A spin, or (b) the B and C spins. This second person would naturally consider the A spin as comprising a single subsystem, and the B and C spins together as comprising a separate subsystem. They would therefore naturally factor the Hilbert space as $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_{BC}$, and say that the state is entangled (in fact, maximally entangled). They would observe perfect correlations between (what they describe as) "separate subsystems".
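For concreteness, here is a minimal NumPy sketch of this three-spin example, computing the Schmidt coefficients of the state across the two cuts discussed (the state and bipartitions are as above; the code itself is only an illustration of the point):

```python
import numpy as np

# (|udu> - |duu>)/sqrt(2), with u=0, d=1 and basis index = 4*A + 2*B + C
psi = np.zeros(8)
psi[0b010] = 1 / np.sqrt(2)    # |u d u>
psi[0b100] = -1 / np.sqrt(2)   # |d u u>

t = psi.reshape(2, 2, 2)       # indices (A, B, C)

# Cut AB | C : no entanglement -> a single nonzero Schmidt coefficient
s_ABC = np.linalg.svd(t.reshape(4, 2), compute_uv=False)
# Cut A | BC : maximal entanglement -> two equal Schmidt coefficients
s_A_BC = np.linalg.svd(t.reshape(2, 4), compute_uv=False)

print(np.round(s_ABC, 3))    # [1. 0.]
print(np.round(s_A_BC, 3))   # [0.707 0.707]
```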
But again, once you specify a particular tensor factorization of your Hilbert space into fixed subsystems, then the entanglement between the subsystems is both basis- and observer-independent.
tparker
I like the answer, but I am feeling a little uneasy with the statement that observers have the freedom to factorize Hilbert space... In your example, what prevents the observers from completely factorizing to $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$? Can you provide an example for the set of observables that "forces" the observer to conclude with $\mathcal{H}_A \otimes \mathcal{H}_{BC}$?
@IamAStudent I think it all comes down to what the observer can measure. If they can only measure each spin separately, then it would be in their best interest to use the factorization you have suggested. If they can only measure A and B together or C by itself, then the first suggested factorization in the answer is more useful.
tparker, don't worry about my comments on the question. I think your paragraph starting with "Sometimes, a composite system's Hilbert space can be written as a tensor product in two inequivalent ways." is really what I was trying to get at, and I think this is the best part that sufficiently answers the OP's question.
@IAmAStudent Great question. There's a big subtlety that I swept under the rug in my answer - I was only considering bipartite entanglement within pure states. We could also consider multipartite entanglement or bipartite entanglement within mixed states (which, thanks to the purification theorem, are actually mathematically equivalent concepts). In this case the concept of entanglement becomes much more complicated and subtle. The observer is certainly free to factorize the Hilbert space further, but I didn't want to get into that story.
– tparker
@IAmAStudent Aaron Stevens' comment below yours is exactly correct. Mathematically, you're free to factor your Hilbert space in many different ways, which might formally differ on whether the state is entangled. But the physically natural way to do so is to group together subsystems which it's experimentally feasible to measure all at once (without losing quantum coherence).
Entanglement is in the eye of the beholder (4×4 operator, or subsystem division). Yes?
Yes, but that is a pretty useless observation.
The formal definition of an entangled state of a bipartite quantum system with state space $\mathcal H = \mathcal H_A\otimes \mathcal H_B$ is as follows:
a separable state is one whose density matrix can be separated as a sum of tensor products of individual density matrices, i.e. if $\rho\in \mathcal B(\mathcal H)$ is the density matrix of the system, $\rho$ is separable if and only if there exist density matrices $\rho_{A,i}\in \mathcal B(\mathcal H_A)$ and $\rho_{B,i}\in \mathcal B(\mathcal H_B)$ and weights $p_i\geq 0$ such that $$ \rho = \sum_i p_i \rho_{A,i}\otimes \rho_{B,i}. $$
an entangled state is any state that is not separable.
For clarity, entanglement is an intrinsic property of the state, together with the partition of the state space into tensor factors.
If you're willing to re-factorize your total state space into some other tensor-product factorization, then a state that's entangled in the $A$, $B$ bipartite scheme is indeed liable to be seen as separable in some alternative $A'$, $B'$ factorization.
However, if you're able to re-factorize your total state space in such a way, then that tells you that your initial split into parties wasn't very meaningful to begin with. In real-world scenarios, we use entanglement as a relevant concept for bipartite systems where the tensor-product factorization of the state space (i.e. the splitting of the system into the two "parties" alluded to in "bipartite") is fixed from the context and cannot be changed easily. If you see it used in a context where that's not the case (ahem) then any conclusions drawn from the entanglement are correspondingly weakened.
One useful way to see this is by noting that the theory of entanglement is, very often, best thought of as a resource theory. Resource theories are great ways to analyze situations where you have one class of operations which is easy to implement but which might be insufficient to achieve some pre-specified goal. Other good examples are thermodynamics (where the operations are energy-conserving processes, and the resource is entropy) and gaussianity (where the operations are linear optical operations); in entanglement, the class of free operations is that of Local Operations and Classical Communication, generally abbreviated as LOCC, and it is obviously tied strictly to a splitting of the system into parties which can operate 'locally' and which can communicate classically.
Resource theories, of course, are only useful when the resource they describe is actually valuable, and when their restricted operations are in fact hard to implement: just as the study of thermodynamics is pretty useless if you have a magical black box that can inject and remove energy from any part of your system at your command, the study of entanglement is pretty meaningless if you have free access to non-LOCC unitary operations that cut across the A-to-B split.
That doesn't mean that you can't talk about entanglement in such a situation, like e.g. the spins of two electrons which are in bound states in the same atom or molecule, but if the re-factorization is physically possible in anything like a reasonable sense, then the conclusions that stem from the presence of the entanglement will be correspondingly trivialized.
But more importantly, if you look at real-world usage, it is always of the form
this system is entangled with that system.
Under your re-factorization, the first part of that sentence to lose its meaning is not "entangled", it's "system".
(The answer below addresses a specific interpretation of v6 of the question, which was, frankly, much more interesting than the current version. I'm keeping it around because of that.)
What Susskind provides is known as an entanglement witness, and here you do get some amount of "eye of the beholder" behaviour. Generically, an entanglement witness is some operator $A$ such that its expectation value in state $\rho$, $\mathrm{Tr}(\rho A)$, will satisfy $$ \mathrm{Tr}(\rho A) \geq 0 \qquad \forall\text{ separable }\rho, $$ so that $$ \mathrm{Tr}(\rho A) < 0 \implies \rho\text{ is entangled}. $$ However, most entanglement witnesses are imperfect: that is, for any given entanglement witness $A$, there will typically be entangled states $\rho$ for which $\mathrm{Tr}(\rho A) \geq 0$, so that $A$ cannot detect the entanglement of that particular entangled state.
Nevertheless, for any given entangled state, there will always be at least one entanglement witness that can certify that it is entangled.
In other words, the definition of entanglement is independent of the operators used to detect its presence, but typically those operators will have a limited scope in which entangled states they can detect.
And if that makes it sound like entanglement is a tricky object to detect and characterize, then... yes, pretty much.
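As a minimal numerical illustration of such a witness (a toy sketch using the standard choice $W = \tfrac{1}{2}I - |S\rangle\langle S|$ for the two-spin singlet; the particular choice of witness here is mine, for illustration only):

```python
import numpy as np

S = np.array([0, 1, -1, 0]) / np.sqrt(2)      # singlet state (|ud> - |du>)/sqrt(2)
W = np.eye(4) / 2 - np.outer(S, S)            # witness: Tr(rho W) >= 0 for separable rho

rho_singlet = np.outer(S, S)
rho_product = np.zeros((4, 4)); rho_product[0, 0] = 1.0   # |uu><uu|, separable

print(np.trace(rho_singlet @ W))   # -0.5 -> entanglement detected
print(np.trace(rho_product @ W))   #  0.5 -> no conclusion, as expected for a separable state
```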
Emilio Pisanty
I think - although I am not positive - that the freedom to tensor-factorize the Hilbert space in different ways (with different resulting evaluations of entangled vs. not for the same state) was the core of the OP's question.
I think we're all trying to say that entanglement is defined relative to a factorization of the space. The question isn't that confused and it's simply asking whether or not unseparable states become separable when you change basis; the answer to that is "yes".
– DanielSank
@DanielSank Nitpick: when you change tensor factorization, not when you change "basis". A tensor factorization of a Hilbert space is a completely basis-independent concept. Choosing such a factorization makes certain bases more natural to work with (bases whose basis vectors are product states with respect to that factorization), but strictly speaking "tensor factorization" and "basis" are completely independent concepts.
I disagree with your claim that there is always an overwhelmingly natural/useful way to factorize the Hilbert space. It's true that a real-space factorization is often the most useful in practice, but sometimes the "particle-basis" factorization is useful instead - this is basically how "first quantization" works. Consider a wavefunction $\psi(x,y) = \phi_A(x) \phi_B(y)$ for two distinguishable particles $A$ and $B$. ...
Such a wavefunction is a product state with respect to the particle "basis" (see caveat above for why quotation marks), but is entangled in the position "basis" - if you measure $n = 0, 1, \text{ or } 2$ particles in the left-hand half of the system, then you instantly know that there are exactly $2 - n$ particles in the right-hand half of the system. Not as exciting as a violation of the Bell inequalities, but still counts as entanglement. Indeed the Born-Oppenheimer and Hartree-Fock approximations in DFT and quantum chemistry both consist of assuming no entanglement in the particle "basis"
doi: 10.3934/dcdsb.2021181
Dynamics of a stochastic HIV/AIDS model with treatment under regime switching
Miaomiao Gao 1, Daqing Jiang 2,3,4, Tasawar Hayat 4,5, Ahmed Alsaedi 4 and Bashir Ahmad 4
School of Mathematics and Statistics, Qingdao University, Qingdao 266071, China
College of Science, China University of Petroleum (East China), Qingdao 266580, China
Key Laboratory of Unconventional Oil and Gas Development, China University of Petroleum (East China), Ministry of Education, Qingdao 266580, China
Nonlinear Analysis and Applied Mathematics (NAAM) Research Group, Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
Department of Mathematics, Quaid-I-Azam University, Islamabad 44000, Pakistan
* Corresponding author: Daqing Jiang
Received April 2020 Revised April 2021 Early access July 2021
Fund Project: This work is supported by the National Natural Science Foundation of China under Grant No. 11871473 and Natural Science Foundation of Shandong Province under Grant No. ZR2019MA010
This paper focuses on the spread dynamics of an HIV/AIDS model with multiple stages of infection and treatment, which is disturbed by both white noise and telegraph noise. Switching between different environmental states is governed by a Markov chain. Firstly, we prove the existence and uniqueness of the global positive solution. Then we investigate the existence of a unique ergodic stationary distribution by constructing suitable Lyapunov functions with regime switching. Furthermore, sufficient conditions for extinction of the disease are derived. The conditions presented for the existence of a stationary distribution improve and generalize previous results. Finally, numerical examples are given to illustrate our theoretical results.
Keywords: Stochastic HIV/AIDS model, treatment, regime switching, stationary distribution, extinction.
Mathematics Subject Classification: Primary: 34E10, 34F05; Secondary: 37A50.
Citation: Miaomiao Gao, Daqing Jiang, Tasawar Hayat, Ahmed Alsaedi, Bashir Ahmad. Dynamics of a stochastic HIV/AIDS model with treatment under regime switching. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021181
S. Al-Sheikh, F. Musali and M. Alsolami, Stability analysis of an HIV/AIDS epidemic model with screening, Int. Math. Forum., 6 (2011), 3251-3273. Google Scholar
P. J. Birrell, A. M. Presanis and D. D. Angelis, Multi-state Models of HIV Progression in Homosexual Men: An Application to the CASCADE Collaboration, Technical report, MRC Biostatistics Unit, 2012. Google Scholar
L. Cai, S. Guo and S. Wang, Analysis of an extended HIV/AIDS epidemic model with treatment, Appl. Math. Comput., 236 (2014), 621-627. doi: 10.1016/j.amc.2014.02.078. Google Scholar
L. Cai, X. Li, M. Ghosh and B. Guo, Stability analysis of an HIV/AIDS epidemic model with treatment, J. Comput. Appl. Math., 229 (2009), 313-323. doi: 10.1016/j.cam.2008.10.067. Google Scholar
L. Cai and J. Wu, Analysis of an HIV/AIDS treatment model with a nonlinear incidence, Chaos Solitons Fractals, 41 (2009), 175-182. doi: 10.1016/j.chaos.2007.11.023. Google Scholar
Y. Cai, Y. Kang and W. Wang, A stochastic SIRS epidemic model with nonlinear incidence rate, Appl. Math. Comput., 305 (2017), 221-240. doi: 10.1016/j.amc.2017.02.003. Google Scholar
Collaborative Group on AIDS Incubation and HIV Survival including the CASCADE EU Concerted Action, Time from HIV-1 seroconversion to AIDS and death before widespread use of highly-active antiretroviral therapy: A collaborative re-analysis, Lancet., 355 (2000), 1131–1137. doi: 10.1016/S0140-6736(00)02061-4. Google Scholar
N. H. Dang, N. H. Du and G. Yin, Existence of stationary distributions for Kolmogorov systems of competitive type under telegraph noise, J. Differential Equations, 257 (2014), 2078-2101. doi: 10.1016/j.jde.2014.05.029. Google Scholar
T. Feng and Z. Qiu, Global analysis of a stochastic TB model with vaccination and treatment, Discrete Contin. Dyn. Syst. Ser. B, 24 (2019), 2923-2939. doi: 10.3934/dcdsb.2018292. Google Scholar
R. M. Granich, C. F. Gilks, C. Dye, K. M. D. Cock and B. G. Williams, Universal voluntary HIV testing with immediate antiretroviral therapy as a strategy for elimination of HIV transmission: A mathematical model, Lancet., 373 (2009), 48-57. doi: 10.1016/S0140-6736(08)61697-9. Google Scholar
A. Gray, D. Greenhalgh, L. Hu, X. Mao and J. Pan, A stochastic differential equation SIS epidemic model, SIAM J. Appl. Math., 71 (2011), 876-902. doi: 10.1137/10081856X. Google Scholar
D. J. Higham, An algorithmic introduction to numerical simulation of stochastic differential equations, SIAM Rev., 43 (2001), 525-546. doi: 10.1137/S0036144500378302. Google Scholar
T. D. Hollingsworth, R. M. Anderson and C. Fraser, HIV-1 transmission, by stage of infection, J. Infect. Dis., 198 (2008), 687-693. doi: 10.1086/590501. Google Scholar
S. D. Hove-Musekwa and F. Nyabadza, The dynamics of an HIV/AIDS model with screened disease carriers, Comput. Math. Methods Med., 10 (2009), 287-305. doi: 10.1080/17486700802653917. Google Scholar
H.-F. Huo, R. Chen and X.-Y. Wang, Modelling and stability of HIV/AIDS epidemic model with treatment, Appl. Math. Model., 40 (2016), 6550-6559. doi: 10.1016/j.apm.2016.01.054. Google Scholar
H.-F. Huo and L.-X. Feng, Global stability for an HIV/AIDS epidemic model with different latent stages and treatment, Appl. Math. Model., 37 (2013), 1480-1489. doi: 10.1016/j.apm.2012.04.013. Google Scholar
L. Imhof and S. Walcher, Exclusion and persistence in deterministic and stochastic chemostat models, J. Differential Equations, 217 (2005), 26-53. doi: 10.1016/j.jde.2005.06.017. Google Scholar
C. Ji, The threshold for a stochastic HIV-1 infection model with Beddington-DeAngelis incidence rate, Appl. Math. Model., 64 (2018), 168-184. doi: 10.1016/j.apm.2018.07.031. Google Scholar
J. Jia and G. Qin, Stability analysis of HIV/AIDS epidemic model with nonlinear incidence and treatment, Adv. Difference Equations, 2017 (2017), 136. doi: 10.1186/s13662-017-1175-5. Google Scholar
M. E. Kretzschmar, M. F. S. van der Loeff, P. J. Birrell, D. D. Angelis and R. A. Coutinho, Prospects of elimination of HIV with test-and-treat strategy, Proc. Natl. Acad. Sci., 110 (2013), 15538-15543. doi: 10.1073/pnas.1301801110. Google Scholar
H. Kunita, Itô's stochastic calculus: Its surprising power for applications, Stoch. Proc. Appl., 120 (2010), 622-652. doi: 10.1016/j.spa.2010.01.013. Google Scholar
J. A. Levy, Pathogenesis of human immunodeficiency virus infection, Microbiol. Rev., 57 (1993), 183-289. doi: 10.1128/mr.57.1.183-289.1993. Google Scholar
D. Li, S. Liu and J. Cui, Threshold dynamics and ergodicity of an SIRS epidemic model with Markovian switching, J. Differential Equations, 263 (2017), 8873-8915. doi: 10.1016/j.jde.2017.08.066. Google Scholar
X. Lin, H. W. Hethcote and P. van den Driessche, An epidemiological model for HIV/AIDS with proportional recruitment, Math. Biosci., 118 (1993), 181-195. doi: 10.1016/0025-5564(93)90051-B. Google Scholar
D. Liu and B. Wang, A novel time delayed HIV/AIDS model with vaccination and antiretroviral therapy and its stability analysis, Appl. Math. Model., 37 (2013), 4608-4625. doi: 10.1016/j.apm.2012.09.065. Google Scholar
H. Liu, X. Li and Q. Yang, The ergodic property and positive recurrence of a multi-group Lotka-Volterra mutualistic system with regime switching, Syst. Control Lett., 62 (2013), 805-810. doi: 10.1016/j.sysconle.2013.06.002. Google Scholar
Q. Liu, D. Jiang, T. Hayat and A. Alsaedi, Stationary distribution and extinction of a stochastic HIV-1 infection model with distributed delay and logistic growth, J. Nonlinear Sci., 30 (2020), 369-395. doi: 10.1007/s00332-019-09576-x. Google Scholar
Q. Liu, D. Jiang, T. Hayat and A. Alsaedi, Threshold behavior in a stochastic delayed SIS epidemic model with vaccination and double diseases, J. Franklin Inst., 356 (2019), 7466-7485. doi: 10.1016/j.jfranklin.2018.11.055. Google Scholar
Q. Liu, D. Jiang, T. Hayat, A. Alsaedi and B. Ahmad, Dynamics of a multigroup SIQS epidemic model under regime switching, Stoch. Anal. Appl., 38 (2020), 769-796. doi: 10.1080/07362994.2020.1722167. Google Scholar
X. Mao, G. Marion and E. Renshaw, Environmental Brownian noise suppresses explosion in population dynamics, Stoch. Process. Appl., 97 (2002), 95-110. doi: 10.1016/S0304-4149(01)00126-0. Google Scholar
X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, London, 2006. doi: 10.1142/p473. Google Scholar
R. M. May, Stability and Complexity in Model Ecosystems, Princeton University Press, NJ, 2001. doi: 10.1515/9780691206912. Google Scholar
X. Meng, S. Zhao, T. Feng and T. Zhang, Dynamics of a novel nonlinear stochastic SIS epidemic model with double epidemic hypothesis, J. Math. Anal. Appl., 433 (2015), 227-242. doi: 10.1016/j.jmaa.2015.07.056. Google Scholar
C. C. McCluskey, A model of HIV/AIDS with staged progression and amelioration, Math. Biosci., 181 (2003), 1-16. doi: 10.1016/S0025-5564(02)00149-9. Google Scholar
M. A. Nowak and R. M. May, Virus Dynamics, Mathematical Principles of Immunology and Virology, Oxford University, Oxford, 2000. Google Scholar
M. U. Nsuami and P. J. Witbooi, A model of HIV/AIDS population dynamics including ARV treatment and pre-exposure prophylaxis, Adv. Difference Equations, 2018 (2018), 11. doi: 10.1186/s13662-017-1458-x. Google Scholar
M. U. Nsuami and P. J. Witbooi, Stochastic dynamics of an HIV/AIDS epidemic model with treatment, Quaest. Math., 42 (2019), 605-621. doi: 10.2989/16073606.2018.1478908. Google Scholar
B. Øksendal, Stochastic Differential Equations: An Introduction with Applications, 6th edition, Springer-Verlag, Berlin Heidelberg, 2005. Google Scholar
O. M. Otunuga, Global stability for a $2n+1$ dimensional HIV/AIDS epidemic model with treatments, Math. Biosci., 299 (2018), 138-152. doi: 10.1016/j.mbs.2018.03.013. Google Scholar
S. Peng and X. Zhu, Necessary and sufficient condition for comparison theorem of 1-dimensional stochastic differential equations, Stochastic Process. Appl., 116 (2006), 370-380. doi: 10.1016/j.spa.2005.08.004. Google Scholar
K. Qi and D. Jiang, The impact of virus carrier screening and actively seeking treatment on dynamical behavior of a stochastic HIV/AIDS infection model, Appl. Math. Model., 85 (2020), 378-404. doi: 10.1016/j.apm.2020.03.027. Google Scholar
K. Qi and D. Jiang, Threshold behavior in a stochastic HTLV-I infection model with CTL immune response and regime switching, Math. Methods Appl. Sci., 41 (2018), 6866-6882. doi: 10.1002/mma.5198. Google Scholar
A. Rathinasamy, M. Chinnadurai and S. Athithan, Analysis of exact solution of stochastic sex-structured HIV/AIDS epidemic model with effect of screening of infectives, Math. Comput. Simulation, 179 (2021), 213-237. doi: 10.1016/j.matcom.2020.08.017. Google Scholar
A. Settati and A. Lahrouz, Stationary distribution of stochastic population systems under regime switching, Appl. Math. Comput., 244 (2014), 235-243. doi: 10.1016/j.amc.2014.07.012. Google Scholar
C. A. Stoddart and R. A. Reyes, Models of HIV-1 disease: A review of current status, Drug Discovery Today Dis. Models, 3 (2006), 113-119. doi: 10.1016/j.ddmod.2006.03.016. Google Scholar
The CASCADE Collaboration, Survival after introduction of HAART in people with known duration of HIV-1 infection, Lancet., 355 (2000), 1158-1159. doi: 10.1016/S0140-6736(00)02069-9. Google Scholar
T. D. Tuong, D. H. Nguyen, N. T. Dieu and K. Tran, Extinction and permanence in a stochastic SIRS model in regime-switching with general incidence rate, Nonlinear Anal. Hybrid Syst., 34 (2019), 121-130. doi: 10.1016/j.nahs.2019.05.008. Google Scholar
D. Wanduku, The stochastic extinction and stability conditions for nonlinear malaria epidemics, Math. Biosci. Eng., 16 (2019), 3771-3806. doi: 10.3934/mbe.2019187. Google Scholar
World Health Organization Data on the Size of the HIV/AIDS Epidemic, Available from: https://www.who.int/data/gho/data/themes/hiv-aids/GHO/hiv-aids. Google Scholar
X. Zhang and H. Peng, Stationary distribution of a stochastic cholera epidemic model with vaccination under regime switching, Appl. Math. Lett., 102 (2020), 106095. doi: 10.1016/j.aml.2019.106095. Google Scholar
Y. Zhao, D. Jiang, X. Mao and A. Gray, The threshold of a stochastic SIRS epidemic model in a population with varying size, Discrete Contin. Dyn. Syst. Ser. B, 20 (2015), 1277-1295. doi: 10.3934/dcdsb.2015.20.1277. Google Scholar
C. Zhu and G. Yin, Asymptotic properties of hybrid diffusion systems, SIAM J. Control Optim., 46 (2007), 1155-1179. doi: 10.1137/060649343. Google Scholar
Figure 1. The solution of subsystem with state 1. (Color figure online)
Figure 3. The pictures (a), (b) and (c) are the solution of system (3). The picture (d) is the corresponding Markov chain with $ \pi = (\frac{3}{5},\frac{2}{5}) $. (Color figure online)
Figure 7. The diagrams track the variation trends of $ S(t) $ and $ I_{k}(t),k = 1,2 $ with different transmission rate $ \lambda(m),m = 1,2 $. (Color figure online)
Figure 8. The diagrams track the variation trends of $ I_{k}(t) $ and $ T_{k}(t),k = 1,2 $ with different treatment rate $ \tau $. (Color figure online)
Table 1. List of the biological parameters
Parameter Definition Value Source
$\rho_{k}$ Transition rates per year from stage $k$ to stage $k+1$ for untreated individuals $\rho_{1}=1/0.271, \rho_{2}=1/8.31$ [7,2,13]
$\gamma_{k}$ Transition rates per year from stage $k$ to stage $k+1$ for treated individuals $\gamma_{1}=1/8.21, \gamma_{2}=1/54$ [2,46]
$\tau$ Rate per year of moving from the untreated to the treated population range 0-100$\%$ [20]
$\phi$ Rate of moving from the treated back to the untreated population range 0-100$\%$ [20]
$\epsilon$ Infectivity of individuals under treatment around 0.01 [10]
$h_{k}$ Infectivity of untreated individuals in stage $k$ of infection per year around 2.76 for $h_{1}$, around 0.106 for $h_{2}$ [13]
A signal reconstruction method of wireless sensor network based on compressed sensing
Shiyu Zhu1,2,
Shanxiong Chen1,
Xihua Peng1,
Hailing Xiong1 &
Sheng Wu1
EURASIP Journal on Wireless Communications and Networking volume 2020, Article number: 106 (2020) Cite this article
Compressed sensing (CS) is a new theory for sampling and recovering signals based on sparse transformation. This theory allows a complete signal to be acquired at low cost, and therefore suits settings where sampling bandwidth and capability are limited. However, a wireless sensor network is an open environment in which signals are easily affected by noise. Specifically, CS prescribes a form of sub-Nyquist sampling that effectively reduces the cost of data acquisition; however, this sampling is "imperfect", and the corresponding data are more sensitive to noise. Consequently, robust, noise-resistant reconstruction algorithms are urgently required to ensure the accuracy of signal reconstruction. In this article, we present a proximal gradient algorithm (PRG) to reconstruct sub-Nyquist-sampled signals in a noisy environment. The algorithm iteratively applies a straightforward shrinkage step to find the optimal solution of the constrained formulation and then restores the original signal. Finally, experiments show that PRG outperforms OMP, BP, and SP when the signal is corrupted by noise.
With the rapid development of Internet of Things (IoT) technology, more and more researchers are participating in this research field. As one of the supporting technologies of the IoT, wireless sensor networks have attracted a great deal of attention. An important application of wireless sensor networks is monitoring the temperature, humidity, and illumination of the environment. A wireless sensor network is usually composed of a large number of sensor nodes; each node collects a large amount of data, which then reaches the central node through multi-hop routing. This process consumes considerable storage space and energy. Because the computing, power, and storage capabilities of sensor nodes are limited, efficient models for data acquisition and transmission are needed to maximize sensor lifetime and reduce the cost of information acquisition. Thus, the main goal of data collection is to gather the most accurate data at the least cost. Traditional methods such as distributed source coding [1], cooperative wavelet transform [2], and data clustering [3] can be used to reduce data traffic. For example, to improve the efficiency of wireless data transmission and reduce energy consumption, Shih et al. studied the selection of the modulation method for wireless signal transmission and proposed a low-power coding mode for the physical layer [4]. Singh et al. proposed a low-energy signal sampling method for wireless networks to increase the lifetime of sampling nodes [5]. Other researchers established multi-step prediction models of sensor data in wireless sensor networks to reduce network traffic and correspondingly extend network lifetime [6,7,8]. Exploiting the temporal or spatial characteristics of wireless sensor network data, [9] uses the Fourier transform, discrete cosine transform, and wavelet transform to construct a sparse basis, generate a sparse representation of the signal, and then sample the sparse data. This can greatly reduce the time and space consumption of sampling. A sparse reconstruction algorithm can then achieve more accurate data reconstruction with lower energy consumption [10]. These methods exploit the spatial correlation of the detected data and compress and encode it, but they cannot handle abnormal event data effectively, and their computational complexity is high.
The theory of compressed sensing proposed in recent years provides a new way to acquire data in wireless sensor networks [11,12,13]. According to this theory, a sparse signal can be accurately reconstructed from fewer samples, and sampling can be performed by linearly projecting the detected data. This allows sensor nodes to acquire data in a compressed manner without additional computational overhead. Although wireless sensor networks are easy to construct, highly adaptable, and efficient in transmission, they face limitations in energy supply, sensor life cycle, delay, bandwidth, signal distortion, and transmission cost. Nodes in wireless sensor networks also require independent energy supplies, so energy consumption is an important factor determining the life cycle of sensor nodes. Combining compressed sensing theory with wireless sensor networks provides an effective way to address these problems [14] and to optimize the energy consumption of sensor nodes [15]. Compressed sensing enables sparse signals in wireless sensor networks to be accurately reconstructed from fewer samples [16]. In essence, compressed sensing provides an optimization-based method, under mathematical constraints, for recovering sparse information.
When combining compressed sensing theory with wireless sensor networks, the influence of noise on the signal in the wireless sensor network environment must be considered [17]. Compressed sampling projects the signal onto a sparse basis to obtain a sparse representation, then senses it with a measurement matrix to obtain the sampled values. In fact, this sampling does not collect signal information as completely as sampling under the Shannon-Nyquist theory; it is a form of undersampling [18]. This "incompleteness" of signal acquisition makes the sampled values more sensitive to noise than "complete" sampling. Reducing the impact of noise on this "incomplete" sampling is the key to applying compressed sensing theory effectively in wireless sensor networks.
In the experiment, 100 sensor nodes are randomly distributed in a 100 × 100 area, with the central node located at the center of the area. The target signals (sources) to be detected are randomly distributed in the region. The experiment assumes that each sensor node collects the signal over a period of time, sparsifies and compresses the samples, and then transmits them to the central node.
Further, we built a real wireless sensor network system composed of 30 temperature sensor nodes. Each node supports 802.11 in the 2.4 GHz band. The wireless sensor nodes are separated by 5 m, and the central node is replaced directly by a PC. A stable heat source was randomly placed in the experiment, and its temperature was measured. Since hardware-based sensing matrix designs are still imperfect, we added a module to each sensing node that implements the sparsification and compressed sampling in software; the compressed temperature data are then transmitted to the central node, where the temperature signal is reconstructed.
The main contributions of this paper are as follows:
The multi-path channel transmission model and compressed sampling model of the wireless sensor network are given, and the mathematical representation of the sampling matrix of the sensor network is given. The Restricted Isometry Property (RIP) is also demonstrated.
In view of the fact that wireless sensor networks are susceptible to noise interference, a noise reduction algorithm for compressed sensing restoration is proposed. In this algorithm, the approximate gradient iteration method is adopted, and the convex optimization problem of signal recovery is solved step by step to approach the optimal solution, so that the signal can be reconstructed perfectly. The experimental results show that the algorithm has good robustness and reconstruction accuracy in noisy environment.
Through experiments, we analyze the excellent performance of our proposed sensor signal reconstruction algorithm compared to other algorithms under the number of iterations and noise interference. Furthermore, a temperature sensing wireless sensor network environment is constructed. The test results show that our method has higher reconstruction accuracy.
The rest of this paper is arranged as follows: in Section 2, we briefly discuss the basic theory of compressed sensing and the restricted isometry property; in Section 3, we introduce the working structure of wireless sensor networks, give the multi-path channel transmission model of the sensor network, and demonstrate the construction of the compressed sampling matrix of a wireless sensor network and its compliance with the RIP; in Section 4, we discuss the reconstruction of the sensed signal and propose an approximate gradient descent algorithm for signal reconstruction in a noisy environment; in Section 5, the specific process of signal acquisition and reconstruction in a wireless sensor network based on compressed sensing is introduced in detail; in Section 6, the experimental environment used for performance analysis is introduced and the experimental results are discussed; and in Section 7, the paper is summarized and future research is outlined.
Basic theory of compressed sensing
If a discrete signal has only k non-zero elements, the signal is considered to be k sparse. Considering a non-sparse discrete signal U, the sparse or near sparse representation of the signal can be obtained under an appropriate sparse basis Ψ ∈ RN × L:
$$ \mathrm{U}=\Psi x $$
U is an N-dimensional signal, Ψ ∈ RN × L is the sparse basis matrix of signal U, and x ∈ RL × 1 is the sparse or near-sparse representation of signal U. Under the theory of compressed sensing, the process of sampling a discrete signal can be described as follows: projecting a signal U of length N onto the sensing matrix Φ, i.e. onto its rows {Φi, i = 1, 2, …, M}, yields the compressed samples of the signal. Its expression is \( {y}_i={\Phi}_i^Tu \), i = 1, 2, ..., M, where M is the number of samples taken of the signal. To improve sampling efficiency, the number of samples should be as small as possible, usually M < N. Therefore, the length of y is less than the length of u, which is why this is called compressed sensing. Unlike the traditional process of data collection, compression, transmission, and decompression, compressed sensing theory does not need to acquire complete signals or high-resolution images; it collects only the information that best represents the data characteristics, which greatly saves storage space and reduces transmission cost. The biggest difference between compressed sensing and traditional data sampling is that compressed sensing performs the compression during data collection and reconstructs the signal later when it is used, whereas the traditional method first collects complete data and then compresses it for storage and transmission [19]. Therefore, compressed sensing is an under-acquisition method that can acquire information at a rate lower than the Nyquist rate. The mathematical model of compressed sensing is expressed as follows:
For signals U ∈ RN × 1, find a linear measurement matrix Φ ∈ RM × N (m < n) and perform projection operation.
$$ y=\Phi u $$
Here, \( \Phi =\left[\begin{array}{l}{\Phi}_1^T\\ {}{\Phi}_2^T\\ {}\dots \\ {}{\Phi}_M^T\end{array}\right],\kern0.5em u=\left[\begin{array}{l}{u}_1\\ {}{u}_2\\ {}\dots \\ {}{u}_N\end{array}\right],\kern1em \mathrm{and}\kern1em y=\left[\begin{array}{l}{y}_1\\ {}{y}_2\\ {}\dots \\ {}{y}_M\end{array}\right] \), where y is the collected signal. The key problem now is to recover u from the signal y. Because Φ is not a square matrix (M < N), this involves solving an underdetermined system of equations, which admits many solutions for u. The theory of compressed sensing shows that, under certain conditions, u has a unique solution, and this unique solution can be reconstructed from the compressed samples y by a recovery algorithm. To illustrate the projection of the signal onto the measurement matrix, we use a numerical calculation: Φ is the measurement matrix (Table 1), u is the original signal (Table 2), and y is the signal sampled according to formula (2) (Table 3).
Table 1 Φ is the measurement matrix
Table 2 u is the original signal
Table 3 y is the signal sampled by formula (2)
Equation (2) shows the sampling of the signal. The theory of compressed sensing shows that solving (2) requires x to be sparse, in which case it can be solved by L0-norm minimization. In a real environment, most signals are non-sparse. Existing theory shows that when a signal is projected onto an orthogonal transform basis, the absolute values of most transform coefficients are very small, so the resulting transform vector is sparse or nearly sparse and can be regarded as a concise representation of the original signal. This is a prior condition of compressed sensing: the signal must be sparse under some transformation. Therefore, a sparse transform basis Ψ can be established to obtain a sparse representation of a non-sparse signal according to formula (1) (Table 5). Combining formulas (1) and (2), the compressed sampling of signal U can be described as follows: signal U is compressively sampled through formula (2) to obtain y, the sparse solution x is then obtained according to formula (3), and finally signal U is reconstructed by the sparse inverse transform of x. The numerical calculation shows that the sparse signal x recovered from y is consistent with the projection of u onto the sparse basis Ψ (Table 4). Namely, Table 5 is the sparse representation recovered from the samples in Table 3, which further shows that the signal can be recovered from low-rate sampling through sparse transformation.
$$ y=\Phi \Psi x=\Theta x $$
Table 4 Ψ is the sparse basis
Table 5 x is the projection of the original signal u on the sparse basis, and the sparsity of u is 7
Here Θ = ΦΨ. This is still an underdetermined equation, but under certain constraints x can be obtained from y. Of course, if the signal itself is sparse, then no sparse transformation is needed and Θ = Φ. Besides the requirement that the signal admit a sparse representation, another important condition in compressed sensing is that Θ satisfies the Restricted Isometry Property (RIP).
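To make Eqs. (1)–(3) concrete, the following minimal Python sketch (not from the paper; the sizes N = 64, M = 20 and sparsity p = 5 are illustrative assumptions) builds a sparse coefficient vector x, synthesizes the signal u = Ψx with a DCT basis, and compresses it with a random Gaussian measurement matrix Φ, so that y = Φu = ΦΨx = Θx.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
N, M, p = 64, 20, 5           # signal length, number of measurements, sparsity (illustrative)

# Sparse coefficient vector x with p non-zero entries (Eq. (1): u = Psi x)
x = np.zeros(N)
x[rng.choice(N, p, replace=False)] = rng.standard_normal(p)

# Sparse basis Psi: each column is an inverse-DCT atom, so Psi is orthonormal
Psi = idct(np.eye(N), norm='ortho', axis=0)
u = Psi @ x                    # non-sparse signal observed by the sensors

# Random Gaussian measurement matrix Phi with variance 1/M (Eq. (2))
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Compressed samples: y = Phi u = Phi Psi x = Theta x (Eq. (3))
Theta = Phi @ Psi
y = Phi @ u

print(y.shape)                      # (20,) -- far fewer samples than the N = 64 signal values
print(np.allclose(y, Theta @ x))    # True: both factorizations give the same measurements
```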
Definition 1 [19]. For a matrix Θ, the restricted isometry constant δs is the smallest value for which the following inequality holds.
$$ \left(1-{\delta}_s\right){\left\Vert {x}_s\right\Vert}_2^2\le {\left\Vert \Theta {x}_s\right\Vert}_2^2\le \left(1+{\delta}_s\right){\left\Vert {x}_s\right\Vert}_2^2 $$
Here, s = 1, 2, ... is an arbitrary integer and xs is an arbitrary s-sparse vector. If a matrix Θ satisfies (4), then Θ satisfies the restricted isometry property. Somewhat loosely, we say that Θ satisfies the s-order restricted isometry property when δs is not too close to 1. When this property holds, the matrix approximately preserves the Euclidean length of s-sparse signals, which in turn implies that no s-sparse vector can lie in the null space of Θ.
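Definition 1 can be probed numerically: for a random Gaussian Θ, one can draw many s-sparse vectors and record how far ‖Θx‖²₂/‖x‖²₂ deviates from 1, which gives an empirical lower estimate of δs. The sketch below is only an illustration (a true RIP constant requires a maximum over all supports, which is combinatorially hard); all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, s, trials = 40, 128, 5, 2000                 # illustrative sizes

Theta = rng.standard_normal((M, N)) / np.sqrt(M)   # variance 1/M, as in Section 3

worst = 0.0
for _ in range(trials):
    x = np.zeros(N)
    x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
    ratio = np.linalg.norm(Theta @ x) ** 2 / np.linalg.norm(x) ** 2
    worst = max(worst, abs(ratio - 1.0))           # empirical |delta_s| over sampled supports

# 'worst' is a lower bound on the true RIP constant delta_s of Eq. (4)
print(f"empirical delta_{s} >= {worst:.3f}")
```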
In practice, we care not only about the recovery of sparse signals but also about near-sparse signals (signal vectors that contain some small values in addition to the elements with large values). For a near-sparse signal vector \( \hat{x} \) with k large elements, we denote by \( {\hat{x}}_k \) the vector that keeps these k largest elements and sets all remaining elements to zero.
Theorem 1. Assume that the 2k-order RIP constant of the matrix Θ satisfies \( {\delta}_{2k}<\sqrt{2}-1 \). Then, for the measurements \( y=\Theta \hat{x} \), the solution x∗ obtained by ℓ1-norm minimization satisfies the following formula:
$$ {\left\Vert {x}^{\ast }-\hat{x}\right\Vert}_2\le {C}_0{k}^{-1/2}{\left\Vert \hat{x}-{\hat{x}}_k\right\Vert}_1 $$
C0 is a constant. In fact, if \( \hat{x} \) is a standard k sparse vector, then \( \hat{x} \) can be completely recovered from y, and for near-sparse signals, it can be fully recovered under the condition of satisfying Eq. (5).
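The right-hand side of Eq. (5) is driven entirely by how well x̂ is approximated by its k largest entries. Below is a small sketch of computing x̂_k and the tail term ‖x̂ − x̂_k‖₁; the decay profile and sizes are assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 64, 8                                     # illustrative sizes

# A "near sparse" vector: coefficients with rapidly decaying magnitudes
x_hat = np.sign(rng.standard_normal(N)) * (np.arange(1, N + 1) ** -1.5)

# x_hat_k keeps the k largest-magnitude entries and zeroes the rest
idx = np.argsort(np.abs(x_hat))[::-1][:k]
x_hat_k = np.zeros(N)
x_hat_k[idx] = x_hat[idx]

tail = np.linalg.norm(x_hat - x_hat_k, 1)        # the term ||x_hat - x_hat_k||_1 in Eq. (5)
print(f"best {k}-term l1 tail: {tail:.4f}")       # a small tail means a tight recovery bound
```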
Working structure of wireless sensor network
Wireless sensor network is composed of many autonomous sensor nodes, which can detect the physical state of the surrounding environment. Each physical node consists of four parts: sensor unit, processing unit, communication unit, and energy supply unit. Wireless sensor nodes usually collect environmental data, such as temperature, pressure, flow rate, humidity and location, and then send these data to the central node (sink) through wireless transmission, and the central node uses other transmission media for transmission. In addition to the aggregation node, each wireless sensor node collects information within its monitoring range and sends it to the aggregation point, so a large number of data in the aggregation point may cause data transmission blocking. In wireless sensor network, compression sensing technology is introduced, and compression sampling is carried out in the process of data acquisition, which can greatly reduce the amount of data transmission and energy consumption.
The wireless sensor system based on compressed sensing works in the following way: each target periodically transmits a signal, the transmission period is T, and the targets are independent of each other and do not require synchronization. The sensor periodically collects the signal, and the period is also T. after the end of the period time slice, each sensor sends the result to the central node, which recovers the data using the perception matrix, then transmits it, and finally analyzes the data at the processing end. As shown in Fig. 1, the solid dot represents the sensor node, and the square represents the center node.
Communication structure of wireless sensor network
Suppose that there are N sensors randomly distributed in the detection area, they can detect the event signals generated in the area. K represents the number of times the sensor node transmits in the period T. Considering that in wireless sensor network, a large amount of data transmission in a long time interval will consume more energy, and frequent data transmission in a short time interval will also cause the energy consumption of nodes to be too fast, so the selection of K plays an important role in balancing the energy consumption of nodes. Each node periodically detects the event signals in this region and gets a vector sequence x, which is usually non sparse, so sparse transformation is needed. Choosing appropriate sparse transform basis can improve the robustness of transmission signal and the accuracy of reconstruction signal.
Data sampling in wireless sensor networks with compressed sensing
Generally, a wireless sensor network consists of a large number of sensor nodes that have acquisition, processing, communication, and control capabilities and can monitor the real environment. For a wireless sensor network with n nodes, the data collected by node i in one cycle is xi, i = 1, 2, 3, .... Here, xi is a scalar, so in one period the data of the whole wireless sensor network constitutes a vector, expressed as:
$$ \mathrm{X}={\left[{\mathrm{x}}_1,{\mathrm{x}}_2,\dots {\mathrm{x}}_{\mathrm{n}}\right]}^T $$
In general, for wireless sensor networks, to obtain complete information, it needs a complete n samples of signal x, and compression sensing can recover the complete wireless sensor signal (β includes non-zero coefficient) by acquiring the transformation coefficient β (||β||0 < <N) of the signal.
In wireless sensor networks, the data vector x is usually large; it may be composed of data from hundreds of thousands of wireless sensor nodes. Compressed sensing can reduce the amount of information collected in wireless sensor networks. For a signal x, if there is a sparse basis ψ under which x has a p-sparse representation, the sparse basis is expressed as:
$$ \Psi ={\left[{\Psi}_1,{\Psi}_2,\dots {\Psi}_p\right]}^T $$
Therefore, the data sampling of the wireless sensor network can be expressed as:
$$ X=\sum \limits_{i=1}^N{S}_i{\psi}_i\kern0.24em \mathrm{OR}\kern0.24em X= S\psi $$
where S is a sparse representation of X. Therefore, the vector data X generated by the N wireless sensor nodes in one cycle can be represented as a vector S with p non-zero coefficients (p ≪ N). The usual compression method requires prior determination of the positions of all non-zero coefficients of the length-N signal X. Compressed sensing does not need to determine the non-zero coefficients in advance and can directly compress and sample the signal. By means of compressed sampling, the wireless sensor network only needs to obtain vector data of length M (p < M ≪ N) to fully express the information sampled by the whole sensor network in one period and to reconstruct the original signal. Therefore, the data size processed by the sensor network is reduced from N to M, saving processing space and time. Compressed sensing uses the sampling matrix Φ to directly sample the data at the sensing nodes. Considering the sparse representation of the signal X = SΨ, the signal Y obtained by compressed sampling is expressed as:
$$ Y=\phi X=\phi \varPsi S $$
Here ϕ = {ϕj,i} is the sampling matrix, also called the sensing matrix; its elements are independent and identically distributed with variance 1/M. Therefore, the size of Y obtained by compressed sampling is much smaller than that of the original signal, and Y is easier to store, transmit, and process. Formula (9) can thus be written as:
$$ \left[\begin{array}{c}{y}_1\\ {}{y}_2\\ {}\dots \\ {}{y}_M\end{array}\right]=\left[\begin{array}{cccc}{\varphi}_{1,1}& {\varphi}_{1,2}& \dots & {\varphi}_{1,N}\\ {}{\varphi}_{2,1}& {\varphi}_{2,2}& \dots & {\varphi}_{2,N}\\ {}\dots & \dots & \dots & \dots \\ {}{\varphi}_{M,1}& {\varphi}_{M,2}& \dots & {\varphi}_{M,N}\end{array}\right]\left[\begin{array}{c}{x}_1\\ {}{x}_2\\ {}\dots \\ {}{x}_N\end{array}\right] $$
In order to achieve perfect recovery after compressed sampling, the number of measurements M must satisfy:
$$ M\ge \frac{p\log \left(N/p\right)}{1/c} $$
We carried out a numerical analysis of the sampling length M under different signal lengths N and sparsities p. As shown in Table 6, the lower the sparsity (that is, the fewer non-zero elements in the signal), the smaller the required sampling length M; of course, for the same sparsity, the longer the signal, the longer the required sampling length.
Table 6 The influence of sparsity and signal length on sampling length
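The trend in Table 6 can be reproduced by evaluating the bound in Eq. (11) directly; in the sketch below the constant c is simply set to 1, an assumption made only for illustration.

```python
import numpy as np

c = 1.0                                            # unspecified constant in Eq. (11); assumed 1 here
for N in (256, 512, 1024):
    for p in (5, 10, 20):
        M = int(np.ceil(c * p * np.log(N / p)))    # M ~ c * p * log(N/p)
        print(f"N={N:5d}  p={p:3d}  ->  required samples M ~ {M}")
```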
Here c in Eq. (11) is a constant [20]. In order to ensure complete recovery of the sensing signal from the under-sampled information, we further give four limiting conditions [21]. First, in a wireless sensor network with N nodes, in order to avoid congestion, the common rate R of the sensor nodes is set as:
$$ R\ge \sqrt{\frac{\log N}{\pi N}} $$
Second, when the central node receives the signal, the arrival rate ζ must satisfy:
$$ \zeta \ge \frac{4 WN}{\sigma M\log N} $$
where W is the bandwidth of transmission signal, and σ > 0 is a small constant. Third, in wireless sensor networks, in order to reduce channel contention from sensor nodes to central nodes, we set the service rate μ as:
$$ \mu =\frac{1+ W\lambda}{W} $$
Finally, for the N node wireless sensor network, when the node ni sends information to the node nj, in order to ensure the efficiency of transmission, the distance between the nodes ni and nj is generally not greater than the common rate:
$$ \left\Vert {n}_i-{n}_j\right\Vert \le R $$
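Conditions (12)–(15) can be bundled into a small feasibility check for a candidate deployment. The sketch below merely evaluates the bounds; all numerical values (node count, bandwidth W, σ, λ, M) are illustrative assumptions.

```python
import numpy as np

def operating_bounds(N, W, sigma, lam, M):
    """Evaluate the bounds in Eqs. (12)-(15) for an N-node network (illustrative sketch)."""
    R = np.sqrt(np.log(N) / (np.pi * N))            # minimum common rate, Eq. (12)
    zeta = 4.0 * W * N / (sigma * M * np.log(N))    # minimum arrival rate at the centre, Eq. (13)
    mu = (1.0 + W * lam) / W                        # service rate, Eq. (14)
    return R, zeta, mu                              # Eq. (15): node-to-node distance must stay <= R

# Assumed example values: 100 nodes, unit bandwidth, sigma = 0.1, lambda = 0.5, M = 30 samples
R, zeta, mu = operating_bounds(N=100, W=1.0, sigma=0.1, lam=0.5, M=30)
print(f"R >= {R:.3f},  zeta >= {zeta:.2f},  mu = {mu:.2f}")
```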
Multipath channel transmission model
In a wireless sensor network, a source may send information to multiple sensor nodes, and a sensor node may also receive signals from multiple sources. The signal transmission model of the whole network can therefore be described by the energies r1, …, rm emitted by the sources R1, …, Rm and received by the sensors S1, S2, …, Sn. It is assumed that transmitting from source Ri to sensor Sj incurs a unit energy cost cij; the multipath channel transmission model therefore needs to find an optimal transmission scheme that minimizes the total energy consumption.
The mathematical description of this problem is as follows: let the energy consumption from Ri to point Sj be Xij, so the total consumption is:
$$ S=\sum \limits_{i=1}^m\sum \limits_{j=1}^n{c}_{ij}{x}_{ij} $$
where Xij meets
$$ \left\{\begin{array}{l}\sum \limits_{j=1}^n{x}_{ij}={r}_i\kern2em i=1,2,\dots, m\\ {}\sum \limits_{i=1}^m{x}_{ij}={s}_j\kern2em j=1,2,\dots, n\\ {}{x}_{ij}\ge 0\end{array}\right. $$
$$ \sum \limits_{i=1}^m{r}_i=\sum \limits_{j=1}^n{s}_j $$
Therefore, in the sensor network determined by R and s, the problem of finding the best transmission channel is transformed into finding a set of values of Xij satisfying formula (17) to make formula (16) take the minimum value.
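The minimum-energy transmission problem (16)–(17) is a classical transportation linear program, so an optimal scheme can be computed with a standard LP solver such as scipy.optimize.linprog. The cost matrix and the supply/demand vectors in the sketch below are made-up illustrative data, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: m = 2 sources, n = 3 sensors, energy costs c_ij
C = np.array([[4.0, 6.0, 3.0],
              [2.0, 5.0, 7.0]])
r = np.array([30.0, 20.0])          # row sums (energy sent by each source), Eq. (17)
s = np.array([15.0, 25.0, 10.0])    # column sums (energy received by each sensor)
m, n = C.shape

# Equality constraints over x flattened row-wise: row sums equal r, column sums equal s
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0      # sum_j x_ij = r_i
for j in range(n):
    A_eq[m + j, j::n] = 1.0               # sum_i x_ij = s_j
b_eq = np.concatenate([r, s])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("minimum total energy:", res.fun)           # objective (16) at the optimum
print("optimal flows x_ij:\n", res.x.reshape(m, n))
```

The optimal vertex returned by the LP solver is exactly a pole of the feasible region discussed below.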
Definition 2. Assume R = [r1 r2 … rm]T and S = [s1 s2 … sn]T are two positive vectors satisfying
$$ \sum \limits_{i=1}^m{r}_i=\sum \limits_{j=1}^n{s}_j>0. $$
Let ℋ(R, S) = {A ∈ Rm × n | A ≥ 0, A has R as its row sum vector and ST as its column sum vector}.
That is, for a given positive quantity R and S, ℋ(R,S) is a set of all m× n nonnegative matrices with R as row sum vector and ST as column sum vector. Such problems are called nonnegative matrix problems with given row sum and column sum.
Definition 3. Assume that q1, q2, …, qr are non-negative real numbers satisfying \( \sum \limits_{i=1}^r{q}_i=1. \) The combination \( \sum \limits_{i=1}^r{q}_i{x}_i \) is called a convex combination of the elements x1, x2, …, xr. Let X be a set; the collection of all convex combinations of finitely many elements of X is called the convex hull of X. If every convex combination of finitely many elements of X is still in X, then X is said to be a convex set; if a point P of a convex set X is not a convex combination of other points in X, then P is said to be a pole (extreme point) of X.
Suppose A is a m × n non-negative matrix, and b is an m-dimensional non-negative vector. Then the set
$$ \Omega =\left\{y\in {R}^n\left| Ay\le b\right.\right\} $$
is a convex set.
From the Krein-Milman theorem, it is known that a bounded convex set is the convex hull of its poles. From linear algebra, it is known that a point y ∈ Ω is a pole if and only if the columns of A corresponding to the non-zero coordinates of y form a linearly independent subset of the columns of A.
Since the problem of non-negative matrices with given row and column sums is closely related to the signal transmission problem of the sensor network, the extreme value of the transmission problem must be attained at a pole of the feasible region. Therefore, a pole of ℋ(R, S) corresponds to the minimum energy consumption of the sensor network.
For the pole problem of H(R, S), the analysis is as follows:
Lemma 1. Assume that R = [r1 r2 … rm]T and S = [s1 s2 … sn]T are two positive vectors satisfying \( \sum \limits_{i=1}^m{r}_i=\sum \limits_{j=1}^n{s}_j \). Then A is a pole of ℋ(R, S) if and only if A is the only matrix in ℋ(R, S) that has the same zero pattern as A.
Proof:
Suppose A ∈ ℋ(R, S). It is easy to see that A = [aij] is a matrix in ℋ(R, S) if and only if the entries aij of A solve the equations
$$ \left\{\begin{array}{l}\sum \limits_{j=1}^n{x}_{ij}={r}_i\kern1em i=1,2,\dots, m\\ {}\sum \limits_{i=1}^m{x}_{ij}={s}_j\kern1em j=1,2,\dots, n\\ {}\sum \limits_{i=1}^m{r}_i=\sum \limits_{j=1}^n{s}_j\end{array}\right. $$
Therefore, Eq. (21) can be written as
$$ \left\{\begin{array}{l}{x}_{11}+{x}_{12}+\dots +{x}_{1n}={r}_1\\ {}{x}_{21}+{x}_{22}+\dots +{x}_{2n}={r}_2\\ {}\kern3em \dots \\ {}{x}_{m1}+{x}_{m2}+\dots +{x}_{mn}={r}_m\\ {}{x}_{11}+{x}_{21}+\dots +{x}_{m1}={s}_1\\ {}{x}_{12}+{x}_{22}+\dots +{x}_{m2}={s}_2\\ {}\kern3em \dots \\ {}{x}_{1n}+{x}_{2n}+\dots +{x}_{mn}={s}_n\end{array}\right. $$
Due to condition \( \sum \limits_{i=1}^m{r}_i=\sum \limits_{j=1}^n{s}_j \), Eq. (22) is a set of compatible equations of rank m + n−1. Knowing the solution of Eq. (22) by convex set theory, A = [aij] is the pole in ℋ(R, S).
After the poles are found, the transmission channels of the sensor network can be established. The sensing nodes receive the signal through these channels and a sampling matrix is constructed; the details are given in Section 3.3.
Compressed sampling matrix of wireless sensor network
In the actual measurement, the active sensor node captures the signal of the event. But there are two problems. One is that if all events happen at the same time, each sensor will receive the signal of mutual interference. Secondly, under the condition of propagation loss and thermal noise, the signal will be distorted seriously. In order to further analyze the signal acquisition process of wireless sensor network, the vector expression of the signal received by the sensor under noise is given here [22]:
$$ {Y}_{M\times 1}={G}_{M\times N}{X}_{N\times 1}+{\omega}_{M\times 1} $$
Here, X represents the original signal (in order to simplify the description, assume x is a sparse vector, of course, the actual signal x may not be sparse, but we can use the sparse basis constructed by DCT and other methods to sparse), Y represents the sensing signal, and ω represents the thermal noise and interference. It obeys the independent Gaussian distribution with mean value of zero and variance of σ2. GM × N is a channel sampling matrix, whose structure is as follows:
$$ {G}_{i,j}={\left({d}_{i,j}\right)}^{-\frac{\alpha }{2}}\left|{h}_{i,j}\right|i\in \mathrm{M},\kern0.5em \mathrm{and}\ \mathrm{j}\in \mathrm{N} $$
where dij is the distance between the i-th receiving sensor and the j-th signal source, α is the propagation loss factor, and hij is the Rayleigh fading parameter derived from Gaussian variables with zero mean and variance σ2. Therefore, the compressed sensing process of the wireless sensor network can be expressed as follows:
$$ \left[\begin{array}{c}{y}_1\\ {}{y}_2\\ {}\dots \\ {}{y}_M\end{array}\right]=\left[\begin{array}{cccc}{G}_{1,1}& {G}_{1,2}& \dots & {G}_{1,N}\\ {}{G}_{2,1}& {G}_{2,2}& \dots & {G}_{2,N}\\ {}\dots & \dots & \dots & \dots \\ {}{G}_{M,1}& {G}_{M,2}& \dots & {G}_{M,N}\end{array}\right]\left[\begin{array}{c}{x}_1\\ {}{x}_2\\ {}\dots \\ {}{x}_N\end{array}\right]+\left[\begin{array}{c}{\omega}_1\\ {}{\omega}_2\\ {}\dots \\ {}{\omega}_M\end{array}\right] $$
To explain the compressed sampling process in detail, a numerical calculation is carried out. The distances in the channel sampling matrix are generated randomly, and the matrix G is obtained according to formula (24). The signal x and the interference term ω are sampled according to Eq. (25). A Fourier orthogonal transform matrix is then used as the sparse basis to recover the signal. Five sets of numerical analyses are carried out to compare the original and recovered signals (|recovered signal − original signal|/|original signal| × 100%). As shown in Table 7, the numerical calculation shows that the signal can be recovered after compressed sampling, but the recovery accuracy is not sufficient, which depends on the signal reconstruction algorithm. The reconstruction algorithm is studied in the later part of this paper.
Table 7 The difference between the original signal and the recovery signal
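A minimal sketch of the noisy acquisition model in Eqs. (23)–(25) is given below: random positions give the distances dij, the Rayleigh fading magnitudes |hij| are drawn from complex Gaussian coefficients, and thermal noise ω is added to the measurements. The field size, loss factor α, sparsity and noise level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 30, 100            # receiving sensors, candidate source positions (assumed)
alpha, sigma = 2.0, 0.05  # propagation loss factor and noise standard deviation (assumed)

# Random sensor and source positions in a 100 x 100 field
sensors = rng.uniform(0, 100, size=(M, 2))
sources = rng.uniform(0, 100, size=(N, 2))
d = np.linalg.norm(sensors[:, None, :] - sources[None, :, :], axis=-1)

# Rayleigh fading magnitude |h_ij| from zero-mean complex Gaussian coefficients
h = np.abs(rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

G = d ** (-alpha / 2.0) * h                     # channel sampling matrix, Eq. (24)

# Sparse event vector X: only a few sources are active
X = np.zeros(N)
X[rng.choice(N, 5, replace=False)] = rng.uniform(1, 2, 5)

Y = G @ X + sigma * rng.standard_normal(M)      # noisy compressed measurements, Eq. (25)
print(Y[:5])
```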
RIP is the constraint that all sampling matrices must follow under the theory of compressed sensing. Therefore, for the sampling matrix G, we further demonstrate that it conforms to the constraint equidistant property.
According to the Johnson-Lindenstrauss theorem [23], when a matrix Φ satisfies the RIP condition, then the following formula holds:
$$ {\displaystyle \begin{array}{c}\Pr \left(\left|{\left\Vert \Phi x\right\Vert}_2-{\left\Vert x\right\Vert}_2\right|\ge \varepsilon {\left\Vert x\right\Vert}_2\right)\le 2{e}^{-{nc}_0\left(\varepsilon \right)}\\ {}0<\varepsilon <1\end{array}} $$
Pr(•) indicates the probability of reaching the desired value. C0(ε) is a constant that depends on ε and is greater than zero.
Now, we discuss how to use the convergence of inequality (26) to prove the RIP property of matrix G. First, we discuss the sampling matrix G in a fixed k-dimensional subspace. In particular, we give a subscript set T(T ≤ k), and XT represents a non-zero vector set with subscript T in RN space. This is a k-dimensional linear space which can be used for L2 norm calculation.
The general way to build such a linear space is to build a set of points in the k-dimensional subspace, which meet the uniform constraints of (26), and then extend the result to all k-dimensional signals. This is a common method of constructing set space, and the proof of Dvoretsky theory is also constructed [24]. For L2 norm of matrix G in finite dimensional space, we cannot get the appropriate boundary constraint at the beginning, but adopt the way of gradual refinement.
Theorem 2 assumes that Φ(w) and w ∈ ΩnN are a random matrix of size n × N, which satisfies inequality (26). For any set T, there is |T|0 = k < n, and 0 < δ < 1, then
$$ \left(1-\delta \right){\left\Vert \mathrm{x}\right\Vert}_2\le {\left\Vert \Phi (w)\mathrm{x}\right\Vert}_2\le \left(1+\delta \right){\left\Vert \mathrm{x}\right\Vert}_2\mathrm{x}\in {\mathrm{X}}_T $$
The probability that the formula is established is greater than or equal to \( 1-2{\left(12/\delta \right)}^k{e}^{-{c}_0\left(\delta /2\right)n} \)
First, since Φ is linear, it suffices to prove formula (27) under the constraint ‖x‖2 = 1. Next, we choose a finite set of points QT ⊆ XT with ‖q‖2 ≤ 1 for all q ∈ QT, such that for every x ∈ XT with ‖x‖2 ≤ 1 we have
$$ \underset{q\in {Q}_T}{\min }{\left\Vert x-q\right\Vert}_2\le \delta /4 $$
Further, such a set QT can be chosen with cardinality |QT| ≤ (12/δ)k. Next, the uniform bound (26) is applied to the points of QT with ε = δ/2, so with probability greater than \( 1-2{\left(12/\delta \right)}^k{e}^{-{c}_0\left(\delta /2\right)n} \) we have
$$ \left(1-\delta /2\right){\left\Vert q\right\Vert}_2\le {\left\Vert \Phi q\right\Vert}_2\le \left(1+\delta /2\right){\left\Vert q\right\Vert}_2\;q\in {Q}_T $$
Here we assume that a is the minimum value satisfying the above formula
$$ {\left\Vert \Phi x\right\Vert}_2\le \left(1+\mathrm{A}\right){\left\Vert x\right\Vert}_2,x\in {X}_T,{\left\Vert x\right\Vert}_2\le 1 $$
Our goal is to draw A ≤ δ. So, for any x ∈ XT and ‖X‖2 ≤ 1, we can choose q ∈ QT so that ‖x − q‖2 ≤ δ/4, in this case, we can get:
$$ {\left\Vert \Phi x\right\Vert}_2\le {\left\Vert \Phi q\right\Vert}_2+{\left\Vert \Phi \left(x-q\right)\right\Vert}_2\le 1+\delta /2+\left(1+\mathrm{A}\right)\delta /4 $$
Since A is the minimum value that satisfies the formula (30), A ≤ δ/2 + (1 + A)δ/4 is taken here, so there is \( \mathrm{A}\le \frac{3\delta /4}{1-\delta /4}\le \delta \). We have proved that the upper bound of inequality (27) is established, and the lower bound proof process is similar, which is expressed as follows:
$$ {\displaystyle \begin{array}{l}{\left\Vert \Phi x\right\Vert}_2\ge {\left\Vert \Phi q\right\Vert}_2-{\left\Vert \Phi \left(x-q\right)\right\Vert}_2\\ {}\kern2em \ge 1-\delta /2-\left(1+\delta \right)\delta /4\ge 1-\delta \end{array}} $$
Then, the lower bound of (27) is also true.
After demonstrating that the sampling matrix G conforms to the constraint equidistant property, another factor that affects the sampling efficiency and recovery accuracy is the number of samples. What we need to further determine is the number of times the sensor node transmits in the period T, the number of sensors that acquire the signal, and the number of all the sensors: K < M < N. Therefore, the last measured signal vector Y is a compressed representation of the event. From another point of view, the vector Y is a feature that acquires X by a lower number of samples (M times). Since the noise interference of the wireless sensor network directly affects the signal accuracy of the compressed sampling, it has a great influence on the signal reconstruction result. Here, we adopt an approximate gradient descent algorithm, which can reconstruct the result of compressed sampling in the noise interference environment and recover the original signal with higher accuracy.
Approximate gradient descent algorithm
In order to effectively recover the original signal in the wireless sensing network, signal recovery in a noisy environment must be considered. Therefore, the noisy compressed sensing model is established as follows [11]:
An unknown signal X ∈ RN can be expressed as a known sampling matrix Φ ∈ RM × N (M < <N) and a linear measured value Y ∈ RM:
$$ {Y}_{M\times 1}={\Phi}_{M\times N}{X}_{N\times 1}+{\omega}_{M\times 1} $$
In order to reconstruct X, we need to solve the proposed constrained denoising model by Candes et al. [25]:
$$ {\min}_x{\left\Vert X\right\Vert}_1\kern0.5em \mathrm{subject}\kern0.5em \mathrm{to}\kern0.5em {\left\Vert \Phi X-Y\right\Vert}_2^2\le \varepsilon $$
The solution of formula (34) is a convex optimization process. Here, we give a general description of the problem. An unconstrained convex optimization problem can be expressed as:
$$ \underset{x}{\operatorname{minimize}}\kern0.5em F(x),\kern1em F(x)=f(x)+g(x) $$
The objective function F(x) is a composite convex function, where g(x) is a continuous convex function that is not smooth, and f(x) is a smooth convex function whose first derivative is Lipschitz continuous.
Definition 4. The function f(x) has a Lipschitz continuous gradient if and only if:
$$ {\left\Vert \nabla f(x)-\nabla f(y)\right\Vert}_2\le L(f)\left\Vert x-y\right\Vert $$
where L(f) > 0 is the Lipschitz constant.
For a general optimization problem, a smooth function f(x) is convex if and only if the tangent of the function is below the function curve. Its mathematical expression is:
$$ f(x)\ge f(y)+<\nabla f(y),x-y> $$
If f(x) is convex and its first derivative is Lipschitz continuous, then f(x) is bounded above by a local quadratic function whose Hessian matrix is L(f) ⋅ I, i.e., it satisfies the following condition:
$$ f(x)\le f(y)+<\nabla f(y),x-y>+\frac{L(f)}{2}{\left\Vert x-y\right\Vert}_2^2 $$
The proof is as follows:
$$ {\displaystyle \begin{array}{l}f(x)=f(y)+\underset{0}{\overset{1}{\int }}\left\langle \nabla f\left(y+\tau \left(x-y\right)\right),x-y\right\rangle d\tau \\ {}=f(y)+\left\langle \nabla f(y),x-y\right\rangle +\underset{0}{\overset{1}{\int }}\left\langle \left.\nabla f\left(y+\tau \left(x-y\right)\right)-\nabla f(y),x-y\right\rangle d\tau \right.\end{array}} $$
$$ {\displaystyle \begin{array}{l}\left|f(x)-f(y)-\left\langle \nabla f(y),x-y\right\rangle \right|\\ {}=\left|\underset{0}{\overset{1}{\int }}\left\langle \nabla f\left(y+\tau \left(x-y\right)\right)-\nabla f(y),x-y\right\rangle d\tau \right|\\ {}\le \underset{0}{\overset{1}{\int }}\left|\left\langle \nabla f\left(y+\tau \left(x-y\right)\right)-\nabla f(y),x-y\right\rangle \right| d\tau \\ {}\le \underset{0}{\overset{1}{\int }}{\left\Vert \nabla f\left(y+\tau \left(x-y\right)\right)-\nabla f(y)\right\Vert}_2\cdot {\left\Vert x-y\right\Vert}_2 d\tau \\ {}\le \underset{0}{\overset{1}{\int }}\tau L(f){\left\Vert x-y\right\Vert}_2^2 d\tau =\frac{L(f)}{2}{\left\Vert x-y\right\Vert}_2^2\end{array}} $$
This completes the proof.
In order to reconstruct the original signal from the compressed sampled signal, further consider the mathematical optimization model of signal reconstruction in the process of compressed sensing, i.e.,
$$ \operatorname{Minimize}\kern0.24em {\left\Vert X\right\Vert}_1\ \mathrm{subject}\ \mathrm{to}\kern0.24em {\left\Vert \Phi X-y\right\Vert}_2\le \varepsilon $$
where x and y are vectors and Φ is a matrix. In optimization, this problem is usually expressed as:
$$ \operatorname{Minimize}\kern0.24em {\left\Vert \Phi x-y\right\Vert}_2^2+\lambda {\left\Vert x\right\Vert}_1 $$
where λ is the weight balancing the sparsity of x against the signal error; its value determines the degree to which each of the two terms of Eq. (39) dominates the optimization problem. Eq. (39) clearly has the form of Eq. (35), with:
$$ f(x)={\left\Vert \Phi x-y\right\Vert}_2^2,\kern1em g(x)=\lambda {\left\Vert x\right\Vert}_1 $$
If the variable in the convex problem (35) is not a vector but a matrix, the formulation extends to digital signal processing in two-dimensional space, such as image processing, where a total-variation minimum-distortion model can be used to filter compressively sensed images [26]. For problem (39), we propose an approximate gradient descent algorithm to find the optimal solution.
Suppose the function f(x) is a smooth convex function with Lipschitz-continuous first derivative. Using the gradient method, the k-th iteration is:
$$ {X}_k={X}_{k-1}-{t}_k\nabla f\left({X}_{k-1}\right) $$
where t_k > 0 is a scalar step size. The iterative update from x_{k−1} to x_k can equivalently be written as the minimizer of a quadratic model:
$$ {X}_k=\arg \min \left\{f\left({X}_{k-1}\right)+<\left(X-{X}_{k-1}\right),\nabla f\left({X}_{k-1}\right)>+\frac{1}{2{t}_k}{\left\Vert X-{X}_{k-1}\right\Vert}_2^2\right\} $$
Ignore the constant term, and the same formula can be obtained by (35):
$$ {X}_k=\arg \min \left\{\frac{1}{2{t}_k}{\left\Vert X-\left({X}_{k-1}-{t}_k\nabla f\left({X}_{k-1}\right)\right)\right\Vert}_2^2+g(X)\right\} $$
For compressed sensing in a noisy environment, the problem can be expressed as: minimize \( {\left\Vert \Phi \mathrm{x}-\mathrm{y}\right\Vert}_2^2+\lambda {\left\Vert \mathrm{x}\right\Vert}_1 \).
According to formula (39), the iteration of each step can be obtained:
$$ {X}_k=\arg \min \left\{\frac{1}{2{t}_k}{\left\Vert X-\left({X}_{k-1}-{t}_k\nabla f\left({X}_{k-1}\right)\right)\right\Vert}_2^2+\lambda {\left\Vert X\right\Vert}_1\right\} $$
Computing the gradient of \( f(X)={\left\Vert \Phi X-y\right\Vert}_2^2 \), formula (44) is equivalent to:
$$ {X}_k=\arg \min \left\{\frac{1}{2{t}_k}{\left\Vert X-{X}_{k-1}+2{t}_k{\Phi}^T\left({\Phi X}_{k-1}-y\right)\right\Vert}_2^2+\lambda {\left\Vert X\right\Vert}_1\right\} $$
so that x_k is computed iteratively through such a linear shrinkage step.
Let E_k = X_{k−1} − 2t_kΦ^T(ΦX_{k−1} − y); then (45) becomes:
$$ {X}_k=\arg \min \left\{\frac{1}{2{t}_k}{\left\Vert X-{E}_k\right\Vert}_2^2+\lambda {\left\Vert X\right\Vert}_1\right\} $$
For Eq. (46), we first consider the simple form under one-dimensional conditions:
$$ \underset{x\in \Re }{\min }Q(x)=\lambda \left|x\right|+{\left(x-f\right)}^2 $$
The solution of this formula is \( x= shrink\left(f,\frac{\lambda }{2}\right) \)
Definition 5 The shrink operator expression is as follows:
$$ shrink\left(f,\frac{\lambda }{2}\right)=\left\{\begin{array}{ll}f-\frac{\lambda }{2}& if\kern0.5em f>\frac{\lambda }{2}\\ {}0& if\kern1em -\frac{\lambda }{2}\le f\le \frac{\lambda }{2}\\ {}f+\frac{\lambda }{2}& if\kern0.5em f<-\frac{\lambda }{2}\end{array}\right. $$
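To make the shrink (soft-thresholding) operator of Definition 5 concrete, here is a small NumPy sketch; the function name and the test values are illustrative assumptions, not part of the paper.

```python
import numpy as np

def shrink(f, threshold):
    """Soft-thresholding operator shrink(f, threshold), applied element-wise:
    values are pulled toward zero by `threshold`, and anything inside the band
    [-threshold, threshold] is set to zero."""
    return np.sign(f) * np.maximum(np.abs(f) - threshold, 0.0)

# Example with threshold lambda/2 = 0.5
print(shrink(np.array([1.3, 0.2, -0.7, -0.4]), 0.5))  # [ 0.8  0.  -0.2 -0. ]
```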
Eq. (46) can be decomposed into multiple one-dimensional optimizations. For the i-th one-dimensional subproblem, we fix the other elements of the vector x, and let E_{ki} denote the i-th element of the vector E_k. According to Definition 5, we obtain:
$$ {X}_k=\lambda \times shrink\left({\beta}_j,{t}_k\lambda \right) $$
where
$$ {\beta}_j={\sum}_i{E}_{ki}-{\sum}_{k\ne j}{X}_k $$
By iterating formula (46), x_k is driven toward the optimal value. As long as the number of iterations is properly controlled, the original signal can be reconstructed and the noise effectively filtered out.
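The following sketch puts the pieces together as an ISTA-style proximal gradient loop in the spirit of Eqs. (44)–(46). It is not the paper's exact PRG implementation: the constant step size, the value of λ, and the iteration count are hypothetical choices, and the paper's adaptive step schedule is not reproduced. It reuses the Phi and Y variables and the shrink step from the earlier sketches.

```python
import numpy as np

def approximate_gradient_descent(Phi, Y, lam=0.35, n_iter=10, step=None):
    """Proximal-gradient (ISTA-style) sketch for min ||Phi X - Y||_2^2 + lam * ||X||_1.
    A constant step 1/(2*L), with L the squared spectral norm of Phi, is used here
    for simplicity."""
    M, N = Phi.shape
    X = np.zeros(N)
    if step is None:
        L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient / 2
        step = 1.0 / (2.0 * L)
    for _ in range(n_iter):
        grad = 2.0 * Phi.T @ (Phi @ X - Y)       # gradient of the quadratic term
        E = X - step * grad                      # gradient step (E_k in Eq. (45))
        X = np.sign(E) * np.maximum(np.abs(E) - step * lam, 0.0)  # shrink step, Eq. (46)
    return X
```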
Reconstruction of wireless sensor signal based on compressed sensing
Using the approximate gradient descent method as the signal reconstruction algorithm, the specific process of signal acquisition based on compressed sensing wireless sensor network can be expressed as follows:
In a wireless sensor network, all sensor nodes are first time-synchronized. Assuming an event occurs over a period of time, each active node detects the event signal with period T; the resulting signal is represented by the vector X. To sparsify the signal, a discrete cosine transform is used to construct the sparse basis matrix Ψ. Within each period T, every sensing node projects its signal vector onto this matrix, which sparsifies the signal. This step is a prerequisite for compressive sensing of the wireless signals.
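As a sketch of this sparsification step, the snippet below uses SciPy's orthonormal DCT-II as the sparse transform; the normalization and the toy signal are my own assumptions, since the paper's exact construction of Ψ is not reproduced here.

```python
import numpy as np
from scipy.fft import dct, idct

N = 1024
x = np.cos(2 * np.pi * 5 * np.arange(N) / N)   # toy smooth sensor signal

# Sparse representation under the DCT basis (applied via the fast transform
# rather than an explicit 1024 x 1024 matrix).
s = dct(x, norm='ortho')
x_back = idct(s, norm='ortho')

print(np.allclose(x, x_back))                  # True: the transform is invertible
print(int(np.argmax(np.abs(s))))               # the energy concentrates around one index
```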
Each sensor node constructs a sampling matrix according to Eq. (17). The sparsified signal vector is then projected under the sampling matrix to obtain Y; this completes the sampling of the signal. Since the sampling matrix is not square, this is an undersampling of the signal.
The sensor node transmits the compressively sampled signal to the central node of the sensor network, and the sampling matrix is also transmitted to the central node (if all sensing nodes use the same sampling matrix, only one of the nodes needs to transmit it). After receiving the signal, the central node uses the approximate gradient algorithm to recover the sparse form of the signal and the inverse discrete cosine transform to restore the signal, completing the signal fusion processing. The entire system flow is shown in Fig. 2.
Flow chart of the compressed-sensing-based wireless sensor system. The sensor node acquires the signal, sparsifies it, samples the sparse signal with the sampling matrix, and then transmits it to the central node for recovery
The central node receives the compressed signal Y and then uses the PRG algorithm to approximate the exact solution step by step. At the start, we construct a unit vector of the same length as the original signal vector as the initial vector. During execution, the choice of the convergence threshold ε determines the running time and accuracy of the algorithm. Here we use the convergence threshold ε = 0.015 reported as optimal by T. Blumensath and others in the literature [27]. When the linear shrinkage operator is used to compute the k-th approximate solution, the step size t_k depends on that of the previous iteration, t_k = ((t_{k−1} − 1)^2 + ε)^{p/2} − 1, where p is 0.21 [28]. The algorithm iterates until the convergence threshold condition is satisfied.
Simulation experiment
In the experimental design, 100 sensor nodes are randomly distributed in the area of 100 × 100, and the center of the area is the center node. The target signals (sources) to be detected are randomly distributed in the region. The experiment assumes that the sensor node collects the signal in a period of time, and each sensor processes the signal sparsely, compresses the sample, and then transmits it to the central node.
When sensor nodes acquire the signal, a signal that is weak at one node may be strong at another, so a signal-strength threshold can be set and signals below the threshold are not acquired; weaker signals are thus avoided and are filtered out as noise during the recovery phase. In order to verify the performance of the PRG algorithm in wireless sensor networks, we introduce the orthogonal matching pursuit (OMP) [29], basis pursuit (BP) [30], subspace pursuit (SP) [32], and compressive sampling matching pursuit (CoSaMP) [33] algorithms to compare and analyze the reconstruction accuracy of the different algorithms. According to the theory proposed by Candes et al., the number of compressed samples, that is, the number of rows m of the sampling matrix, satisfies m ≥ C ⋅ μ2(Φ, Ψ) ⋅ r ⋅ log n, where r represents the sparsity of the signal after sparse projection, n represents the signal length, Φ is the sensing matrix, and Ψ represents the sparse basis matrix. If Φ and Ψ are incoherent [34], then ideally the coherence factor μ(Φ, Ψ) = 1 and m ≥ C ⋅ r ⋅ log n; most experimental results show that m ≥ 4r is the best value. In the experiments in this paper, the sparse basis matrix is a 1024 × 1024 discrete cosine transform matrix, so the sensor segments the received signal into blocks of length 1024 bits. The sampling matrix is constructed using (24). In order to more clearly demonstrate the recovery ability of the PRG algorithm relative to the other algorithms, we use the signal-to-noise ratio (SNR) between the reconstructed signal and the original signal to represent the recovery effect. It is defined as follows:
$$ SNR\left({X}_{true},{X}_{rec}\right)=20\log \frac{{\left\Vert {X}_{true}\right\Vert}_2}{{\left\Vert {X}_{true}-{X}_{rec}\right\Vert}_2} $$
where X_true represents the original signal from the source, and X_rec represents the signal that is compressed and then reconstructed.
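A direct NumPy translation of this reconstruction-SNR measure (assuming the base-10 logarithm of the usual dB convention; variable names are illustrative):

```python
import numpy as np

def reconstruction_snr(x_true, x_rec):
    """Reconstruction SNR in dB: 20 * log10(||x_true||_2 / ||x_true - x_rec||_2)."""
    return 20.0 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_true - x_rec))
```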
Signal reconstruction by the PRG algorithm is a process of successive approximation to the optimal solution. To quantitatively analyze the performance of the PRG algorithm, the experiment is performed at a sampling rate of 400. In Eq. (46), the parameter λ is 0.35 [35]. It can be seen from Fig. 3 that the reconstruction quality of the PRG algorithm stabilizes after 10 iterations; increasing the number of iterations further does not improve the reconstruction much. Therefore, the number of iterations of the PRG algorithm is fixed at 10 in the subsequent experiments.
PRG algorithm iteration number and reconstruction effect. The SNR of the wireless sensor signal reconstructed by the PRG algorithm for 5, 10, 15, 20, 25, 30, and 35 iterations
Figure 4 shows the recovery ability of the OMP, SP, BP, CoSaMP, and PRG algorithms in a wireless sensor network without noise interference. For the sampling matrix constructed by formula (24), a submatrix is constructed by randomly selecting row vectors of the matrix for undersampling; the number of selected row vectors is the sampling rate.
Reconstruction of sensor signals without noise. The SNR of the wireless sensor signal reconstructed by PRG, OMP, SP, BP, and CoSaMP at sampling rates of 100, 150, 200, 250, 300, 350, and 400, respectively
It can be seen from Fig. 4 that the recovery abilities of the different algorithms do not differ much in a noise-free environment. At low sampling rates, none of the algorithms recovers well, because a low sampling rate cannot capture the main characteristic information of the acquired signal, so perfect reconstruction is difficult and the SNR value is low. As the sampling rate increases further, the SNR value rises and then levels off, meaning the signal can be reconstructed accurately.
In order to further analyze the influence of noise on the signal collected by the sensors, we add Gaussian white noise, and a sinusoidal signal plus narrowband Gaussian noise, with signal-to-noise ratios of 10, 20, ..., 100 to the original signal; these are two common types of noise in wireless sensor networks. We then compare the reconstruction capabilities of the five algorithms OMP, SP, BP, CoSaMP, and PRG. In this experiment, the number of samples is 400, the discrete cosine transform is used to construct the basis matrix, and the SNR between signal and noise is gradually increased. Figures 5 and 6 show the recovery ability of the algorithms under Gaussian white noise and under the sinusoidal signal plus narrowband Gaussian noise. A low signal-to-noise ratio means that the noise energy is comparable to the signal energy; suppressing the noise then places high demands on the reconstruction ability of the algorithm, and in most such cases it is difficult for any algorithm to reconstruct the signal perfectly. As the SNR value increases, the situation improves. It can be seen that, for both Gaussian white noise and the sinusoidal signal plus narrowband Gaussian noise, the PRG algorithm exhibits better recovery than the other algorithms for SNR values of 40–90; at an SNR of 90 the signal energy is large relative to the noise, so most algorithms already reconstruct the signal well. Moreover, compared with Gaussian white noise, the PRG algorithm recovers particularly well under the sinusoidal signal plus narrowband Gaussian noise. It can therefore be concluded that the PRG algorithm exhibits better reconstruction performance under non-severe noise interference and can effectively restore the original signal.
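For reproducibility of this kind of experiment, the sketch below shows one common way to add Gaussian white noise at a prescribed SNR (in dB) to a signal; it is an assumed setup, not the paper's exact noise generator.

```python
import numpy as np

def add_white_noise(x, snr_db, rng=None):
    """Return x plus Gaussian white noise scaled so that the ratio of signal
    power to noise power equals snr_db (in dB)."""
    rng = np.random.default_rng(0) if rng is None else rng
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(scale=np.sqrt(noise_power), size=x.shape)
    return x + noise
```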
Sensor signal reconstruction under Gaussian white noise. The SNR of the reconstructed wireless sensor signal by PRG, OMP, SP, BP, and COSAMP. The SNR of noise is 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100, respectively
Sensor signal reconstruction under sinusoidal signal plus narrowband Gaussian noise. The SNR of the reconstructed wireless sensor signal by PRG, OMP, SP, BP, and COSAMP. The SNR of noise is 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100, respectively
Further, we built a real wireless sensor network system composed of 30 temperature sensor nodes. Each node supports the 802.11 standard in the 2.4 GHz band. The wireless sensor nodes are separated by 5 m, and the central node is replaced directly by a PC. A stable heat source was placed at random in the experiment, and the temperature of the heat source was then measured. Since hardware implementations of the sensing matrix are still imperfect, we add a module to each sensing node that implements the sparsification and compressed sampling in software and then transmits the compressively sampled temperature data to the central node, where the temperature signal is reconstructed. The experiment again uses the discrete cosine transform to construct the basis matrix, and the other parameters are the same as in the previous experiments. We randomly placed the heat source at 10 locations and repeated the experiment 10 times to compare the reconstruction capabilities of the OMP, SP, BP, CoSaMP, and PRG algorithms, as shown in Fig. 7. The relative error between the actual temperature of the heat source and the temperature computed at the central node is used to represent the reconstruction accuracy.
Temperature sensing reconstruction error based on compressed sensing
From Fig. 7 it can be observed that, under the same conditions, the reconstruction accuracy of the PRG algorithm is generally better than that of the other algorithms, although its reconstruction performance is not as stable as that of the BP and OMP algorithms. In the experiment we further found that the reconstruction time of the PRG algorithm on the temperature sensing data is comparable to that of the OMP algorithm and lower than that of the SP, CoSaMP, and BP algorithms; rapid reconstruction is also important for reducing the energy consumption of the wireless sensor network.
In order to study the time complexity of reconstruction, we compare the time overhead of the various algorithms when reconstructing the heat source signal. In theory, an algorithm's reconstruction time increases with the number of iterations. In the experiment, we measured the time to reconstruct the heat source signal for iteration counts in the interval [1, 12], with noise SNR values of 20, 40, 60, and 80. As shown in Figs. 8 and 9, for both Gaussian white noise and the sinusoidal signal plus narrowband Gaussian noise the time grows with the number of iterations, so the iterative computation clearly adds time overhead. The noise also has a significant impact on the reconstruction time: the stronger the noise, the longer the time required to reconstruct the signal. This is because the noise introduces extra data and, being non-sparse, compresses poorly, which increases the overall amount of computation and hence the time complexity. The PRG algorithm proposed in this paper has a lower time overhead than the other algorithms; in particular, when SNR = 60 its time cost is significantly lower. Under the sinusoidal signal plus narrowband Gaussian noise, we further find that the recovery time for the heat source signal is less than under Gaussian white noise.
Reconstruction time of heat source signals under Gaussian white noise
Reconstruction time of heat source signals under sinusoidal signal plus narrowband Gaussian noise
The advantage of compressed sampling is that it acquires the complete signal at a lower cost, which is exactly what a wireless sensor network needs. Since wireless sensor networks are susceptible to noise, signal reconstruction from undersampled data becomes difficult. Based on a multi-path channel transmission model for wireless sensor networks, an approximate gradient descent algorithm is proposed to recover the compressed signal under noise. The algorithm obtains the optimal solution of the constrained problem through stepwise iterative approximation and then restores the original signal. Compared with the OMP, SP, BP, and CoSaMP algorithms, the PRG algorithm shows better reconstruction performance in noisy environments. In the test on the temperature sensing network, the results show that the PRG algorithm has advantages in both reconstruction accuracy and time. However, the following limitations of the PRG algorithm need further study:
Although the overall convergence time of the PRG algorithm is short, the experiments show that the convergence time of a single PRG iteration is longer than that of the other algorithms. In follow-up research we need to further optimize the linear shrinkage step model to reduce the time complexity of the algorithm, the reconstruction time, and the energy consumption of the sensor network.
The weight λ between the sparsity of the signal x and the error is chosen using the weight-selection method of the fast shrinkage-thresholding algorithm proposed by Beck et al. Its suitability for the reconstruction of wireless sensor signals by the PRG algorithm is not yet clear, so further mathematical analysis is needed.
The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request.
CS:
Compressed sensing
RIP:
Restricted Isometry Property
OMP:
Orthogonal matching pursuit
BP:
Basis pursuit
SP:
Subspace pursuit
SNR:
Signal-to-noise ratio
PRG:
Approximate gradient algorithm
COSAMP:
Compressive sampling matched pursuit
F. Yan, X.M. Zhang, L. Tao, H. Zhang, Network coding-based flooding with a mobile sink in low-duty-cycle wireless sensor networks. IEEE Trans. Mobile Comput. 18, 1857–1869 (2019)
C. Lindberg, A.G.I. Amat, H. Wymeersch, Compressed sensing in wireless sensor networks without explicit position information. IEEE Trans Signal Inform Process Networks 3, 404–415 (2017)
B. Amin, B. Mansoor, S.J. Nawaz, S.K. Sharma, M.N. Patwary, Compressed sensing of sparse multipath MIMO channels with superimposed training sequence. Wirel Pers Commun. 94, 3303–3325 (2017)
E. Shih, S.-H. Cho, N. Ickes, R. Min, A. Sinha, A. Wang, A. Chandrakasan, Physical layer driven protocol and algorithm design for energy-efficient wireless sensor networks, in Proceedings of the 7th Annual International Conference on Mobile Computing and Networking (MobiCom '01), Rome, Italy (2001), pp. 272–287
V.K. Singh, G. Sharma, M. Kumar, Compressed sensing based acoustic event detection in protected area networks with wireless multimedia sensors. Multimed. Tools Appl. 76, 18531–18555 (Sep 2017)
H. Cheng, D. Feng, X. Shi, et al., Data quality analysis and cleaning strategy for wireless sensor networks. EURASIP J. Wireless Commun. Netw. 2018(1), 61, pp. 21–32 (2018)
H. Cheng, Z. Xie, Y. Shi, N. Xiong, Multi-step data prediction in wireless sensor networks based on one-dimensional CNN and bidirectional LSTM. IEEE Access 7, 117883–117896 (2017)
W. Guo, W. Zhu, Z. Yu, J. Wang, B. Guo, A survey of task allocation: Contrastive perspectives from wireless sensor networks and Mobile Crowdsensing. IEEE Access 7, 78406–78420 (2019)
S. Li, L. Da Xu, X. Wang, Compressed sensing signal and data acquisition in wireless sensor networks and internet of things. IEEE Trans Indust Inform 9(4), 2177–2186 (2012)
D.L. Donoho, A. Maleki, A. Montanari, The noise-sensitivity phase transition in compressed sensing. IEEE Trans. Inf. Theory 57(10), 6920–6941 (2011)
M. Leinonen, M. Codreanu, M. Juntti, Sequential compressed sensing with progressive signal reconstruction in wireless sensor networks. IEEE Trans. Wireless Commun. 14, 1622–1635 (2015)
S. Chen, X. Peng, H. Xiong, S. Wu, Intrusion detection based on compressed sensing. ICIC Express Letters 7(10), 3169–3176 (2013)
M. Leinonen, M. Codreanu, M. Juntti, Distributed distortion-rate optimized compressed sensing in wireless sensor networks. IEEE Trans. Commun. 66, 1609–1623 (2018)
V.K. Singh, V.K. Singh, M. Kumar, In-network data processing based on compressed sensing in WSN: A survey. Wirel. Pers. Commun. 96, 2087–2124 (2017)
D.P. Qiao, G.K.H. Pang, A modified differential evolution with heuristic algorithm for nonconvex optimization on sensor network localization. IEEE Trans. Veh. Technol. 65, 1676–1689 (2016)
J.A. Jahanshahi, H. Danyali, M.S. Helfroush, A modified compressed sensing-based recovery algorithm for wireless sensor networks. Radioengineering 28, 610–617 (Sep 2019)
J.Y. Yang, X.M. Yang, X.C. Ye, C.P. Hou, Reconstruction of structurally-incomplete matrices with reweighted low-rank and sparsity priors. IEEE Trans. Image Process. 26, 1158–1172 (2017)
K. Dillon, Y. Fainman, and Y. P. Wang, "Computational estimation of resolution in reconstruction techniques utilizing sparsity, total variation, and nonnegativity," Journal of Electronic Imaging, vol. 25, 2016.
E.J. Candes, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)
G.M. Cao, P. Jung, S. Stanczak, F.Q. Yu, Data aggregation and recovery in wireless sensor networks using compressed sensing. Int J Sensor Networks 22, 209–219 (2016)
C. Lindberg, A.G.I. Amat, H. Wymeersch, Compressed sensing in wireless sensor networks without explicit position information. IEEE Trans Signal Inform Process Networks 3, 404–415 (Jun 2017)
N. Ailon, B. Chazelle, Approximate nearest neighbors and the fast Johnson–Lindenstrauss transform, in Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, Seattle, WA, USA (2006), pp. 557–563
M. Ledoux, The Concentration of Measure Phenomenon (Am Math Soc, American, 2001)
E.J. Candes, Y. Plan, et al., Proc IEEE 98(6), 925–936 (2010)
C.C. Gong, L. Zeng, Adaptive iterative reconstruction based on relative total variation for low-intensity computed tomography. Signal Process. 165, 149–162 (2019)
T. Blumensath, M. Yaghoobi, M.E. Davies, Iterative hard thresholding and l0 regularisation, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), Honolulu, Hawaii, USA (2007), pp. 877–880
C. Chen, L. He, H.S. Li, J.Z. Huang, Fast iteratively reweighted least squares algorithms for analysis-based sparse reconstruction. Med. Image Anal. 49, 141–152 (2018)
C.T. Tony, L. Wang, Orthogonal matching pursuit for sparse signal recovery with noise. IEEE Trans. Inf. Theory 57(7), 4680–4688 (2011)
V. Saligrama, Manqi Zhao. Thresholded basis pursuit: Lp algorithm for order-wise optimal support recovery for sparse and approximately sparse signals from noisy random measurements. IEEE Trans. Inf. Theory 57(3), 1567–1586 (2011)
D. Wei, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55(5), 2230–2249 (2009)
M.A. Davenport, D. Needell, M.B. Wakin, Signal space CoSaMP for sparse recovery with redundant dictionaries. IEEE Trans. Inf. Theory 59, 6820–6829 (2013)
C.T. Tony, T. Jiang, Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices. Ann. Stat. 39(3), 1496–1525 (2011)
A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. Siam J Imaging Sci 2(1), 183–202 (2009)
This work was supported by the National Natural Science Foundation of China (41271292), China Postdoctoral Science Foundation (2015 M580765), Chongqing Postdoctoral Science Foundation (Xm2016041), the Fundamental Research Funds for the Central Universities, China (XDJK2018B020),and Chongqing City science and technology education research projects (KJQN201801901)
College of Computer and Information Science, Southwest University, Chongqing, China
Shiyu Zhu, Shanxiong Chen, Xihua Peng, Hailing Xiong & Sheng Wu
Chongqing Institute of Engineering, Chongqing, China
Shiyu Zhu
Shanxiong Chen
Xihua Peng
Hailing Xiong
Sheng Wu
SYZ contributed to the investigation, methodology, draft manuscript writing, manuscript reviewing, and editing. SXC contributed to the overall design and programming. HWL contributed to the design of models and algorithms, reviewing and editing the manuscript, and funding acquisition. XHP contributed to the experimental environment construction. HLX contributed to reviewing and editing the manuscript. SW contributed to result analysis and reviewing and editing the manuscript. All author(s) read and approved the final manuscript.
Correspondence to Shanxiong Chen.
Zhu, S., Chen, S., Peng, X. et al. A signal reconstruction method of wireless sensor network based on compressed sensing. J Wireless Com Network 2020, 106 (2020). https://doi.org/10.1186/s13638-020-01724-2
Sparse reconstruction
Sub-Nyquist
Limit (music)
The first 16 harmonics, with frequencies and log frequencies.
In music theory, limit or harmonic limit is a way of characterizing the harmony found in a piece or genre of music, or the harmonies that can be made using a particular scale. The term limit was introduced by Harry Partch,[1] who used it to give an upper bound on the complexity of harmony; hence the name. "Roughly speaking, the larger the limit number, the more harmonically complex and potentially dissonant will the intervals of the tuning be perceived."[2] "A scale belonging to a particular prime limit has a distinctive hue that makes it aurally distinguishable from scales with other limits."[3]
The harmonic series and the evolution of music
Overtone series, partials 1–5 numbered.
Harry Partch, Ivor Darreg, and Ralph David Hill are among the many microtonalists to suggest that music has been slowly evolving to employ higher and higher harmonics in its constructs (see emancipation of the dissonance). In medieval music, only chords made of octaves and perfect fifths (involving relationships among the first 3 harmonics) were considered consonant. In the West, triadic harmony arose (Contenance Angloise) around the time of the Renaissance, and triads quickly became the fundamental building blocks of Western music. The major and minor thirds of these triads invoke relationships among the first 5 harmonics.
Around the turn of the 20th century, tetrads debuted as fundamental building blocks in African-American music. In conventional music theory pedagogy, these seventh chords are usually explained as chains of major and minor thirds. However, they can also be explained as coming directly from harmonics greater than 5. For example, the dominant 7th chord in 12-ET approximates 4:5:6:7, while the major 7th chord approximates 8:10:12:15.
Odd-limit and prime-limit
In just intonation, intervals between pitches are drawn from the rational numbers. Since Partch, two distinct formulations of the limit concept have emerged: odd limit (generally preferred for the analysis of simultaneous intervals and chords) and prime limit (generally preferred for the analysis of scales). Odd limit and prime limit n do not include the same intervals even when n is an odd prime.
Odd limit
For a positive odd number n, the n-odd-limit contains all rational numbers such that the largest odd number that divides either the numerator or denominator is not greater than n.
In Genesis of a Music, Harry Partch considered just intonation rationals according to the size of their numerators and denominators, modulo octaves.[4] Since octaves correspond to factors of 2, the complexity of any interval may be measured simply by the largest odd factor in its ratio. Partch's theoretical prediction of the sensory dissonance of intervals (his "One-Footed Bride") are very similar to those of theorists including Hermann von Helmholtz, William Sethares, and Paul Erlich.[5]
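A small Python sketch of this measure (the function names and the reduction step are illustrative choices, not part of the article): the odd limit of a ratio is the largest odd factor of its reduced numerator or denominator, so octave factors of 2 are ignored.

```python
from fractions import Fraction

def largest_odd_factor(n: int) -> int:
    """Strip factors of 2 to leave the largest odd divisor."""
    while n % 2 == 0:
        n //= 2
    return n

def odd_limit(ratio: Fraction) -> int:
    """Odd limit of an interval ratio: the larger of the largest odd factors
    of the reduced numerator and denominator."""
    r = Fraction(ratio)
    return max(largest_odd_factor(r.numerator), largest_odd_factor(r.denominator))

print(odd_limit(Fraction(3, 2)))     # 3   (perfect fifth)
print(odd_limit(Fraction(9, 8)))     # 9   (major second)
print(odd_limit(Fraction(81, 64)))   # 81  (ditone)
```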
See the examples below.
Identity
An identity is each of the odd numbers below and including the (odd) limit in a tuning. For example, the identities included in 5-limit tuning are 1, 3, and 5. Each odd number represents a new pitch in the harmonic series and may thus be considered an identity:
C C G C E G ...
2 4 6 8 10 12 ...
"The number 9, though not a prime, is nevertheless an identity in music, simply because it is an odd number".[6] Partch defines "identity" as "one of the correlatives, 'major' or 'minor', in a tonality; one of the odd-number ingredients, one or several or all of which act as a pole of tonality".[7]
Odentity and udentity are, "short for Over-Identity," and, "Under-Identity," respectively.[8] "An udentity is an identity of an utonality".[9]
Prime limit
First 32 harmonics, with the harmonics unique to each limit sharing the same color.
For a prime number n, the n-prime-limit contains all rational numbers that can be factored using primes no greater than n. In other words, it is the set of rationals with numerator and denominator both n-smooth.
p-Limit Tuning. Given a prime number p, the subset of $ \mathbb{Q}^{+} $ consisting of those rational numbers x whose prime factorization has the form
$$ x = p_1^{\alpha_1} p_2^{\alpha_2} \dots p_r^{\alpha_r}, \quad p_1, \dots, p_r \leq p $$
forms a subgroup of $ ( \mathbb{Q}^{+}, \cdot ) $. ... We say that a scale or system of tuning uses p-limit tuning if all interval ratios between pitches lie in this subgroup.[10]
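Analogously, a sketch of a prime-limit computation (helper names are illustrative): a ratio lies within the p-prime-limit exactly when its reduced numerator and denominator are both p-smooth, so the prime limit is the largest prime factor appearing in either. The two printed values agree with the examples table below.

```python
from fractions import Fraction

def largest_prime_factor(n: int) -> int:
    """Largest prime factor of n (returns 1 for n == 1)."""
    largest, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            largest, n = d, n // d
        d += 1
    return max(largest, n) if n > 1 else largest

def prime_limit(ratio: Fraction) -> int:
    """Largest prime factor occurring in the reduced numerator or denominator."""
    r = Fraction(ratio)
    return max(largest_prime_factor(r.numerator), largest_prime_factor(r.denominator))

print(prime_limit(Fraction(81, 64)))   # 3  (Pythagorean)
print(prime_limit(Fraction(7, 5)))     # 7  (lesser septimal tritone)
```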
In the late 1970s, a new genre of music began to take shape on the West coast of the United States, known as the American gamelan school. Inspired by Indonesian gamelan, musicians in California and elsewhere began to build their own gamelan instruments, often tuning them in just intonation. The central figure of this movement was the American composer Lou Harrison. Unlike Partch, who often took scales directly from the harmonic series, the composers of the American Gamelan movement tended to draw scales from the just intonation lattice, in a manner like that used to construct Fokker periodicity blocks. Such scales often contain ratios with very large numbers, that are nevertheless related by simple intervals to other notes in the scale.
Examples
ratio | interval | odd-limit | prime-limit
3/2 | perfect fifth | 3 | 3
4/3 | perfect fourth | 3 | 3
5/4 | major third | 5 | 5
5/2 | major tenth | 5 | 5
5/3 | major sixth | 5 | 5
7/5 | lesser septimal tritone | 7 | 7
10/7 | greater septimal tritone | 7 | 7
9/8 | major second | 9 | 3
27/16 | Pythagorean major sixth | 27 | 3
81/64 | ditone | 81 | 3
243/128 | Pythagorean major seventh | 243 | 3
Beyond just intonation
In musical temperament, the simple ratios of just intonation are mapped to nearby irrational approximations. This operation, if successful, does not change the relative harmonic complexity of the different intervals, but it can complicate the use of the harmonic limit concept. Since some chords (such as the diminished seventh chord in 12-ET) have several valid tunings in just intonation, their harmonic limit may be ambiguous.
3-limit (Pythagorean) tuning
Five-limit tuning
7-limit tuning
Numerary nexus
Otonality and Utonality
Tonality diamond
Tonality flux
↑ Fox, Christopher (2003). Microtones and Microtonalities, p.13. Taylor & Francis.
↑ Bart Hopkin, Musical Instrument Design: Practical Information for Instrument Design (Tucson, Ariz.: See Sharp Press. 1996), p. 160. ISBN 1-884365-08-6.
↑ Havryliv, M. and Narushima, T. (2006). "Metris: A Game Environment for Music Performance", Computer Music Modeling and Retrieval: Third International Symposium, CMMR 2005, Pisa, Italy, September 26-28, 2005, Revised Papers, p.105n3. Richard Kronland-Martinet, Thierry Voinier, Sølvi Ystad; eds. Springer Science & Business Media. ISBN 9783540340270.
↑ Harry Partch, Genesis of a Music: An Account of a Creative Work, Its Roots, and Its Fulfillments, second edition, enlarged (New York: Da Capo Press, 1974), p. 73. ISBN 0-306-71597-X; ISBN 0-306-80106-X (pbk reprint, 1979).
↑ Paul Erlich, "The Forms of Tonality: A Preview". Some Music Theory from Paul Erlich (2001), pp. 1–3 (Accessed 29 May 2010).
↑ Partch, Harry (1979). Genesis Of A Music: An Account Of A Creative Work, Its Roots, And Its Fulfillments, p.93. ISBN 0-306-80106-X.
↑ Partch (1979), p.71.
↑ Dunn, David, ed. (2000). Harry Partch: An Anthology of Critical Perspectives, p.28. ISBN 9789057550652.
↑ Template:Cite web
↑ David Wright, Mathematics and Music. Mathematical World 28. (Providence, R.I.: American Mathematical Society, 2009), p. 137. ISBN 0-8218-4873-9.
"Limits: Consonance Theory Explained", Glen Peterson's Musical Instruments and Tuning Systems.
"Harmonic Limit", Xenharmonic.
Retrieved from "https://en.formulasearchengine.com/index.php?title=Limit_(music)&oldid=227739"
Just tuning and intervals
About formulasearchengine | CommonCrawl |
Coding and decoding
The process of representing information in a definite standard form and the inverse process of recovering the information in terms of such a representation of it. In the mathematical literature an encoding (coding) is a mapping of an arbitrary set $ A $ into the set of finite sequences (words) over some alphabet $ B $, while the inverse mapping is called a decoding. Examples of an encoding are: the representation of the natural numbers in the $ r $-ary number systems, where each number $ N = 1, 2 \dots $ is made to correspond to the word $ b_{1} \dots b_{l} $ over the alphabet $ B_{r} = \{ 0 \dots r - 1 \} $ for which $ b_{1} \neq 0 $ and $ b_{1} r^{l-1} + \dots + b_{l} = N $; the conversion of English texts by means of a telegraphic code into sequences consisting of the transmission of pulses and pauses of various lengths; and the mapping applied in writing down the digits of a (Russian) postal index (see Fig. a and Fig. b). In this latter case, there corresponds to each decimal digit a word over the alphabet $ B_{2} = \{ 0 , 1 \} $ of length 9 in which numbers referring to a line that is to be used are marked by the symbol $ 1 $. (For example, the word $ 110010011 $ corresponds to the digit $ 5 $.)
Figure: c022890a
Figure: c022890b
The investigation of various properties of coding and decoding and the construction of encodings that are effective in some sense and possess specified properties constitute the subject matter of coding theory. Usually, a criterion for the effectiveness of an encoding is in some way or other tied up with the minimization of the length of the code words (images of the elements of the set $ A $), and the specified properties of the coding relate to the guarantee of a given level of noise immunity, to be understood in some sense or other. In particular, noise immunity means the possibility of unique decoding in the absence of, or at a tolerable level of, distortion of the code words. Besides noise immunity, a number of additional requirements may be imposed on the code. For example, in the choice of an encoding for the digits of a postal code, an agreement with the ordinary way of writing digits is necessary. As additional requirements, restrictions are often applied relating to the allowed complexity of the scheme effecting the coding and decoding. The problems in coding theory were in the main created under the influence of the theory of information transmission as developed by C. Shannon [1]. A source of new problems in coding theory is provided by the creation and perfection of automated systems of gathering, storage, transmission, and processing of information. The methods of solving problems in coding theory are mainly combinatorial, probability-theoretic and algebraic.
An arbitrary coding $ f $ of a set (alphabet) $ A $ by words over an alphabet $ B $ can be extended to the set $ A ^ {*} $ of all words over $ A $( messages) as follows:
$$ f ( a _ {1} \dots a _ {k} ) = f ( a _ {1} ) \dots f ( a _ {k} ) , $$
where $ a _ {i} \in A $, $ i = 1 \dots k $. Such a mapping $ f : A ^ {*} \rightarrow B ^ {*} $ is called a letter-by-letter encoding of the messages. A more general class of encodings of messages is found by automated encoding realized by initial asynchronous automata (cf. Automaton), which deliver at each instant of time some (possibly empty) word over the alphabet $ B $. The importance of this generalization consists in the fact that the automaton realizes, in different states, different encodings of the letters of the alphabet of the messages. A letter-by-letter encoding is an automaton encoding realized by a single-state automaton. One of the branches of coding theory is the study of the general properties of a coding and the construction of algorithms for recognizing these properties (see Coding, alphabetical). In particular, necessary and sufficient conditions have been found for automated and letter-by-letter encodings in order that 1) the decoding be single-valued; 2) there exists a decoding automaton, that is, an automaton effecting the decoding with a certain bounded delay; or 3) there exists a self-adjusting decoding automaton (making it possible to eliminate in a limited amount of time the effect of a mistake in the input sequence or in the functioning of the automaton itself).
The majority of problems in coding theory reduces to the study of finite or countable sets of words over an alphabet $ B _ {r} $. Such sets are called codes. In particular, there corresponds to each single-valued encoding $ f : B _ {m} \rightarrow B _ {r} ^ {*} $( and letter-by-letter encoding $ f : B _ {m} ^ {*} \rightarrow B _ {r} ^ {*} $) a code $ \{ f ( 0) \dots f ( m - 1 ) \} \subset B _ {r} ^ {*} $. One of the basic assertions in coding theory is that the condition of injectivity of a letter-by-letter encoding $ f : B _ {m} ^ {*} \rightarrow B _ {r} ^ {*} $ imposes the following restrictions on the lengths $ l _ {i} = l _ {i} ( f ) $ of the code words $ f ( i) $:
$$ \tag{1 } \sum_{i=0}^{m-1} r^{-l_{i}} \leq 1 . $$
The converse statement also holds: If $ ( l_{0} \dots l_{m-1} ) $ is a set of natural numbers satisfying (1), then there exists a one-to-one letter-by-letter encoding $ f : B_{m}^{*} \rightarrow B_{r}^{*} $ such that the word $ f ( i) $ has length $ l_{i} $. Furthermore, if the numbers $ l_{i} $ are increasingly ordered, then one can take for $ f ( i) $ the first $ l_{i} $ symbols after the decimal point of the expansion of $ \sum_{j=0}^{i-1} r^{-l_{j}} $ in an $ r $-ary fraction (Shannon's method).
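A small sketch of Shannon's construction under the stated assumptions (the lengths are already sorted in increasing order and satisfy the Kraft-type inequality (1)); the helper names and the example lengths are illustrative.

```python
def shannon_code(lengths, r=2):
    """Build code words by Shannon's method: the i-th word consists of the first
    l_i digits of the r-ary expansion of the cumulative sum sum_{j<i} r**(-l_j).
    Assumes `lengths` is sorted increasingly and satisfies inequality (1)."""
    assert sum(r ** (-l) for l in lengths) <= 1 + 1e-12, "inequality (1) violated"
    words, cumulative = [], 0.0
    for l in lengths:
        digits, frac = [], cumulative
        for _ in range(l):                 # first l digits of the r-ary expansion
            frac *= r
            digit = int(frac)
            digits.append(str(digit))
            frac -= digit
        words.append("".join(digits))
        cumulative += r ** (-l)
    return words

print(shannon_code([1, 2, 3, 3]))          # ['0', '10', '110', '111'], a prefix code
```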
The most definitive results in coding theory relate to the construction of effective one-to-one encodings. The constructions described here are used in practice for the compression of information and access of information from the memory. The concept of effectiveness of an encoding depends on the choice of the cost criterion. In the definition of the cost $ L ( f ) $ of a one-to-one letter-by-letter encoding $ f : B_{m}^{*} \rightarrow B_{r}^{*} $ it is assumed that there corresponds to each number $ i \in B_{m} $ a positive number $ p_{i} $ and that $ P = \{ p_{0} \dots p_{m-1} \} $. The following versions of a definition of the cost $ L ( f ) $ have been investigated:
1) $ L_{\textrm{mean}} ( f ) = \sum_{i=0}^{m-1} p_{i} l_{i} $,
2) $ L^{(t)} ( f ) = ( \mathop{\rm log}_{r} \sum_{i=0}^{m-1} p_{i} r^{t l_{i}} ) / t $, $ 0 < t < \infty $,
3) $ L^\prime ( f ) = \max_{0 \leq i \leq m - 1} ( l_{i} - p_{i} ) $, where it is supposed that in the first two cases the $ p_{i} $ are the probabilities with which some Bernoullian source generates the corresponding letters of the alphabet $ B_{m} $ ( $ \sum_{i=0}^{m-1} p_{i} = 1 $), while in the third case, the $ p_{i} $ are desirable lengths of code words. In the first definition, the cost is equal to the average length of a code word; in the second definition, as the parameter $ t $ increases, the longer code words have a greater influence on the cost ( $ L^{(t)} ( f ) \rightarrow L_{\textrm{mean}} ( f ) $ as $ t \rightarrow 0 $ and $ L^{(t)} ( f ) \rightarrow \max_{0 \leq i \leq m - 1} l_{i} $ as $ t \rightarrow \infty $); in the third definition, the cost is equal to the maximum excess of the length $ l_{i} $ of the code word over the desired length $ p_{i} $. The problem of constructing a one-to-one letter-by-letter encoding $ f : B_{m}^{*} \rightarrow B_{r}^{*} $ minimizing the cost $ L ( f ) $ is equivalent to that of minimizing the function $ L ( f ) $ on the sets $ ( l_{0} \dots l_{m-1} ) $ of natural numbers satisfying the condition (1). The solution of this problem is known for each of the above definitions of a cost.
Suppose that the minimum of the quantity $ L ( f ) $ on the sets $ ( l_{0} \dots l_{m-1} ) $ of arbitrary (not necessarily natural) numbers satisfying condition (1) is equal to $ L_{r} ( P) $ and is attained on the set $ ( l_{0} ( P) \dots l_{m-1} ( P) ) $. The non-negative quantity $ I ( f ) = L ( f ) - L_{r} ( P) $ is called the redundancy, and the quantity $ I ( f ) / L ( f ) $ is called the relative redundancy of the encoding $ f $. The redundancy of a one-to-one encoding $ f : B_{m}^{*} \rightarrow B_{r}^{*} $ constructed by the method of Shannon for lengths $ l_{i} $, $ l_{i} ( P) \leq l_{i} < l_{i} ( P) + 1 $, satisfies the inequality $ I ( f ) < 1 $. For the first, most usual, definition of the cost as the average number of code symbols necessary for one letter of the message generated by the source, the quantity $ L_{r} ( P) $ is equal to the Shannon entropy
$$ H_{r} ( P) = - \sum_{i=0}^{m-1} p_{i} \mathop{\rm log}_{r} p_{i} $$
of the source calculated in the base $ r $, while $ l _ {i} ( P) = - \mathop{\rm log} _ {r} p _ {i} $. The bound of the redundancy, $ I ( f ) = L _ {\textrm{ mean } } ( f ) - H _ {r} ( P) < 1 $, can be improved by using so-called coding blocks of length $ k $, in which messages of length $ k $( rather than separate letters) are encoded by the Shannon method. The redundancy of such an encoding does not exceed $ 1 / k $. This same method is used for the effective encoding of related sources. In connection with the fact that the determination of the lengths $ l _ {i} $ in coding by the Shannon method is based on a knowledge of the statistics of the source, methods have been developed, for certain classes of sources, for the construction of a universal encoding that guarantees a definite upper bound on the redundancy for any source in this class. In particular, a coding by blocks of length $ k $ has been constructed with a redundancy that is, for any Bernoullian source, asymptotically at most $ ( ( m - 1 ) / 2 k ) \mathop{\rm log} _ {r} k $( for fixed $ m , r $ as $ k \rightarrow \infty $), where this asymptotic limit cannot be improved upon.
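A brief numerical illustration of these quantities (the source distribution and the radix are hypothetical): the entropy $ H_{r} ( P) $ lower-bounds the mean code-word length, and with Shannon lengths $ l_{i} = \lceil - \mathop{\rm log}_{r} p_{i} \rceil $ the redundancy stays below 1.

```python
import math

def entropy(probs, r=2):
    """Shannon entropy H_r(P) computed in base r."""
    return -sum(p * math.log(p, r) for p in probs if p > 0)

P = [0.4, 0.3, 0.2, 0.1]                           # hypothetical source distribution
lengths = [math.ceil(-math.log(p, 2)) for p in P]  # Shannon lengths ceil(-log_r p_i)
mean_length = sum(p * l for p, l in zip(P, lengths))
print(entropy(P), mean_length, mean_length - entropy(P))  # redundancy is below 1
```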
Along with the problems of the effective compression of information, problems of the estimation of the redundancy of concrete types of information are also considered. For example, the relative redundancy of certain natural languages (in particular, English and Russian) has been estimated under the hypothesis that their texts are generated by Markov sources with a large number of states.
In the investigation of problems on the construction of effective noise-immune encodings, one usually considers encodings $ f : B_{m} \rightarrow B_{r}^{*} $ to which correspond codes $ \{ f ( 0) \dots f ( m - 1 ) \} $ belonging to the set $ B_{r}^{n} $ of words of length $ n $ over the alphabet $ B_{r} $. It is understood here that the letters of the alphabet of the messages $ B_{m} $ are equi-probable. The effectiveness of such an encoding is estimated by the redundancy $ I ( f ) = n - \mathop{\rm log}_{r} m $ or by the transmission rate $ R ( f ) = ( \mathop{\rm log}_{r} m ) / n $. In the definition of the noise-immunity of an encoding, the concept of an error is formalized and a model for the generation of errors is brought into consideration. An error of substitution type (or simply an error) is a transformation of words consisting of the substitution of a symbol in a word by another symbol in the alphabet $ B_{r} $. For example, the production of a superfluous line in writing out the figure of a Russian postal index leads to the replacement in the coded word of the symbol 0 by the symbol 1, while the omission of a necessary line leads to the replacement of 1 by 0. The possibility of detecting and correcting errors is based on the fact that, for an encoding $ f $ with non-zero redundancy, the decoding $ f^{-1} $ can be extended in arbitrary fashion onto the $ r^{n} - m $ words in $ B_{r}^{n} $ which are not code words. In particular, if the set $ B_{r}^{n} $ is partitioned into $ m $ disjoint sets $ D_{0} \dots D_{m-1} $ such that $ f ( i) \in D_{i} $ and the decoding $ f^{-1} $ is extended so that $ f^{-1} ( D_{i} ) = i $, then in decoding, all errors that translate a code word $ f ( i) $ into $ D_{i} $, $ i = 0 \dots m - 1 $, will be corrected. Similar possibilities arise also in the case of other types of error, such as the erasure of a symbol (substitution by a symbol of a different alphabet), the alteration of the numerical value of a code word by $ \pm b r^{i} $, $ b = 1 \dots r - 1 $, $ i = 0 , 1 ,\dots $ (an arithmetic error), the deletion or insertion of a symbol, etc.
In the theory of information transmission (see Information, transmission of), probabilistic models of error generation are considered; these are called channels. The simplest memoryless channel is defined by the probabilities $ p _ {ij} $ of substitution of a symbol $ i $ by a symbol $ j $. One defines for this channel the quantity (channel capacity)
$$ C = \max \ \sum_{i=0}^{r-1} \sum_{j=0}^{r-1} q_{i} p_{ij} \ \mathop{\rm log}_{r} \left ( \frac{p_{ij}}{\sum_{h=0}^{m-1} q_{h} p_{hj}} \right ) , $$
where the maximum is taken over all sets $ ( q_{0} \dots q_{m-1} ) $ such that $ q_{i} \geq 0 $ and $ \sum_{i=0}^{m-1} q_{i} = 1 $. The effectiveness of an encoding $ f $ is characterized by the transmission rate $ R ( f ) $, while the noise-immunity is characterized by the mean probability of an error in the decoding $ P ( f ) $ (under the optimal partitioning of $ B_{r}^{n} $ into subsets $ D_{i} $). The main result in the theory of information transmission (Shannon's theorem) is that the channel capacity $ C $ is the least upper bound of the numbers $ R $ such that for any $ \epsilon > 0 $ and for all sufficiently large $ n $ there exists an encoding $ f : B_{m} \rightarrow B_{r}^{n} $ for which $ R ( f ) \geq R $ and $ P ( f ) < \epsilon $.
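As a hedged numerical illustration of the capacity expression (the channel matrix and input distribution below are made-up toy values), the sketch computes only the inner mutual-information sum; the maximization over the input distribution is left to a grid search or to standard algorithms such as Blahut–Arimoto.

```python
import numpy as np

def mutual_information(q, P, r=2):
    """Inner sum of the capacity formula:
    sum_i sum_j q_i p_ij log_r( p_ij / sum_h q_h p_hj )."""
    q, P = np.asarray(q, float), np.asarray(P, float)
    out = q @ P                                   # output distribution sum_h q_h p_hj
    total = 0.0
    for i in range(P.shape[0]):
        for j in range(P.shape[1]):
            if P[i, j] > 0:
                total += q[i] * P[i, j] * np.log(P[i, j] / out[j]) / np.log(r)
    return total

P_channel = [[0.9, 0.1], [0.1, 0.9]]              # toy binary symmetric channel
print(mutual_information([0.5, 0.5], P_channel))  # ~0.531, which equals C for this channel
```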
Another model for the generation of errors (see Error-correcting code; Code with correction of arithmetical errors; Code with correction of deletions and insertions) is characterized by the property that in each word of length $ n $ there occur not more than a prescribed number $ t $ of errors. Let $ E _ {i} ( t) $ be the set of words obtainable from $ f ( i) $ as a result of $ t $ or fewer errors. If for the code
$$ \{ f ( 0) \dots f ( m - 1 ) \} \subset B _ {r} ^ {n} $$
the sets $ E _ {i} ( t) $, $ 0 \dots m - 1 $, are pairwise disjoint, then in a decoding such that $ E _ {i} ( t) \subseteq D _ {i} $, all errors admitting of a model of the above type for the generation of errors will be corrected, and such a code is called a $ t $- error-correcting code. For many types of errors (for example, substitutions, arithmetic errors, insertions and deletions) the function $ d ( x , y ) $ equal to the minimal number of errors of given type taking the word $ x \in B _ {r} ^ {n} $ into the word $ y \in B _ {r} ^ {n} $ is a metric, and $ E _ {i} ( t) $ is the metric ball of radius $ t $. Therefore the problem of constructing the most effective code (that is, the code with maximum number of words $ m $) in $ B _ {r} ^ {n} $ with correction of $ t $ errors is equivalent to that of the densest packing of the metric space $ B _ {r} ^ {n} $ by balls of radius $ t $. The code for the figures of a Russian postal index is not a one-error-correcting code, because $ d ( f ( 0) , f ( 8)) = 1 $ and $ d ( f ( 5) , f ( 8) ) = 2 $, while all other distances between code words are at least 3.
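A short sketch (with a made-up toy code) of checking this condition for the substitution metric: a code corrects $ t $ substitution errors exactly when every pair of code words is at Hamming distance at least $ 2 t + 1 $.

```python
from itertools import combinations

def hamming(x, y):
    """Substitution-type distance: number of positions where the words differ."""
    return sum(a != b for a, b in zip(x, y))

def correctable_errors(code):
    """Largest t such that the code corrects t substitution errors: (d_min - 1) // 2."""
    d_min = min(hamming(x, y) for x, y in combinations(code, 2))
    return (d_min - 1) // 2

toy_code = ["00000", "01011", "10101", "11110"]  # hypothetical binary code, d_min = 3
print(correctable_errors(toy_code))              # 1
```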
The problem of studying the minimal redundancy $ I _ {r} ( n , t ) $ of a code in $ B _ {r} ^ {n} $ for correction of $ t $ errors of substitution type is divided into two basic cases. In the first case, when $ t $ is fixed as $ n \rightarrow \infty $, the following asymptotic relation holds:
$$ \tag{2 } I _ {2} ( n , t ) \sim t \mathop{\rm log} _ {2} n , $$
where the "power" bound is attained, based on a count of the number of words of length $ n $ in a ball of radius $ t $. For $ t \geq 2 $ the asymptotic behaviour of $ I _ {r} ( n , t ) $ when $ r \geq 2 $ except $ r = 3 , 4 $ for $ t = 2 $, and also when $ r = 2 $ for many other types of errors (for example, arithmetic errors, deletions and insertions), is not known (1987). In the second case, when $ t = [ p n ] $, where $ p $ is some fixed number, $ 0 < p < ( r - 1 ) / 2 r $ and $ n \rightarrow \infty $, the "power" bound
$$ I _ {r} ( n , [ p n ] ) \stackrel{>}{\sim} n T _ {r} ( p) , $$
where $ T _ {r} ( p) = - p \mathop{\rm log} _ {r} ( p / ( r - 1 ) ) - ( 1 - p ) \mathop{\rm log} _ {r} ( 1 - p ) $, is substantially improved. It is conjectured that the upper bound
$$ \tag{3 } I _ {r} ( n , [ p n ] ) \leq n T _ {r} ( 2 p ) , $$
obtained by the method of random sampling of codes, is asymptotically exact for $ r = 2 $, that is, $ I _ {2} ( n , [ p n ] ) \sim n T _ {2} ( 2 p ) $. The proof or refutation of this conjecture is one of the central problems in coding theory.
The majority of the constructions of noise-immune codes are effective when the length $ n $ of the code is sufficiently large. In this connection, questions relating to the complexity of systems that realize the coding and decoding (encoders and decoders) acquire special significance. Restrictions on the admissible type of decoder or on its complexity may lead to an increase in the redundancy necessary to guarantee a prescribed noise-immunity. For example, the minimal redundancy of a code in $ B _ {2} ^ {n} $ for which there is a decoder consisting of a shift register and a single majorizing element and correcting one error has order $ \sqrt n $( compare with (2)). As mathematical models of an encoder and a decoder, diagrams of functional elements are usually considered, and by the complexity is meant the number of elements in the diagram. For the known classes of error-correcting codes, investigations have been carried out of the possible encoding and decoding algorithms and upper bounds for the complexity of the encoder and decoder have been obtained. Also, certain relationships have obtained among the transmission rate of the encoding, the noise-immunity of the encoding and the complexity of the decoder (see [5]).
Yet another line of the investigation in coding theory is connected with the fact that many results (e.g. Shannon's theorem and the upper bound (3)) are not "constructive", but are theorems on the existence of infinite sequences $ \{ K_{n} \} $ of codes $ K_{n} \subseteq B_{r}^{n} $. In this connection, efforts have been made to sharpen these results in order to prove them in a class of sequences $ \{ K_{n} \} $ of codes for which there is a Turing machine that recognizes the membership of an arbitrary word of length $ l $ in the set $ \cup_{n=1}^{\infty} K_{n} $ in a time having slow order of growth with respect to $ l $ (e.g. $ l \mathop{\rm log} l $).
Certain new constructions and methods for obtaining bounds, which have been developed within coding theory, have led to a substantial advance in questions that on the face of it seem very remote from the traditional problems in coding theory. One should mention here the use of a maximal code for the correction of one error in an asymptotically-optimal method for realizing functions of the algebra of logic by contact schemes (cf. Contact scheme); the fundamental improvement of the upper bound for the density of packing the $ n $- dimensional Euclidean space with identical balls; and the use of inequality (1) in estimating the complexity of realizing a class of functions of the algebra of logic by formulas. The ideas and results of coding theory find further developments in the synthesis of self-correcting schemes and reliable schemes of unreliable elements.
[1] C. Shannon, "A mathematical theory of communication" Bell Systems Techn. J. , 27 (1948) pp. 379–423; 623–656
[2] E. Berlekamp, "Algebraic coding theory" , McGraw-Hill (1968)
[3] E. Weldon jr., "Error-correcting codes" , M.I.T. (1972)
[4] , Discrete mathematics and mathematical problems in cybernetics , 1 , Moscow (1974) pp. Sect. 5 (In Russian)
[5] L.A. Bassalygo, V.V. Zyablov, M.S. Pinsker, "Problems of complexity in the theory of correcting codes" Problems of Information Transmission , 13 : 3 pp. 166–175 Problemy Peredachi Informatsii , 13 : 3 (1977) pp. 5–17
[6] V.M. Sidel'nikov, "New bounds for densest packings of spheres in $ n $-dimensional Euclidean space" Math. USSR-Sb. , 24 (1974) pp. 147–157 Mat. Sb. , 95 : 1 (1974) pp. 148–158
Two standard references on error-correcting codes and coding theory are [a1], [a2].
[a1] F.J. MacWilliams, N.J.A. Sloane, "The theory of error-correcting codes" , I-II , North-Holland (1977)
[a2] J.H. van Lint, "Introduction to coding theory" , Springer (1982)
Coding and decoding. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Coding_and_decoding&oldid=46509
This article was adapted from an original article by V.I. Levenshtein (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
January 2020, 25(1): 31-53. doi: 10.3934/dcdsb.2019171
Quasi-periodic solutions for a class of beam equation system
Yanling Shi 1,, and Junxiang Xu 2,
College of Mathematics and Physics, Yancheng Institute of Technology, Yancheng 224051, China
Department of Mathematics, Southeast University, Nanjing 211189, China
* Corresponding author: [email protected]
Received October 2018 Revised March 2019 Published July 2019
Fund Project: The first author is partially supported by NSFC Grant(11801492, 61877052), NSFJS Grant (BK 20170472) and NSF of Jiangsu Higher education Institute of China Grant(18KJB110030). The second author is supported by the NSFC Grant(11871146).
In this paper, we establish an abstract infinite dimensional KAM theorem. As an application, we use the theorem to study the higher dimensional beam equation system
$ \left\{ \begin{array}{lll} u_{1tt}+ \Delta^2 u_1 +\sigma u_1 +u_1u_2^2 & = & 0 \\ &&\\ u_{2tt}+ \Delta^2 u_2 +\mu u_2 +u_1^2 u_2 & = & 0 \end{array} \right. $
under periodic boundary conditions, where $ 0<\sigma \in [ \sigma_1,\sigma_2 ] $ and $ 0<\mu\in [ \mu_1,\mu_2 ] $ are real parameters. By establishing a block-diagonal normal form, we obtain the existence of a Whitney smooth family of small amplitude quasi-periodic solutions corresponding to finite dimensional invariant tori of an associated infinite dimensional dynamical system.
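For readers who want to see the system itself in action, the sketch below time-steps a small-amplitude version of the coupled beam system with a pseudo-spectral treatment of $\Delta^2$ under periodic boundary conditions, reduced to one space dimension for brevity. It is only an illustrative numerical experiment with assumed values of $\sigma$, $\mu$, the initial data, and the step sizes; it is unrelated to the KAM construction of the paper.

```python
import numpy as np

# u_tt + u_xxxx + sigma*u + u*v**2 = 0,  v_tt + v_xxxx + mu*v + u**2*v = 0
# on the 2*pi-periodic interval (1D reduction for brevity).
N = 64
x = np.linspace(0.0, 2.0*np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0/N)          # integer wavenumbers on a 2*pi domain
biharm = k**4                            # Fourier symbol of Delta^2 in 1D

sigma, mu = 1.0, 2.0                     # assumed parameter values
dt, steps = 1.0e-3, 5000

u, v = 0.01*np.cos(x), 0.01*np.sin(2.0*x)   # small-amplitude initial data
ut, vt = np.zeros(N), np.zeros(N)

def accel(u, v):
    """Accelerations (u_tt, v_tt) evaluated pseudo-spectrally."""
    u4 = np.fft.ifft(biharm*np.fft.fft(u)).real
    v4 = np.fft.ifft(biharm*np.fft.fft(v)).real
    return -(u4 + sigma*u + u*v**2), -(v4 + mu*v + u**2*v)

au, av = accel(u, v)
for _ in range(steps):                   # velocity-Verlet time stepping
    ut += 0.5*dt*au; vt += 0.5*dt*av
    u += dt*ut;      v += dt*vt
    au, av = accel(u, v)
    ut += 0.5*dt*au; vt += 0.5*dt*av

print("max|u| =", np.abs(u).max(), " max|v| =", np.abs(v).max())
```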
Keywords: Beam equation system, quasi-periodic solution, infinite dimensional KAM theory.
Mathematics Subject Classification: Primary: 37K55; Secondary: 35G30.
Citation: Yanling Shi, Junxiang Xu. Quasi-periodic solutions for a class of beam equation system. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1) : 31-53. doi: 10.3934/dcdsb.2019171
Journal of Marine Science and Application
Modeling of Roll-Heave-Pitch Motions of a Ram Wing Translating over Non-uniform Surface
Konstantin I. Matveev
Ground-effect vehicles flying close to water or ground often employ ram wings which generate aerodynamic lift primarily on their lower surfaces. The subject of this paper is the 3-DOF modeling of roll, heave, and pitch motions of such a wing in the presence of surface waves and other ground non-uniformities. The potential-flow extreme-ground-effect theory is applied for calculating unsteady pressure distribution under the wing which defines instantaneous lift force and moments. Dynamic simulations of a selected ram wing configuration are carried out in the presence of surface waves of various headings and wavelengths, as well as for transient flights over a ground obstacle. The largest amplitudes of the vehicle motions are observed in beam waves when the periods of the encounter are long. Nonlinear effects are more pronounced for pitch angles than for roll and heave. The present method can be adapted for modeling of air-supported lifting surfaces on fast marine vehicles.
Ram wing Ground effect Seakeeping Dynamics Modeling Potential flow theory
Article Highlights
• Ram wings flying close to water or ground surfaces can produce high aerodynamic lift.
• Unsteady forces on wings moving over surface waves are modeled with a potential-flow method.
• Dynamics of a ground-effect vehicle is studied at different wave headings and amplitudes.
High-speed marine craft can benefit from the application of aerodynamically supported wings or platforms. Examples include racing boats of hydroplane and tunnel-hull types, wing-in-ground vehicles, and fast amphibious platforms (Matveev and Kornev 2013). Their air-supported lifting elements often operate in strong ground effect, which usually enhances lift and reduces drag, whereas the upper surfaces of these wings either are weakly affected by the proximity to water or may not even contribute to the lift, e.g., if they are used as cargo platforms. The wings of this sort are usually referred to as ram wings (Gallington and Miller 1970).
The most remarkable of aerodynamically assisted marine craft are wing-in-ground (WIG) vehicles. Large, up to 500 tons in displacement, WIG craft were developed in Russia in the last century for military purposes, but they were later abandoned due to high cost and unclear fit into the naval strategy. Smaller, more economical WIG crafts were intermittently produced in several countries, and projects for large WIG transports are still actively considered. One of the most important concerns with these vehicles is their stability and dynamics in open sea conditions due to potentially dangerous high-speed flight close to water.
An extensive list of references on WIG craft is given in a review by Rozhdestvensky (2006). As concerns ram wings operating at low clearances to the ground (less than 0.1 of the wing chord), an important work of Windall and Barrows (1970) can be noted where it was first shown that the airflow under ram wings becomes primarily two-dimensional in a horizontal plane. Gallington and Miller (1970) developed a simplified theory, carried out validating experiments, and constructed experimental model prototypes of ram wings. Staunfenbiel (1987) analyzed the stability of WIG craft, emphasizing dependency of their aerodynamic coefficients on height. One of the first attempts to use viscous solvers of computational fluid dynamics to model three-dimensional WIG was described by Hirata and Kodama (1995). A number of extensions of the extreme-ground-effect (EGE) potential-flow theory for ram wings, including lift-augmenting mechanisms, compressibility effects, and stability, are detailed in the book by Rozhdestvensky (2000). Benedict et al. (2002) analyzed the WIG take-off regimes when the wing is in close proximity to water. Tuck (1984), Barber (2007), and Zong et al. (2012) calculated water surface deformations caused by wings steadily flying in ground effect. Matveev and Chaney (2013), and Liang et al. (2014) modeled airfoils heaving above water surfaces.
Steady and unsteady aerodynamics of ram wings can be effectively modeled with the help of the EGE theory, which assumes potential flow with dominant horizontal air velocities in the channel formed between the wing and the underlying surface (Windall and Barrows 1970; Rozhdestvensky 2000). The EGE theory has been previously validated against experimental data for ram wings with and without side plates (Rozhdestvensky 2000; Matveev 2013) and for power-augmented ram wings where air-based front propulsors produce high-speed airflow incident on the wing even in static conditions (Matveev 2008; Matveev and Soderlund 2008). Simplified modeling for heave-and-pitch motions of ram wings was developed by Matveev (2013). However, from the seaworthiness perspective, the roll dynamics is also of major importance in disturbed environments, such as water surface waves. The present study addresses modeling of small-amplitude 3-DOF motions of a ram wing with aerodynamic coupling between roll, heave, and pitch.
2 Mathematical Model
A ram wing flying close to the water surface is considered as shown in Fig. 1. The viscous effects are neglected. The distance from the wing's lower surface to the water is assumed to be much smaller than the wing chord. With additional assumptions of small slopes of the water waves and small attack angle of the wing, the horizontal (x and z) components of the airflow velocity under the wing become much greater than the vertical (y) component. This allows us to apply the two-dimensional extreme-ground-effect theory developed by Rozhdestvensky (2000). If compressibility effects are also neglected, then the mass conservation principle results in the following equation for the perturbed velocity potential φ in the channel under the wing,
$$ \frac{\partial }{\partial x}\left(h\frac{\partial \varphi }{\partial x}\right)+\frac{\partial }{\partial z}\left(h\frac{\partial \varphi }{\partial z}\right)=U\frac{\partial h}{\partial x}-\frac{\partial h}{\partial t} $$
where h = yp − yw is the local height of the channel between the wing and water, yp and yw are the vertical coordinates of the wing lower side and the water surface, respectively, and U is the constant forward speed of the vehicle. In the reference frame applied here, which translates with the vehicle along the x axis, the velocity of the incident airflow is −U. Rozhdestvensky (2000) showed that the appropriate boundary conditions in the limit of small ground clearances are φ = 0 at the wing leading edge and the zero gage pressure on the other edges, which imposes the following requirement for the velocity potential at those boundaries,
$$ 2U\frac{\partial \varphi }{\partial x}-2\frac{\partial \varphi }{\partial t}-{\left(\frac{\partial \varphi }{\partial x}\right)}^2-{\left(\frac{\partial \varphi }{\partial z}\right)}^2=0 $$
Schematic of ram wing moving over waves. Dashed lines in (b) represent wave crests
After determining a solution for φ, the pressure distribution on the wing lower surface can be calculated from the unsteady Bernoulli equation,
$$ p\left(x,z,t\right)=\rho \left[U\frac{\partial \varphi }{\partial x}-\frac{\partial \varphi }{\partial t}-\frac{1}{2}{\left(\frac{\partial \varphi }{\partial x}\right)}^2-\frac{1}{2}{\left(\frac{\partial \varphi }{\partial z}\right)}^2\right] $$
where ρ is the air density. After that, the instantaneous lift force and coordinates of the center of pressure are found by integrations as follows,
$$ L(t)={\int}_0^c\ {\int}_{-s/2}^{s/2}\ p\left(x,z,t\right)\kern0.1em \mathrm{d}z\mathrm{d}x $$
$$ {x}_p(t)=\frac{1}{L}{\int}_0^c\ {\int}_{-s/2}^{s/2}\ x\ p\left(x,z,t\right)\kern0.1em \mathrm{d}z\mathrm{d}x $$
$$ {z}_p(t)=\frac{1}{L}{\int}_0^c\ {\int}_{-s/2}^{s/2}\ z\ p\kern0.1em \left(x,z,t\right)\kern0.1em \mathrm{d}z\mathrm{d}x $$
where c and s are the wing chord and span, respectively (Fig. 1b).
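A minimal sketch of how Eqs. (4)–(6) can be evaluated numerically, assuming the pressure is available on a rectangular grid over the planform; the pressure field below is an arbitrary placeholder, not the output of the model, and the chord and span values are assumptions.

```python
import numpy as np

c, s = 1.0, 1.0                                   # assumed chord and span [m]
x = np.linspace(0.0, c, 21)                       # x = 0 trailing edge, x = c leading edge
z = np.linspace(-s/2, s/2, 21)
X, Z = np.meshgrid(x, z, indexing="ij")
p = 50.0*(1.0 + X/c)*(1.0 - (2.0*Z/s)**2)         # placeholder pressure field [Pa]

L  = np.trapz(np.trapz(p,   z, axis=1), x)        # Eq. (4): lift force
xp = np.trapz(np.trapz(X*p, z, axis=1), x)/L      # Eq. (5): longitudinal center of pressure
zp = np.trapz(np.trapz(Z*p, z, axis=1), x)/L      # Eq. (6): lateral center of pressure
print(f"L = {L:.1f} N, xp = {xp:.3f} m, zp = {zp:.3f} m")
```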
In this study, only heave, pitch, and roll motions are considered. Since the vehicle is assumed to translate at a constant forward speed and the underwing lift force dominates in the EGE theory, the other forces (drag, thrust, lift on upper wing side, etc.) are much smaller and only weakly affected by ground effect. Hence, these forces and their moments are neglected in the present analysis of heave-pitch-roll motions. However, if one intends to do detailed modeling of a practical vehicle, these forces can be directly added to the present model.
Under the assumptions of low-amplitude motions and zero non-diagonal products of inertia, the governing dynamics equations can be written in simplified forms as follows,
$$ M{\ddot{y}}_{\mathrm{cg}}=L- Mg $$
$$ {I}_{zz}\ddot{\alpha}=L\left({x}_p-{x}_{\mathrm{cg}}\right) $$
$$ {I}_{xx}\ddot{\psi}=L\left({z}_p-{z}_{\mathrm{cg}}\right) $$
where xcg, ycg, and zcg are the coordinates of the vehicle's center of gravity, α and ψ are the trim and roll angles, respectively, M is the vehicle's mass, Ixx and Izz are the moments of inertia with respect to x and z axes, respectively, and g is the gravitational constant.
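The structure of Eqs. (7)–(9) lends itself to a very small time integrator once L, xp and zp are known at each step. The sketch below uses a crude placeholder lift model with restoring behaviour (all numerical values are invented for illustration); in the actual method these quantities come from the pressure solution described next, recomputed at every time step.

```python
import numpy as np

M, Ixx, Izz, g = 100.0, 20.0, 30.0, 9.81       # assumed mass properties
xcg, zcg = 0.45, 0.0
y_eq, a_eq = 0.05, 0.07                         # assumed equilibrium heave and trim

def lift_model(ycg, alpha, psi):
    """Placeholder aerodynamics: restoring lift and centre-of-pressure shifts."""
    L = M*g*(1.0 + 5.0*(y_eq - ycg) + 2.0*(alpha - a_eq))
    return L, xcg - 0.2*(alpha - a_eq), zcg - 0.05*psi   # L, xp, zp

ycg, alpha, psi = 0.06, 0.08, 0.02              # initial deviations from equilibrium
dycg = dalpha = dpsi = 0.0
dt = 1.0e-3
for n in range(5000):                           # semi-implicit (symplectic) Euler
    L, xp, zp = lift_model(ycg, alpha, psi)
    dycg   += dt*(L - M*g)/M                    # Eq. (7)
    dalpha += dt*L*(xp - xcg)/Izz               # Eq. (8)
    dpsi   += dt*L*(zp - zcg)/Ixx               # Eq. (9)
    ycg += dt*dycg; alpha += dt*dalpha; psi += dt*dpsi

print(f"t = 5 s: ycg = {ycg:.4f}, alpha = {alpha:.4f}, psi = {psi:.4f}")
```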
The numerical implementation of the model described above is accomplished using a finite-difference method. The second-order spatial discretization and the first-order time stepping are applied for finding the velocity potential, pressure distribution, and simulating vehicle's dynamics. The wing planform is divided into cells with dimensions ∆x and ∆z along x and z axes, respectively. At a node (xi, zj) away from the wing edges, the discretized form of Eq. (1) for the unknown perturbation velocity potential φ can be written as follows,
$$ \frac{h_{i+1,j}-{h}_{i-1,j}}{2\Delta x}\frac{\varphi_{i+1,j}-{\varphi}_{i-1,j}}{2\Delta x}+{h}_{i,j}\frac{\varphi_{i+1,j}-2{\varphi}_{i,j}+{\varphi}_{i-1,j}}{{\Delta x}^2}+\frac{h_{i,j+1}-{h}_{i,j-1}}{2\Delta z}\frac{\varphi_{i,j+1}-{\varphi}_{i,j-1}}{2\Delta z}+{h}_{i,j}\frac{\varphi_{i,j+1}-2{\varphi}_{i,j}+{\varphi}_{i,j-1}}{{\Delta z}^2}=U\frac{h_{i+1,j}-{h}_{i-1,j}}{2\Delta x}-{\left(\frac{\partial h}{\partial t}\right)}_{i,j} $$
where the local channel height and its time derivative (vertical velocity) are treated as known parameters from the previous time step. The boundary conditions (Eq. (2)) at the trailing and side edges are nonlinear with respect to φ. They are discretized with one-sided spatial derivatives and solved iteratively together with Eq. (1). For example, at the trailing edge (x1 = 0) the following numerical scheme is used,
$$ 2U\frac{-3{\varphi}_{1,j}+4{\varphi}_{2,j}-{\varphi}_{3,j}}{2\Delta x}-2\frac{\varphi_{1,j}-{\hat{\varphi}}_{1,j}}{\Delta t}-{\left(\frac{\partial \varphi }{\partial x}\right)}_{1,j}\frac{-3{\varphi}_{1,j}+4{\varphi}_{2,j}-{\varphi}_{3,j}}{2\Delta x}-{\left(\frac{\partial \varphi }{\partial z}\right)}_{1,j}\frac{\varphi_{1,j+1}-{\varphi}_{1,j-1}}{2\Delta z}=0 $$
where \( {\hat{\varphi}}_{i,j} \) is the velocity potential value from the previous time step. The coefficients (∂φ/∂x)1, j and (∂φ/∂z)1, j are initially taken as known parameters from the previous step. Then, a linear system of equations (Eqs. (10), (11)) is solved for φi, j. The derivatives (∂φ/∂x)1, j and (∂φ/∂z)1, j are evaluated with this new solution and substituted back into Eq. (11). This process is repeated until a converged solution is obtained for the velocity potential at each time step.
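To make the structure of this discretization more tangible, the sketch below solves a steady, simplified relative of Eq. (1) by Gauss–Seidel relaxation: the time-derivative terms are dropped, a flat wing with side plates is assumed so that the side edges carry a no-leakage condition, and the zero-gage-pressure condition at the trailing edge is linearized to ∂φ/∂x = 0. All geometric and flow values are assumptions, and this is not the paper's solver; it only illustrates the conservative finite-difference stencil of Eq. (10) and the pressure/lift evaluation of Eqs. (3)–(4).

```python
import numpy as np

# Steady, simplified form of Eq. (1): d/dx(h dphi/dx) + d/dz(h dphi/dz) = U dh/dx
c, s, U, rho = 1.0, 1.0, 10.0, 1.2             # assumed chord, span, speed, air density
Nx, Nz = 31, 31
x = np.linspace(0.0, c, Nx)                     # x = 0 trailing edge, x = c leading edge
z = np.linspace(-s/2, s/2, Nz)
dx, dz = x[1] - x[0], z[1] - z[0]
X, _ = np.meshgrid(x, z, indexing="ij")

alpha, h_te = np.deg2rad(4.0), 0.04*c           # assumed trim angle and trailing-edge gap
h = h_te + X*np.tan(alpha)                      # channel height: flat wing over flat ground

phi = np.zeros((Nx, Nz))
for sweep in range(4000):                       # Gauss-Seidel relaxation (unoptimized)
    phi_old = phi.copy()
    for i in range(1, Nx - 1):
        for j in range(1, Nz - 1):
            hxp = 0.5*(h[i+1, j] + h[i, j]); hxm = 0.5*(h[i, j] + h[i-1, j])
            hzp = 0.5*(h[i, j+1] + h[i, j]); hzm = 0.5*(h[i, j] + h[i, j-1])
            rhs = U*(h[i+1, j] - h[i-1, j])/(2.0*dx)
            num = (hxp*phi[i+1, j] + hxm*phi[i-1, j])/dx**2 \
                + (hzp*phi[i, j+1] + hzm*phi[i, j-1])/dz**2 - rhs
            phi[i, j] = num/((hxp + hxm)/dx**2 + (hzp + hzm)/dz**2)
    phi[-1, :] = 0.0                            # leading edge: phi = 0
    phi[0, :] = phi[1, :]                       # trailing edge: linearized p = 0 -> dphi/dx = 0
    phi[:, 0] = phi[:, 1]                       # side plates: no spanwise leakage
    phi[:, -1] = phi[:, -2]
    if np.max(np.abs(phi - phi_old)) < 1.0e-9:
        break

dphidx, dphidz = np.gradient(phi, dx, dz, edge_order=2)
p = rho*(U*dphidx - 0.5*(dphidx**2 + dphidz**2))   # steady form of Eq. (3)
L = np.trapz(np.trapz(p, z, axis=1), x)            # Eq. (4)
print("lift coefficient CL =", 2.0*L/(rho*U**2*c*s))
```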
The input parameters in the present model include the wing geometry, initial conditions, and the water surface elevations. Mesh-independence studies have been conducted to establish the adequate spatial step. The sensitivities of the lift coefficient, CL = 2L/(ρU²S), and the longitudinal center of pressure, xp, to the cell size, ∆x, for a selected ram wing (described in section 3) in equilibrium flight are shown in Fig. 2. As one can see, having 20 spatial intervals along the chord (and the same number along the span for a ram wing with aspect ratio of one) is sufficient to obtain mesh-independent results; this mesh was employed for all parametric calculations presented below. In time-dependent simulations with varying time step ∆t, it was found that the condition ∆t = ∆x/(2U) is adequate; selecting shorter time steps does not produce a noticeable effect.
Dependence of lift coefficient and center of pressure on the cell size
Most simulations in this study are conducted in the presence of low-amplitude water waves. The water surface elevations are described using the standard regular wave theory (Lewandowski 2004) in the reference frame translating along axis x with the vehicle speed U,
$$ {y}_w=A\ \sin \left[\omega t-k\left(x+ Ut\right)\ \cos \chi - kz\ \sin \chi \right] $$
where A is the wave amplitude, ω = 2π/T is the angular frequency, T is the period, k = 2π/λ = ω²/g is the wave number, λ is the wavelength, and χ is the direction of wave propagation in the Earth-fixed frame of reference (Fig. 1b), so that χ = 0° and χ = 180° correspond to the following and head waves, respectively. The wave amplitude is selected as A = λ/60, according to one of the common relationships for low-amplitude regular waves. Since the vehicle translation occurs at high Froude numbers, defined as \( Fr=U/\sqrt{gc} \), an assumption commonly used for ground-effect vehicles is invoked that the wave systems are not affected by the flying craft. It was shown by Barber (2007) that effects of the water surface deformations on the vehicle's steady and unsteady aerodynamic coefficients are very small for typical non-augmented ram wings such as those considered in this paper. However, these effects may need to be accounted for in the case of power-augmented ram wings, especially at low Froude numbers.
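Eq. (12) translates directly into a small helper function; the chord, speed and wave heading below are assumptions, while λ = 3c and A = λ/60 follow the values quoted in the text.

```python
import numpy as np

g = 9.81
c, U = 1.0, 10.0                       # assumed chord [m] and forward speed [m/s]
lam = 3.0*c                            # wavelength of three chords, as in the text
A = lam/60.0                           # wave amplitude A = lambda/60
k = 2.0*np.pi/lam
omega = np.sqrt(g*k)                   # deep-water dispersion, k = omega**2/g
chi = np.deg2rad(135.0)                # bow waves (assumed heading)

def y_w(x, z, t):
    """Water surface elevation of Eq. (12) in the vehicle-fixed frame."""
    return A*np.sin(omega*t - k*(x + U*t)*np.cos(chi) - k*z*np.sin(chi))

print(y_w(0.5, 0.0, 1.0))
```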
The current mathematical model was previously validated for cases of steady flow around several configurations of ram wings in the extreme ground effect (Rozhdestvensky 2000; Matveev 2013), as well as for power augmented ram platforms hovering over solid ground and water (Matveev 2008; Matveev and Soderlund 2008). One validation example is given in the next section. No accurate measurements for unsteady motions of ram wings in the extreme ground effect are available in the technical literature, so conducting such experiments would represent a promising topic for future research.
3 Validation and Simulation Results
One validation example is shown here for the flat ram wing with side plates tested by Gallington and Miller (1970). The configuration with the main geometric parameters is shown in Fig. 3(a). The test data for the pressure coefficient on the wing lower surface and numerical predictions obtained with the present model are given in Fig. 3(b) for α = 5.7°, hp/c = 0.033, ht/c = 0.017, and the wing aspect ratio of 2/3. An agreement can be considered satisfactory keeping in mind unknown experimental uncertainties.
Tested ram wing setup and comparison of pressure coefficient along the wing centerline: squares, test data; line, numerical results
One representative configuration has been chosen in this study to illustrate the dynamics of ram wings. The main specifications are listed in Table 1. The wing has an S-shaped lower surface (Fig. 1(a)) to ensure its stability without employing a tail wing (Matveev and Kornev 2013). The distance between the wing lower surface and the chord line is described by the equation yL(x) = − d sin (2πx/c). In the EGE theory, only lift on the lower side is accounted for, so the upper side is not specified. In equilibrium steady motion over a flat surface, the wing trailing edge gap is selected as yp(0)/c = 0.04, the trim angle (between the chord line and horizontal plane) is α = 4°, and the lift coefficient is CL = 0.209. The center of gravity lies in the wing center plane, so the equilibrium roll angle is ψ = 0°.
Main parameters of ram wing
Non-dimensional mass: \( \mu =\frac{2M}{\rho w{c}^2} \)
Non-dimensional moments of inertia: \( i=\frac{I_{xx}}{M{c}^2}=\frac{I_{zz}}{M{s}^2} \)
Longitudinal center of gravity: \( {X}_{\mathrm{cg}}=\frac{x_{\mathrm{cg}}}{c} \)
Aspect ratio: \( \mathrm{AR}=\frac{s}{c} \)
Froude number: \( Fr=\frac{U}{\sqrt{gc}} \)
Profile curvature: \( \frac{d}{c} \) (Fig. 1a)
To demonstrate stability of the selected setup, time histories of the vehicle's vertical position of the center of gravity and of the pitch and roll angles upon initial deviations from the equilibrium (∆ycg/c = 0.01, ∆α = 1°, ∆ψ = 1°) are shown in Fig. 4. The kinematic parameters approach the equilibrium values after a transient process. The altitude initially increases further, since the pitch angle exceeds its equilibrium value, and then decreases (Fig. 4a), while the pitch angle monotonically decreases to equilibrium (Fig. 4b). The roll motions show heavily damped oscillations.
Heave, pitch, and roll motions (solid curves) after initial deviations from equilibrium values (dashed horizontal lines)
A series of simulations has been carried out to model ram wing motions over regular water waves. Five different wave headings were explored ranging from the following waves (χ = 0°) to head waves (χ = 180°) with increments of 45°. The wavelength was set to three wing chords, λ/c = 3. At the start of simulations, the wing was assigned the equilibrium state. The wave amplitude was slowly increased from zero to the final value, A/λ = 1/60, to avoid any abrupt transient events. Eventually, the wing motions reached steady-state limit-cycle oscillations. Time variations of kinematic variables over three cycles in such regimes are illustrated in Fig. 5.
Variation of the vehicle kinematic variables in motion over waves. (a–c) Solid curves, in head waves; dotted curves, in following waves. (d–f) Solid curves, in bow waves; dotted curves, in quartering waves. (g–i) Solid curves, in beam waves. Horizontal dashed lines in all sub-figures represent values in equilibrium steady flight over a flat surface
In case of the head and following wave headings (Fig. 5a–c), the frequency of vehicle's encounter with waves is high (the highest is for head waves), since the effective wavelength with respect to the moving wing is short (Fig. 1a). The heave and pitch amplitudes are greater for the following waves than for the head waves, as in the former case the wing has longer time to react to variations of the underlying surface. The heave motions only slightly deviate from sinusoidal functions, while non-linear distortions are more pronounced in the pitch response. The roll motions are absent, since there is no disturbing moment with respect to the x-axis at parallel courses of waves and the vehicle. The time-averaged position of the vehicle is greater than that in flight over the flat surface (dashed lines in Fig. 5), implying that the time-averaged lift is higher. This nonlinear effect has been known to happen for wing-in-ground craft moving over wavy surfaces (Rozhdestvensky 2006).
Simulation results for situations with bow (χ = 135°) and quartering waves (χ = 45°) are shown in Fig. 5(d–f). The frequencies of encounter are smaller than in the previous cases due to the oblique wave headings with respect to the wing direction. The heave amplitudes are greater for the quartering seas, as are the time-averaged heave and pitch. With the appearance of the heeling moment, the roll motions are also present (Fig. 5f).
Simulated motions for the case with beam waves (χ = 90°) are depicted in Fig. 5(g–i). The roll amplitudes are the highest at this wave heading (Fig. 5i). The heave and pitch motions are also present (Fig. 5(g–h)), since variations in the under-platform channel in the transverse z-direction also lead to variation of the total lift force and longitudinal center of pressure. The heave and pitch amplitudes are even greater than for other wave headings. However, the period of these oscillations is several times higher, as the vehicle speed is perpendicular to the wave heading, and the period is defined only by the wave speed. In the limit of long waves (low-frequency forcing), the vehicle would essentially follow the wave contour.
Besides the wave headings, the lengths and amplitudes of waves also affect the vehicle motions. Another set of simulations was conducted in this study for a range of wavelengths, while the amplitude was kept as the same fraction of wavelength, A = λ/60. With increasing absolute wave height, it is possible to encounter situations when the vehicle will collide with the water surface, which in practice often leads to catastrophic consequences. Hence, it is important to know the maximum wavelength that will allow a wing to fly without contacting the water. Results of simulations are presented in Fig. 6 in the form of the vehicle's heave, pitch, and roll amplitudes normalized by the wave amplitude (for heave) and by the maximum wave slope kA (for pitch and roll). Since oscillations are not sinusoidal in large-amplitude waves, the effective amplitude of heave motion is defined as follows,
$$ {y}_1=\left[\max \left({y}_{\mathrm{cg}}(t)\right)-\min \left({y}_{\mathrm{cg}}(t)\right)\right]/2 $$
where ycg(t) is the vertical position of the center of gravity over at least one period of oscillations in the limit-cycle regime. The amplitudes of pitch and roll motions are defined similarly to Eq. (13).
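Eq. (13) (and its pitch and roll analogues) amounts to half the peak-to-peak excursion over the limit cycle; a short sketch with a synthetic, non-sinusoidal time history standing in for the simulation output:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2001)                              # assumed limit-cycle window [s]
ycg = 0.05 + 0.004*np.sin(2.0*np.pi*t/2.5) + 0.001*np.sin(4.0*np.pi*t/2.5)
y1 = 0.5*(ycg.max() - ycg.min())                              # Eq. (13)
print("effective heave amplitude y1 =", y1)
```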
Normalized amplitudes of heave, pitch, and roll in motion over waves with variable wavelength and wave headings: Δ, head; ∇, bow; ◊, beam; □, quartering; o, following. Bold symbols correspond to longest waves with no contact between vehicle and water
For the chosen ram wing configuration, it was found that almost all wave headings (with the exception of beam waves) had maximum limiting wavelengths that allowed the wing not to touch water (Fig. 6). The smallest range of permissible wavelengths appears to be in head waves, as the wing does not have enough time to respond to the variation of the clearance and fly over the wave crests. The wave headings in order of increasing maximum permissible wavelength correspond to the bow, following, and quartering waves, respectively (Fig. 6). In case of the beam waves, the frequency of encounter is sufficiently small, so even in long and high waves, the vehicle has ample time to follow the wave contour without touching the surface, and its normalized heave and roll amplitudes approach one in the limit of very long waves.
Besides flying over surface waves, ram wings are likely to encounter other obstacles on the underlying surface, such as low-height islands and floating ice in water or ice ridges on ice sheets. The unsteady response of the wing to such non-uniformities can be also modeled with the current method. As an example, a triangular bump is considered here that may have variable orientation with respect to the vehicle heading (Fig. 7). The bump length and height are selected as L/c = 2 and H/c = 0.04, respectively, with two orientations, β = 0° and 60°.
Top view of ram wing moving toward a bump (dash-dotted line indicates the bump peak) and side view of the triangular bump
The variations of the vehicle's kinematic parameters are shown in Fig. 8. The non-dimensional time intervals tU/c with at least some portion of the wing being above the bump are 2–5 at orientation 0° and 0.13–6.87 at 60°. In both cases, the center of gravity moves up and then relaxes back to the equilibrium value with a small overshoot, and the heave motion is more pronounced for the longer-influencing oblique bump. The pitch response is similar but somewhat delayed in the beginning for β = 60°, and it is more oscillatory for β = 0° due to the more abrupt disturbance. Also, oscillatory roll motions are present only for the oblique incidence, since one side of the vehicle feels the bump presence earlier. Even though the bump height equals the equilibrium flying height of the platform trailing edge (over the flat horizontal surface), no contact with the ground occurs due to a sufficient increase of the lift force resulting in an effective rise of the flying height.
Heave, pitch, and roll motions of ram wing passing over the bump with orientations: β = 0°, solid curves; β = 60°, dotted curves. Dashed lines indicate equilibrium values
A dynamic model for roll-heave-pitch motions of a ram wing has been developed using the extreme-ground-effect theory. It allows us to simulate motions of a vehicle flying over a non-uniform surface. The model is computationally economical, since viscous effects are neglected and the flow under the wing is considered to be two-dimensional.
The main conclusions of this paper include the following. The model is found to reasonably agree with test data for a ram wing in a steady condition. It is numerically confirmed that an S-shape of the wing lower surface can provide stability of tailless WIG craft. Nonlinear effects are more pronounced for pitch angles than other kinematic variables. The time-averaged vertical positions of the vehicle increase in the presence of waves in comparison with a flight above a flat surface. All motions (heave, pitch, and roll) have the greatest amplitudes in beam waves, although the oscillation periods are also longest in such conditions. The head waves are most dangerous from a collision standpoint, since a ram wing may not have enough time to respond to the water surface variation. Oblique course headings toward transverse obstacles on the ground can be recommended to reduce effective slopes of these obstacles along the vehicle direction.
The present model can be also used to evaluate motion amplitudes and occurrence of regimes when the wing touches water/ground. It can be extended by including other forces (e.g., thrust, drag), adding other degrees of freedom, introducing control surfaces (flaps), and simulating random waves and wind gusts. With incorporation of elements of the planing hull theory, one can possibly simulate takeoff and landing phases, as well as brief contacts between water and the wing flying in rough sea conditions.
Barber TJ (2007) A study of water surface deformation due to tip vortices of a wing-in-ground effect. J Ship Res 51(2):182–186. https://www.ingentaconnect.com/content/sname/jsr/2007/00000051/00000002/art00009
Benedict K, Kornev NV, Meyer M, Ebert J (2002) Complex mathematical model of the WIG motion including the take-off mode. Ocean Eng 29:315–357. https://doi.org/10.1016/S0029-8018(01)00002-6
Gallington RW, Miller MK (1970) The ram-wing: a comparison of simple one-dimensional theory with wind tunnel and free flight results. Proceedings of AIAA Guidance, Control and Fluid Mechanics Conference. AIAA, Santa Barbara, CA, USA. Paper No. 70–971
Hirata N, Kodama Y (1995) Flow computation for three-dimensional wing in ground effect using multi-block technique. J Soc Naval Archit Jpn 177:49–57. https://doi.org/10.2534/jjasnaoe1968.1995.49
Lewandowski EM (2004) The dynamics of marine craft. World Scientific Publishing, Singapore, 139–198
Liang H, Wang X, Zou L, Zong Z (2014) Numerical study of two-dimensional heaving airfoils in ground effect. J Fluids Struct 48:188–202. https://doi.org/10.1016/j.jfluidstructs.2014.02.009
Matveev KI (2008) Static thrust recovery of PAR craft on solid surfaces. J Fluid Struct 24:920–926. https://doi.org/10.1016/j.jfluidstructs.2007.12.007
Matveev KI (2013) Unsteady motions of a ram wing flying above waves and low-height obstacles. Proceedings of the 31st AIAA Applied Aerodynamics Conference. AIAA, San Diego, CA, USA. Paper No. 2013–2404
Matveev KI, Chaney C (2013) Heaving motions of a ram wing translating over water. J Fluids Struct 38:164–173. https://doi.org/10.1016/j.jfluidstructs.2012.10.006
Matveev KI, Kornev N (2013) Dynamics and stability of boats with aerodynamic support. J Ship Prod Des 29(1):17–24. https://doi.org/10.5957/JSPD.29.1.120033
Matveev KI, Soderlund RK (2008) Shallow-water zero-speed tests and modeling of PAR craft. J Eng Marit Environ 222(3):145–152. https://doi.org/10.1243/14750902JEME102
Rozhdestvensky KV (2000) Aerodynamics of a lifting system in extreme ground effect. Springer-Verlag, Heidelberg, 47–84
Rozhdestvensky KV (2006) Wing-in-ground effect vehicles. Prog Aerosp Sci 42(3):211–283. https://doi.org/10.1016/j.paerosci.2006.10.001
Staunfenbiel RW (1987) On the design of stable ram wing vehicles. Proceedings of the Symposium on Ram Wing and Ground Effect Craft. London, 110–136
Tuck EO (1984) A simple one-dimensional theory for air-supported vehicles over water. J Ship Res 28(4):290–292
Windall SE, Barrows TM (1970) An analytic solution for two and three-dimensional wings in ground effect. J Fluid Mech 41(4):769–792. https://doi.org/10.1017/S0022112070000915
Zong Z, Liang H, Zhou L (2012) Lifting line theory for wing-in-ground effect in proximity to a free surface. J Eng Math 74(1):143–158. https://doi.org/10.1007/s10665-011-9497-x
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. School of Mechanical and Materials Engineering, Washington State University, Pullman, USA
Matveev, K.I. J. Marine. Sci. Appl. (2019) 18: 123. https://doi.org/10.1007/s11804-019-00091-9
Received 29 April 2017
DOI https://doi.org/10.1007/s11804-019-00091-9
Publisher Name Harbin Engineering University
Annals of General Psychiatry
Prevalence of postpartum depression and interventions utilized for its management
Reindolf Anokye (ORCID: orcid.org/0000-0002-7669-7057)1,
Enoch Acheampong1,
Amy Budu-Ainooson2,
Edmund Isaac Obeng3 &
Adjei Gyimah Akwasi1
Annals of General Psychiatry volume 17, Article number: 18 (2018) Cite this article
Postpartum depression is a mood disorder that affects approximately 10–15% of adult mothers yearly. This study sought to determine the prevalence of postpartum depression and interventions utilized for its management in a Health facility in Ghana.
A descriptive cross-sectional study design using a quantitative approach was used for the study. The study population included mothers and healthcare workers. Simple random sampling technique was used to select 257 mothers, while a convenience sampling technique was used to select 56 health workers for the study. A Patient Health Questionnaire was used to screen for depression and a structured questionnaire comprising closed-ended questions was used to collect primary data on the interventions for the management of postpartum depression. Data were analyzed using statistical software SPSS version 16.0.
Postpartum depression was prevalent among 7% of all mothers selected. The severity ranged from minimal depression to severe depression. Psychosocial support proved to be the most effective intervention (p = 0.001) that has been used by the healthcare workers to reduce depressive symptoms.
Postpartum depression is prevalent among mothers although at a lower rate and psychosocial support has been the most effective intervention in its management. Postpartum depression may affect socialization behaviors in children and the mother, and it may lead to thoughts of failure leading to deeper depression. Frequent screening exercises for postpartum depression should be organized by authorities of the hospitals in conjunction with the Ministry of Health.
Postpartum depression (PPD) is a mood disorder that affects approximately 10–15% of adult mothers yearly with depressive symptoms lasting more than 6 months among 25–50% of those affected [1]. Postpartum depression often occurs within a few months to a year after birth. However, some studies have reported the occurrence of postpartum depression 4 years after birth [2]. Causes of PPD may be physiological, situational, or multifactorial [3].
Major predisposing factors for developing PPD are social in nature; stressful life events, childcare stress, and prenatal anxiety appear to have predictive value for PPD. In addition, a history of a previous episode of PPD [4], marital conflict, and single parenthood are also predictive [5]. It was believed for a long time that only women from western societies suffered from PPD and that postnatal mood disorders were defined by culture [6]. However, conditions with similar symptoms have also been identified in other countries [7]. Some studies have found the same prevalence of PPD in different societies [8]; however, European and Australian women appear to have lower levels of PPD than women in the United States of America (USA). Women from Asia and South Africa have been identified as being most at risk [9]. The symptoms are similar to symptoms of depression at other times of life, but in addition to low mood, sleep disturbance, change in appetite, diurnal variation in mood, poor concentration, and irritability, women with PPD also experience guilt about their inability to look after their new baby [10]. For most women, symptoms are transient and relatively mild (known as postpartum blues); however, 10–15% of women experience a more disabling and persistent form of mood disturbance [11].
More recent evidence suggests that postpartum psychiatric illness is virtually indistinguishable from psychiatric disorders that occur at other times during a woman's life [12]. Interventions for PPD include pharmacologic interventions, supportive interpersonal and cognitive therapy, psychosocial support through support groups, and complementary therapies. Electroconvulsant therapy has proven effective for mothers with severe PPD [5]. In severe cases of postpartum depression, especially in mothers who are at risk of suicide, inpatient hospitalization may be required [13].
Psychosocial interventions such as support groups have been reported as effective [1, 13]. Beck [1] states that support group attendance can give mothers a sense of hope through the realization that they are not alone. Support groups for couples can teach coping strategies and offer encouragement. They also give couples an opportunity to express needs and fears in a nonjudgmental environment [3].
Interpersonal psychotherapy conceptualizes depression as having three components: symptom formation, social functioning, and personal contributions. Emphasis is placed on interpersonal relationships relating to role changes that accompany parenthood rather than on the depression itself. Interpersonal psychotherapy can also be initiated during pregnancy for women who are considered at high risk [13]. Recent research has found that women receiving IPT were significantly more likely to have a reduction in symptoms and recover from PPD than women who did not receive IPT treatment [25].
A study from the United Kingdom found that three brief home-based visits using counseling techniques were effective at accelerating the recovery rate for women suffering from PPD [23].
Prevalence of PPD has been difficult to determine because of the difference in criteria for the time of onset used by the DSM-IV and that used by most epidemiological studies. Prevalence has also been difficult to establish because of underreporting by mothers themselves [2]. It has been estimated that only 20% of women who experience symptoms of PPD report those symptoms to their healthcare providers. Symptoms of PPD are often minimized by both mothers and care providers as normal, natural consequences of childbirth [13]. Evidence has been presented that mothers may also be reluctant to disclose their feelings of depression for fear of stigmatization and fear that their depressive symptoms might be taken as evidence of being a "bad mother". Cooper et al. [23] reported that "almost half of those independently identified as depressed were not detected as such by their health visitor".
Despite the growing recognition as a global childbirth-related problem, the importance of detecting and treating it has until recently been largely overlooked in practice and it seems that knowledge about this problem is not very high [14]. PPD is a serious social issue due to its consequences, including an increased risk of suicide and infanticide. PPD is often under-diagnosed and untreated; therefore, efforts are needed to improve perinatal mental healthcare [15].
This research was carried out to determine the Prevalence of postpartum depression and interventions utilized by healthcare workers for its management in a Health facility in Ghana.
The study was conducted at Komfo Anokye Teaching Hospital in Ghana. The selected hospital is a primary government-owned health facility with several units, such as the Maternity unit, Reproductive and family planning services, Medical unit, Surgical unit, Adolescent unit, Child Welfare clinic, Outpatient Department, Radiology unit, Accounts, Administration, Medical records, Security, and Health insurance unit, among others, and it offers psychiatric services to patients. In this study, a cross-sectional study design with a quantitative approach was used. In cross-sectional studies, investigators do not follow individuals over time. Instead, they look at the prevalence of disease and/or exposure at one moment in time [16]. These studies take a "snapshot" of the proportion of individuals in the population that are, for example, diseased and nondiseased at one point in time. Descriptive cross-sectional studies simply characterize the prevalence of a health outcome in a specified population [16]. This study design was deemed appropriate for this study. The study population included mothers who were within 12 months after delivery, because postpartum depression usually affects women within 12 months after giving birth, and health workers, who were recruited to provide information on the psychosocial and psychological interventions that have been used in the management of postpartum depression at the hospital. The study was conducted within a period of 2 months.
A simple random sampling technique was used to select the mothers. This method gave each mother a known, non-zero chance of being selected for the study, and data were collected within a period of 1 month using 5 research assistants. In selecting the respondents, random numbers from a prepared random number table were assigned to the names of mothers who were present on each day data were collected. Numbers were then picked at random, and whichever name was assigned to a picked number was selected to take part in the study. The Yamane formula was used to determine the appropriate sample size for the study. A 95% confidence level [the value of (1 − α) in the standard normal distribution z-table, which is 1.96 for 95%] and a precision level/sampling error (margin of error) of 0.05 or 5%, which is the generally acceptable margin of error for social research [17], were used to calculate the sample using the equation;
$$n = \frac{N}{{1 + N\left( e \right)^{2} }}$$
n represents the sample size to be determined; N represents the estimated total population size, and e represents the level of precision/sampling error or margin of error. The population of the mothers who had given birth and were within 12 months after delivery at Komfo Anokye Teaching Hospital was estimated to be 451 for the month data was collected.
Therefore:
$$N = 451$$
$$1 + N\left( e \right)^{2} = 1 + 451\left( 0.05 \right)^{2} = 2.1275$$
$$n = \frac{451}{1 + 451\left( 0.05 \right)^{2}} \approx 212$$
Assuming that 20% would not respond to the questionnaire due to the sensitive nature of the study, 45 mothers (rounded from 42.4) were added to the 212, and therefore the total sample size amounted to 257 mothers. A convenience sampling technique was also used to select 56 health workers for the study. They were recruited based on their availability and willingness to be part of the study; by the time the investigators completed data collection, 56 health workers had availed themselves to be part of the study. These health workers provided information on the psychosocial and psychological interventions that have been used in the management of postpartum depression at the hospital.
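The same sample-size calculation in a few lines (the 20% non-response allowance of 45 mothers is taken as reported above, not recomputed):

```python
N, e = 451, 0.05
n = N / (1 + N * e**2)        # Yamane formula, approximately 212
total = round(n) + 45         # non-response allowance as reported in the text
print(round(n), total)        # 212 257
```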
A Patient Health Questionnaire (PHQ-9) was used to screen for depression at the selected hospital. The PHQ-9 is a 9-question instrument given to patients in a primary care setting to screen for the presence and severity of depression. The PHQ-9 has been validated against in-depth mental health interviews [18, 19] and is reported to be specific (> 86% at scores of > 10) for identification of people with major depressive disorders (MDD) [18, 19].
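The severity bands reported later in the results correspond to the commonly used PHQ-9 cutoffs; a small scoring helper for illustration (not part of the study's own analysis):

```python
def phq9_severity(total_score: int) -> str:
    """Map a PHQ-9 total (sum of nine items, each scored 0-3) to a severity band."""
    if not 0 <= total_score <= 27:
        raise ValueError("PHQ-9 totals range from 0 to 27")
    if total_score <= 4:
        return "minimal"
    if total_score <= 9:
        return "mild"
    if total_score <= 14:
        return "moderate"
    if total_score <= 19:
        return "moderately severe"
    return "severe"

print(phq9_severity(sum([1, 2, 1, 0, 2, 1, 0, 1, 0])))   # example item scores -> "mild"
```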
A structured questionnaire with closed-ended questions was used. The questionnaire was deemed an appropriate instrument for data collection in this study to reap its advantages of cost efficiency, easy administration, and easy quantitative analysis. The questionnaire comprised of four (4) subsections which included questions on the demographic characteristics of respondents; interventions as well as the duration of intervention and influence of interventions on reduction of depressive symptomatology.
Data were analyzed using both descriptive and inferential statistical tools incorporated in statistical software SPSS version 16.0. To ensure validity and reliability of instruments, the questionnaire was pretested at the Animwaa Hospital, and conflicting issues were resolved before the final data collection (Fig. 1).
Demographic characteristics of respondents
The mean age was 27 years, more than half (54%) of the respondents were married, and the majority were Akan. Also, more than half (66%) had completed JHS/SHS, while the majority (83%) were working in the informal sector, as shown in Table 1.
Table 1 The demographic characteristics of respondents
Prevalence of postpartum depression
Figure 2 illustrates the prevalence of postpartum depression among 212 respondents. Out of this total number of respondents, the majority (93%) did not have any indications of postpartum depression (PPD), while 7% had postpartum depression (PPD).
Depression severity
The severity of depression among respondents was further examined, and the outcomes are represented in Fig. 3, which shows that 39% of respondents had minimal depression; 22% each had moderate and mild depression; 6% had moderately severe depression; and 11% had severe depression.
Interventions utilized by healthcare workers for the management of postpartum depression
Figure 4 indicates the interventions used in the management of postpartum depression among respondents. The most common interventions used in the management of postpartum depression among respondents were psychosocial support (34%), professionally based postpartum home visits (28%), interpersonal psychotherapy (20%), and cognitive therapy (18%).
Psychosocial and psychological interventions
Duration of intervention
Table 2 shows the durations of interventions utilized by healthcare workers for the management of postpartum depression. From the table, it is observed that all the interventions were applied up to 6 months.
Table 2 Durations of interventions
Influence of interventions on reduction of depressive symptomatology (positive outcome)
From Table 3, cognitive therapy (p = 0.14), interpersonal psychotherapy (p = 0.356), and professionally based postpartum home visits (p = 0.121) had no significant impact on depressive symptomatology reduction, and only psychosocial support (p = 0.001) was found to significantly impact on depressive symptomatology reduction.
Table 3 Influence of interventions on reduction of depressive symptomatology
Association between demographic characteristics and depressive symptoms
Table 4 summarizes the results of the univariate and multivariate analyses of the association between demographic characteristics and the presence of depressive symptoms. In both the univariate and the multivariate analysis, ethnicity and occupation were associated with depressive symptoms. Respondents who were Gonja were 8.46 times more likely to develop depressive symptoms than those of other ethnicities: adjusted odds ratio (AOR) = 8.46 [95% confidence interval (CI) 1.57–65.2]. Respondents who were employed were 4.7 times more likely to develop depressive symptoms: AOR = 4.72 [95% CI 1.021–14.01].
Table 4 Odds ratio with 95% confidence interval for the association between demographic characteristics and depressive symptoms
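Adjusted odds ratios of this kind are usually obtained by exponentiating the coefficients of a multivariable logistic regression. The sketch below shows one way such estimates and confidence intervals could be reproduced; the data frame and column names (depressed, ethnicity, employed) are hypothetical placeholders rather than the study's actual coding, which was analyzed in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data standing in for the study's SPSS file
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "depressed": rng.binomial(1, 0.07, 212),                  # PPD indicator (0/1)
    "ethnicity": rng.choice(["Akan", "Gonja", "Other"], 212),
    "employed": rng.binomial(1, 0.8, 212),
})

# Multivariable logistic regression; AORs are the exponentiated coefficients
model = smf.logit("depressed ~ C(ethnicity, Treatment('Akan')) + employed",
                  data=df).fit(disp=False)

ci = model.conf_int()
aor_table = pd.DataFrame({
    "AOR": np.exp(model.params),
    "95% CI lower": np.exp(ci[0]),
    "95% CI upper": np.exp(ci[1]),
})
print(aor_table.round(2))
```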
Findings from this study showed a lower prevalence (7%) of postpartum depression among respondents compared to those found in similar African countries [20,21,22]. This may be attributed to the instruments used, as the PHQ-9 instrument used for this study is different from the instruments used in the other studies. Respondents' depressive symptoms varied from minimal to mild, moderate, moderately severe and severe depression. A similar finding was reported in a South African study, where prevalence rates of various depressive symptoms were found [23]. The most common interventions used in the management of postpartum depression among respondents were psychosocial support, professionally based postpartum home visits, interpersonal psychotherapy, and cognitive therapy. However, among these interventions, the one which had a significant influence on the reduction of depressive symptomatology (positive outcome) was psychosocial support, while the others had minimal influence. Psychosocial interventions are unstructured and nonmanualized and include nondirective counseling and peer support. Psychosocial interventions such as support groups have been reported as effective [1, 13]. The effectiveness of this intervention in the management of postpartum depression (PPD) was established by Holden [24]: in that study, 50 women with PPD were randomized to eight weekly nondirective counseling sessions with a health visitor or to routine primary care, and the rate of recovery from PPD with counseling (69%) was significantly greater than that of the control group (38%). In the present study, interpersonal psychotherapy and cognitive therapy did not significantly influence the reduction of depressive symptoms. This implies that interpersonal psychotherapy cannot be relied on as an intervention for PPD in the study area. However, the effectiveness of interpersonal psychotherapy in postpartum depression management was confirmed in several studies, including a large randomized trial with a control group [25]. O'Hara et al. randomized 120 women with postpartum depression to receive 12 weekly 60-min individual sessions of manualized interpersonal psychotherapy delivered by a trained therapist versus a waitlist control condition [25]. The women who received interpersonal psychotherapy had a significant decrease in their depressive symptomatology (measured by the Hamilton Depression Rating Scale and the Beck Depression Inventory) compared to the waitlist group, as well as significant improvement in social adjustment scores. In another study by Clark et al. [26], 35 women with postpartum depression were assigned to individual interpersonal psychotherapy (12 sessions) versus mother–infant group therapy versus a waitlist condition. Both interpersonal psychotherapy and mother–infant group therapy were associated with greater reductions in depressive symptoms compared to the waitlist condition. Both studies support the effectiveness of interpersonal psychotherapy as a treatment for PPD, though there are not enough data to suggest a specific benefit of interpersonal psychotherapy compared with other therapeutic modalities. It could, therefore, serve as a first-line treatment, especially for breastfeeding mothers [27].
The study was limited by a small sample size and by the use of a single screening tool for depression. The study, therefore, missed out on the many other mothers who were not present at the hospital at the time of the study. Moreover, the study did not determine the prevalence of PPD using the tools employed in other epidemiological studies. However, the Patient Health Questionnaire (PHQ-9) is a multipurpose instrument for screening, diagnosing, monitoring, and measuring the severity of depression. The PHQ-9 incorporates DSM-IV depression diagnostic criteria and other leading major depressive symptoms into a brief self-report tool. While there may be limitations inherent in the study design and methods used, these limitations by no means compromise the results reported.
The prevalence of PPD has been difficult to determine because of several factors. The interventions for PPD include pharmacologic interventions, supportive interpersonal and cognitive therapy, psychosocial support through support groups, and complementary therapies. This study found that postpartum depression was prevalent, albeit at a low rate, among mothers who were within 12 months of delivery. Respondents' symptoms ranged from minimal, mild, and moderate depression to moderately severe and severe depression. The major predisposing factors for developing PPD are stressful life events, childcare stress, and prenatal anxiety, as well as a history of a previous episode of PPD.
The most common psychosocial and psychological interventions utilized in the management of postpartum depression were psychosocial support, professionally based postpartum home visits, interpersonal psychotherapy, and cognitive therapy. However, among these interventions, psychosocial support proved to be the most effective, as it was reported to have influenced the reduction of depressive symptoms.
Postpartum depression may affect socialization behavior in both the child and the mother, and it may lead to thoughts of failure that deepen the depression.
Frequent screening exercises for postpartum depression should be organized by authorities of the Komfo Anokye Teaching Hospital in conjunction with the Ministry of Health, Ghana Health Service and Nongovernmental Organizations.
The Ministry of Health and Ghana Health Service should collaborate with the National Commission on Civic Education to embark on public education on the effective use of psychosocial support as an intervention for postpartum depression at the various health facilities in Ghana.
KATH:
Komfo Anokye Teaching Hospital
CBT:
cognitive behavioral therapy
MDD:
major depressive disorders
PHQ-9:
Patient Health Questionnaire-9
ECN:
early childhood nurses
PPD:
postpartum depression
Beck CT, Records K, Rice M. Further development of the postpartum depression predictors inventory-revised. J Obstet Gynecol Neonatal Nurs. 2006;35(6):735–45.
Mauthner NS. Re-assessing the importance and role of the marital relationship in postnatal depression: methodological and theoretical implications. J Reprod Infant Psychol. 1998;16(2–3):157–75.
Fishel AH. Mental health disorders and substance abuse. Maternity & women's health care; 2004:960–82.
Leopold KA, Zoschnick LB. Women's primary health grand rounds at the University of Michigan: postpartum depression. Female Patient Total Health Care Women 1997;22:12–30.
Andrews-Fike C. A review of postpartum depression. Primary Care Companion J Clin Psychiatry. 1999;1(1):9.
Bina R. The impact of cultural factors on postpartum depression: a literature review. Health Care Women Int. 2008;29(6):568–92.
Cox JL, Holden JM, Sagovsky R. Detection of postnatal depression: development of the 10-item Edinburgh Postnatal Depression Scale. Br J Psychiatry. 1987;150(6):782–6.
Huang YC, Mathers N. Postnatal depression–biological or cultural? A comparative study of postnatal women in the UK and Taiwan. J Adv Nurs. 2001;33(3):279–87.
Affonso DD, De AK, Horowitz JA, Mayberry LJ. An international study exploring levels of postpartum depressive symptomatology. J Psychosom Res. 2000;49(3):207–16.
Keller MC, Nesse RM. The evolutionary significance of depressive symptoms: different adverse situations lead to different depressive symptom patterns. J Pers Soc Psychol. 2006;91(2):316.
Craske MG. Origins of phobias and anxiety disorders: why more women than men?. New York: Elsevier; 2003. p. 13.
Buist A, Bilszta J, Milgrom J, Barnett B, Hayes B, Austin MP. Health professional's knowledge and awareness of perinatal depression: results of a national survey. Women Birth. 2006;19(1):11–6.
Nonacs R, Cohen LS. Postpartum mood disorders: diagnosis and treatment guidelines. J Clin Psychiatry. 1998;59:34–40.
McCue Horwitz S, Briggs-Gowan MJ, Storfer-Isser A, Carter AS. Prevalence, correlates, and persistence of maternal depression. J Women's Health. 2007;16(5):678–91.
Drozdowicz LB, Bostwick JM. Psychiatric adverse effects of pediatric corticosteroid use. Mayo Clin Proc. 2014;89(6):817–34.
Lorraine KA, Lopes B, Ricchetti-Masterson K, Yeatts KB. ERIC notebook. Chapel Hill: The University of North Carolina at Chapel Hill, Department of Epidemiology Courses: Epidemiology; 2013. p. 710.
Barlett JE, Kotrlik JW, Higgins CC. Organizational research: Determining appropriate sample size in survey research. Inf Technol Learn Perform J. 2001;19(1):43.
Gilbody S, Richards D, Brealey S, Hewitt C. Screening for depression in medical settings with the Patient Health Questionnaire (PHQ): a diagnostic meta-analysis. J Gen Intern Med. 2007;22(11):1596–602.
Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606–13.
Chinawa JM, Odetunde OI, Ndu IK, Ezugwu EC, Aniwada EC, Chinawa AT, Ezenyirioha U. Postpartum depression among mothers as seen in hospitals in Enugu, South-East Nigeria: an undocumented issue. Pan Afr Med J. 2016;23(1):180.
Nakku JN, Nakasi G, Mirembe F. Postpartum major depression at six weeks in primary health care: prevalence and associated factors. Afr Health Sci. 2006;6(4):207–14.
Sawyer A, Ayers S, Smith H. Pre-and postnatal psychological wellbeing in Africa: a systematic review. J Affec Disord. 2010;123(1):17–29.
Cooper PJ, Tomlinson M, Swartz L, Woolgar M, Murray L, Molteno C. Post-partum depression and the mother-infant relationship in a South African peri-urban settlement. Br J Psychiatry. 1999;175(6):554–8.
Holden JM, Sagovsky R, Cox JL. Counselling in a general practice setting: controlled study of health visitor intervention in treatment of postnatal depression. BMJ. 1989;298(6668):223–6.
O'Hara MW, Stuart S, Gorman LL, Wenzel A. Efficacy of interpersonal psychotherapy for postpartum depression. Arch Gen Psychiatry. 2000;57(11):1039–45.
Clark R, Tluczek A, Wenzel A. Psychotherapy for postpartum depression: a preliminary report. Am J Orthopsychiatry. 2003;73(4):441.
O'Hara MW, Stuart S, Watson D, Dietz PM, Farr SL, D'Angelo D. Brief scales to detect postpartum depression and anxiety symptoms. J Women's Health. 2012;21(12):1237–43.
The collection of data was done by the fourth and fifth authors (EIO and AGA). The secondary data compilation, data analysis, and interpretation were done by the first author (RA). The second and third authors (EA and AB) revised the manuscript thoroughly with their individual expertise. All authors played a significant part in the analysis of data as well as in designing and preparing the manuscript. Proofreading and the final approval process were also shared accordingly among all authors, and all authors agreed to its submission for publication. All authors read and approved the final manuscript.
Our gratitude goes out to the management and staff of the Komfo Anokye Teaching Hospital, Kumasi as well as all mothers who participated in this study. Further thanks to all whose works on postpartum depression helped in putting this work together.
A complete document of this study and its results can be found at the Library of the School of Medical Sciences, KNUST, Kumasi.
Consent to publish
A letter of introduction was sent to the administration unit of the selected hospital to seek permission to carry out research in the institution. The study was approved by the Committee on Human Research Publication and Ethics at the Kwame Nkrumah University of Science and Technology, Kumasi-Ghana. Anonymity was ensured by using abbreviations for the respondents. Consent was sought from all the participants: written informed consent was obtained before administration of the questionnaire. Written consent was taken from respondents because they could read and write, and the process was approved by the ethics committee after it was explained why such an approach was used. Participation was purely voluntary, and any participant who wanted to withdraw was allowed to do so. Confidentiality was guaranteed before administering the questionnaires. The study has been performed in accordance with the Declaration of Helsinki, protecting the life, health, dignity, and integrity of research subjects, ensuring their right to self-determination, and protecting the privacy and confidentiality of their personal information.
No external funding was received for the purpose of this study. All costs related to this research were covered by the researchers themselves.
Centre for Disability and Rehabilitation Studies, Department of Community Health, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
Reindolf Anokye
, Enoch Acheampong
& Adjei Gyimah Akwasi
School of Public Health, Department of Health Education and Promotion, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
Amy Budu-Ainooson
Methodist University College, Accra, Ghana
Edmund Isaac Obeng
Correspondence to Reindolf Anokye.
Anokye, R., Acheampong, E., Budu-Ainooson, A. et al. Prevalence of postpartum depression and interventions utilized for its management. Ann Gen Psychiatry 17, 18 (2018) doi:10.1186/s12991-018-0188-0
Psychosocial and psychological intervention | CommonCrawl |
Adaptation of Pseudomonas sp. AKS2 in biofilm on low-density polyethylene surface: an effective strategy for efficient survival and polymer degradation
Prosun Tribedi1,
Anirban Das Gupta1 &
Alok K Sil1
Pseudomonas sp. AKS2 can efficiently degrade low-density polyethylene (LDPE). It has been shown that this degradation of LDPE by AKS2 is correlated to its ability to form biofilm on the polymer surface. However, the underlying mechanism of this biofilm-mediated degradation remains unclear. Since bioremediation potential of an organism is related to its adaptability in a given environment, we hypothesized that AKS2 cells undergo successful adaptation in biofilm on LDPE, which leads to higher level of LDPE degradation. To verify this, the current study investigated a number of parameters of AKS2 cells in biofilm that are known to be involved in adaptation process.
Successful adaptation results in a viable microbial population, so we examined the viability of AKS2 cells in biofilm and observed a viable population there. To gain further insight, the growth of AKS2 cells in biofilm on LDPE at different time points was examined. Results showed better reproductive competence and more colonization for AKS2 biofilm cells than for planktonic cells, indicating increased fitness of AKS2 biofilm cells compared with their planktonic counterpart. Towards understanding this fitness, we determined the hydrolytic activity, carbon source utilization potential, functional diversity and homogeneity of AKS2 biofilm cells. Results showed increased hydrolytic activity (approximately 31%), higher metabolic potential, and higher functional diversity (approximately 27%) and homogeneity for biofilm-harvested cells than for planktonic cells. We also examined cell surface hydrophobicity, which is important for cellular attachment to the LDPE surface. Consistent with the above results, the cell surface hydrophobicity of biofilm-harvested AKS2 cells was found to be higher (approximately 26%) than that of their planktonic counterpart. All these results demonstrated the occurrence of physiological as well as structural adaptations of AKS2 cells in biofilm on the LDPE surface that resulted in better attachment, better utilization of the polymer and better growth of AKS2 cells, leading to the development of a stable colony on the LDPE surface.
The present study shows that AKS2 cells in biofilm on LDPE surface undergo successful adaptation that leads to enhanced LDPE degradation, and thus, it helps us to understand the underlying mechanism of biofilm-mediated polymer degradation process by AKS2 cells.
In the modern era, plastic-based materials have a variety of domestic and industrial applications. However, the widespread use of this non-biodegradable material poses a major threat to the environment. An example of a widely used non-biodegradable polymer is polyethylene. Although there are reports of microbial degradation of polyethylene, the rate is very slow [1,2]. Moreover, this microbial degradation requires the pre-oxidation of polyethylene, either by physical or chemical treatment [1-4]. Previously, we reported that Pseudomonas sp. AKS2 can degrade 5% ± 1% of low-density polyethylene (LDPE) in just 45 days, without any prior oxidation of polyethylene [5]. This report also documented that AKS2 developed biofilm on polyethylene surface efficiently, and there was a linear correlation between this biofilm formation and the ability to degrade polyethylene [5]. However, the underlying mechanism of this biofilm-mediated LDPE degradation by AKS2 remains unclear.
Biofilm represents a complex association of microorganisms in a given habitat [6], and its formation is a bacterial survival response to hostile environment [7]. Microorganisms are known to be capable of altering their structural and physiological activities through biofilm formation as this allows survival under varied environmental conditions. Such alteration in activities for better survival in a given habitat is known as adaptability.
In general, adaptation is a biological process by which an organism becomes more competent to live in a given habitat [8]. Following adaptation, microorganisms exhibit different physiological and structural activities compared to their non-adapted counterparts [9]. Thus, adaptation provides a kind of biological insurance for an organism to encounter varying environments in a given ecological niche. The adaptive traits may be structural, behavioural or physiological. Structural adaptation includes variations in shape and size of the organism. The alteration of membrane fluidity, by both psychrophilic and thermophilic bacteria, is an example of structural adaptation. This type of modulation in membrane fluidity serves as a protection against harsh temperatures. While behavioural adaptations are composed of inherited behaviour chains, physiological adaptations allow the organism to perform special functions for the adjustment of cellular growth and development, regulation of temperature, etc. Bacterial secretion of exo-polysaccharides for their attachment to a solid surface is an example of physiological adaptation.
Existing literature documents that bioremediation potential of an organism is related to its adaptability in a given environment [10]. Adapted Rhodococcus erythropolis DCL14 cells were shown to degrade alkanes and alcohols at higher rate compared to their non-adapted counterparts [11]. Thus, we hypothesized that successful adaptation of AKS2 population in biofilm on LDPE surface resulted in its sustained retention at this site, which resulted in enhanced polymer degradation.
To verify this hypothesis, we investigated the various parameters of AKS2 structure and physiology such as fitness, metabolic potential, functional diversity and homogeneity that are relevant to the adaptation. We observed increased fitness, higher levels of functional diversity and homogeneity, and increased cell surface hydrophobicity for biofilm adapted cells compared to the planktonic cells, indicating the physiological and structural adaptations of AKS2 cells in biofilm. Thus, these results demonstrate that successful adaptation of AKS2 cells in biofilm on LDPE surface resulted in enhanced degradation of LDPE.
Bacterial strain and culture condition
Pseudomonas sp. AKS2 was previously isolated from Kolkata municipal solid waste dumping ground soil (Kolkata, India) [12]. It is a potential degrader of polyethylene succinate (PES) [12] and LDPE [5]. This isolate was grown in 100 ml of sterile basal media containing 300 mg of sterile LDPE films at 30°C for different time periods as per the requirements of the experiments. Basal media were prepared as described previously [5]. Commercially available LDPE was used in all the experiments. LDPE films were made additive free by washing with 70% ethanol. Each polyethylene film used measured 5 cm × 4 cm.
Dual staining for the assessment of AKS2 viability in biofilm
To determine the viability of AKS2 cells in biofilm on the polyethylene surface after the incubation, LDPE films were removed from the conditioned medium and the adhered bacterial population, if any, was stained with 4 μg ml⁻¹ acridine orange for 15 min [5]. Thereafter, LDPE films were washed with sterile Milli-Q water (Millipore Corporation, Billerica, MA, USA) and further treated with 4 μg ml⁻¹ ethidium bromide for another 15 min. LDPE films were again washed with sterile Milli-Q water. Thereafter, dried films were observed under a fluorescence microscope (Olympus IX 71, Olympus Corporation, Tokyo, Japan).
Measurement of AKS2 fitness
Microbial fitness represents the ease of reproduction of an organism in a given environment. In order to examine the fitness of AKS2 biofilm cells, we compared the colonization and reproduction potential of biofilm-harvested cells with that of planktonic AKS2 cells. For this experiment, AKS2 cells were grown for 30 days in media containing LDPE films as sole C-source. After the incubation, cells that adhered to LDPE films were extracted. These cells represent biofilm-harvested cells. To compare the colonization and reproduction potential, biofilm-harvested and planktonic AKS2 cells were inoculated separately in equal numbers (approximately 10⁴ cells) into 100 ml of basal media containing 300 mg of LDPE film as sole C-source and incubated at 30°C for different lengths of time. After incubation for 5 and 10 days, LDPE films were taken out from the growth media and examined under a fluorescence microscope after staining with acridine orange as described in the previous section. All experiments were performed in triplicate.
Fluorescein diacetate hydrolysis assay
To examine the hydrolytic activity of bacterial cells either harvested from biofilm or planktonic condition, fluorescein diacetate (FDA) hydrolysis assay was performed [13]. Briefly, equal numbers of cells (approximately 10⁶ cells) taken from the respective origin were separately added to 1.5 ml of 60 mM sodium phosphate buffer, pH 7.6. FDA solution was added to it to attain a final concentration of 10 μg ml⁻¹. The flask was then shaken at 30°C for 30 min. These samples were then centrifuged at 6,000 rpm for 5 min, and the absorbance of the supernatant was measured by a spectrophotometer (V-630, Jasco, Tokyo, Japan) at 494 nm. Samples without FDA served as a control.
Evaluation of bacterial cell surface hydrophobicity
Cell surface hydrophobicity of biofilm-harvested cells and planktonic cells was measured by bacterial adhesion to hydrocarbon (BATH) assay [14]. For this purpose, equal numbers of biofilm-harvested and planktonic AKS2 cells (approximately 10⁶ cells) were added separately to several tubes containing increasing volumes (ranging from 0 to 0.2 ml) of n-hexadecane. Tubes were then shaken for 10 min and allowed to stand for 15 min to complete the phase separation. The OD400 of the aqueous suspensions was measured. Cell surface hydrophobicity was calculated using the following formula:
$$ \mathrm{Cell\ surface\ hydrophobicity}\ (\%) = 100 \times \frac{\mathrm{Initial\ OD} - \mathrm{Final\ OD}}{\mathrm{Initial\ OD}} $$
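As a small illustration of this formula, the sketch below computes the percentage from initial and final OD400 readings of the aqueous phase; the numeric values are made up for the example and are not measurements from the study.

```python
def cell_surface_hydrophobicity(initial_od: float, final_od: float) -> float:
    """Percentage of cells partitioning out of the aqueous phase (BATH assay)."""
    return 100.0 * (initial_od - final_od) / initial_od


# Hypothetical OD400 readings before and after mixing with n-hexadecane
print(cell_surface_hydrophobicity(initial_od=0.50, final_od=0.37))  # 26.0%
```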
Physiological profiles of AKS2 population
The patterns of potential carbon source utilization by biofilm-harvested AKS2 cells and their planktonic counterpart were assessed using BiOLOG-ECO plates (Biolog, Bremen, Germany) containing triplicates of 31 different environmentally relevant carbon sources [15]. To perform the experiment, 150 μl of either biofilm-harvested cells or planktonic cells containing approximately 10³ CFU was separately added into each well of the BiOLOG-ECO plates. The plates were incubated at 30°C for 72 h, and the absorbance of each well was recorded at 590 nm. Microbial metabolic activity in each microplate, expressed as average well colour development (AWCD), was determined as follows: AWCD = Σ Absorbance_i / 31, where Absorbance_i is the absorbance at 590 nm from each well. The Shannon diversity index (H), an indicator of functional diversity, was calculated using the following equation: H = −Σ p_i ln p_i, where p_i is the ratio of the activity on each substrate (Absorbance_i) to the sum of activities of all substrates (Σ Absorbance_i), assuming an absorbance at 590 nm of 0.25 as the threshold for a positive response [16]. The corresponding Lorenz curve, which provides a graphical depiction of the information contained in the Shannon diversity index, was plotted. Thereafter, this curve was used to derive the Gini coefficient (G), which is a measure of functional inequality, using the formula:
$$ G = 1 - 2 \int_{0}^{1} L\, \mathrm{d}F $$
where L is the Lorenz curve and F is the standardized cumulative distribution of the standardized population.
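For readers who want to reproduce these indices, a minimal sketch is given below. It takes a vector of 31 well absorbances from one ECO plate and returns the AWCD, the Shannon index H (using the 0.25 positive-response threshold mentioned above) and the Gini coefficient derived from the Lorenz curve; the input values are illustrative, not data from the study.

```python
import numpy as np


def biolog_indices(absorbances, threshold=0.25):
    """AWCD, Shannon diversity H and Gini coefficient for one BiOLOG-ECO plate."""
    a = np.asarray(absorbances, dtype=float)

    awcd = a.sum() / len(a)                       # average well colour development

    positive = a[a >= threshold]                  # wells counted as positive responses
    p = positive / positive.sum()
    shannon_h = -(p * np.log(p)).sum()

    # Lorenz curve: cumulative share of activity versus cumulative share of wells
    cum_share = np.concatenate(([0.0], np.cumsum(np.sort(a)) / a.sum()))
    f = np.linspace(0.0, 1.0, len(cum_share))
    gini = 1.0 - 2.0 * np.trapz(cum_share, f)     # G = 1 - 2 * area under Lorenz curve

    return awcd, shannon_h, gini


# Illustrative absorbances for the 31 carbon sources (not study data)
example = np.random.default_rng(0).uniform(0.0, 1.5, 31)
print(biolog_indices(example))
```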
For cluster analysis, data from the richness tests using BiOLOG-ECO plates were collected from either biofilm-harvested cells or planktonic cells of AKS2. The similarity matrix was generated by Euclidean distances, which were used to build a dendrogram with the unweighted pair group mean averages (UPGMA) algorithm wherein the linkage was single. Cluster analysis was performed by using the software Minitab 16.
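A comparable clustering can be sketched with SciPy as shown below; note that the UPGMA algorithm corresponds to method='average' in SciPy's linkage (single linkage, also mentioned above, is a different option), and the input matrix here is a random placeholder rather than the study's absorbance data.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

# Rows = samples (e.g. biofilm vs planktonic replicates), columns = 31 carbon sources.
data = np.random.default_rng(1).uniform(0.0, 1.5, size=(6, 31))

distances = pdist(data, metric="euclidean")    # pairwise Euclidean distances
tree = linkage(distances, method="average")    # 'average' linkage = UPGMA

dendrogram(tree, labels=[f"sample {i}" for i in range(1, 7)])
plt.show()
```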
Experimental results were subjected to statistical analysis of one-way analysis of variance (ANOVA) in order to evaluate statistically significant differences among samples. Mean values were compared at different levels of significance using the software Minitab 16. All experiments were performed in triplicate.
AKS2 cells exhibit increased fitness for their growth on LDPE surface
In a habitat, an efficient adaptation of an organism should lead to the development of a viable population. Therefore, to verify the hypothesis that AKS2 cells undergo adaptation in biofilm, we examined the viability of AKS2 cells in biofilm on LDPE surface by performing dual staining with ethidium bromide and acridine orange. Acridine orange stains both the dead and viable cells, whereas ethidium bromide selectively stains the dead cells as it cannot pass through the intact membrane of living cells. Consistent with our previously published report [5], we observed AKS2 biofilm formation from day 30 onwards and a considerable weight loss of LDPE film (5%) after incubation with AKS2 for 45 days (Additional file 1: Figure S1 and Additional file 2: Figure S2). Dual staining of LDPE film obtained after 30 days of incubation with AKS2 showed a large number of green cells (stained with acridine orange) in contrast to a few red cells (stained with ethidium bromide), indicating that majority of AKS2 cells present in biofilm were live (Figure 1). This result demonstrated the development of viable AKS2 population on polyethylene surface signifying an adaptation of AKS2 cells in biofilm on LDPE surface.
AKS2 develops viable microbial population in biofilm on polyethylene surface. LDPE films recovered from the conditioned media after 30 days of incubation were stained with acridine orange and ethidium bromide and observed under a fluorescence microscope. The figure is a representative of images obtained from 20 different fields and from three independent experiments.
To better understand the adaptation, we examined the fitness of AKS2 cells in biofilm. Fitness is the ability of an organism to survive as well as reproduce in a given environment, and thus, successful adaptation is represented by better fitness. To examine the fitness of AKS2 cells in biofilm, we compared their colonization and reproduction potential on LDPE surface with that of planktonic AKS2 cells (see 'Methods' for details). For this purpose, biofilm-harvested and planktonic AKS2 cells were inoculated separately in equal numbers (approximately 10⁴ cells) into basal media containing LDPE film as sole C-source and incubated for different lengths of time. The LDPE films were then examined under a fluorescence microscope after staining with acridine orange to monitor the extent of AKS2 attachment to LDPE. The result showed greater adherence to the LDPE film by biofilm-harvested cells than planktonic cells (Figure 2A). We also compared the reproduction efficiency between biofilm-harvested cells and planktonic cells of AKS2 and observed an approximately 2.5-fold increase in cell number for biofilm-harvested cells from day 5 to day 10 (Figure 2B). For planktonic cells, the corresponding fold increase over the same period of time was only approximately 1.4-fold (Figure 2B). These results indicate that biofilm-harvested cells have higher reproduction ability than their planktonic counterparts. Thus, there is a significant increase in the fitness of biofilm-harvested AKS2 cells compared to that of their planktonic counterpart.
Colonization and reproduction efficiency comparison between biofilm-harvested and planktonic AKS2 cells. Equal numbers (approximately 10⁴ CFU) of biofilm-harvested cells and planktonic cells of AKS2 were separately inoculated in basal media containing sterile LDPE films and incubated at 30°C for different time points. After the incubation, polyethylene films were recovered from the conditioned media, stained with acridine orange and observed under a fluorescence microscope. Microbial colonization efficiency (A) and reproduction efficiency (B) were examined. The figure is representative of images obtained from 20 different fields and from three independent experiments. Statistical significance between the groups of reproduction efficiency was evaluated by ANOVA.
Biofilm-harvested AKS2 cells exhibit increased functional diversity and metabolic activity
In an ecological niche, metabolic functional diversity influences productivity of an organism by increasing their ability to utilize a greater variety of nutrients [17]. Thus, it is possible that in the biofilm, AKS2 cells increase metabolic functional diversity, which in turn enhances their LDPE degradation capability. To verify this, we compared the functional diversity of biofilm-harvested and planktonic AKS2 cells by determining the Shannon diversity index from the utilization spectrum of 31 different eco-sensitive carbon sources in BiOLOG-ECO plates [18,19]. We observed that the Shannon diversity index is significantly higher (approximately 27%) for biofilm-harvested cells than for planktonic cells (Figure 3A). To further validate the result, we compared metabolic potentials of biofilm-harvested cells and planktonic cells. Metabolic potential of an organism was examined by measuring the AWCD of BiOLOG-ECO plate [18]. As expected, the biofilm-harvested cells of AKS2 showed higher level of AWCD compared to the planktonic cells (Figure 3B). Thus, the results demonstrated increased functional diversity and metabolic potential of biofilm-harvested AKS2 cells.
Functional diversity and metabolic activity profile of biofilm-harvested and planktonic AKS2 cells. Equal numbers (approximately 10³ CFU) of either biofilm-harvested or planktonic cells of AKS2 were separately added to each well of BiOLOG-ECO plate and incubated for 3 days at 30°C. Absorbance at 590 nm of each well was recorded at different time points. Shannon diversity index (A) and average well colour development (B) were derived from well colour absorbance and plotted. Three replicates have been used for each experiment, and the result represents the average of these three replicates. Error bars indicate standard deviation (±SD). Statistical significance between the groups was evaluated by ANOVA.
Viable microbial cells produce a large array of hydrolytic enzymes, which can cleave FDA to produce fluorescein that can be detected spectrophotometrically, and this assay is widely used to measure cell metabolic activity [18,20]. FDA is sensitive to esterase and lipase activity as it contains an ester linkage. Therefore, to verify the metabolic activity, we compared the FDA hydrolysis activity of biofilm-harvested AKS2 cells with the corresponding activity of AKS2 planktonic cells. The result showed higher level (approximately 31%) of FDA hydrolysis activity in the extract obtained from biofilm-harvested AKS2 cells than in the extract from the planktonic cells (Figure 4). This result indicates that biofilm-harvested AKS2 cells harbour higher level of hydrolytic enzymes, which ensures the enhanced metabolic activity for biofilm cells than for planktonic cells.
Comparison of hydrolytic activity. Equal numbers (approximately 10⁶ CFU) of either biofilm-harvested or planktonic cells of AKS2 were separately examined for esterase activity by FDA hydrolysis. Three replicates have been used for each experiment, and the result represents the average of these three replicates. Error bars indicate standard deviation (±SD). Statistical significance between the groups was evaluated by ANOVA.
AKS2 cells exhibited increased functional homogeneity in biofilm
Functional homogeneity or evenness of a microbial population contributes towards the development of a stable colony, and therefore, we determined the functional homogeneity of biofilm-harvested cells. To examine it, we plotted the Lorenz curves deduced from the different carbon source utilization patterns of AKS2 under different conditions. A Lorenz curve is the graphical representation of the degree of inequality in a population. The Lorenz curve for biofilm-harvested cells was found to be closer to the line of equality than that of their non-adapted planktonic counterpart (Figure 5A). The closer the curve is to the line of equality, the higher the evenness, i.e., the more homogeneously the system is distributed. Thus, this result indicates a higher level of functional homogeneity for biofilm-harvested cells than for planktonic cells. The subsequent analysis of the Gini coefficient, a widely used inequality coefficient, showed a reduction in the coefficient value for biofilm-harvested cells compared to the planktonic cells (Figure 5A). Since the evenness of a system varies inversely with the Gini coefficient, this result again indicates that AKS2 cells in biofilm have a higher degree of evenness with respect to their ability to utilize different C-sources than their planktonic counterpart. To lend support to this observation, we also performed metabolic cluster analysis. AKS2 cells harvested from biofilm showed distinct metabolic clustering patterns compared to their planktonic form (Figure 5B). Unlike planktonic cells, biofilm-harvested cells showed a large cluster (represented by a red colour) in which most of the carbon sources were utilized with maximum similarity (Figure 5B). This result further confirms greater functional homogeneity or evenness for AKS2 cells in biofilm than for planktonic cells. Again, the increased evenness or functional homogeneity of AKS2 cells in biofilm contributes to better degradation of the polymer. Taken together, the increased levels of functional diversity, metabolic activity (especially hydrolytic activity) and functional homogeneity indicate the occurrence of physiological adaptation of AKS2 cells on the LDPE surface.
Biofilm-harvested AKS2 cells exhibit higher evenness. Equal numbers (approximately 10³ CFU) of either biofilm-harvested or planktonic cells of AKS2 were separately added to each well of BiOLOG-ECO plate and incubated for 3 days at 30°C. Absorbance at 590 nm of each well was recorded at different time points. (A) Lorenz curves and Gini coefficient. (B) UPGMA cluster analysis. Statistical significance between the groups of Gini coefficient was evaluated by ANOVA.
Biofilm-harvested AKS2 cells exhibit higher level of cell surface hydrophobicity
The cell surface hydrophobicity significantly contributes to cellular attachment to polymer surface, and thus, it plays an important role towards biofilm formation and polymer degradation as it brings the substrate (LDPE polymer) in close proximity to the enzyme [5,21,22]. Therefore, we examined the cell surface hydrophobicity of biofilm-harvested AKS2 cells and compared the same with planktonic cells. The result showed significantly higher (approximately 26%) level of surface hydrophobicity in biofilm-harvested cells than planktonic AKS2 cells (Figure 6). It indicates the occurrence of structural adaptation of AKS2 cells in biofilm on LDPE surface.
Comparison of cell surface hydrophobicity. Equal numbers of either biofilm-harvested cells or planktonic cells of AKS2 were separately examined for cell surface hydrophobicity by BATH assay. Three replicates have been used for each experiment, and the result represents the average of these three replicates. Error bars indicate standard deviation (±SD). Statistical significance between the groups was evaluated by ANOVA.
Polyethylene surface modulates the adaptation of AKS2
It is possible that LDPE surface may contribute towards the observed adaptation of AKS2 cells on its surface as abiotic components are known to play an important role in the stability of an ecosystem [23]. Towards understanding the role of LDPE in this adaptation, we compared functional diversity and evenness of AKS2 biofilm cells on this polymer with those of AKS2 biofilm cells formed on another polymer, polyethylene succinate (PES), as these two components have direct correlation with adaptation. We observed a significant difference in functional diversity and evenness between biofilm cells taken from each polymer (Figure 7A,B). Since we started this experiment with an equal number of AKS2 cells for both the polymers, the different levels of functional diversity can only be attributed to the difference in the polymeric surface. Thus, this result clearly suggests a possible role of each polymer in the adaptation process of AKS2 in a given habitat.
Functional diversity of AKS2 cells on LDPE and PES surface. Equal numbers (approximately 10³ CFU) of biofilm-harvested cells of AKS2, taken from both LDPE and PES surfaces, were aseptically and separately added to each well of BiOLOG-ECO plate and incubated for 3 days at 30°C. Absorbance at 590 nm of each well was recorded at different time points. Shannon diversity index (A) and Gini coefficient (B) were derived from well colour absorbance at 590 nm and plotted. Three replicates have been used for each experiment, and the result represents the average of these three replicates. Error bars indicate standard deviation (±SD). Statistical significance between the groups was evaluated by ANOVA.
The present study investigated structural and physiological properties of AKS2 cells during LDPE degradation. It has been reported that adapted marine microorganisms can survive in extremely unfavourable environmental conditions containing high concentrations of pollutants and toxic substances like heavy metals, hydrocarbons, xenobiotics and other recalcitrant compounds by forming biofilm [10]. Similarly, LDPE degradation by AKS2 was also found to be increased concomitantly with the increased biofilm formation [5]. Though biofilm was shown to enhance bioremediation, the high cell density inside a biofilm is known to cause a stressful environment [24,25]. The ability of an organism to adapt to the different microenvironments in biofilm is an important survival strategy against this environmental stress [26,27]. In addition, a previous report has documented that under stressful conditions, microorganisms undergo phenotypic diversification to enhance their adaptive potential [28]. Towards understanding the adaptation, we observed that biofilm cells on polyethylene surface exhibited higher metabolic activity, in particular hydrolytic activity, compared to planktonic cells. This increased metabolic activity may help these cells to degrade and utilize the polymer to establish a sustainable population. Biofilm-harvested cells also exhibited increased functional diversity. This indicates that microorganisms are undergoing phenotypic diversifications, which leads to better adaptation. Moreover, biofilm-harvested cells exhibit lower Gini coefficient which is an indicator of increased functional homogeneity. Therefore, majority of the individual AKS2 cells within the population have similar levels of metabolic potential with respect to utilization of a wide range of carbon sources. This enables the individual microorganisms to efficiently grow and establish a stable population in biofilm as the possibility of intra-species competition is greatly reduced. It has been documented that Rhodococcus tolerates extreme conditions by structural adaptations such as the modification of cell membrane and the alteration in cell surface hydrophobicity [11]. We also observed increased cell surface hydrophobicity of biofilm-harvested AKS2 cells compared to their planktonic counterpart. This increased cell surface hydrophobicity enables AKS2 cells to attach to the hydrophobic LDPE surface more efficiently compared to planktonic cells. The emergence of this trait demonstrated the occurrence of structural adaptation in biofilm of AKS2 cells on polyethylene surface. Taken together, the increased metabolic potential, higher level of functional diversity and homogeneity, and the increased surface hydrophobicity resulted in better colonization and higher reproduction potential of biofilm-harvested AKS2 cells.
Phenotypic alteration by an organism in an imposed condition has been considered an important strategy for better adaptation [29]. In our previous study, we observed that AKS2 cells in biofilm on LDPE surface exhibited a significant alteration in their shape and size compared to the planktonic form [5]. In biofilm, AKS2 cells become more round shaped and smaller in size than planktonic cells [5]. This structural adaptation may provide AKS2 cells a better access to the available nutrients in its surroundings. In the same study, we also observed that AKS2 cells adhering to LDPE secreted exo-polysaccharides for their better attachment and formation of biofilm on the polymer [5]. Again, it suggests the occurrence of physiological adaptations of AKS2 cells on LDPE surface. Collectively, these results demonstrated that AKS2 cells have adapted successfully in biofilm on LDPE surface resulting in a viable population and better degradation of the LDPE polymer.
The stability of an ecosystem depends on the balanced interactions between biotic and abiotic components [23]. These biotic and abiotic components are connected together through nutrient cycles and energy flows [30]. Thus, the abiotic surface of an ecosystem is likely to play a significant role towards adaptation of an organism in a given ecological niche. Towards this, our results showed different levels of functional diversity and evenness of the same organism, AKS2 cells, for two different polymers: LDPE and PES. Thus, the result demonstrates the possible involvement of LDPE towards adaptation of AKS2 cells in biofilm on LDPE surface.
Towards understanding biofilm-mediated LDPE degradation by AKS2, we verified the adaptation of AKS2 cells in biofilm by examining their viability and fitness. The results showed a viable population of AKS2 cells in biofilm with increased fitness compared to their planktonic counterpart. Further investigation revealed higher metabolic potential, higher functional diversity and homogeneity, and higher level of surface hydrophobicity for the biofilm-harvested AKS2 cells than planktonic cells. All these physiological and structural properties are known to be connected to the adaptability of an organism, and thus, these observations strongly support the view that AKS2 cells have adapted successfully in biofilm and thus developed a viable and stable population which resulted in enhanced polymer degradation. Thus, the current study deciphers the underlying mechanism of LDPE degradation by AKS2 biofilm cells with an enhanced rate.
In conclusion, the current study demonstrates structural and physiological adaptation of AKS2 cells in biofilm on polyethylene surface wherein the nature of the polymer plays an important role. This adaptation leads to enhanced LDPE degradation through biofilm formation.
ANOVA:
AWCD:
Average well colour development
Bacterial adhesion to hydrocarbon
CFU:
Colony-forming unit
C-source:
FDA:
Fluorescein diacetate
Shannon diversity index
Lorenz curve
LDPE:
Optical density
PES:
Polyethylene succinate
UPGMA:
Unweighted pair group mean averages
Roy PK, Titus S, Surekha P, Tulsi E, Deshmukh C, Rajagopal C (2008) Degradation of abiotically aged LDPE films containing pro-oxidant by bacterial consortium. Polym Degrad Stab 93:1917–1922
Chatterjee S, Roy B, Roy D, Banerjee R (2010) Enzyme-mediated biodegradation of heat treated commercial polyethylene by Staphylococcal species. Polym Degrad Stab 95:195–200
Albertsson AC, Erlandsson B, Hakkarainen M, Karlsson S (1998) Molecular weight changes and polymeric matrix changes correlated with the formation of degradation products in biodegraded polyethylene. J Environ Polym Degrad 6:187–195
Volke-Sepulveda T, Saucedo-Castaneda G, Gutierrez-Rojas M, Manzur A, Favela-Torres E (2002) Thermally treated low density polyethylene biodegradation by Penicillium pinophilum and Aspergillus niger. J Appl Polym Sci 83:305–314
Tribedi P, Sil AK (2013) Low-density polyethylene degradation by Pseudomonas sp. AKS2 biofilm. Environ Sci Pollut Res Int 20:4146–4153
Cvitkovitch DG, Li YH, Ellen RP (2003) Quorum sensing and biofilm formation in streptococcal infections. J Clin Investig 112:1626–1632
Kim J, Kim HS, Han S, Lee JY, Oh JE, Chung S, Park HD (2013) Hydrodynamic effects on bacterial biofilm development in a microfluidic environment. Lab Chip 13:1846–1849
Dobzhansky T, Hecht MK, Steere WC (1968) On some fundamental concepts of evolutionary biology. In: Evolutionary biology volume 2 (1st edition). Appleton-Century-Crofts, New York, pp 1–34
Li YH, Hanna MN, Svensater G, Ellen RP, Cvitkovitch DG (2001) Cell density modulates acid adaptation in Streptococcus mutans: implications for survival in biofilms. J Bacteriol 183:6875–6884
Dash HR, Mangwani N, Chakraborty J, Kumari S, Das S (2013) Marine bacteria: potential candidates for enhanced bioremediation. Appl Microbiol Biotechnol 97:561–571
de Carvalho CCCR (2012) Adaptation of Rhodococcus erythropolis cells for growth and bioremediation under extreme conditions. Res Microbiol 163:125–136
Tribedi P, Sarkar S, Mukherjee K, Sil AK (2012) Isolation of a novel Pseudomonas sp. from soil that can efficiently degrade polyethylene succinate. Environ Sci Pollut Res Int 19:2115–2124
Chrzanowski TH, Crotty RD, Hubbard JG, Welch RP (1984) Applicability of the fluorescein diacetate method of detecting active bacteria in freshwater. Microb Ecol 10(2):179–185
Rosenberg M, Perry A, Bayer EA, Gutnick DL, Rosenberg E, Ofek I (1981) Adherence of Acinetobacter calcoaceticus RAG-1 to human epithelial cells and to hexadecane. Infect Immun 33:29–33
Choi KH, Dobbs FC (1999) Comparison of two kinds of BiOLOG microplates (GN and ECO) in their ability to distinguish among aquatic microbial communities. J Microbiol Method 36:203–213
Garland JL (1997) Analysis and interpretation of community-level physiological profiles in microbial ecology. FEMS Microbiol Ecol 24:289–300
Tilman D (2001) Functional diversity. In: Levin SA (ed) Encyclopedia of biodiversity, vol 3. Academic Press, San Diego, pp 109–120
Teng Y, Luo Y, Sun M, Liu Z, Li Z, Christie P (2010) Effect of bioaugmentation by Paracoccus sp. strain HPD-2 on the soil microbial community and removal of polycyclic aromatic hydrocarbons from an aged contaminated soil. Bioresour Technol 101:3437–3443
Tribedi P, Sil AK (2013) Bioaugmentation of polyethylene succinate-contaminated soil with Pseudomonas sp. AKS2 results in increased microbial activity and better polymer degradation. Environ Sci Pollut Res Int 20:1318–1326
Killham K, Staddon WJ (2002) Bioindicators and sensors of soil health and the application of geostatistics. In: Burns RG, Dick R (eds) Enzymes in the environment: activity, ecology and applications. Marcel Dekker, New York, pp 391–405
Gilan(Orr) I, Hadar Y, Sivan A (2004) Colonization, biofilm formation and biodegradation of polyethylene by a strain of Rhodococcus ruber. Appl Microbiol Biotechnol 65:97–104
Balasubramanian V, Natarajan K, Hemambika B, Ramesh N, Sumathi CS, Kottaimuthu R, Rajash KV (2010) High-density polyethylene (HDPE)-degrading potential bacteria from marine ecosystem of Gulf of Mannar, India. Lett Appl Microbiol 51:205–211
Chapin FS, Pamela AM, Harold AM (2002) Principles of terrestrial ecosystem ecology. Springer, New York, ISBN 0-387-95443-0
deBeer D, Stoodley P, Roe F, Lewandowski Z (1994) Effects of biofilm structure on oxygen distribution and mass transport. Biotechnol Bioeng 43:1131–1138
Stoodley P, Sauer K, Davies DG, Costerton JW (2002) Biofilms as complex differentiated communities. Annu Rev Microbiol 56:187–209
Aertsen A, Michiels CW (2004) Stress and how bacteria cope with death and survival. Crit Rev Microbiol 30:263–273
Boles BR, Thoendel M, Singh PK (2004) Self-generated diversity produces "insurance effects" in biofilm communities. Proc Natl Acad Sci U S A 101:16630–16635
Koh KS, Lam KW, Alhede M, Queck SY, Labbate M, Kjelleberg S, Rice SA (2007) Phenotypic diversification and adaptation of Serratia marcescens MG1 biofilm-derived morphotypes. J Bacteriol 189:119–130
Price TD, Qvarnstrom A, Irwin DE (2003) The role of phenotypic plasticity in driving genetic evolution. Proc R Soc Lond B 270:1433–1440
Odum EP (1971) Fundamentals of ecology, 3rd edn. Saunders, New York. ISBN 0534420664
We thank Dr. Srimonti Sarkar for critical reading of the manuscript. This work is supported by a grant in aid from the Department of Biotechnology, Government of West Bengal, India (Sanction no. 555-BT (Estt)/RD-21/11).
Department of Microbiology, University of Calcutta, 35 B.C. Road, Kolkata, 700019, India
Prosun Tribedi, Anirban Das Gupta & Alok K Sil
Prosun Tribedi
Anirban Das Gupta
Alok K Sil
Correspondence to Alok K Sil.
PT and AKS conceived the idea and designed the experiments. PT performed all the experimental works. PT, AD and AKS interpreted the results and wrote the manuscript. All authors read and approved the final manuscript.
Scanning electron micrograph. The micrograph shows the formation of biofilm by AKS2 on LDPE surface.
Bar diagram. The bar diagram shows the extent of LDPE degradation by AKS2.
Tribedi, P., Gupta, A.D. & Sil, A.K. Adaptation of Pseudomonas sp. AKS2 in biofilm on low-density polyethylene surface: an effective strategy for efficient survival and polymer degradation. Bioresour. Bioprocess. 2, 14 (2015). https://doi.org/10.1186/s40643-015-0044-x
Polyethylene-based plastic material | CommonCrawl |
Why is Coronavirus all about Mathematics?
by Gabriel 13 Mar 2020
16 minutes read
"For since the fabric of the universe is most perfect, and is the work of a most wise Creator, nothing whatsoever takes place in the universe in which some relation of maximum and minimum does not appear."
― Leonhard Euler
On 11 March 2020, the World Health Organization officially declared the coronavirus (SARS-CoV-2 or COVID-19) a pandemic. But, fear not, this is not yet another opinionated response on the failings of society and the end of times. This is all about Mathematics!
Source: https://systems.jhu.edu/research/public-health/ncov/ as of March 12th.
Curiously enough, the numbers might help us understand all the fuss we hear about social distancing, flattening the curve, lockdowns and ultimately vaccinations.
An epidemic is a textbook example of an exponential growth. Not only an example, but its underlying principle is a centerpiece in the history of Mathematics and its discovery was fundamental for breakthroughs in Probability Theory, Mathematical Analysis, Differential Equations, Physics, Chemistry and Biology and even Pop Culture.
The idea is rather straightforward: it has to do with the way a quantity changes. When the rate at which a given quantity changes is proportional to the quantity itself at any given moment, mathematicians call it exponential.
the rate of change is proportional to the quantity at any given time
Let us apply that idea on data from the beginning of the outbreak in China,
Probably many in the past observed this behavior in how populations expand, diseases spread, organic matter decays or even how science progresses, but it was formally brought to life in the work on logarithms.
Later, while studying compound interest - more money you have, more money you make - Jacob Bernoulli discovered a constant ironically known as Euler's number.
We shall have the opportunity to talk about that constant's multiple "origins" in the future.
Exponentiate!
Coming back to the virus. In our case, having exponential growth suggests that the larger the infectious population, the quicker we will have new cases and the faster that infectious population will grow. We practically experience that whenever people get sick around us (or whenever all of our friends start getting pregnant at the same time…no! it is not exponential…or is it?)
Scientists denote ideas using mathematical script for convenience (sometimes laziness). Let us do it then,
I(t): infectious population
\[\begin{equation} \Delta I(t) \sim I(t) \implies \frac{dI}{dt} \sim I(t) \end{equation}\]
the rate of infection is proportional to the infectious population
Well, stating a relation of proportionality does not give us much. We still are not able to answer any questions. Is this virus dangerous? How contagious is it? Is it deadly? Are we all going to die?
We are going to formulate a model using variables and parameters to quantify how proportional that relation is. Using our intuition, there are two parameters we will take into account.
$\beta$: contact rate, or how many people an infected person comes into contact with in a given time
$\gamma$: recovery rate, or how many people recover in a given time
The rate at which people get infected should grow the more contact they have with each other and should decay as they recover. In other words, the rate of change is positively proportional to $\beta$ (contact rate) and negatively proportional to $\gamma$ (recovery rate).
Finally, we have our equation as follows,
\[\begin{equation} \frac{dI}{dt} = \left(\beta - \gamma\right) I \end{equation}\]
Now let us take some time to check if we can take some conclusions from it.
If the recovery rate is greater than the contact rate, the rate of change will always be negative, so the infection will eventually dissipate and may never even break out.
On the other hand, if the contact rate is greater than the recovery rate, we will have an outbreak. Apparently, those parameters indicate how strong an epidemic can be.
If $\gamma$ is the recovery rate, then $1/\gamma$ is the infectious/recovery period, or the period of time during which an infected person is sick and can pass the infection on.
Consider the product of $\beta$ and $1/\gamma$. That gives the average number of people an infected patient will pass the infection on to. For example, let us say that in a given scenario the contact rate is $\beta = 0.2$ and the infectious/recovery period is $1/\gamma = 10$ days. Then we expect each infected patient to pass the infection on to 2 people.
\[\begin{equation} R_{0} = \frac {\beta}{\gamma} \end{equation}\]
In fact, that indicator is known as the basic reproduction number. Note that when $R_0 > 1$ the infection will be able to start spreading in a population, but not if $R_0 < 1$. So that is indeed a very important indicator.
Probably everybody has that number in their minds at present. When trying to mitigate the contamination, we will see how important bringing it down is in order to slow down the epidemic.
After the outbreak in China, studies found that COVID-19's basic reproduction number is between 1.4 and 3.9, which means that, on average, a sick patient transmits the infection to between 1.4 and 3.9 others. The recovery period is around 10 days.
We solved our equation numerically, inputting different parameters to see how they play out; a minimal sketch of that kind of simulation is shown below.
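The sketch integrates dI/dt = (β − γ)I with a simple Euler step for a few choices of R0, assuming the 10-day recovery period quoted above; the time horizon, step size and initial case count are illustrative choices, not the values used for the original plots.

```python
import numpy as np


def exponential_outbreak(r0, recovery_days=10, i0=1, days=60, dt=0.1):
    """Integrate dI/dt = (beta - gamma) * I with a simple Euler scheme."""
    gamma = 1.0 / recovery_days
    beta = r0 * gamma
    steps = int(days / dt)
    infected = np.empty(steps + 1)
    infected[0] = i0
    for k in range(steps):
        infected[k + 1] = infected[k] + dt * (beta - gamma) * infected[k]
    return infected


for r0 in (1.4, 2.5, 3.9):                          # COVID-19's estimated R0 range
    print(r0, round(exponential_outbreak(r0)[-1]))  # infectious count after 60 days
```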
We can visually check how small decrements in the basic reproduction number yield a huge cumulative gain over time. That is the reason why we should take aggressive action, whether by social distancing or by avoiding and cancelling events.
We saw that the basic reproduction number depends on the contact rate and the recovery rate. Since the recovery rate is barely in our control - as it is typically a biological characteristic of the virus - controlling the contact rate is our best chance to flatten the curve. That does not mean we should not invest in treatments, and in the best case scenario we should bring both down.
Thou shall not exponentiate!
But here is the plot twist: an epidemic is not purely exponential! Our first model assumed that the infection spreads indefinitely, that the population is infinite and that there is no immunity or cure.
Of course, that is nonsense. A rather evil way of thinking about it is that once the entire population is infected, there is no one left to be infected!
Let us address that in a new model by assuming the following,
fixed population (nobody dying, moving or being born)
recovery and immunity
In COVID-19's case, it is not clear how long the protection lasts. But we will assume it lasts longer than our simulated timeframe.
We will use a model that was first introduced by William Ogilvy Kermack and Anderson Gray McKendrick, called a compartmental model. The idea is to divide the population into groups (or compartments) and describe how the infection moves from one to another.
Let the SIR model be as follows,
S(t): susceptible but not yet infected population
R(t): recovered population
N: total population
First, the equation for S tells us that as the population gets infected, the susceptible population becomes smaller.
\[\begin{equation} \frac{dS}{dt} = - \frac{\beta SI}{N} \end{equation}\]
Second, the equation for I is similar to our original model's, with an added factor $S/N$ accounting for the fraction of the population that is still susceptible.
\[\begin{equation} \frac{dI}{dt} = \frac{\beta SI}{N} - \gamma I \end{equation}\]
Last, the equation for R says that the recovered population grows in proportion to the infectious population, at the recovery rate.
\[\begin{equation} \frac{dR}{dt} = \gamma I \end{equation}\]
Unfortunately, that system of differential equations is non-linear and does not have an analytical solution. Fortunately, we can solve it numerically with the following initial conditions (a minimal sketch follows right after them),
I(0) = 1 (patient zero)
N = 1,000,000 (population)
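Here is one way such a sketch could look, assuming scipy is available; β = 0.42 is chosen only so that $R_0 = \beta/\gamma = 4.2$, one of the scenarios discussed below, and is not a fitted value:

```python
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

N = 1_000_000                 # total population
I0, R0_init = 1, 0            # patient zero, nobody recovered yet
S0 = N - I0 - R0_init
gamma = 1 / 10                # recovery period of ~10 days
beta = 0.42                   # chosen so that R0 = beta / gamma = 4.2

def sir(y, t, N, beta, gamma):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

t = np.linspace(0, 365, 365)
S, I, R = odeint(sir, (S0, I0, R0_init), t, args=(N, beta, gamma)).T

plt.plot(t, S, label="susceptible")
plt.plot(t, I, label="infectious")
plt.plot(t, R, label="recovered")
plt.xlabel("days"); plt.ylabel("people"); plt.legend(); plt.show()
```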
Well…We added all that math…for nothing? Wait for it and zoom out for a moment,
Now we have it!
Note that our original model gave us 1.5M for $R_0 = 4.2$, exceeding our total simulated population of 1M. That's nonsense!
Let us see the rest of the solution,
Once more, we visually check how sensitive those systems are to the basic reproduction number.
Imagine that rather than each infected patient passing the infection onto $4$ other people, we could slow it down to $3$: that would result in a 25% reduction of active cases at the peak. If social distancing brought it from 4 down to 2, the peak would be cut by more than half!
By mitigating the contamination, we will be able not only to postpone the peak of the outbreak, but also to bring the maximum number of active cases at the peak down, protecting the most vulnerable by making them less susceptible and giving people the chance to get proper treatment.
Another important point to make is that our resources, like healthcare and food supply chain, are limited and not lowering the peak of infection could stress them to a point where they could not operate, leading to catastrophe.
Herd immunity (also called herd effect, community immunity, population immunity, or social immunity) is a form of indirect protection from infectious disease that occurs when a large percentage of a population has become immune to an infection, thereby providing a measure of protection for individuals who are not immune
I will let the numbers talk. Assuming that 50% of the population is immune/vaccinated initially,
and even in the event of the apocalypse,
That's why the Coronavirus is all about Mathematics. As Euler once wrote, "nothing whatsoever takes place in the universe in which some relation of maximum and minimum does not appear."
If you enjoyed this story, please feel free to go to the code or launch the notebook.
I'm a mathematician who's passionate about learning and creating a positive impact on people's lives one algorithm at a time. That's why I enjoy facing challenging problems and putting them in a way that's easy for everyone to understand.
Talk:Normal form
This small page on normal forms has displaced the earlier and much more extensive page on matrix normal forms. Given their relative content, perhaps this page could be renamed to "normal forms (classification)" and "normal forms" be used for disambiguation. --Jjg 15:03, 19 April 2012 (CEST)
Or maybe this page itself is (or will be) an (extended) disambiguation page with links to detailed pages "normal form (for X)"? On Wikipedia in such cases one writes like this:
Matrices of linear maps between different linear spaces
Main article: Normal form (for matrices)
Such matrices are rectangular...
--Boris Tsirelson 17:05, 19 April 2012 (CEST)
This page is still under construction: I plan to complete it in the near future. By the way: I understand that it is a bad idea to "save page" when it is only partially written, but I have no idea how to protect the work between sessions. The right solution would be to prepare the "complete" version in an off-line editor capable of expanding all EoM "macros", but I am unaware of any such editor under Windows (sorry, I know that's in bad taste ;-). The initial (rich) page is still available as Normal form (for matrices), and I plan to write a separate page for other types of normal forms in Dynamical systems, singularities, Lagrangian/Legendrian singularities, Hamiltonian systems etc. My idea was to collect under the common header "normal forms" different flavors of this notion with appropriate links to specific pages. Sergei Yakovenko 17:28, 19 April 2012 (CEST)
Yes, there is a nice way to do it: create and use a sandbox! I did; here are two examples: User:Boris Tsirelson/sandbox1, User:Boris Tsirelson/sandbox2. --Boris Tsirelson 21:10, 19 April 2012 (CEST)
I think some disambiguation would be helpful to the casual user (I'll admit that if I was looking for matrix normal forms then I'd head to the entry marked "normal forms" and be disappointed there were no matrices mentioned there)
--Jjg 17:36, 19 April 2012 (CEST)
Negative results
$\newcommand{\M}{\mathscr M}$ As was noted, the normal form of an object $M\in\M$ is a "selected representative" from the equivalence class $[M]$, usually possessing some nice properties. The set of all these "representatives" intersects each equivalence class exactly once; such a set is called a transversal (for the given equivalence relation). Existence of a transversal is ensured by the axiom of choice for an arbitrary equivalence relation on an arbitrary set. However, a transversal in general is far from being nice. For example, consider the equivalence relation "$x-y$ is rational" for real numbers $x,y$. Its transversal (the so-called Vitali set) cannot be Lebesgue measurable!
Typically, the set $\M$, endowed with its natural σ-algebra, is a standard Borel space, and the set $\{(x,y)\in\M\times\M:x\sim y\}$ is a Borel subset of $\M\times\M$; this case is well-known as a "Borel equivalence relation". Still, existence of a Borel transversal is not guaranteed (for an example, use the Vitali set again).
Existence of Borel transversals and related properties of equivalence relations are investigated in descriptive set theory. According to [K, Sect. 4], a lot of work in this area is philosophically motivated by problems of classification of objects up to some equivalence. A number of negative results are available. They show that in many cases, classification by a Borel transversal is impossible, and moreover, much weaker kinds of classification are also impossible.
[K] Alexander S. Kechris, "New directions in descriptive set theory", Bull. Symb. Logic 5 (1999), 161–174. Zbl 0933.03057
Do you like to include this section (near the end)? --Boris Tsirelson 17:02, 24 April 2012 (CEST)
Boris, it is almost "on purpose" that I avoided formal definitions in this page. Whatever equivalence relation you start with, very soon you reach the degeneracy level where no meaningful classification is possible, so the equivalence has to be "relaxed" to have any chance to continue. All the way around, while many classifications are partitions into orbits of suitable group actions, quite a few are not (cf. the logical normal forms, but one can also think about Groebner bases etc. or your example from set theory). As a result, I decided to subdivide the "normal form" cluster into "subject areas" the way it is understood by the community. This is not always clear-cut, e.g., the page on the normal forms for matrices should include matrices of maps, matrices of operators, matrices of quadratic forms etc. In the "nonlinear" cases the corresponding objects belong to different classes and are treated in detail on separate pages.
I suggest that the normal forms which do not arise from the "singular" classification problems, be mentioned at the disambiguation part near the top of the page and addressed either in separate pages, or as sections in the corresponding topical articles, like DST. Sergei Yakovenko 06:54, 25 April 2012 (CEST)
Well, maybe some day I'll create "Borel equivalence relation" article, and then you'll mention it here in the style you like. --Boris Tsirelson 08:14, 25 April 2012 (CEST)
September 2013, 33(9): 4239-4269. doi: 10.3934/dcds.2013.33.4239
Ergodicity of group actions and spectral gap, applications to random walks and Markov shifts
Jean-Pierre Conze 1, and Y. Guivarc'h 2,
IRMAR, UMR CNRS 6625, Université de Rennes I, Campus de Beaulieu, 35042 Rennes Cedex
IRMAR, CNRS UMR 6625, Université de Rennes 1, Campus de Beaulieu, 35042 Rennes Cedex, France
Received: May 2011. Revised: July 2011. Published: March 2013.
Let $(X, \cal B, \nu)$ be a probability space and let $\Gamma$ be a countable group of $\nu$-preserving invertible maps of $X$ into itself. To a probability measure $\mu$ on $\Gamma$ corresponds a random walk on $X$ with Markov operator $P$ given by $P\psi(x) = \sum_{a} \psi(ax) \, \mu(a)$. We consider various examples of ergodic $\Gamma$-actions and random walks and their extensions by a vector space: groups of automorphisms or affine transformations on compact nilmanifolds, random walks in random scenery on non amenable groups, translations on homogeneous spaces of simple Lie groups, random walks on motion groups. A powerful tool in this study is the spectral gap property for the operator $P$ when it holds. We use it to obtain limit theorems, recurrence/transience property and ergodicity for random walks on non compact extensions of the corresponding dynamical systems.
Keywords: spectral gap, recurrence, local limit theorem, non compact extension of dynamical system, random walk, random scenery, nilmanifold.
Mathematics Subject Classification: Primary: 37A30, 37A40, 28D05, 22D40, 60F0.
Citation: Jean-Pierre Conze, Y. Guivarc'h. Ergodicity of group actions and spectral gap, applications to random walks and Markov shifts. Discrete & Continuous Dynamical Systems - A, 2013, 33 (9) : 4239-4269. doi: 10.3934/dcds.2013.33.4239
EPIGENE: genome-wide transcription unit annotation using a multivariate probabilistic model of histone modifications
Anshupa Sahu1,2,
Na Li2,3,
Ilona Dunkel2 &
Ho-Ryun Chung ORCID: orcid.org/0000-0002-4132-09111,2
Understanding the transcriptome is critical for explaining the functional as well as regulatory roles of genomic regions. Current methods for the identification of transcription units (TUs) use RNA-seq, which, however, requires large quantities of mRNA, rendering the identification of inherently unstable TUs, e.g. miRNA precursors, difficult. This problem can be alleviated by chromatin-based approaches due to a correlation between histone modifications and transcription.
Here, we introduce EPIGENE, a novel chromatin segmentation method for the identification of active TUs using transcription-associated histone modifications. Unlike the existing chromatin segmentation approaches, EPIGENE uses a constrained, semi-supervised multivariate hidden Markov model (HMM) that models the observed combination of histone modifications using a product of independent Bernoulli random variables, to identify active TUs. Our results show that EPIGENE can identify genome-wide TUs in an unbiased manner. EPIGENE-predicted TUs show an enrichment of RNA Polymerase II at the transcription start site and in gene body indicating that they are indeed transcribed. Comprehensive validation using existing annotations revealed that 93% of EPIGENE TUs can be explained by existing gene annotations and 5% of EPIGENE TUs in HepG2 can be explained by microRNA annotations. EPIGENE outperformed the existing RNA-seq-based approaches in TU prediction precision across human cell lines. Finally, we identified 232 novel TUs in K562 and 43 novel cell-specific TUs all of which were supported by RNA Polymerase II ChIP-seq and Nascent RNA-seq data.
We demonstrate the applicability of EPIGENE to identify genome-wide active TUs and to provide valuable information about unannotated TUs. EPIGENE is an open-source method and is freely available at: https://github.com/imbbLab/EPIGENE.
Transcription units (TUs) represent the transcribed regions of the genome which generate protein-coding genes as well as regulatory non-coding RNAs like microRNAs. Accurate identification of TUs is important to better understand the transcriptomic landscape of the genome. With the rapid development of low-cost high-throughput sequencing technologies, RNA sequencing (RNA-seq) has become the major tool for genome-wide TU identification. Hence, popular TU prediction tools such as AUGUSTUS [1], Cufflinks [2], StringTie [3], Oases [4] use RNA-seq data. Though RNA-seq-based TU prediction can be considered the state-of-the-art method to annotate the genome, its main drawback lies in its dependence on relatively high quantities of target RNAs. This is problematic for accurate identification of inherently unstable TUs like primary miRNA, etc. Recent studies have reported the presence of a large number of TUs that are rapidly degraded [5,6,7], some of which have been associated with diseases like HIV [8], cancer [9,10,11], Alzheimer's disease [12, 13], etc. While some unstable microRNA precursors have been identified by nascent transcription approaches like GRO-seq [14], PRO-seq [15], NET-seq [16], TT-seq [17], these approaches, however, are laborious, time-consuming, limited to cell cultures, and require a high amount of input material (in the range of 10^7 cells) [18,19,20]. In addition, most of these techniques were designed to answer very specific questions about RNA Polymerase II transcription and hence identify very specific stages of transcription such as transcription start site (TSS), RNA Polymerase II C-terminal domain modification, etc. [20]. These shortcomings of existing approaches can be alleviated with chromatin-based approaches [21, 22], due to the association between histone modifications and transcription.
Eukaryotic DNA is tightly packaged into macromolecular complex called chromatin, which consists of repeating units of 147 DNA base pairs (bp) wrapped around an octamer of four histones H2A, H2B, H3, and H4 called the nucleosome. Post-translational modifications (PTM) to histones in the form of acetylation, methylation, phosphorylation, and ubiquitination, play an important role in the transcriptional process. These PTMs are added, read, and removed by so-called writers, readers, and erasers, respectively. In this way nucleosomes serve as signalling platforms [23] that enable the localized activity of chromatin signalling networks partaking in transcription and other chromatin-related processes [24]. Indeed, it has been shown that histone modifications are correlated to the transcriptional status of chromatin [25, 26]. For example, H3K4me3 and H3K36me3 are positively correlated with transcription initiation [27, 28] and elongation [29] and are considered as transcription activation marks, whereas H3K9me3 and H3K27me3 are considered as repressive marks as they are commonly found in repressed regions [27, 30]. Therefore, it is reasonable to assume that histone modifications profiles can be used to identify cell type-specific TUs. Given a deluge of cell type-specific epigenome data available through many consortia, such as ENCODE [31], NIH Roadmap Epigenomics [32], DEEP [33], Blueprint [34], CEEHRC [35], and IHEC [36], a highly robust TU annotation pipeline based on epigenome markers becomes feasible.
Currently many computational approaches such as ChromHMM [37], EpicSeg [38], chroModule [39], GenoSTAN [40], etc., are available that use histone modifications as input to provide genome-wide chromatin annotation. These chromatin segmentation approaches use a variety of mathematical models with the most prominent one being hidden Markov models (HMM). These HMMs model the observed combination of histone modifications emitted by a sequence of hidden chromatin states according to emission probabilities. Moreover, the hidden chromatin states are linked by transition probabilities that introduce correlations in the observed histone modifications.
Based on the training, these HMMs can be classified as: (a) unsupervised methods that do not include prior biological information and require users to interpret and annotate the learned states based on existing knowledge about functional genomics (e.g. ChromHMM, EpicSeg, and GenoSTAN) and (b) supervised methods, that rely on a set of positive samples for training (e.g. chroModule). Although these approaches annotate genome modules such as promoter, enhancer, transcribed regions, etc., they fail to identify active TUs as they do not constrain the chromatin state sequence to begin with a transcription start site (TSS) and end with a transcription termination site (TTS).
To address these shortcomings, we developed a semi-supervised HMM, EPIGENE (EPIgenomic GENE), which is trained on the combinatorial pattern of IHEC class 1 epigenomes (H3K27ac, H3K4me1, H3K4me3, H3K36me3, H3K27me3, and H3K9me3) to infer hidden "transcription unit states". The emission probabilities represent the probability of a histone modification occurring in a TU state and the transition probabilities capture the topology of TU states. In addition to the TU states, the HMM also includes background states. The transcription start site (TSS), exons (first, internal, and last exon), introns (first, internal, and last intron) and transcription termination site (TTS) are referred to as the TU states. The emission probabilities of these states as well as the transition probabilities between them are learned from the structure of TUs given by an existing transcript annotation. The transition and emission probabilities of the background states, the transition probabilities from and to TSS and TTS states, and the transition probabilities between TSS and TTS states are learned in an unsupervised manner from the data.
In the forthcoming sections, we describe the EPIGENE approach, validate the predicted EPIGENE TUs with existing annotations, RNA-seq, and ChIP-seq evidence, compare the performance of EPIGENE to existing chromatin segmentation and RNA-seq-based methods within and across cell lines, and show that EPIGENE outperforms state-of-the-art RNA-seq and chromatin segmentation approaches in prediction resolution and precision. In summary, EPIGENE yields predictions with a high resolution and provides a pre-trained robust model that can be applied across cell lines.
Schematic overview of EPIGENE
EPIGENE uses a multivariate HMM, which allows the probabilistic modelling of the combinatorial presence and absence of multiple IHEC class 1 histone modifications. It receives a list of aligned ChIP and control reads for each histone modification, which is subsequently converted into presence or absence calls across the genome using normR (see "Binarization of ChIP-seq profiles" section; Fig. 1a (i)). By default, TU states were analysed at 200-bp non-overlapping intervals called bins. The HMM comprises 14 TU states and 3 background states where each TU state captures individual elements of a gene (i.e. TSS, exons, introns, and TTS). The TU state sequence was duplicated, running from TSS to TTS and from TTS to TSS, allowing identification of TUs on the forward and reverse strand, respectively (see Fig. 1a (ii)). The transition probabilities between the TU states were trained in a supervised manner using GENCODE annotations [41] and their emission probabilities were trained on a highly confident set of GENCODE transcripts [41] that showed an enrichment for RNA Polymerase II in K562 cell line (see "Training the model parameters" section). The transition and emission probabilities of background states, the transition probabilities from or to either the TSS or TTS state, and the transition probabilities between TSS and TTS states were trained in an unsupervised manner (see "Training the model parameters" section). The HMM outputs a vector where each bin is assigned to a TU state or to one of the three background states. This vector is then further refined to obtain active TUs (see Fig. 1b).
a Schematic overview of EPIGENE framework. b An example of EPIGENE prediction. EPIGENE predictions of METTL4 and NC80 gene show an enrichment of H3K27ac and H3K4me3 at TSS (tracks shown in light violet), H3K36me3 in gene body (tracks shown in green), enhancer mark H3K4me1 few bps upstream or downstream of TSS (tracks shown in pink), RNA Polymerase II in TSS and gene body (tracks shown in blue). The predictions also show an absence of repression marks H3K27me3 and H3K9me3 (tracks shown in black). The corresponding RNA-seq evidence in this genomic region can be seen in the lower-most track (track shown in dark pink)
Validation with existing gene annotations and RNA-seq
We validated the predicted TUs using existing gene annotations and RNA-seq evidence. For this, we combined the EPIGENE predictions (24,571 TUs) and RNA-seq predictions that were obtained from Cufflinks (32,079 TUs) and StringTie (101,656 TUs; Additional file 1: Tables S2–S4 for summary statistics) to generate a consensus TU set. This consensus TU set contains 24,874 TUs, which were then overlaid with GENCODE and CHESS gene annotation [41, 42] (Fig. 2). We found that 93% of EPIGENE TUs can be explained by existing gene annotations. We identified 14,797 (11,584: annotated, 3213: unannotated) RNA-seq-exclusive TUs and 1304 (718: annotated, 586: unannotated) EPIGENE-exclusive TUs. Additional integration of RNA Polymerase II ChIP and Nascent RNA-seq data revealed that 40% (232 out of 586 TUs) of EPIGENE unannotated TUs and 35% (1120 out of 3213 TUs) of RNA-seq unannotated TUs showed enrichment of RNA Polymerase II ChIP, TT-seq, and GRO-seq evidence. Also, 88.4% (518 out of 586 TUs) could be validated by either RNA Polymerase II ChIP or Nascent RNA-seq. Additional details about RNA Polymerase II ChIP and Nascent RNA-seq enrichment in the consensus TU set can be seen in Additional file 2: Table S5.
Overlap of EPIGENE predictions with existing gene annotations and RNA-seq-based predictions
Histone modifications and RNA Polymerase II occupancy
The correctness of predicted TUs was estimated in the K562 cell line, due to the availability of matched RNA Polymerase II and RNA-seq profiles. We predicted 24,571 TUs in K562, the majority of which showed typical gene characteristics, with high enrichment of H3K27ac, H3K4me3 and H3K36me3 in TSS and gene bodies (Fig. 3a).
Correctness of EPIGENE predictions. a EPIGENE-estimated parameters for K562 using 17 chromatin states, ranging from 0 (white) to 1 (dark green). b Distribution of RNA Polymerase II enrichment score in EPIGENE predictions. The EPIGENE predictions are classified as: high RPKM (RPKM ≥ upper quartile) and low RPKM (RPKM < upper quartile) based on RNA-seq evidence in predicted transcripts
It is known that eukaryotic transcription is regulated by phosphorylation of RNA Polymerase II carboxy-terminal domain at serine 2, 5 and 7. The phosphorylation signal for serine 5 and 7 is strong at promoter region, whereas signal for serine 2 and 5 is strong at actively transcribed regions [43]. Genome-wide RNA Polymerase II profile for K562 cell line was obtained using four antibodies (see "Library preparation of RNA polymerase II ChIP-seq" section) that capture RNA Polymerase II signal at transcription initiation and gene bodies. The enrichment of RNA Polymerase II in predicted TUs was computed using normR [44] (see "Binarization of ChIP-seq profiles" section). The predicted TUs were classified as having high or low RPKM based on mRNA levels (threshold = upper quartile). Figure 3b shows the distribution of RNA Polymerase II enrichment in both the classes of predicted TUs. We observed that a significant proportion of predicted TUs (78%) showed an enrichment of RNA Polymerase II and thus were likely to be true positives. We also came across 24 unannotated TUs that showed an enrichment of RNA Polymerase II (enrichment score above 0.5), but had reduced or no RNA-seq evidence.
Comparison with RNA-seq-based approaches
Currently, there is no gold standard set of true TUs. However, there is a plethora of experimental approaches for studying RNA Polymerase II transcription. In order to perform an unbiased comparison, we integrated RNA Polymerase II data from ChIP-seq and Nascent RNA-seq techniques. For individual cell lines, we defined a set of gold standard regions based on RNA Polymerase II ChIP-seq and Nascent RNA-seq evidence (see Fig. 4a). We compared the performance of EPIGENE with two existing RNA-seq based transcript prediction approaches, Cufflinks and StringTie, both of which are known to predict novel TUs in addition to annotated TUs. The method comparison was performed in two stages: within-cell type and cross-cell type comparison using RNA Polymerase II ChIP-seq and Nascent RNA-seq enrichment as performance indicator (see "Performance evaluation" section, Fig. 4b).
Performance of EPIGENE compared to existing RNA-seq-based transcription unit prediction methods: Cufflinks and StringTie. a Set of gold standard regions obtained by combining RNA Polymerase II ChIP-seq and Nascent RNA-seq profiles. b Contingency matrix used for method comparison. c Receiver-operating characteristic curve. d Precision–recall curve. e Area under ROC and PRC curve for varying RNA Polymerase II resolution for EPIGENE, Cufflinks and StringTie
Within-cell type comparison
For this comparison, we used the ChIP-seq profile of RNA Polymerase II in K562 cell line and the pre-existing nascent RNA TUs reported by Schwalb et al. [17] as performance indicator (see "Binarization of Nascent RNA-seq profiles" and "Performance evaluation" sections). The nascent RNA TUs have been reported to show an enrichment of TT-seq and GRO-seq [17]. The ChIP-seq profiles of RNA Polymerase II were obtained using PolIIS5P4H8 antibody because it can enrich RNA Polymerase II both at the TSS and in actively transcribed regions.
We performed the method comparison at 200-bp resolution and found that EPIGENE reports in both the precision–recall curve (PRC) and the receiver-operating characteristic (ROC) curves a higher AUC (PRC: 0.83, ROC: 0.85; Fig. 4c, d) compared to Cufflinks (PRC: 0.60, ROC: 0.63) and StringTie (PRC: 0.77, ROC: 0.82). We repeated this analysis for three different resolutions (50, 100, and 500 bp) and the corresponding AUC values are in Fig. 4e. Cufflinks achieved a lower AUC compared to StringTie and EPIGENE, which is likely due to the usage of the RABT assembler which results in large number of false positives [45].
StringTie reported a lower AUC than EPIGENE for varying RNA Polymerase II resolutions. We examined the precision, sensitivity, and specificity values for EPIGENE, Cufflinks, and StringTie and found that the lower AUC for RNA-seq-based methods was due to spurious read mappings of RNA-seq that results in higher false positives in StringTie and Cufflinks. Additional file 1: Figure S1 shows an example of Cufflinks and StringTie TU that was identified due to spurious read mapping. This TU exactly overlaps with a repetitive sequence that occurs in four chromosomes (chromosome 1, 5, 6, X).
Cross-cell type comparison
For this comparison, we used three different datasets provided by the GEO database [46], ENCODE [31], and DEEP [33] consortium:
IMR90: lung fibroblast cells with 6 histone modifications obtained from Lister et al. [47], one RNA Polymerase II obtained from Dunham et al. [48], two control experiments (one each for RNA Polymerase II [48] and histone modifications [47]), one RNA-seq obtained from Dunham et al. [48] and one GRO-seq profile obtained from Jin et al. [49],
HepG2 replicate 1 and HepG2 replicate 2: hepatocellular carcinoma with 6 histone modifications, one control experiment and one RNA-seq obtained from Salhab et al. [50] where two replicates per histone modification and RNA-seq were available, RNA Polymerase II ChIP and control experiments obtained from Dunham et al. [48] and one GRO-seq obtained from Bouvy-Liivrand et al. [51].
We applied the K562-trained EPIGENE model to IMR90 and HepG2 datasets and compared the predictions with Cufflinks and StringTie. The ChIP-seq profiles of RNA Polymerase II and GRO-seq profiles were used as performance indicator for both cell lines (see "Binarization of Nascent RNA-seq profiles" and "Performance evaluation" sections). As shown in Fig. 5 and Additional file 1: Figure S2, the K562-trained EPIGENE model consistently reports a higher AUC (PRC: 0.78, ROC: 0.77 in IMR90; PRC: 0.75, ROC: 0.77 in HepG2 replicate 1; PRC: 0.80, ROC: 0.80 in HepG2 replicate 2) compared to Cufflinks (PRC: 0.54, ROC: 0.54 in IMR90; PRC: 0.61, ROC: 0.64 in HepG2 replicate 1; PRC: 0.61, ROC: 0.64 in HepG2 replicate 2) and StringTie (PRC: 0.68, ROC: 0.72 in IMR90; PRC: 0.73, ROC: 0.77 in HepG2 replicate 1; PRC: 0.73, ROC: 0.78 in HepG2 replicate 2). These results suggest that EPIGENE generates accurate predictions across different cell lines, outperforming RNA-seq-based methods.
Performance of K562-trained EPIGENE models, Cufflinks and StringTie across cell lines
Comparison with chromatin segmentation approaches
Currently several chromatin segmentation approaches (like ChromHMM and Segway) exist that provide chromatin state annotation using histone modifications. These approaches were inherently designed to provide a whole-genome chromatin state annotation and hence, the model parameters do not represent a specific topology. We examined the results of these approaches to evaluate their accuracy in identifying TUs.
We compared EPIGENE predictions with a widely used chromatin segmentation approach, ChromHMM, as both methods use a binning scheme. We did not include Segway in this comparison because it operates at single base pair resolution and therefore restricts a fair comparison of the different profiles. Additionally, Segway is considerably slower than chromHMM.
TU identification with chromHMM was performed in two modes: strand-specific and unstranded. Strand-specific TUs were obtained by linking the promoter and transcription elongation states. We defined TU as a genomic region that begins with promoter state and proceeds through transcription elongation states. A promoter state was defined by an enrichment of H3K4me3 and H3K27ac (state 9 in Fig. 6a) and an elongation state was defined by an enrichment of H3K36me3 (state 4, 5 and 8 in Fig. 6a). Unstranded TUs were obtained by filtering chromHMM segmentations for transcription elongation states (state 4, 5 and 8 in Fig. 6a). The comparison was performed using the gold standard regions defined in "Comparison with RNA-seq based approaches" section. As shown in Fig. 6b–e and Additional file 1: Figure S3, EPIGENE consistently performed better (K562; ROC: 0.85, PRC: 0.83) than chromHMM strand-specific (K562, ROC: 0.73, PRC: 0.77) and unstranded TUs (K562, ROC: 0.79, PRC: 0.80). The lower AUC of strand-specific and unstranded chromHMM TUs was due to the presence of intronic enhancers and intermediate low coverage regions that resulted in fewer strand-specific chromHMM TUs and shorter strand-specific and unstranded chromHMM TUs (see Additional file 1: Figure S4).
a Emission probabilities of ChromHMM model trained in K562 cell line. b–e Performance of K562-trained EPIGENE model and K562-trained ChromHMM model in K562, IMR90 and HepG2
EPIGENE identifies transcription units with negligible RNA-seq evidence
Previous analyses (see "Histone modifications and RNA Polymerase II occupancy" and "Comparison with RNA-seq based approaches" sections) indicated the presence of TUs supported by RNA Polymerase II evidence but with reduced or no RNA-seq evidence. Here, we evaluated these TUs within and across cell lines by: (a) identifying cell type-specific TUs that showed TU characteristics but lack RNA-seq evidence, and (b) analysing the presence of microRNAs that were not identified by RNA-seq.
EPIGENE identifies cell type-specific transcription units
We created a consensus set of TUs by overlaying the EPIGENE predictions for K562, HepG2 and IMR90. This consensus TU set comprised 18,248 TUs, of which ~ 78% showed an enrichment for RNA Polymerase II. We identified 10,233 differential TUs of which 8047 were exclusive to cell lines (K562: 4247, IMR90: 2545, HepG2: 1255; see Additional file 1: Figure S5). We additionally identified 43 high-confidence cell-specific TUs (K562: 24, IMR90: 17, HepG2: 2; additional details in Additional file 3: Table S6), that lacked RNA-seq evidence but had typical characteristics of a TU, with RNA Polymerase II and GRO-seq enrichment at TSS and transcribed regions, H3K4me3 and H3K27ac enrichment at the TSS, and H3K36me3 enrichment in gene body (Fig. 7).
Example of EPIGENE-predicted TU that lacks RNA-seq evidence. The TU was predicted to be active in K562 but not in HepG2 and IMR90, and is located between pseudogene CASP3P1 and lncRNA RP5-952N6.1. The TU (shown in dark blue in EPIGENE-K562 track) shows an enrichment of H3K27ac and H3K4me3 at TSS (tracks shown in light violet), H3K36me3 in gene body (tracks shown in green), enhancer mark H3K4me1 few bps upstream of TSS (tracks shown in pink), GRO-seq in TSS (tracks shown in brown), K562 RNA Polymerase II in TSS and gene body (tracks shown in blue). The TU also shows an absence of repression marks H3K27me3 and H3K9me3 in K562 (tracks shown in black). We additionally observe the enrichment of repression mark in H3K27me3 in HepG2 and IMR90 indicating that the region is repressed in both these cell lines. There is a negligible RNA-seq evidence (shown in dark pink in K562-RNA-seq track) for this predicted TU
Identifying microRNAs that lack RNA-seq evidence
MicroRNAs are small (~ 22 bp), evolutionally conserved, non-coding RNAs [52, 53] derived from large primary microRNAs (pri-miRNA), that are processed to ~ 70 bp precursors (pre-miRNA) and consequently to their mature form by endonucleases [54, 55]. They regulate various fundamental biological processes such as development, differentiation, or apoptosis by means of post-transcriptional regulation of target genes via gene silencing [56, 57] and are involved in human diseases [58]. Due to the unstable nature of primary microRNA, traditional identification approaches relying on RNA-seq are challenging. Here, we investigated the presence of primary microRNAs that lack RNA-seq evidence across cell lines. We created a consensus TU set for individual cell lines (K562, HepG2 and IMR90) by combining EPIGENE and RNA-seq-based predictions. The RNA-seq-based predictions were obtained from Cufflinks and StringTie. The consensus TU set was overlapped with miRbase annotations [59] to obtain potential primary microRNA TUs. We identified 655 EPIGENE TUs in HepG2 (5% of total EPIGENE TUs common in both HepG2 replicates) that could be explained by miRbase annotations. We observed that majority of these were supported by RNA-seq and Polymerase II evidence (Fig. 8a and Additional file 1: Figure S6). We additionally identified 2 primary microRNA TUs in HepG2 cell line, which showed an enrichment for H3K27ac and H3K4me3 at their promoters, H3K36me3 in their gene body, and RNA Polymerase II in TSS and transcribed regions while lacking RNA-seq evidence. One of these TUs overlapped with a microRNA cluster located between RP-11738B7.1 (lincRNA) and NRF1 gene (Fig. 8b). This microRNA cluster has been shown to arise from the same primary miRNA and is also known to promote cell proliferation in HepG2 cell line [60, 61].
a Overview of potential primary miRNAs predicted by EPIGENE in HepG2. b Example of an EPIGENE-predicted TU overlapping a microRNA cluster in HepG2 cell line. This region is located between lincRNA RP11-738B7.1 and gene NRF1. The TU shows an enrichment of H3K27ac and H3K4me3 at TSS (tracks shown in light violet), H3K36me3 in gene body (tracks shown in green), enhancer mark H3K4me1 few bps upstream and downstream of TSS (tracks shown in pink), GRO-seq in TSS (tracks shown in brown) and RNA Polymerase II ChIP-seq in TSS (tracks shown in blue). The predictions also show an absence of repression marks H3K27me3 and H3K9me3 (tracks shown in black) and RNA-seq evidence (tracks shown in dark pink)
In this work, we introduced EPIGENE, a semi-supervised HMM that identifies active TUs using histone modifications. EPIGENE has TU (forward and reverse) and background sub-models. The TU sub-models were trained in a supervised manner on predefined training sets, whereas the background was trained in an unsupervised manner. This semi-supervised approach captures the biological topology of active TUs as well as the probability of occurrence of histone modifications in different parts of a TU.
We first showed that majority of the predicted TUs can be explained by existing gene annotations and were supported by RNA Polymerase II evidence. A quantitative comparison with RNA-seq revealed the presence of TUs with RNA Polymerase II enrichment but negligible RNA-seq evidence. Considering RNA Polymerase II ChIP-seq and Nascent RNA-seq as true transcription indicator, we compared the performance of EPIGENE with chromatin segmentation approach chromHMM and two RNA-seq-based approaches Cufflinks and StringTie. Based solely on the AUC of PRC and ROC curve as performance measure, EPIGENE achieves a superior performance than chromatin segmentation and RNA-seq-based approaches. We further showed that EPIGENE can be reliably applied across different cell lines without the need for re-training the TU states and accomplishes a superior performance than RNA-seq-based approaches.
We examined other performance scores like precision, sensitivity, and specificity values, and observed that the low AUC of RNA-seq-based approaches is due to RNA-seq mapping artefacts that resulted in higher number of false positives in Cufflinks and StringTie. We further evaluated the presence of differentially identified TUs in K562, HepG2, and IMR90 cell lines that lack RNA-seq evidence. The results suggested the presence of cell line-specific transcripts that lack RNA-seq evidence. We additionally identified potential microRNA precursors that lacked RNA-seq evidence presumably due to their instability. All of the aforementioned TUs showed an enrichment of RNA Polymerase II in TSS and gene body indicating that they had been transcribed.
It is important to note that EPIGENE does not differentiate between functional and non-functional units of a TU (exons and introns) as the association between histone modifications and alternative splicing is yet to be elucidated [62]. However, EPIGENE identifies active TUs with high precision as shown in "Comparison with RNA-seq based approaches" section and in the example regions presented in this work.
EPIGENE uses six core histone modifications that are available for many cell lines and species, which leads to a broad applicability. All the core histone modifications are essential for accurate TU identification, as the accuracy of TU prediction decreases in the absence of any of the core histone modification. In the absence of a core histone modification, imputation techniques such as ChromImpute [63] and PREDICTD [64] can be used to impute the missing histone modifications at 200-bp resolution and then use the imputed histone modification together with the available histone modifications to obtain active TUs. The accuracy of EPIGENE predictions also depends on the sequencing depth of the input histone modifications, therefore, high-quality ChIP-seq profiles of histone modifications would result in high confident TU annotation.
In summary, the superior performance within and across cell lines, identification of TUs, especially primary microRNAs lacking RNA-seq evidence as well as interpretability makes EPIGENE a powerful tool for epigenome-based gene annotation.
With increasing efforts in the direction of epigenetics, many consortia continue to provide high-quality genome-wide maps of histone modifications, but determining the genome-wide transcriptomic landscape using this data has remained unexplored so far. Extensive evaluations in this work demonstrated the superior accuracy of EPIGENE over existing transcript annotation methods based on true transcription indicators. EPIGENE framework is user-friendly and can be executed by solely providing binarized enrichments for ChIP-seq experiments, without the need to re-train the model parameters. The resulting TU annotations agree with RNA Polymerase II ChIP-seq and Nascent RNA-seq evidence and can be used to provide a cell type-specific epigenome-based gene annotation.
Library preparation of histone modifications ChIP-seq
For the K562 cell line presented in this study, ChIP against six core histone modifications, H3K27ac, H3K27me3, H3K4me1, H3K4me3, H3K36me3 and H3K9me3, was performed. The sheared chromatin without antibody (input) served as control. 10 × 10^6 K562 cells were cultured as recommended by ATCC. Chromatin immunoprecipitations were performed using the Diagenode auto histone ChIP-seq kit and libraries were made using microplex kits according to manufacturer's instructions and 10 PCR cycles.
Library preparation of RNA Polymerase II ChIP-seq
K562 cells were cultured in IMDM (#21980Gibco) with 10% FBS and P/S. Cells at a concentration of 1.2 mio/ml were fixed with 1% formalin at 37 °C for 8 min. Nuclei were isolated with a douncer, chromatin concentration was measured and 750 µg chromatin per CHIP was used. Samples were sonicated with Biorupter for 33 cycles (3 × 11 cycles). Chromatin, antibodies (RNA Pol II Ser2P (H5), RNA Pol II Ser5P (4H8), RNA Pol II Ser7P (4E12) and PolII (8WG16)) and protein G beads were combined and rotated at 4 °C. For elution 250 µl elution buffer (1% SDS) was used and after reverse crosslinking DNA was isolated by phenol chloroform extraction and eluted in 1xTE. Final concentration was measured by Qubit. Bioanalyzer was done to check fragment sizes.
Sequencing and processing of ChIP-seq data
Sequencing for RNA Polymerase II and histone modifications was performed on an Illumina HiSeq 2500 using a paired-end 50 flow cell and version 3 chemistry. The resulting raw sequencing reads were aligned to the genome assembly "hs37d5" with STAR [65] and duplicates were marked using Picard tools [66]. We used plotFingerprint, which is part of deepTools [67], to assess the quality metrics for all ChIP-seq experiments.
Processing of RNA-seq data
The raw reads from RNA-seq experiments were downloaded from European Nucleotide Archive (SRR315336, SRR315337 for K562), European Genome Archive (EGAD00001002527 for HepG2) and ENCODE (ENCSR00CTQ for IMR90) and were aligned to the genome assembly "hs37d5" with STAR [65].
Processing of Nascent RNA-seq data
The transcript annotation for K562 obtained from TT-seq were downloaded from Gene Expression Omnibus (GEO) (GSE75792). The genomic co-ordinates of transcripts were lifted over to hg19. For HepG2, raw reads from GRO-seq were downloaded from GEO (GSM2428726). The raw reads were aligned to the hg19 and the pre-processing was done based on the instructions specified in Liivard et al. [51]. For IMR90, we used GRO-seq profiles generated in Jin et al. [49]. The profiles were downloaded from GEO (GSM1055806) and lifted over to hg19.
Binarization of ChIP-seq profiles
EPIGENE requires the enrichment values of IHEC class 1 histone modifications in a binarized data form or a "class matrix" to learn a transcription state model. This was done by partitioning the mappable regions of the genome of interest into non-overlapping sub-regions of the same size called bins. In the current setup, the transcription states are analysed at 200-bp resolution, as it roughly corresponds to the size of a nucleosome and spacer region. Given the ChIP and input alignment files for each of the histone modifications, the class matrix for the multivariate HMM was generated using the following two-step approach (an illustrative sketch is given after the list):
Obtaining read counts: Read counts for all bins were computed using the bamCount method from the R package bamsignals [68], with the following parameter settings: mapqual = 255, filteredFlag = 1024, paired.end = midpoint.
Enrichment calling and binarization: After the read counts were obtained, the binarized ChIP-seq signals for the histone modifications and RNA Polymerase II across all bins, \(E\left( {{\text{bin}},{\text{HM}}} \right)\) and \(E\left( {{\text{bin}},{\text{RNAPolIIChIP}}} \right)\), were computed using the enrichR (binFilter = zero) and getClasses (fdr = 0.2) methods from normR [44]. This step yields the class matrix that serves as input for the multivariate HMM.
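For illustration only, the two steps above could be sketched in Python as follows. This is not the EPIGENE pipeline: the actual implementation uses the R packages bamsignals and normR (normR fits a binomial mixture model to ChIP and control counts and controls the FDR), whereas the toy binarize function below applies a naive fold-change cutoff; pysam, the thresholds and the file names are assumptions made for this sketch.

```python
import numpy as np
import pysam   # assumption: indexed BAM files; the original pipeline is R-based (bamsignals + normR)

BIN_SIZE = 200

def bin_counts(bam_path, chrom, chrom_len, bin_size=BIN_SIZE):
    """Count reads whose 5' start falls into each non-overlapping bin of one chromosome."""
    counts = np.zeros(chrom_len // bin_size + 1, dtype=int)
    with pysam.AlignmentFile(bam_path) as bam:
        for read in bam.fetch(chrom):
            if read.is_unmapped or read.is_duplicate:
                continue
            counts[read.reference_start // bin_size] += 1
    return counts

def binarize(chip, control, fold=2.0, min_reads=10):
    """Toy enrichment call: a bin is 'enriched' (1) if the ChIP count exceeds a fold
    change over the depth-scaled control; normR instead fits a binomial mixture
    model and calls enrichment at fdr = 0.2. This is only an illustration."""
    expected = control * (chip.sum() / max(control.sum(), 1))
    return ((chip >= min_reads) & (chip > fold * expected)).astype(int)

# one column of the class matrix, e.g. for H3K4me3 on chromosome 1 (file names are placeholders):
# col = binarize(bin_counts("H3K4me3.bam", "1", 249_250_621), bin_counts("input.bam", "1", 249_250_621))
```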
The EPIGENE model
EPIGENE uses a multivariate HMM (shown in Fig. 1a (ii)) to model the class matrix and identify active transcription units. The class matrix \(C\) is an m × n matrix, where \(m\) is the total number of 200 bp bins and \(n\) is the number of histone modifications. Each entry \(C_{ij}\) in the class matrix \(C\) corresponds to the binarized enrichment in the ith bin for the jth histone modification. The model comprises \(k\) hidden states (where \(k\) is an input parameter of the algorithm), and each row of the class matrix (i.e. each bin) is assigned to one hidden state. The emission probability vector of each hidden state gives the probability with which each histone mark occurs in that state, and the transition probabilities between the states enable the model to capture the positional biases of gene states relative to each other. Given this model, the algorithm does the following:
Initializes the emission, transition, and initial probabilities.
Fits the emission, transition, and initial probabilities using the Baum–Welch algorithm [69].
Infers the most probable sequence of hidden states, and hence of active transcription units, using the Viterbi algorithm [70].
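As a hedged sketch of the decoding step only, the following Python code runs the Viterbi algorithm on a binarized observation matrix, assuming independent Bernoulli emissions per histone mark given the hidden state; the two-state setup, the probabilities and the simulated observations are illustrative placeholders, not EPIGENE's learned parameters.

```python
import numpy as np

def viterbi_bernoulli(obs, init_p, trans_p, emit_p):
    """Most probable hidden-state path for a binary observation matrix.

    obs     : (T, M) 0/1 matrix (bins x histone marks)
    init_p  : (K,)   initial state probabilities
    trans_p : (K, K) transition probabilities
    emit_p  : (K, M) per-state probability that each mark is enriched
    """
    T, M = obs.shape
    K = len(init_p)
    # log-likelihood of each observation row under each state (independent Bernoullis)
    log_emit = obs @ np.log(emit_p).T + (1 - obs) @ np.log(1 - emit_p).T   # (T, K)
    delta = np.log(init_p) + log_emit[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(trans_p)      # (K, K): from-state x to-state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# toy example: 2 states ("background", "transcription unit"), 3 marks
rng = np.random.default_rng(1)
emit = np.array([[0.05, 0.05, 0.05],      # background rarely shows enrichment
                 [0.90, 0.70, 0.80]])     # transcription unit state usually does
trans = np.array([[0.99, 0.01],
                  [0.02, 0.98]])
true_states = np.r_[np.zeros(80, int), np.ones(40, int), np.zeros(80, int)]
obs = (rng.random((200, 3)) < emit[true_states]).astype(int)
print(viterbi_bernoulli(obs, np.array([0.9, 0.1]), trans, emit)[70:130])
```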
Training the model parameters
The transition and emission probabilities of the multivariate HMM were trained using GENCODE annotations with the following approach:
Bins overlapping GENCODE transcripts were identified and termed GENCODE bins.
The GENCODE bins were categorized as TSS, TTS, 1st, internal and last exon, and intron bins, and were subsequently grouped by transcript ID.
The coverage (in bp) of individual transcription unit component (i.e. TSS, 1st exon, 1st intron, etc.) for each transcript was computed to generate the coverage list, where each entry of the coverage list contains the coverage information (in bp) for individual transcripts.
The transition probability of each "transcription unit state" was computed from the coverage list, and the missing probabilities from and to the "background state" were generated in an unsupervised manner.
The gencode transcripts were filtered to obtain transcripts that report an enrichment for RNA Polymerase II. This was done by clustering the binarized enrichment values of RNA Polymerase II in TSS and TTS bins of the transcripts and obtaining TSS and TTS bins that report a high cluster mean for RNA Polymerase II. The emission probability of each "transcription unit state" was computed from class matrix and coverage of these transcripts (coverage computed from Step 2). The missing emission probabilities for the background states were trained in an unsupervised manner.
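One simple way to turn such per-component coverage into transition probabilities, shown purely as an illustrative sketch (the component list, bin size and geometric-duration assumption below are ours, not necessarily EPIGENE's exact procedure): if a component spans on average L bins, a self-transition probability of roughly 1 - 1/L reproduces that mean run length, with the remaining mass passed to the next component.

```python
import numpy as np

# illustrative mean coverage (bp) per transcription-unit component, averaged over transcripts
mean_cov_bp = {"TSS": 400, "first_exon": 800, "intron": 6000, "internal_exon": 300, "TTS": 400}
order = list(mean_cov_bp)        # assumed left-to-right order of the states along a transcript
bin_size = 200

K = len(order)
trans = np.zeros((K, K))
for k, comp in enumerate(order):
    mean_bins = max(mean_cov_bp[comp] / bin_size, 1.0)
    stay = 1.0 - 1.0 / mean_bins          # geometric duration: expected run length = mean_bins
    if k + 1 < K:
        trans[k, k] = stay
        trans[k, k + 1] = 1.0 - stay      # hand the remaining mass to the next component
    else:
        trans[k, k] = 1.0                 # transitions to/from the background state are trained separately

print(np.round(trans, 3))
```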
Binarization of Nascent RNA-seq profiles
Nascent RNA transcript annotation for GRO-seq profiles was obtained using groHMM [71]. For HepG2, transcript annotation was obtained from GRO-seq using default parameter values, while for IMR90, transcript annotation was obtained from GRO-seq using the parameter values specified in Chae et al. [71]. In K562, transcript annotation was obtained from Schwalb et al. [17]. For a given cell line \({\text{C}}\), the presence/absence of Nascent RNA-seq profiles across 200 bp bins \(E_{C} \left( {{\text{bin}},{\text{NascentRNA}}} \right)\) is given by:
$$E_{\text{C}} \left( {{\text{bin}},{\text{NascentRNA}}} \right) = \left\{ {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right.\begin{array}{ll} \quad {{\text{if}}\;O\left( {{\text{bin}},{\text{Tr}}_{\text{C}} } \right) \ge 1} \\ \quad {\text{otherwise}} \\ \end{array} ,$$
where \(O\left( {{\text{bin}},{\text{Tr}}_{\text{C}} } \right)\) is the overlap between the bin and cell line \({\text{C}}\) nascent RNA transcripts \({\text{Tr}}_{\text{C}}\).
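This indicator is just an overlap test between each bin and the set of nascent transcripts. A minimal sketch with hypothetical coordinates (not the GEO datasets referenced above) could look like this:

```python
import numpy as np

bin_size = 200
bins = np.arange(0, 10_000, bin_size)                  # bin start positions on a toy chromosome

# hypothetical nascent-RNA transcript intervals (start, end) on the same chromosome
transcripts = np.array([[1250, 3100], [6400, 7050]])

def nascent_indicator(bin_starts, size, intervals):
    """E(bin, NascentRNA): 1 if the bin overlaps any transcript by at least 1 bp, else 0."""
    starts, ends = bin_starts, bin_starts + size
    # pairwise overlap length = min(ends) - max(starts); positive means >= 1 bp shared
    ov = np.minimum(ends[:, None], intervals[None, :, 1]) - np.maximum(starts[:, None], intervals[None, :, 0])
    return (ov > 0).any(axis=1).astype(int)

E = nascent_indicator(bins, bin_size, transcripts)
print("bins flagged as transcribed:", bins[E == 1])
```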
The performance of EPIGENE and RNA-seq-based transcript prediction approaches was evaluated using RNA Polymerase II as the performance indicator. This was done by removing assembly gaps in the genomic regions of interest and partitioning the remaining contigs into non-overlapping bins of 200 bp. The actual transcription status of each 200 bp bin was given by the observed binarized RNA Polymerase II ChIP-seq and Nascent RNA-seq enrichment in the bin. The actual transcription \({\text{AT}} \left( {\text{bin}} \right)\) was given by:
$${\text{AT}} \left( {\text{bin}} \right) = \left\{ {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right.\begin{array}{ll} \quad {{\text{if}}\;E\left( {{\text{bin}},{\text{RNAPolIIChIP}}} \right) \cap E\left( {{\text{bin}},{\text{NascentRNA}}} \right) = 1} \\ \quad {\text{otherwise}} \\ \end{array} ,$$
where \(E\left( {{\text{bin}},{\text{RNAPolIIChIP}}} \right)\) is enrichment of RNA Polymerase II ChIP-seq (obtained from "Binarization of ChIP-seq profiles" section) and \(E\left( {{\text{bin}},{\text{NascentRNA}}} \right)\) is enrichment of Nascent RNA-seq in the bin (obtained from "Binarization of Nascent RNA-seq profiles" section).
The predicted transcription status of the bin for method m, \({\text{PT}}_{\text{m}} \left( {\text{bin}} \right)\) was given by:
$${\text{PT}}_{\text{m}} \left( {\text{bin}} \right) = \left\{ {\begin{array}{*{20}c} 1 \\ 0 \\ \end{array} } \right.\begin{array}{ll} \quad {{\text{if}}\;O\left( {{\text{bin}},P_{\text{m}} } \right) \ge 1} \\ \quad {\text{otherwise}} \\ \end{array} ,$$
where \(O\left( {{\text{bin}},P_{\text{m}} } \right)\) is the overlap between the bin and method m predictions \(P_{\text{m}}\).
The predictions of EPIGENE and other RNA-seq-based approaches were evaluated by computing the area under curve for precision–recall (AUC-PRC) and receiver-operating characteristic curve (AUC-ROC) with primary focus on AUC-PRC. Considering a very high class imbalance, i.e. \({\text{bins}}_{{{\text{RNAPolymeraseII}}^{ + } }}\)\(\ll {\text{bins}}_{{{\text{RNAPolymeraseII}}^{ - } }}\), the AUC-PRC and AUC-ROC are computed using random sampling as:
$${\text{AUC}} = {\text{mean}}\left( {L_{\text{AUC}} } \right) - \left( {\frac{{{\text{stdDev}}\left( {L_{\text{AUC}} } \right)}}{\sqrt n }} \right),$$
where \(n\) is the sampling size or number of iterations and \(L_{\text{AUC}}\) is the list of AUCs obtained for sampling size \(n\).
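A sketch of this sampling-based evaluation is shown below; the use of scikit-learn's average_precision_score and roc_auc_score, and the particular subsampling scheme that keeps all positive bins while downsampling negatives, are our illustrative choices rather than the exact EPIGENE implementation.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(42)

# toy ground truth and prediction scores per 200-bp bin
actual = (rng.random(50_000) < 0.02).astype(int)                            # few transcribed bins
scores = np.clip(actual * 0.7 + rng.normal(0.2, 0.2, actual.size), 0, 1)    # imperfect predictor

def sampled_auc(y, s, metric, n_iter=50, neg_per_pos=10):
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    aucs = []
    for _ in range(n_iter):
        sub_neg = rng.choice(neg, size=min(len(neg), neg_per_pos * len(pos)), replace=False)
        idx = np.concatenate([pos, sub_neg])
        aucs.append(metric(y[idx], s[idx]))
    aucs = np.asarray(aucs)
    # AUC = mean(L_AUC) - stdDev(L_AUC) / sqrt(n), as in the formula above
    return aucs.mean() - aucs.std(ddof=1) / np.sqrt(n_iter)

print("AUC-PRC:", round(sampled_auc(actual, scores, average_precision_score), 3))
print("AUC-ROC:", round(sampled_auc(actual, scores, roc_auc_score), 3))
```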
Data for ChIP-seq experiments for K562 cell line are available via European Nucleotide Archive (PRJEB34999). Additional details about other ChIP-seq and RNA-seq data used in this work can be found in Additional file 1: Table S1. EPIGENE code is available at: https://github.com/imbbLab/EPIGENE.
Stanke M, Morgenstern B. AUGUSTUS: a web server for gene prediction in eukaryotes that allows user-defined constraints. Nucleic Acids Res. 2005;33(Web Server issue):W465–7.
Trapnell C, Williams BA, Pertea G, Mortazavi A, Kwan G, van Baren MJ, et al. Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nat Biotechnol. 2010;28(5):511–5.
Pertea M, Pertea GM, Antonescu CM, Chang T-C, Mendell JT, Salzberg SL. StringTie enables improved reconstruction of a transcriptome from RNA-seq reads. Nat Biotechnol. 2015;33(3):290–5.
Schulz MH, Zerbino DR, Vingron M, Birney E. Oases: robust de novo RNA-seq assembly across the dynamic range of expression levels. Bioinformatics. 2012;28(8):1086–92.
Preker P, Nielsen J, Kammler S, Lykke-Andersen S, Christensen MS, Mapendano CK, et al. RNA exosome depletion reveals transcription upstream of active human promoters. Science. 2008;322(5909):1851–4.
Tani H, Mizutani R, Salam KA, Tano K, Ijiri K, Wakamatsu A, et al. Genome-wide determination of RNA stability reveals hundreds of short-lived noncoding transcripts in mammals. Genome Res. 2012;22(5):947–56.
Li Y, Li Z, Zhou S, Wen J, Geng B, Yang J, et al. Genome-wide analysis of human microRNA stability. Biomed Res Int. 2013;2013:1–12.
Bail S, Swerdel M, Liu H, Jiao X, Goff LA, Hart RP, et al. Differential regulation of microRNA stability. RNA. 2010;16(5):1032–9.
Shah MY, Ferrajoli A, Sood AK, Lopez-Berestein G, Calin GA. microRNA therapeutics in cancer—an emerging concept. Amsterdam: Elsevier B.V.; 2016. p. 34–42.
Zhang Z, Lee J-H, Ruan H, Ye Y, Krakowiak J, Hu Q, et al. Transcriptional landscape and clinical utility of enhancer RNAs for eRNA-targeted therapy in cancer. Nat Commun. 2019;10(1):4562.
Wang J, Zhao Y, Zhou X, Hiebert SW, Liu Q, Shyr Y. Nascent RNA sequencing analysis provides insights into enhancer-mediated gene regulation. BMC Genomics. 2018;19(1):633.
Wang M, Qin L, Tang B. MicroRNAs in Alzheimer's disease. Lausanne: Frontiers Media S.A.; 2019.
Sethi P, Lukiw WJ. Micro-RNA abundance and stability in human brain: specific alterations in Alzheimer's disease temporal lobe neocortex. Neurosci Lett. 2009;459(2):100–4.
Core LJ, Waterfall JJ, Lis JT. Nascent RNA sequencing reveals widespread pausing and divergent initiation at human promoters. Science. 2008;322(5909):1845–8.
Kwak H, Fuda NJ, Core LJ, Lis JT. Precise maps of RNA polymerase reveal how promoters direct initiation and pausing. Science. 2013;339(6122):950–3.
Churchman LS, Weissman JS. Native elongating transcript sequencing (NET-seq). Curr Protoc Mol Biol. 2012;98(1):14.4.1–4.17.
Schwalb B, Michel M, Zacher B, Hauf KF, Demel C, Tresch A, et al. TT-seq maps the human transient transcriptome. Science. 2016;352(6290):1225–8.
Nojima T, Gomes T, Grosso ARF, Kimura H, Dye MJ, Dhir S, et al. Mammalian NET-seq reveals genome-wide nascent transcription coupled to RNA processing. Cell. 2015;161(3):526–40.
Gardini A. Global run-on sequencing (GRO-Seq). In: Methods in molecular biology (Clifton, NJ). 2017. p. 111–20.
Wissink EM, Vihervaara A, Tippens ND, Lis JT. Nascent RNA analyses: tracking transcription and its regulation. Nat Rev Genet. 2019;20(12):705–23.
Ozsolak F, Milos PM. RNA sequencing: advances, challenges and opportunities. Nat Rev Genet. 2011;12(2):87–98.
Ozsolak F, Poling LL, Wang Z, Liu H, Liu XS, Roeder RG, et al. Chromatin structure analyses identify miRNA promoters. Genes Dev. 2008;22(22):3172–83.
Turner BM. The adjustable nucleosome: an epigenetic signaling module. Trends Genet. 2012;28(9):436–44.
Perner J, Chung H-R. Chromatin signaling and transcription initiation. Front Life Sci. 2013;7(1–2):22–30.
Karlic R, Chung H-R, Lasserre J, Vlahovicek K, Vingron M. Histone modification levels are predictive for gene expression. Proc Natl Acad Sci. 2010;107(7):2926–31.
Li B, Carey M, Workman JL. The role of chromatin during transcription. Cell. 2007;128(4):707–19.
Barski A, Cuddapah S, Cui K, Roh T-Y, Schones DE, Wang Z, et al. High-resolution profiling of histone methylations in the human genome. Cell. 2007;129(4):823–37.
Bernstein BE, Humphrey EL, Erlich RL, Schneider R, Bouman P, Liu JS, et al. Methylation of histone H3 Lys 4 in coding regions of active genes. Proc Natl Acad Sci. 2002;99(13):8695–700.
Wagner EJ, Carpenter PB. Understanding the language of Lys36 methylation at histone H3. Nat Rev Mol Cell Biol. 2012;13(2):115–26.
Beisel C, Paro R. Silencing chromatin: comparing modes and mechanisms. Nat Rev Genet. 2011;12(2):123–35.
ENCODE Project Consortium TEP. The ENCODE (ENCyclopedia Of DNA Elements) project. Science. 2004;306(5696):636–40.
Bernstein BE, Stamatoyannopoulos JA, Costello JF, Ren B, Milosavljevic A, Meissner A, et al. The NIH roadmap epigenomics mapping consortium. Nat Biotechnol. 2010;28(10):1045–8.
The German epigenome programme 'DEEP.' http://www.deutsches-epigenom-programm.de/. Accessed 16 Mar 2020.
Adams D, Altucci L, Antonarakis SE, Ballesteros J, Beck S, Bird A, et al. BLUEPRINT to decode the epigenetic signature written in blood. Nat Biotechnol. 2012;30(3):224–6.
Canadian Epigenetics, Environment and Health Research Consortium (CEEHRC) Network—epigenomics. http://www.epigenomes.ca/. Accessed 16 Mar 2020.
IHEC—International Human Epigenome Consortium. http://ihec-epigenomes.org/. Accessed 16 Mar 2020.
Ernst J, Kellis M. ChromHMM: automating chromatin-state discovery and characterization. Nat Methods. 2012;9(3):215–6.
Mammana A, Chung H-R. Chromatin segmentation based on a probabilistic model for read counts explains a large portion of the epigenome. Genome Biol. 2015;16(1):151.
Won K-J, Zhang X, Wang T, Ding B, Raha D, Snyder M, et al. Comparative annotation of functional regions in the human genome using epigenomic data. Nucleic Acids Res. 2013;41(8):4423–32.
Zacher B, Michel M, Schwalb B, Cramer P, Tresch A, Gagneur J. Accurate promoter and enhancer identification in 127 ENCODE and roadmap epigenomics cell types and tissues by GenoSTAN. PLoS ONE. 2017;12(1):e0169249.
Frankish A, Diekhans M, Ferreira A-M, Johnson R, Jungreis I, Loveland J, et al. GENCODE reference annotation for the human and mouse genomes. Nucleic Acids Res. 2019;47(D1):D766–73.
Pertea M, Shumate A, Pertea G, Varabyou A, Breitwieser FP, Chang Y-C, et al. CHESS: a new human gene catalog curated from thousands of large-scale RNA sequencing experiments reveals extensive transcriptional noise. Genome Biol. 2018;19(1):208.
Komarnitsky P, Cho EJ, Buratowski S. Different phosphorylated forms of RNA polymerase II and associated mRNA processing factors during transcription. Genes Dev. 2000;14(19):2452–60.
Johannes Helmuth and Ho Ryun Chung. Introduction to the normR package. http://bioconductor.org/packages/release/bioc/vignettes/normr/inst/doc/normr.html. Accessed 12 Mar 2020.
Janes J, Hu F, Lewin A, Turro E. A comparative study of RNA-seq analysis strategies. Brief Bioinform. 2015;16(6):932–40.
Clough E, Barrett T. The gene expression omnibus database. New York: Humana Press; 2016. p. 93–110.
Lister R, Pelizzola M, Dowen RH, Hawkins RD, Hon G, Tonti-Filippini J, et al. Human DNA methylomes at base resolution show widespread epigenomic differences. Nature. 2009;462(7271):315–22.
Dunham I, Kundaje A, Aldred SF, Collins PJ, Davis CA, Doyle F, et al. An integrated encyclopedia of DNA elements in the human genome. Nature. 2012;489(7414):57–74.
Jin F, Li Y, Dixon JR, Selvaraj S, Ye Z, Lee AY, et al. A high-resolution map of the three-dimensional chromatin interactome in human cells. Nature. 2013;503(7475):290–4.
Salhab A, Nordström K, Gasparoni G, Kattler K, Ebert P, Ramirez F, et al. A comprehensive analysis of 195 DNA methylomes reveals shared and cell-specific features of partially methylated domains. Genome Biol. 2018;19(1):150.
Bouvy-Liivrand M, Hernández de Sande A, Pölönen P, Mehtonen J, Vuorenmaa T, Niskanen H, et al. Analysis of primary microRNA loci from nascent transcriptomes reveals regulatory domains governed by chromatin architecture. Nucleic Acids Res. 2017;45(17):9837–49.
Lagos-Quintana M, Rauhut R, Lendeckel W, Tuschl T. Identification of novel genes coding for small expressed RNAs. Science. 2001;294(5543):853–8.
Lee RC, Ambros V. An extensive class of small RNAs in Caenorhabditis elegans. Science. 2001;294(5543):862–4.
Bartel DP. MicroRNAs. Cell. 2004;116(2):281–97.
He L, Hannon GJ. MicroRNAs: small RNAs with a big role in gene regulation. Nat Rev Genet. 2004;5(7):522–31.
Carleton M, Cleary MA, Linsley PS. MicroRNAs and cell cycle regulation. Cell Cycle. 2007;6(17):2127–32.
Plasterk RHA. Micro RNAs in animal development. Cell. 2006;124(5):877–81.
Calin GA, Croce CM. MicroRNA signatures in human cancers. Nat Rev Cancer. 2006;6(11):857–66.
Griffiths-Jones S, Grocock RJ, van Dongen S, Bateman A, Enright AJ. miRBase: microRNA sequences, targets and gene nomenclature. Nucleic Acids Res. 2006;34(90001):D140–4.
Xu D, He X, Chang Y, Xu C, Jiang X, Sun S, et al. Inhibition of miR-96 expression reduces cell proliferation and clonogenicity of HepG2 hepatoma cells. Oncol Rep. 2013;29(2):653–61.
Ma Y, Liang A-J, Fan Y-P, Huang Y-R, Zhao X-M, Sun Y, et al. Dysregulation and functional roles of miR-183-96-182 cluster in cancer cell proliferation, invasion and metastasis. Oncotarget. 2016;7(27):42805–25.
Zhou H-L, Luo G, Wise JA, Lou H. Regulation of alternative splicing by local histone modifications: potential roles for RNA-guided mechanisms. Nucleic Acids Res. 2014;42(2):701–13.
Ernst J, Kellis M. Large-scale imputation of epigenomic datasets for systematic annotation of diverse human tissues. Nat Biotechnol. 2015;33(4):364–76.
Durham TJ, Libbrecht MW, Howbert JJ, Bilmes J, Noble WS. PREDICTD parallel epigenomics data imputation with cloud-based tensor decomposition. Nat Commun. 2018;9(1):1402.
Dobin A, Davis CA, Schlesinger F, Drenkow J, Zaleski C, Jha S, et al. STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 2013;29(1):15–21.
Wysoker A, Tibbetts K, Fennell T. Picard tools. 2013.
Ramírez F, Dündar F, Diehl S, Grüning BA, Manke T. deepTools: a flexible platform for exploring deep-sequencing data. Nucleic Acids Res. 2014;42(W1):W187–91.
Mammana Alessandro and Helmuth Johannes. Introduction to the bamsignals package. http://bioconductor.org/packages/release/bioc/vignettes/bamsignals/inst/doc/bamsignals.html. Accessed 16 Mar 2020.
Baum LE, Petrie T, Soules G, Weiss N. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann Math Stat. 1970;41(1):164–71.
Viterbi A. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans Inf Theory. 1967;13(2):260–9.
Chae M, Danko CG, Kraus WL. groHMM: a computational tool for identifying unannotated and cell type-specific transcription units from global run-on sequencing data. BMC Bioinform. 2015;16(1):222.
The authors would like to thank Clemens Thoelken for helpful comments on the manuscript. Many thanks to Sarah Kinkley, Anna Ramisch, Tobias Zehnder and Giuseppe Gallone from MPIMG for their valuable comments and inspiring discussions.
This work was supported by the Else Kröner-Fresenius-Stiftung grant (2016_A105). Funding for open access charge (2016_A105 to H.C.).
Institute for Medical Bioinformatics and Biostatistics, Philipps University of Marburg, 35037, Marburg, Germany
Anshupa Sahu & Ho-Ryun Chung
Otto-Warburg-Laboratory, Max Planck Institute for Molecular Genetics, 14195, Berlin, Germany
Anshupa Sahu, Na Li, Ilona Dunkel & Ho-Ryun Chung
Guangzhou Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou, 510623, China
Anshupa Sahu
Ilona Dunkel
Ho-Ryun Chung
The project was conceived by HC. AS performed all the analyses and wrote the manuscript with inputs from HC. NL performed the ChIP-seq for histone modifications in K562. ID performed the ChIP-seq for RNA Polymerase II in K562. All authors read and approved the final manuscript.
Correspondence to Ho-Ryun Chung.
Data details and additional results. Details of datasets used and additional results.
RNA Polymerase II enrichment. RNA Polymerase II enrichment in consensus TU set.
Cell specific TUs. Additional details about cell specific TUs that lack RNA-seq evidence.
Sahu, A., Li, N., Dunkel, I. et al. EPIGENE: genome-wide transcription unit annotation using a multivariate probabilistic model of histone modifications. Epigenetics & Chromatin 13, 20 (2020). https://doi.org/10.1186/s13072-020-00341-z
Histone modifications
Transcript identification | CommonCrawl |
What is the difference between superpositions and mixed states?
My understanding so far is: a pure state is a basic state of a system, and a mixed state represents uncertainty about the system, i.e. the system is in one of a set of states with some (classical) probability. However, superpositions seem to be a kind of mix of states as well, so how do they fit into this picture?
For example, consider a fair coin flip. You can represent it as a mixed state of "heads" $\left|0\right>$ and "tails" $\left|1\right>$: $$ \rho_1 = \sum_j \frac{1}{2} \left|\psi_j\right> \left<\psi_j\right| = \frac{1}{2} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$
However, we can also use the superposition of "heads" and "tails": specific state $\psi = \frac{1}{\sqrt{2}}\left( \left|0\right> + \left|1\right> \right)$ with density
$$ \rho_2 = \left|\psi\right> \left<\psi\right| = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} $$
If we measure in the computational basis, we will get the same result. What is the difference between a superposed and a mixed state?
superposition quantum-state
Norrius
Possible duplicate of What's the difference between a pure and mixed quantum state? – Mithrandir24601♦ Mar 29 '18 at 21:00
This is probably also helpful: Physics SE: How is quantum superposition different from mixed state? – v.tralala Feb 16 '19 at 15:35
No, a superposition of two different states is a completely different beast than a mixture of the same states. While it may appear from your example that $\rho_1$ and $\rho_2$ produce the same measurement outcomes (and that is indeed the case), as soon as you measure in a different basis they will give measurably different results.
A "superposition" like $\newcommand{\up}{|\!\!\uparrow\rangle}\newcommand{\down}{|\!\!\downarrow\rangle}|\psi\rangle=\frac{1}{\sqrt2}(\up+\down)$ is a pure state. This means that it is a completely characterised state. In other words, there is no amount of information that, added to its description, could make it "less undetermined". Note that every pure state can be written as superposition of other pure states. Writing a given state $|\psi\rangle$ as a superposition of other states is literally the same thing as writing a vector $\boldsymbol v$ in terms of some basis: you can always change the basis and find a different representation of $\boldsymbol v$.
This is in direct contrast to a mixed state like $\rho_1$ in your question. In the case of $\rho_1$, the probabilistic nature of the outcomes depends on our ignorance about the state itself. This means that, in principle, it is possible to acquire some additional information that will tell us whether $\rho_1$ is indeed in the state $\up$ or in the state $\down$.
A mixed state cannot, in general, be written as a pure state. This should be clear from the above physical intuition: mixed states represent our ignorance about a physical state, while pure states are completely defined states, which just so happen to still give probabilistic outcomes due to the way quantum mechanics work.
Indeed, there is a simple criterion to tell whether a given (generally mixed) state $\rho$ can be written as $|\psi\rangle\langle\psi|$ for some (pure) state $|\psi\rangle$: computing its purity. The purity of a state $\rho$ is defined as $\operatorname{Tr} \,(\rho^2)$, and it is a standard result that the purity of a state is $1$ if and only if the state is pure (and less than $1$ otherwise).
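As a quick numerical check of this criterion (a small numpy sketch, not tied to any particular quantum library), the purity of the two density matrices from the question comes out as 0.5 and 1 respectively:

```python
import numpy as np

rho_mixed = 0.5 * np.eye(2)                          # fair-coin mixture of |0> and |1>
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)         # |+> = (|0> + |1>)/sqrt(2)
rho_pure = plus @ plus.conj().T                      # superposition written as a density matrix

purity = lambda rho: np.trace(rho @ rho).real
print(purity(rho_mixed))   # 0.5 -> mixed
print(purity(rho_pure))    # 1.0 -> pure
```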
glS
The short answer is that there is more to quantum information than "uncertainty". This is because there is more than one way to measure a state; and that is because there is more than one basis in which, in principle, you can store and retrieve information. Superpositions allow you to express information in a different basis than the computational basis — but mixtures describe the presence of a probabilistic element, no matter which basis you use to look at the state.
The longer answer is as follows —
Measurement as you have described it is specifically measurement in the computational basis. This is often described just as "measurement" for the sake of brevity, and large subsets of the community think in terms of this being the primary way to measure things. But in many physical systems, it is possible to choose a measurement basis.
A vector space over $\mathbb C$ has more than one basis (even more than one orthonormal basis), and on a mathematical level there isn't much that makes one basis more special than another, aside from what is convenient for the mathematician to think about. The same is true in quantum mechanics: unless you specify some specific dynamics, there is no basis which is more special than the others. That means that the computational basis $$ \lvert 0 \rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \lvert 1 \rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$ is not fundamentally different physically from another basis such as $$ \lvert + \rangle = \tfrac{1}{\sqrt 2}\begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \lvert - \rangle = \tfrac{1}{\sqrt 2}\begin{bmatrix} 1 \\ -1 \end{bmatrix},$$ which is also an orthonormal basis. That means that there should be a way to "measure" a state $\lvert \psi \rangle \in \mathbb C^2$ in such a way that the probabilities of the outcomes depend on projections onto these states $\lvert + \rangle$ and $\lvert - \rangle$.
In some physical systems, the way one performs this measurement is to literally take the same apparatus and tilt it so that it is aligned with the X axis instead of the Z axis. Mathematically, the way we do this is to consider the projectors $$ \Pi_+ = \lvert + \rangle\!\langle + \rvert = \tfrac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \qquad \Pi_- = \lvert - \rangle\!\langle - \rvert = \tfrac{1}{2}\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$$ and then to ask what the projections $\lvert \varphi_+ \rangle := \Pi_+ \lvert \psi \rangle$ and $\lvert \varphi_- \rangle := \Pi_- \lvert \psi \rangle$. The norm-squared of $\lvert \varphi_\pm \rangle$ determines the probability of "measuring $\lvert + \rangle$" and of "measuring $\lvert - \rangle$"; and normalising $\lvert \varphi_+ \rangle$ or $\lvert \varphi_- \rangle$ to have a norm of 1 yields the post-measurement state. (For a state on a single qubit, this will just be $\lvert + \rangle$ or $\lvert - \rangle$. More interesting post-measurement states may result if we consider multi-qubit states, and consider the projector $\Pi_+$ or $\Pi_-$ acting on one of many qubits.)
For density operators, one takes the state $\rho$ which you want to perform a measurement on, and consider $\rho_+ := \Pi_+ \rho \Pi_+$ and $\rho_- := \Pi_- \rho \Pi_-$. These operators may be sub-normalised in the same way that the states $\lvert \varphi_\pm \rangle$ might be, in the sense that they may have trace less than 1. The value of the trace of $\rho_\pm$ is the probability of obtaining the outcome $\lvert + \rangle$ or $\lvert - \rangle$ of the measurement; to renormalise, simply scale the projected operator to have trace 1.
Consider your state $\rho_2$ above. If you measure it with respect to the $\lvert \pm \rangle$ basis, what you will find is that $\rho_2 = \rho_{2,+} := \Pi_+ \rho_2 \Pi_+$. This means that projecting the operator with $\Pi_+$ does change the state, and that the probability of obtaining the outcome $\lvert + \rangle$ to the measurement is 1. If you do this instead with $\rho_1$, you will find a 50/50 chance of obtaining either $\lvert + \rangle$ or $\lvert - \rangle$. So the state $\rho_1$ is a mixed state, while $\rho_2 $ is not --- the difference being that $\rho_2$ has a definite outcome in a different measurement basis than the standard basis. You might say that $\rho_2$ stores a definite piece of information, albeit in a different basis than the computational basis.
More generally, a mixed state is one whose largest eigenvalue is less than 1, meaning that there is no basis in which you can measure it to get a definite outcome. Superpositions allow you to express information in a different basis than the computational basis; mixtures represent a degree of randomness about the state of the system you're considering, regardless of how you measure that system.
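To make this concrete, here is a short numpy sketch (illustrative only) that computes the Born-rule probabilities Tr(Πρ) for both states in the computational basis and in the |±⟩ basis; the statistics coincide in the first basis and differ maximally in the second:

```python
import numpy as np

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
ket_plus = (ket0 + ket1) / np.sqrt(2)
ket_minus = (ket0 - ket1) / np.sqrt(2)

rho_mix = 0.5 * (ket0 @ ket0.T + ket1 @ ket1.T)      # classical 50/50 mixture
rho_sup = ket_plus @ ket_plus.T                      # superposition |+><+|

prob = lambda rho, ket: float(np.trace((ket @ ket.T) @ rho).real)   # Born rule Tr(Pi rho)

for name, rho in [("mixture", rho_mix), ("superposition", rho_sup)]:
    print(name,
          "Z basis:", [round(prob(rho, k), 2) for k in (ket0, ket1)],
          "X basis:", [round(prob(rho, k), 2) for k in (ket_plus, ket_minus)])
# mixture:       Z [0.5, 0.5]  X [0.5, 0.5]
# superposition: Z [0.5, 0.5]  X [1.0, 0.0]
```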
Along with glS' post:
A mixed state would be if you had a can of paint, but you weren't sure if it was blue or yellow. You know it is either one of the two, and once you pop the top and measure it, you'd know, but until you do it is in one of those two pure states. If you picked it up from a stack of cans where you knew there were equally many cans of blue paint as yellow, you would expect an equal chance of it being one or the other. 50% of the time it would be 100% yellow and 50% of the time it would be 100% blue.
A superposition is more like if you take half a can of blue and half a can of yellow and pour them together. You've now constructed a new pure state that is expressible as a combination of other pure states. If you test its 'blueness', it is about 50%. If you test its 'yellowness' it is about 50%. It is both yellow and blue at the same time. 100% of the time it is both 50% blue and 50% yellow.
If you measured the amount of blue and yellow in one stack of blue or yellow cans and then in another stack of green, you might be confused to see you have just as much blue and yellow in both stacks, but the difference is that the 'blueness' and 'yellowness' are in a mixed state in the former stack but in a superposition in the latter.
Dot
Air pollution trade-offs in developing countries: an empirical model of health effects in Goa, India
Published online by Cambridge University Press: 11 June 2021
Sanghamitra Das, Indian Statistical Institute, New Delhi, India
Vikram Dayal, Institute of Economic Growth, Delhi, India
Anand Murugesan*, Central European University, Vienna, Austria
Uma Rajarathnam, EGS Applied Research, Bangalore, India
*Corresponding author. E-mail: [email protected]
Developing countries experience both household air pollution resulting from the use of biomass fuels for cooking and industrial air pollution. We conceptualise and estimate simultaneous exposure to both outdoor and household air pollution by adapting the Total Exposure Assessment model from environmental health sciences. To study the relationship between total exposure and health, we collected comprehensive data from a region (Goa) in India that had extensive mining activity. Our data allowed us to apportion individuals' exposure to pollution in micro-environments: indoor, outdoor, kitchen, and at work. We find that higher cumulative exposure to air pollution is positively associated with both self-reported and clinically diagnosed respiratory health issues. Households in regions with higher economic (mining) activity had higher incomes and had switched to cleaner cooking fuels. In other words, household air pollution due to higher biomass use had been substituted away for outdoor air pollution in regions with economic activity.
air pollution; household air pollution; health; fuel choice; mining
D13: Household Production and Intrahousehold Allocation; I15: Health and Economic Development; Q53: Air Pollution • Water Pollution • Noise • Hazardous Waste • Solid Waste • Recycling
Environment and Development Economics, Volume 27, Issue 2, April 2022, pp. 145-166
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright © The Author(s), 2021. Published by Cambridge University Press
Nine out of ten people worldwide breathe polluted air, with one out of nine deaths in 2012 attributed to air-pollution related conditions (WHO, 2016a). Air pollution represents the most significant environmental risk to health. Developing countries experience the worst of both household air pollution resulting from biomass fuels for cooking and the air pollution resulting from industry and transport. While it is widely recognised that outdoor air pollution levels in developing countries often exceed the World Health Organization (WHO) guidelines, India among other developing countries suffers severely due to household air pollution (HAP) arising primarily from biomass cooking fuels (Smith et al., 2014; Jeuland et al., 2015b). Approximately 3 billion people, mostly in low-income countries, continue to use solid fuels (fuelwood, animal dung and crop waste) for cooking and heating (WHO, 2014), contributing to both deforestation (Bailis et al., 2015) and global climate change (Ramanathan and Carmichael, 2008).
India and China together constitute more than 50 per cent of the world population still using solid fuels, with another 21 per cent living in Sub-Saharan Africa (Jeuland et al., 2015b). The concentrations of HAP in biomass fuel using households are even higher than the high levels of urban outdoor air pollution. The typical 24-hour concentration of $PM_{10}$ (particulates smaller than 10 microns in diameter) in homes using biomass as fuels may range from 200 to 5000 $\mu g/m^{3}$ or more, depending on the type of stove, fuel and housing (Ezzati and Kammen, 2002; Laumbach and Kipen, 2012). Since the pioneering work of Smith (1988) in epidemiology, it is believed that exposure to high levels of HAP causes substantial health effects in developing countries (Naeher et al., 2007; Smith, 2013).
Exposure to air pollution results in a wide range of acute and chronic health outcomes ranging from minor physiological changes to death from respiratory and cardiac diseases (Bascom et al., 1996; Dominici et al., 2003; Gauderman et al., 2015, 2007). Epidemiological studies (Ezzati and Kammen, 2002; Salvi and Barnes, 2009; Lozano et al., 2012; Mannucci and Franchini, 2017) have estimated that in addition to ambient (or outdoor) air quality, there is robust evidence that HAP poses a serious threat to human health, especially in low-income countries that still use biomass fuels as an energy resource. The WHO estimated that air pollution was responsible for nearly seven million deaths every year, with 4.3 million due to HAP (WHO, 2014). Women and young children bear a disproportionately large burden of mortality, with 500,000 children under five that die due to acute respiratory infections (Langbein, 2017).
In addition to exposure to outdoor and household air pollution, workplace exposure could pose a potential risk to health. Millions of workers in a variety of occupations, such as mining, construction and abrasive blasting, are exposed to high levels of airborne dust particles. Inhalation of these particles may cause respiratory diseases such as bronchitis, silicosis and pneumoconiosis. Prevalence rate or trends in occupational respiratory problems in developing countries are mostly unknown, but the magnitude of the problem could be substantial (WHO, 2016b). The exposure to work-related pollution in our study includes a source of pollution not studied often, which is mining.
Jeuland et al. (2015b), in their review of HAP at a global level, used a conceptual model. Our attempt is to use a conceptual model in this specific, local context. In this study, we conceptualize an integrated framework for estimating cumulative exposure to air pollution over time and space that results in poor health, irrespective of whether it originates in a stove or a mine. Pollution is not only caused by mining and associated transport, but also by the combustion of fuels for cooking in the household. We estimate the simultaneous exposure to both outdoor and household air pollution by measuring pollutant concentrations and time spent in each location. We develop a model borrowing from conceptual foundations in environmental health sciences and the economics of households in developing countries. Specifically, we draw from health production models (Harrington and Portney, 1987), agricultural household models (Singh et al., 1986), and a branch of environmental health sciences called Total Exposure Assessment (Smith, 1993). Our analytical model examines the relationship between the cumulative exposure to air pollution from outdoor and cooking sources of individuals in a rural household in a developing country and their health. The empirical implementation of this framework that incorporates both household and outdoor air pollution required the use of a household questionnaire which included time budget questions, measurement of air pollution concentrations in different micro-environments, health diaries for self-reporting ailments and doctor visits, and clinical measurements of respiratory health.
Total exposure is the result of people spending time in different micro-environments (for example, indoors, in the kitchen, and outdoors) with different air pollution concentration levels. Pitt et al. (2006) stressed the importance of gathering data on time allocation across different micro-environments. They used micro-data to examine how household structure affects the distribution of cooking time among women in rural Bangladeshi households, and the health effects of cooking time, as a proxy for exposure to HAP. Our study takes this further by unpacking the micro-environments into outdoor, indoor and work, in addition to the kitchen. We chose a region where pollution due to iron ore mining and transportation activity heavily contributes to outdoor air quality, to study the relationship between cumulative exposure in different micro-environments and health.
The exposure is cumulative and over time leads to higher susceptibility to respiratory problems. As we aimed to study the relationship between total exposure to air pollution and health, we chose a region of India characterised by both household and outdoor air pollution. In Goa, we studied this process in different mining clusters, with different levels of cumulative exposure among the population. The paper firstly examines the socio-economic correlates of time spent in polluting environments by individuals, followed by the choice of cooking fuel by households. We unpack the contributors to cumulative exposure, by apportioning it to different micro-environments, time spent in these environments and the type of fuel used. We finally examine the relationship between cumulative exposure to air pollution and respiratory health indicators.
We find that gender and age are associated with the time spent by individuals indoors, in the kitchen and outdoors, with middle-aged women spending much time cooking. We find that households in regions with higher mining activity had higher incomes on average and a higher proportion of cleaner fuels (LPG) used for cooking. Active mining clusters which experienced higher outdoor pollution levels had a significantly lower proportion of households that used polluting biomass fuels for cooking. In other words, HAP from biomass fuels is substituted with outdoor air pollution in regions with higher economic activity. Finally, we find that higher cumulative exposure is associated with higher levels of morbidity: (a) reported health measures are respiratory sick days and chronic respiratory sick days, and (b) observed clinical health measures are the doctor's diagnosis of the X-rays and lung function tests. Our use of two methods to measure health indicators – self-reported health and clinical examination – strengthens the validity of our results.
In section 2, we describe our study area and examine our data. In section 3, we develop our theoretical model and present our results in section 4. We discuss the results and conclude in section 5.
2. Study area and data
Our study area was the heavily iron ore mined regions of Goa, India. Iron ore mining was an integral part of the state's economy for almost fifty years and contributed to 60 per cent of India's iron ore exports at the time of the study (2003). Given the scale of iron ore mining in Goa and the documented environmental issues, it was an ideal setting to study total exposure to air pollution.Footnote 1 For the purposes of this study, we divided the mining regions of Goa into five clusters, including a control cluster with no mining activity at the time of data collection between June 2003 and May 2004. These clusters were chosen to have varying vintage and levels of mining activity. Cluster 1 was the mining region with the earliest mining activity (over 40 years at the time of the study) but where the activity had subsided relative to Cluster 2, the most intensively mined cluster, where mining had begun approximately 25 years prior to this study. Cluster 3 was the region where mining activity was relatively at its inception, having begun 15 years prior to the study. Cluster 4 was the mining corridor, that is, the region where trucks transported the ore from the mines to the barges or the coast. Cluster 5 was the control region that was away from the mining region and with no history of mining activity at the time of this study.
Table 1 presents the distribution of villages and the sample size of households and individuals selected for the study. We first selected the regions to represent the levels of mining activity across the state, and then randomly chose both the villages and (within these villages) the households from the census of the households. We surveyed 310 households and 1411 individuals from these households in the five clusters for a detailed assessment of individual and household characteristics, concentrations of pollutants ($PM_{10}$ ) in the micro-environments, and clinical and reported health measures.
Table 1. Sample size distribution across villages and clusters
The survey questionnaire had two modules: household and individual. Both questionnaires were conducted as a personal interview between the enumerator and the individuals, including the head of the household, who also responded to the household questionnaire. The questionnaires were translated into the local language and pilot tested before the actual surveys were carried out by trained enumerators (mostly local social workers).
2.1. Household survey
The first survey in the sampled households was administered to the head of the household and included questions eliciting demographic information, household income, housing characteristics (such as number of rooms, whether the kitchen has windows or exhaust fan), fuel and stove types (see the online appendix for the questionnaires and health diaries). Table 2 presents the summary statistics of the household characteristics used in the empirical analysis.
Table 2. Summary of variables used in regressions
2.2. Individual survey
The individual survey was conducted with each member of the household to gather detailed information on smoking status, occupation, time spent in each micro-environment and health status. We used the standardized respiratory health questions of the British Medical Research Council. For children (those aged 15 or below) the individual surveys and time activity information was collected from their mothers (or primary caretakers). The surveys used the recall method to ascertain the specific health problems in the last three months that were self-reported by the individuals, including doctor visits and fees. Given the focus on respiratory health in this study, illnesses reported in the individual survey were classified into three groups by the cardio-respiratory specialist, namely: (1) upper respiratory (illnesses and symptoms related to the upper respiratory tract that could be linked to air pollution, but not necessarily prolonged exposure); (2) lower respiratory (chronic illnesses related to the lower respiratory tract that are likely to occur as a result of prolonged exposure to air pollution); and (3) all other illnesses. In our main estimations, we use the sick days attributed to upper respiratory illness as respiratory sick days and the sick days from lower respiratory illness as chronic respiratory sick days (Cooper et al., 2006).
The time budget (or time spent in the various micro-environments) of these individuals in a day was collected through the individual questionnaire. Responses were further verified by a field assistant when making household measurements. In addition, subjects in each household were provided health diaries (in Marathi, the local language) and asked to record details on type and days of illness, visits to the doctor, doctor fees, work lost and cost of treatment. Table 2 summarizes the key individual level information collected.
2.3. Air pollution measurement
The air pollution monitoring component of the study measured the exposure to both outdoor and household air pollution of the individuals from the sampled households. Environmental monitoring and the time budget survey of individuals for the exposure assessment were carried out for the study (between May 2003 and April 2004). A preliminary survey was conducted which aided in identifying the essential micro-environments necessary for estimating daily exposure. Four micro-environments were selected for the study: (1) indoor or living room, (2) cooking area during cooking, (3) outdoor or ambient, and (4) work area (including mining workers and truck drivers). The assessment of daily exposure entailed measuring concentrations of $PM_{10}$ (respirable suspended particulate matter or RSPM) in these micro-environments. RSPM in cooking and living room micro-environments was collected on a conditioned and pre-weighed filter paper using low volume universal pump (SKC, UK). In the living room, sampling was done for a period of 24 hours in all the sampled households. In the cooking micro-environment, monitoring was carried out for a subset of households during the cooking period (covering 2 or 3 meals cooking in a day) which typically was about 2 to 3 hours in a day.
Outdoor air samples were collected through high volume air samplers (Envirotech, India). The outdoor concentrations were measured in three locations in each of the four mining clusters. One location was chosen for outdoor concentration measurement in the control cluster. The sampling in each location was continuous for three days in two seasons, and the filters were replaced every 8 hours. After sampling, RSPM levels were calculated by the gravimetric method (difference in the weight of filter paper after sampling divided by volume of air sampled). The daily 24-hour average concentration was derived for each cluster from this data. RSPM sampling in the workplace was carried out for working hours in a day (about 8 hours) with a low volume personal air sampler (SKC, UK) for a sub-sample of 18 subjects working in mining-related occupations.
2.4. Health tests and diagnosis
The clinical measures were conducted by trained technicians in local clinics for a sub-sample of individuals from the sampled households. We collected data on the chest X-rays for 769 adults (900 including children) and pulmonary lung function test (PFT) for 668 adults (782 including children). The chest X-ray and PFT reports were analyzed and diagnosed by a cardio-respiratory health specialist for chronic respiratory symptoms. The X-rays are expected to highlight the impacts of long-term exposure while PFT measures lung efficiency/capacity at the time of the test. We use the specialist's interpretation of the reports by creating dummy variables: X-ray symptom (equals 1, if diagnosed "not normal") and PFT symptom (equals 1, if PFT results were diagnosed as "not okay"). The X-ray reports were provided to the subjects after the radiologist's and specialist's diagnosis.Footnote 2
Table 2 includes summary statistics of the individual characteristics, average 24-hr pollution exposure to $PM_{10}$ , respiratory sick days, clinical tests and medical diagnosis. In our sample, the mean age was 32 years and 50 per cent were male. Eleven per cent of the X-ray reports were diagnosed with respiratory problems and just over 4 per cent had below normal PFT measurements.
2.5. Fuel usage
In the overall sample, the fuel categories of biomass only, liquefied petroleum gas (LPG) only, and biomass and LPG account for almost equal proportions (table 3). However, there are sharp contrasts in the shares of fuels among the clusters. As expected, the control cluster, which is a relatively less connected region, has a very high proportion of households (79 per cent) that use biomass fuels only. In contrast, the corridor cluster, with better road connectivity and where we would expect the highest LPG availability, has the highest proportion of LPG only users (68 per cent). Clusters 1, 2 and 3 exhibited lower LPG usage than the corridor (but higher than the control region) and lower biomass only use compared to the control cluster (but higher than the corridor). The control cluster also had the highest number of kitchens located outside the house, while the corridor had the least. The mean income in the corridor was the highest (lowest in the control region) and the corridor correspondingly has the highest percentage of separate kitchens inside the household (the control the lowest). The income distributions among clusters observed in table 3 partly explain the fuel usage patterns, where the households with higher income (mining activity regions) had higher usage of cleaner fuels compared to the control cluster which had the lowest mean income.Footnote 3
Table 3. Fuel usage, income and concentration across clusters (%)
Note: Indoor concentration was measured in each household; outdoor in three locations per cluster.
Cooking concentration was measured in a sub-sample for each fuel type which was used to estimate the household concentration based on the fuels used. The fuel use percentages do not add up to 100% as some households did not have a kitchen (or do not report cooking).
Table 3 also shows that the outdoor air quality (discussed in detail in the next subsection) was the worst in the corridor, more than seven times higher than the control. Due to high LPG usage, the cooking concentration is the lowest among households in the corridor. Note that the indoor concentration will be affected both by outdoor air quality (due to infiltration) as well as cooking. The high concentration of $PM_{10}$ indoors among households in the corridor region (despite having the lowest cooking concentration) suggests that infiltration of pollutants from the outside can affect indoor air quality.
2.6. Air quality and exposure
We construct the total 24-hr exposure for each individual by computing the exposure in each micro-environment (share of the day spent in the micro-environment $\times$ concentration in the micro-environment) and summing it over all the micro-environments. We measured outdoor at the village level and indoor in the living area of all households, while cooking measurements from a subset were used with information about the fuel choice in the household to get the cooking concentration.
Table 4 illustrates the data and calculations for one of the individuals (anonymized) in the sample. We multiply the concentration in each micro-environment by the time spent by the individual in each micro-environment in a day, to arrive at the 24-hr exposure (see the last column in table 4). We then divide the total 24-hr exposure by 24 hours to arrive at the average 24-hr exposure. Thus, the units of concentration and total exposure (this is a weighted average of concentrations, with weights being the fraction of time spent in each micro-environment) are the same, $\mu g/m^{3}$ . Although the workplace exposure was measured for those working in the mines, mining offices or driving, for most individuals in the sample, workplace exposure was not applicable (as in the case of the individual in table 4).
Table 4. Illustration of total exposure calculation for an individual
The average 24-hr exposure for this individual is equal to:
\[ \frac{\sum_{m} \text{Concentration}_{m} \times \text{time spent}_{m}}{24} = 5732 / 24 = 239\,\mu g/m^3.\]
2.7. Cumulative exposure
The cumulative exposure to air pollution is the total 24-hr exposure to pollutants summed up over the years of residence for the individual in the region (Footnote 4) as:
Cumulative exposure$_{i}$ = total 24-hr exposure$_{i}\times 365 \times$ exposure years$_{i},$ which captures the accumulated exposure to air pollution over the years for each individual living in a particular environment which determines respiratory health. Therefore by construction, time spent in polluting micro-environments and their concentration will have a positive relationship with cumulative exposure. Biomass fuel usage will directly enter cumulative exposure via higher concentrations in the kitchen and indoor environment and correspondingly affect health (Das et al., 2018; Jeuland et al., 2018; Pattanayak et al., 2019).
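As a minimal sketch of these two calculations (the concentrations, hours and 30-year residence below are hypothetical placeholders, not the actual table 4 entries):

```python
# micro-environment concentrations (ug/m3) and time spent (hours/day) for one individual
micro_env = {
    "outdoor": (120.0, 6.0),
    "indoor":  (230.0, 14.0),
    "cooking": (380.0, 4.0),
    "work":    (0.0,   0.0),   # not applicable for this individual
}

total_24hr = sum(conc * hours for conc, hours in micro_env.values())   # ug/m3 * hours
avg_24hr = total_24hr / 24                                             # ug/m3, weighted by time shares

exposure_years = 30
cumulative = total_24hr * 365 * exposure_years                         # ug/m3 * hours, accumulated

print(f"average 24-hr exposure: {avg_24hr:.0f} ug/m3")
print(f"cumulative exposure: {cumulative/1e6:.1f} million ug/m3 hours")
```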
Figure 1 captures the key argument of this paper. In the top panels of figure 1, we see that the distribution of outdoor concentration is very different from that of indoor concentration and cumulative exposure. In the far left scatter plot in the bottom panels of figure 1, in which we have plotted indoor concentration on the y-axis and outdoor concentration on the x-axis, we can see that there is a very low correlation between the two. Some observations are characterised by high values of indoor concentration and low values of outdoor concentration. This reinforces the claim that using either as a measure of exposure is inadequate. Outdoor concentrations only vary by cluster, and would be particularly inadequate, though their measurements would be reasonably accurate. Studies which focus on ambient (outdoor) concentration or household (indoor) air pollution in isolation may also fail to document the relationship between outdoor and indoor air quality in such a setting. In the other two scatterplots in the bottom panels of figure 1, we see that cumulative exposure has a weak positive relationship with outdoor concentration and a relatively stronger positive relationship with indoor concentration.
Figure 1. Relationship among concentrations (${\mu} g/m^{3}$ ) and cumulative exposure (million ${\mu} g/m^{3}$ hours). Top row shows distributions of outdoor, indoor concentrations and cumulative exposure; bottom row shows scatterplots.
Table 5 presents the two sample t-test for difference of means in exposure for the four mining clusters compared to the control cluster. Outdoor exposure in column (1) is higher in all the clusters (with mining activity) compared to the control region (with no mining activity), with Cluster 4 (the mining corridor) recording the highest outdoor exposure. On the other hand, Cluster 3 and the Corridor have lower cooking exposure (column (3)) compared to the control region, due to a higher proportion of LPG usage. The average 24-hr exposure in column (4) is a weighted measure of exposure to different micro-environments and is higher for all four clusters compared to the control.
Table 5. Difference in individuals' exposure to $PM_{10}$ (vs. control)
Standard errors in parentheses; **$p<0.05$ , ***$p<0.01$
Table 6 reports the time spent in micro-environments as elicited in the individual recall survey. The field assistants were able to verify the reported time spent during the household air quality measurements, but this would not completely address the issues with recall methods. In the empirical results section, we discuss how we try to address this concern.
Table 6. Time spent in micro-environments (hrs/day) and exposure
Standard deviation in parentheses; $^{\ast \ast }$ $p<0.05$ , $^{\ast \ast \ast }$ $p<0.01$ .
Men (adult males) spend 8.4 hours outdoors on average, and women (adult females) about half of that. Time spent by women in the kitchen on average is about 3.4 hours, while men on average spend less than half an hour. And yet, on average, the 24 hour average exposure is 280 micrograms per cubic metre for males and 277 for females, so total exposure balances out on average, in line with this paper's argument that we need to consider micro-environments together rather than separately.
3. The model
We now discuss how we conceptualize our theoretical model that accounts for exposure to air pollution across micro-environments. Jeuland et al. (2015b) use a conceptual model to help explain and think about issues in their excellent review of global HAP. We develop a model drawing on health production models (Harrington and Portney, 1987), agricultural household models (Singh et al., 1986), and a branch of environmental health sciences, Total Exposure Assessment (TEA) (Smith, 1993). In health production models, health is an outcome of a production function. Agricultural household models try to model consumption and production activities of rural households in developing countries in the same model. TEA in the context of air pollution examines pathways from all sources of air pollution to exposure by humans.
We view the household model as an abstraction that captures key elements of HAP in Goa (footnote 5). There is an obvious element of simplification and we note caveats at different points.
3.1. Theoretical model
We examine a household which consists of a child, an adult male and an adult female. We assume that a household aims to maximize its utility ($U$ ) which is a function of sickness ($S$ ) experienced by the child (indexed by $C$ ), the adult male (indexed by $AM$ ) and the adult female (indexed by $AF$ ), and non-food consumption $(C^{NF})$ , so $U=U(S^{C},\,S^{AM},\,S^{AF},\,C^{NF}).$
We assume that sickness is a function of total exposure to air pollution ($E$ ), consumption of cooked food ($CF$ ), doctor-visits ($D$ ) and individual characteristics ($Z$ ), so $S^{i}=S^{i}(E^{i},\,CF^{i},\,D^{i};Z^{i}),$ where $i=C,\,AM,$ and $AF$ .
The kinds of sickness that result from poor nutrition and from household pollution are different. The knowledge of, or beliefs about, the causes of different sorts of sickness is a key variable that influences the household's actions (Jeuland et al., 2015b).
Total exposure is a weighted sum of exposures in different micro-environments, which in turn are equal to the product of the time spent in these micro-environments ($t$) and the concentrations of air pollution in them ($e$). We consider four micro-environments on which we have data: outdoors, indexed by $o$; cooking, indexed by $c$; work, indexed by $w$; and indoors, indexed by $in$,
$E^{i}=t_{o}^{i}e_{o}+t_{c}^{i}e_{c}+t_{w}^{i}e_{w}+t_{in}^{i}e_{in}.$
While the time spent in different micro-environments is person specific, the concentrations are not. To simplify, we assume that the time spent by the child in the four different micro-environments is the same as that of the adult female.
In our sample, almost all households cook with LPG or biomass. We take $t_{c}^{i}$ , the time in the cooking micro-environment, to be the sum of $t_{c}^{lpg}$ and $t_{c}^{b}$ , the time cooking with LPG and biomass, respectively. This is an approximation, since it is possible that LPG may be used with biomass at the same time. The key point though is that greater use of LPG is likely to reduce the amount of biomass burnt.
In our sample, cooking is mainly done by women, and so we assume that the adult female does the cooking. The concentration of air pollution in the cooking environment $(e_{c})$ is a function of the concentration outdoors (or ambient concentration) and the length and type of cooking, so
\[e_{c}=e_{c}(t_{c}^{lpg},\,t_{c}^{b},\,e_{o}).\]
We note that the concentration outdoors will be influenced by the total cooking pattern in a village; most notably the contrast will be between a village where every household uses LPG only and a village where every household uses biomass only.
Similar to the concentration of air pollution in the cooking environment, the concentration indoors will depend on time cooking and the concentration outdoors, such that
\[e_{in}=e_{in}(t_{c}^{lpg},\,t_{c}^{b},\,e_{o}).\]
The total amount of food cooked in the household is a function of the time spent cooking:
(1)\begin{equation} C_{F}=C_{F}(t_{c}^{lpg},\,t_{c}^{b}).\end{equation}
Equation (1) may give the impression that more cooked food requires more cooking time irrespective of fuel, but LPG cooking can reduce cooking time compared to biomass cooking.
$C_{F}^{i}$ , the amount of food consumed by each family member, is assumed to be some norm-based share $(\theta ^{i}\in \lbrack 0,\,1])$ of the total amount of food cooked in the household. The amount of raw food consumed $(R_{F})$ is assumed to be a linear function of the food cooked, $R_{F}=\eta _{1}C_{F},$ where $\eta _{1}$ is a constant; this is an approximation. With LPG we can quickly vary the intensity, from off to medium and high, but with biomass burning, it is more like a batch process. Similarly, the amount of fuel used ($q$ ) is assumed to be a linear function of the time spent cooking:
\[q^{LPG}=\eta _{2}t_{c}^{lpg}, \quad \textrm{and} \quad q^{B}=\eta _{3}t_{c}^{b}.\]
We also assume that a certain proportion $(\eta _{4}\in \lbrack 0,\,1])$ of the biomass fuel is gathered and we assume that it is the adult female who gathers biomass fuel, $q^{BG}=\eta _{4}\eta _{3}t_{c}^{b}.$
The time spent in gathering this fuel $(t_{c}^{g})$ is proportional to the quantity to be gathered. This is an approximation; for example, the same person may gather the same amount of fuel from different locations at different times, taking different time to gather the same amount of fuel, because the gathering of fuel may be combined with some other activity, $(t_{c}^{g})=\eta _{5}\eta _{4}\eta _{3}t_{c}^{b}.$
We assume, based on examining our data (see table 6 and associated discussion) that the amount an individual works is predetermined by the occupation of the person. In other words, the amount an individual works is not influenced by marginal cost and benefit considerations, and for this model, is predetermined. We assume that after cooking, working and gathering biomass, the adult female divides her remaining time in some given proportion $(\alpha _{AF})$ between the indoor $(t_{in}^{AF})$ and outdoor micro-environments. Total time outside $(t_{o}^{AF})$ is equal to remaining time spent outside and time gathering biomass,
\begin{align*}t_{in}^{AF}& =\alpha _{AF}(T^{AF}-t_{c}^{AF}-t_{w}^{AF}-t_{g}^{AF}),\\ t_{o}^{AF}& =T^{AF}-t_{in}^{AF}.\end{align*}
Since the adult male does not cook or gather biomass, the expressions for time indoors and time outdoors are different in the case of the adult male,
\begin{align*}t_{in}^{AM}& =\alpha _{AM}(T^{AM}-t_{w}^{AM}),\\ t_{o}^{AM}& =T^{AM}-t_{in}^{AM}.\end{align*}
The household maximizes utility subject to the following budget constraint:
\[t_{w}^{AM}P_{w}+t_{w}^{AF}P_{w}=P_{NF}C_{NF}+P_{r}R_{F}+P_{D}\sum D^{i}+q_{P}^{B}P_{B}+q_{LPG}P_{LPG},\]
by choosing $C_{NF},\,t_{c}^{lpg},\,t_{c}^{b}$ and $D^{i}.$
The first-order conditions are (denoting the Lagrangian by $L$):
(2)\begin{equation} \frac{\partial L}{\partial C_{NF}}=\frac{\partial U}{\partial C_{NF}} +\lambda \lbrack -P_{NF}]=0.\end{equation}
Equation (2) is the usual consumer theory condition for consumption and says that the marginal utility from an additional unit of consumption should equal the marginal cost in utility terms, which is the product of the multiplier and the price.
(3)\begin{equation} \frac{\partial L}{\partial D^{i}}=\frac{\partial U}{\partial S^{i}}\frac{ \partial S^{i}}{\partial D^{i}}+\lambda \lbrack -P_{D}]=0.\end{equation}
In equation (3), the marginal benefit of spending a unit of money on doctor visits for the $i^{\text {th}}$ person in the household is the marginal utility of lower sickness of the $i^{\text {th}}$ person times the marginal product (in terms of lower sickness) of an additional doctor visit. The marginal cost is the price of a doctor visit multiplied by the multiplier. The first-order condition with respect to time spent cooking with LPG is
\[\frac{\partial L}{\partial t_{c}^{lpg}}=\sum \frac{\partial U}{\partial S^{i}}\left[\frac{\partial S^{i}}{\partial E^{i}}\frac{\partial E^{i}}{\partial t_{c}^{lpg}}+\frac{\partial S^{i}}{\partial C_{F}^{i}}\theta ^{i}\frac{ \partial C_{F}}{\partial t_{c}^{lpg}}\right]-\lambda \left[ \eta _{2}P_{LPG}+\eta _{1}\frac{\partial C_{F}}{\partial t_{c}^{lpg}}P_{r}\right].\]
A change in time spent cooking with LPG or biomass is associated with higher emissions, and therefore higher exposure and sickness (of all members), but also with more cooked food and therefore lower sickness. It also entails greater costs of gathering biomass or of purchasing fuel and raw food. The household will have imperfect information about the effects of cooking on exposure. Moreover, cooking affects women and children more than adult males since they stay in the cooking micro-environment:
\[\frac{\partial L}{\partial t_{c}^{b}}=\sum \frac{\partial U}{\partial S^{i}} \left[\frac{\partial S^{i}}{\partial E^{i}}\frac{\partial E^{i}}{\partial t_{c}^{b}}+\frac{\partial S^{i}}{\partial C_{F}^{i}}\theta ^{i}\frac{ \partial C_{F}}{\partial t_{c}^{b}}\right]-\lambda \left[ (1-\eta _{4})\eta _{3}P_{B}+\eta _{1}\frac{\partial C_{F}}{\partial t_{c}^{b}}P_{r}\right].\]
A change in time cooking has several effects on exposure, since it affects the time spent in different micro-environments (in the case of the adult female and the child) and the concentration in the indoor and cooking micro-environment. So, for example,
\[\frac{\partial E^{AF}}{\partial t_{c}^{lpg}}={-}e_{o}+e_{c}+\frac{\partial e_{c}}{\partial t_{c}^{lpg}}t_{c}-e_{in}+\frac{\partial e_{in}}{\partial t_{c}^{lpg}}t_{in}^{AF}.\]
Our model is static for simplicity. However, in reality, what we witness today is the outcome of the past. Mining tends to follow a life-cycle, with the initial expansion of mining and economic activity in an area finally leading to a slowing down of mining as new areas are found and exploited. During this mining life-cycle, the economic context and the environment (of which air pollution is one indicator) of the households change. Moreover, human health is affected by cumulative exposure, especially in the case of chronic air pollution-related ailments. In our main estimations, we study the association between cumulative exposure and health.
4. Empirical analysis and results
Following from the theoretical model, our primary objective is to estimate the relationship between cumulative exposure to air pollution and measures of respiratory health. Secondly, we characterize the socio-economic associations of time spent in micro-environments with different pollutant concentrations, and of fuel-choice. We also estimate the relationship between fuel usage and concentrations in the micro-environments. We model our primary relationship between cumulative exposure to air pollution and respiratory health using the following regression equation:
(4)\begin{equation} Y_{ihc}=\beta \times \text{Cumulative Exposure}_{ihc}+\Gamma ^{\prime }\times I_{ihc}+\Omega ^{\prime }\times H_{hc}+\lambda _{c}+\epsilon _{ihc}, \end{equation}
where the dependent variable is the outcome of interest for individual $i$ , in household $h$ , located in cluster $c$ . The parameter of interest $\beta$ is the coefficient on cumulative exposure levels. In equation (4), $I_{ihc}$ refers to the individual level attributes including age, gender and education; $H_{hc}$ refers to household characteristics like income. The $\lambda _{c}$ represent cluster fixed effects. The dependent variables are either reported health measures or clinical health measures. Respiratory sick days (upper respiratory illness) and chronic respiratory sick days (lower respiratory illness) are the reported measures of respiratory health, while the specialist's diagnosis of respiratory issues based on the X-ray report and the lung function tests are our measures of observed clinical health. We use a reduced form estimation approach where the choice of control variables is guided by the theoretical model. We cluster standard errors in our estimates at the household level.
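A minimal sketch of how equation (4) could be estimated in practice is given below, using OLS with cluster fixed effects and household-clustered standard errors; the file and column names are hypothetical placeholders for the study's data, and the elasticity line simply illustrates the 'at the mean' calculation reported later.

# Sketch of estimating equation (4); column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("individuals.csv")    # one row per individual (hypothetical file)

model = smf.ols(
    "resp_sick_days ~ cumulative_exposure + age + I(age**2) + education"
    " + male + pucca + C(cluster)",
    data=df,
)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["household_id"]})
print(res.summary())

# Elasticity at the mean: beta x mean(exposure) / mean(outcome)
beta = res.params["cumulative_exposure"]
print(beta * df["cumulative_exposure"].mean() / df["resp_sick_days"].mean())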
4.1. Time in micro-environments
We begin by estimating the associations of time spent in micro-environments (reported in table 7), where we include biomass fuel usage and interact its usage with the female dummy along with the individual and household characteristics, as we discussed in theory (where we assumed that only females gathered biomass). The individual level attributes are age, age-squared, gender and never-smoker (dummy), and the household characteristics include the number of adults and children by gender and whether or not the house was pucca (constructed with solid materials as a permanent dwelling).
Table 7. Time spent in micro-environments
OLS estimations at individual level; $^{\ast }$ $p<0.10$ , $^{\ast \ast }$ $p<0.05$ , $^{\ast \ast \ast }$ $p<0.01$ .
Standard errors in parentheses and clustered at household level.
Other controls: number of adults and children (by gender).
Table 7 presents the results of regressions on the dependent variables of time spent: indoors, outdoors, in the kitchen and at work. We control for cluster level differences by including cluster dummies in our estimation. For time spent in the kitchen, we examine mean time spent in the kitchen by adults in the household. The time adults spend in the kitchen is expected to depend on the composition of adults and children, since one person can cook for several members. We therefore include the number of adults and children in the household by gender in these regressions.
Table 7 shows that age and gender are statistically significant regressors. Age is related negatively to time spent indoors, shown in column (1), but positively with time in the kitchen, shown in column (3). Males spent less time indoors and in the kitchen and more time outside the house or working. Age and education have a positive relationship with time spent working (column (4)). We call the reader's attention to the positive relationship between biomass fuel usage and time spent in the kitchen (column (3)). Also noteworthy is the positive relationship between biomass usage $\times$ female (dummy) on time spent outdoors (which is consistent with the assumption in our model that females spent time gathering biomass fuels).
4.2. Choice of fuel
In table 8, we present the relationship between household characteristics and the choice of fuel, biomass or LPG. The unit of observation here is the household (N = 308) and we estimate a linear probability model (footnote 6). As a robustness check, we estimated the models with a binary dependent variable using a maximum likelihood (probit) method and find similar results (see columns (2) and (4) in table 8).
Table 8. Household fuel choice: biomass and LPG
Estimations at household level; probit marginal effects reported in columns (2) & (4).
Standard errors in parentheses; $^{\ast }$ $p<0.10$ , $^{\ast \ast }$ $p<0.05$ , $^{\ast \ast \ast }$ $p<0.01$ .
Other controls: number of adults and children in the household (by gender).
The dependent variables in table 8 are households who used biomass or LPG for cooking. We include the cluster dummies in the specifications. The regressor pucca house (dummy) is negatively associated with biomass only used for cooking – columns (1) and (2) – and positively for LPG only – columns (3) and (4), as pucca house proxies for higher income households. As expected, all four (mining-related) clusters are negatively associated with biomass only used for cooking compared to the control cluster. Except for Cluster 1, the other three mining clusters are more likely to be using LPG for cooking.
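A sketch of the fuel-choice estimations described above, with the linear probability model alongside probit average marginal effects as the robustness check; the variable names are hypothetical and the controls are abbreviated.

# Household fuel choice: linear probability model and probit marginal effects (illustrative).
import pandas as pd
import statsmodels.formula.api as smf

hh = pd.read_csv("households.csv")     # one row per household (hypothetical file)
formula = ("biomass_only ~ pucca + n_adult_m + n_adult_f"
           " + n_child_m + n_child_f + C(cluster)")

lpm = smf.ols(formula, data=hh).fit(cov_type="HC1")      # linear probability model
probit = smf.probit(formula, data=hh).fit(disp=False)    # robustness check
print(lpm.params["pucca"])
print(probit.get_margeff(at="overall").summary())        # average marginal effects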
4.3. Health indicators
We now examine the association between cumulative exposure and health. In table 9, we present the results for both reported health indicators (respiratory and chronic respiratory sick days) and clinically-diagnosed respiratory health (expert's diagnosis of the X-ray and lung function test (PFT)). The respiratory sick days (e.g., laryngitis, sinusitis, pharyngitis) and chronic respiratory sick days (e.g., asthma, bronchitis, wheezing, emphysema) were self-reported by the participants. According to clinical experts, respiratory health (measured by X-ray reports) is a function of cumulative exposure rather than immediate exposure (Cooper et al., 2006). As our key variable of interest is cumulative exposure to air pollution, the cardio-respiratory expert's diagnosis of X-rays provides the best measure of respiratory health for our purposes (footnote 7). The pulmonary function test (as clinically measured with the peak flow meter instrument) indicates age-specific lung capacity and can be influenced by immediate 24-hr exposure.
Table 9. Cumulative exposure and health indicators
Standard errors (clustered at household level) in parentheses.
*$p<0.10$ , **$p<0.05$ , ***$p<0.01$ . Other controls: male and pucca (dummies).
Estimated only for adults (so numbers lower than table 2).
Table 9 presents the main results of the paper, the association of health measures and cumulative exposure to air pollution for adults. In addition to the detailed survey questionnaire administered by local enumerators, sampled households were provided with individual health diaries to record the type of ailment, date, number of days sick, number of visits to the doctor, doctor's fees and any additional comments. We chose the self-reported sick days for respiratory and chronic respiratory illness as the dependent variables for the results presented in columns (1) and (2). The key variable of interest, cumulative exposure, is statistically significant and positive, indicating a positive association between exposure and respiratory sick days. In all the estimates in table 9, we control for individual (age, age-squared, education, male dummy), household level pucca dummy and cluster dummies to account for fixed effects at the regional level. A one-unit change in cumulative exposure is associated with a 0.0529-unit change in respiratory sick days, shown in column (1), and a 0.0378-unit change in chronic respiratory sick days, shown in column (2). In terms of elasticity, a 1 per cent change in cumulative exposure (at the mean) is associated with a 0.79 per cent increase in respiratory sick days and a 0.86 per cent increase in chronic respiratory sick days.
Lastly, a crucial concern in the literature when using self-reported measures of health as the dependent variable is under- (or over-) reporting (Short et al., 2009; Vaillant and Wolff, 2012). The use of health diaries may mitigate the concerns with self-reported health based on recall methods, but it does not completely address reporting heterogeneity, since different populations may use different thresholds when asked about their health (Shmueli, 2003; Lindeboom and Van Doorslaer, 2004). Studies find correlations between attributes such as education and self-reported health, which may arise from measurement errors in self-assessment. We ameliorate some of these concerns by controlling for education and income. Furthermore, the results in columns (3) and (4) of table 9, where the dependent variables are clinically-diagnosed measures of health, are consistent with our findings with self-reported health.
As noted, we were advised by the cardio-respiratory specialist that respiratory health (as indicated by X-ray reports) is a function of cumulative exposure. We therefore argue that a respiratory issue diagnosed from the X-ray reports is the key health indicator in our study (column (3) in table 9). The pulmonary function test, by contrast, indicates age-specific lung capacity and is more responsive to immediate 24-hr exposure as a determinant.
Columns (3) and (4) in table 9 present the results for the relationship between cumulative exposure and clinically-diagnosed respiratory health status. A sub-sample (about 50 per cent of the total) of individuals volunteered for these medical tests, which were offered for free, so the number of observations is lower than in the previous individual-level regressions. In column (3) we use the X-ray diagnosis by the respiratory health expert and find a positive relationship between cumulative exposure to air pollution and an X-ray report diagnosed with respiratory problems. In terms of elasticity, a 1 per cent change in cumulative exposure is associated with a 0.90 per cent change in the likelihood of an X-ray report diagnosing a respiratory issue. Column (4) reports a positive, although not statistically significant, association between cumulative exposure and the lung function test (i.e., 'PFT not okay'). A 1 per cent change in exposure is associated with a 0.75 per cent change in the PFT measure recording an abnormality. As we noted, PFT is responsive to recent exposure and therefore noisily captures long-run effects.
Our finding that the relationship between cumulative exposure and health indicators is similar (in terms of sign, and of significance for the X-ray measure) for self-reported measures and clinical assessments is useful for related studies. Detailed clinical assessment and medical expert diagnosis, as in our study, may be infeasible to collect, or the data and resources may not be available. The positive correlation we find between clinical and self-reported health illustrates the value of other field studies even if they only use self-reported health.
4.4. Study limitations
Our study has some limitations. Firstly, part of the study uses survey data and the recall method for self-reported health, which is open to measurement errors and biases. Self-reports are susceptible to social desirability biases when responding to questions about health (Ezzati et al., 2006), for example, questions about smoking habits in our survey. Our provision of health diaries at the start of the study to all sampled households could have improved participants' recording and recall during the health survey, and we find consistent results with the clinical measures of health. Participants' reports of time spent in micro-environments can be affected by such biases as well, but the presence of field assistants in the households during the indoor and kitchen concentration measurements, and their independent verification of time spent, should constrain the bias.
Although we cannot make causal claims in the paper about the effect of mining or traditional fuel usage on health, our elaborate data collection allows us to make careful inferences about source apportionment for pollution. We treat the assignment of mining activity as exogenous to households in our computation of cumulative exposure, but selective in- or out-migration could bias our estimates. Even if we do not deal with the out-migration issue, the fact that 77 per cent of our sampled households were originally from the cluster (the main results are qualitatively similar when restricting the analysis to this sub-sample) allows us to have confidence in the results.
Despite our attempts to tie the theory closely to the data collection process that allowed us to apportion exposure to pollution sources, we were still limited in our empirical strategy by the data. Our measure of cumulative exposure assumes that the current 24-hr exposure is indicative of exposure across the years for the individuals living in the location. But pollution levels could have varied considerably over the years in the locations which we do not account for in our exposure construction. Similarly, we do not measure cumulative smoking years of individuals. Moreover, we measured $PM_{10}$ rather than $PM_{2.5}$ , which could arguably be a better indicator of respirable pollutants. We only measured particulate matter concentrations while health is also impacted by other noxious matter (e.g., sulphur oxides).
Sophisticated treatments of costs and benefits have been published since the data collection for this study (Jeuland et al., 2015a, 2018). Our particular contribution is the incorporation of total exposure and micro-environments. Air pollution valuation studies may to some extent abstract from that or, at times, simply ignore HAP.
5. Conclusions
Our study develops an integrated empirical model to study the association between respiratory health and total air pollution (household and outdoor). The two distinct features of this paper are: (1) proposing an integrated empirical model of health effects of air pollution, and (2) using disaggregated data on exposure in different environments to test the empirical implications from the model. The delineation of exposure levels from different micro-environments offers insights into the comparative magnitude of impacts from both household and outdoor pollution. This approach has allowed us to examine the relationship between respiratory health and household and outdoor air pollution together.
In our empirical analysis, we examine: (a) the association between individual characteristics and time spent in different micro-environments, (b) the distribution of concentrations in the micro-environments among clusters, (c) the relationship between clusters and household fuel usage, and (d) the relationship between cumulative exposure to air pollution and health outcomes. To highlight, we found that: (a) biomass use was positively associated with time spent in the kitchen, which may be due to the lower efficiency and higher cooking time associated with biomass fuels; (b) there is a positive association between outdoor air pollution and LPG usage (negative between outdoor and biomass use) which, along with associated results on the cluster dummies, implies that regions with mining activity had a higher likelihood of LPG usage; (c) cumulative exposure is positively related to biomass fuel usage, time spent in the kitchen where biomass fuels were used, household and outdoor air quality; and (d) cumulative exposure to air pollution is positively associated with self-reported and clinically-diagnosed respiratory issues.
Our results emphasize the findings in several studies that HAP from traditional cooking technologies adversely affects respiratory health (Duflo et al., 2008; Langbein, 2017; Jeuland et al., 2018; Pattanayak et al., 2019). We find that switching from traditional biomass cooking to LPG stoves is associated with a substantial reduction in cumulative exposure, which is similar to findings in the literature on fuel switching (Shupler et al., 2018).
Findings from such a cross-disciplinary team can offer several direct implications for policy making. We chose a setting with a recognized outdoor air quality problem – a heavily-mined region in India – to study the relationship of both outdoor and household air pollution with health. Our design and data allowed us to compute total exposure to air pollution as an outcome of air quality and time spent in the micro-environment. Thus policies should not just focus on improving cooking technology and fuel choice, but also provide information that improves time allocations in polluted environments, including household kitchens.
In rural areas of developing countries – particularly in households using biomass fuels and poor kitchen ventilation – HAP is a relatively more significant health hazard. In our study, clusters with mining activity had a higher proportion of cleaner cooking fuel usage (LPG) than the control cluster, which relied on biomass fuels. Correspondingly, clusters with mining activity experienced an increase in outdoor air pollution and reduced HAP as they switched away from biomass fuels. The findings suggest that there may be trade-offs between indoor and outdoor air pollution: mining activity – while adversely impacting outdoor air pollution – may simultaneously increase income and reduce the costs of accessing cleaner stoves and fuels (LPG, electricity), therefore reducing HAP. The findings from this study can be treated as a proof of concept that economists can usefully borrow from the environmental health sciences (TEA). Further research is required to comprehensively identify and evaluate these trade-offs on health and other welfare outcomes.
The supplementary material for this article can be found at https://doi.org/10.1017/S1355770X21000152.
We thank the two anonymous referees, Anna Alberini, Dana Andersen, Kenneth McConnell, and numerous seminar participants at the University of Arizona and University of Maryland, College Park for their comments and suggestions; and Meena Sehgal and Sunil K. Chabbra for their input. We acknowledge the contribution and guidance of Sanghamitra Das who sadly passed away unexpectedly. We thank the International Development Research Center for their financial support.
Authors listed in alphabetical order.
1 At the time of this writing, iron ore mining has been banned in Goa (since 2018) and is estimated to have reduced the state GDP by approximately 25 per cent.
2 Subjects with diagnosed or potential problems were referred to their local doctors in the area for follow-up and required treatment.
3 Measuring income, particularly of rural households, is rife with issues (Ravallion, 1999) and we therefore include the pucca house dummy as an additional control in our estimations.
4 This can be lower than the age of the individual if the family moved to this region from another region.
5 For an excellent review of studies on HAP, see Jeuland et al. (2015b), who use a conceptual model to help explain and think about the issue.
6 In doing so, we have followed Angrist and Pischke's (2008) recommendation of using the linear probability model, as they argue it does a good job of estimating the marginal effects even when the conditional expectation function is non-linear.
7 The X-rays were conducted and first interpreted by the local hospital clinical staff and later by the cardio-respiratory expert on the research team.
Angrist, JD and Pischke, J-S (2008) Mostly Harmless Econometrics: An Empiricist's Companion. Princeton, New Jersey: Princeton University Press.
Bailis, R, Drigo, R, Ghilardi, A and Masera, O (2015) The carbon footprint of traditional woodfuels. Nature Climate Change 5, 266–272.
Bascom, R, Bromberg, PA, Costa, DL, Devlin, R, Dockery, DW, Frampton, MW et al. (1996) Health effects of outdoor air pollution. American Journal of Respiratory and Critical Care Medicine 153, 477–498.
Cooper, S, Sridharan, P, Uma, R, D'Souza, M, Murugesan, A, Dayal, V et al. (2006) Environmental and social performance indicators and sustainability markers in minerals development: reporting progress towards improved ecosystem health and human well-being. Technical report, TERI, New Delhi, India.
Das, I, Pedit, J, Handa, S and Jagger, P (2018) Household air pollution (HAP), microenvironment and child health: strategies for mitigating HAP exposure in urban Rwanda. Environmental Research Letters 13, 045011.
Dominici, F, McDermott, A, Zeger, SL and Samet, JM (2003) Airborne particulate matter and mortality: timescale effects in four US cities. American Journal of Epidemiology 157, 1055–1065.
Duflo, E, Greenstone, M and Hanna, R (2008) Cooking stoves, indoor air pollution and respiratory health in rural Orissa. Economic and Political Weekly 43, 71–76.
Ezzati, M and Kammen, DM (2002) The health impacts of exposure to indoor air pollution from solid fuels in developing countries: knowledge, gaps and data needs. Resources for the Future, Discussion Paper 02–24.
Ezzati, M, Martin, H, Skjold, S, Hoorn, SV and Murray, CJ (2006) Trends in national and state-level obesity in the USA after correction for self-report bias: analysis of health surveys. Journal of the Royal Society of Medicine 99, 250–257.
Gauderman, WJ, Vora, H, McConnell, R, Berhane, K, Gilliland, F, Thomas, D et al. (2007) Effect of exposure to traffic on lung development from 10 to 18 years of age: a cohort study. The Lancet 369, 571–577.
Gauderman, WJ, Urman, R, Avol, E, Berhane, K, McConnell, R, Rappaport, E et al. (2015) Association of improved air quality with lung development in children. New England Journal of Medicine 372, 905–913.
Harrington, W and Portney, PR (1987) Valuing the benefits of health and safety regulation. Journal of Urban Economics 22, 101–112.
Jeuland, M, Pattanayak, SK and Bluffstone, R (2015b) The economics of household air pollution. Annual Review of Resource Economics 7, 81–108.
Jeuland, M, Soo J-S, T and Shindell, D (2018) The need for policies to reduce the costs of cleaner cooking in low income settings: implications from systematic analysis of costs and benefits. Energy Policy 121, 275–285.
Jeuland, M, Bhojvaid, V, Kar, A, Lewis, JJ, Patange, O, Pattanayak, SK et al. (2015a) Preferences for improved cook stoves: evidence from rural villages in north India. Energy Economics 52, 287–298.
Langbein, J (2017) Firewood, smoke and respiratory diseases in developing countries - the neglected role of outdoor cooking. PLoS ONE 12, e0178631.
Laumbach, RJ and Kipen, HM (2012) Respiratory health effects of air pollution: update on biomass smoke and traffic pollution. Journal of Allergy and Clinical Immunology 129, 3–11.
Lindeboom, M and Van Doorslaer, E (2004) Cut-point shift and index shift in self-reported health. Journal of Health Economics 23, 1083–1099.
Lozano, R, Naghavi, M, Foreman, K, Lim, S, Shibuya, K, Aboyans, V et al. (2012) Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the global burden of disease study 2010. The Lancet 380, 2095–2128.
Mannucci, PM and Franchini, M (2017) Health effects of ambient air pollution in developing countries. International Journal of Environmental Research and Public Health 14, 1048.
Naeher, LP, Brauer, M, Lipsett, M, Zelikoff, JT, Simpson, CD, Koenig, JQ and Smith, KR (2007) Woodsmoke health effects: a review. Inhalation Toxicology 19, 67–106.
Pattanayak, S, Jeuland, M, Lewis, J, Usmani, F, Brooks, N, Bhojvaid, V et al. (2019) Experimental evidence on promotion of electric and improved biomass cookstoves. PNAS 116, 13282–13287.
Pitt, MM, Rosenzweig, MR and Hassan, MN (2006) Sharing the burden of disease: gender, the household division of labor and the health effects of indoor air pollution in Bangladesh and India. In Stanford Institute for Theoretical Economics Summer Workshop, volume 202.
Ramanathan, V and Carmichael, G (2008) Global and regional climate changes due to black carbon. Nature Geoscience 1, 221–227.
Ravallion, M (1999) Issues in measuring and modeling poverty. Washington, DC: The World Bank Group.
Salvi, SS and Barnes, PJ (2009) Chronic obstructive pulmonary disease in non-smokers. The Lancet 374, 733–743.
Shmueli, A (2003) Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity. Social Science & Medicine 57, 125–134.
Short, ME, Goetzel, RZ, Pei, X, Tabrizi, MJ, Ozminkowski, RJ, Gibson, TB et al. (2009) How accurate are self-reports? Analysis of self-reported healthcare utilization and absence when compared with administrative data. Journal of Occupational and Environmental Medicine 51, 786–796.
Shupler, M, Godwin, W, Frostad, J, Gustafson, P, Arku, RE and Brauer, M (2018) Global estimation of exposure to fine particulate matter (PM2.5) from household air pollution. Environment International 120, 354–363.
Singh, I, Squire, L and Strauss, J (eds.) (1986) Agricultural Household Models: Extensions, Applications, and Policy. Washington, DC: The World Bank Group.
Smith, KR (1988) Air pollution: assessing total exposure in developing countries. Environment: Science and Policy for Sustainable Development 30, 16–35.
Smith, KR (1993) Fuel combustion, air pollution exposure, and health: the situation in developing countries. Annual Review of Energy and Environment 18, 529–566.
Smith, KR (2013) Biofuels, Air Pollution, and Health: A Global Review. New York: Plenum Press.
Smith, KR, Bruce, N, Balakrishnan, K, Adair-Rohani, H, Balmes, J, Chafe, Z et al. (2014) Millions dead: how do we know and what does it mean? Methods used in the comparative risk assessment of household air pollution. Annual Review of Public Health 35, 185–206.
Vaillant, N and Wolff, F-C (2012) On the reliability of self-reported health: evidence from Albanian data. Journal of Epidemiology and Global Health 2, 83–98.
WHO (2014) 7 million premature deaths annually linked to air pollution. Geneva, Switzerland: World Health Organization Media Center. Available at https://www.who.int/mediacentre/news/releases/2014/air-pollution/en/.
WHO (2016a) Ambient air pollution: a global assessment of exposure and burden of disease. Geneva, Switzerland: World Health Organization. Available at http://apps.who.int/iris/handle/10665/250141.
WHO (2016b) WHO air quality guidelines global update. Geneva, Switzerland: World Health Organization.
Sanghamitra Das, Vikram Dayal, Anand Murugesan and Uma Rajarathnam
Modelling overflow metabolism in Escherichia coli with flux balance analysis incorporating differential proteomic efficiencies of energy pathways
Hong Zeng1 &
Aidong Yang1
BMC Systems Biology volume 13, Article number: 3 (2019) Cite this article
The formation of acetate by fast-growing Escherichia coli (E. coli) is a commonly observed phenomenon, often referred to as overflow metabolism. Among various studies carried out over decades, a recent work (Basan, M. et al. Nature 528, 99–104, 2015) suggested and validated that it is the differential proteomic efficiency of energy biogenesis between fermentation and respiration that leads to the production of acetate under rapid growth conditions, as a consequence of optimally allocating the limited proteomic resource. In the current work, we attempt to incorporate this newly developed proteome allocation theory into flux balance analysis (FBA) to capture quantitatively the extent of overflow metabolism in different E. coli strains.
A concise constraint was introduced into a FBA-based model with three proteomic cost parameters to represent constrained allocation of proteome over two energy (respiration and fermentation) pathways and biomass synthesis. Linear relationships were shown to exist between the three proteomic cost parameters. Tests with three different strains revealed that the proteomic cost of fermentation was consistently lower than that of respiration. A slow-growing strain appeared to have a higher proteomic cost for biomass synthesis than fast-growing strains. Different assumed levels of carbon flowing into pentose phosphate pathway affected the absolute value of model parameters, but had no qualitative impact on the comparative proteomic costs. For the prediction of biomass yield, significant errors that occurred for one of the tested strains (ML308) were rectified by adjusting the cellular energy demand according to literature data.
With the aid of a concise proteome allocation constraint, our FBA-based model is able to quantitatively predict the onset and extent of the overflow metabolism in various E. coli strains. Such prediction is enabled by three linearly-correlated (as opposed to uniquely determinable) proteomic cost parameters. The linear relationships between these parameters, when determined using data from cell culturing experiments, render biologically meaningful comparative proteomic costs between fermentation and respiration pathways and between the biomass synthesis sectors of slow- and fast-growing species. Simultaneous prediction of acetate production and biomass yield in the overflow region requires the use of reliable cellular energy demand data.
The formation of acidic by-products, predominantly acetate, when Escherichia coli (E. coli) grows under aerobic-glucose conditions is a commonly observed phenomenon, which has been extensively studied over decades [1,2,3,4,5]. Lee reviewed 19 studies of recombinant E. coli where acetate was accumulated in fed-batch systems [6]. It has been reported that the portion of glucose converted into acetate can be as high as 15% [7], representing a seemingly huge waste of feedstock. The accumulation of acetate in the culture medium appears to be a major limiting factor for achieving high cell density [8], which is particularly severe in the growth of recombinant strains [9]. Acetate also impairs the microbial production of recombinant proteins [1] and drug precursors [9]. These complications of acetate in bioreactors thus call for elucidation of acetate-pertinent metabolic processes. A similar phenomenon has been observed in tumour cells (Warburg effect) [10,11,12]. The associated mathematical models for explaining the Warburg effect have recently been reviewed [13].
Traditionally, the aerobic formation of acetate has been referred to as overflow metabolism: the excess glucose saturates or inhibits the tricarboxylic acid (TCA) cycle, which subsequently forces the cell to divert the excess carbon into the acetate pathway [3, 14]. However, the study by Molenaar et al. suggested that the overflow metabolism as shown in the growth phenotype is probably a result of the global allocation of cellular resources, where the enzyme efficiency and the pathway yield were both taken into account to obtain the optimal growth strategies subject to different growth conditions [15]. Later in 2015, Basan et al. proposed and validated that the overflow metabolism in E. coli originates from the global physiological proteome allocation for rapid growth [16]. In particular, the proteomic efficiency of energy biogenesis through aerobic fermentation was found to be higher than that of respiration; this difference in proteomic efficiency between fermentation and respiration appears to play a central role in dictating the degree of overflow metabolism in E. coli.
Given the importance of the overflow metabolism, several phenomenological models were developed to depict this effect [5, 8, 17, 18], where the prediction of acetate excretion was dictated by a combination of (i) the constraints on oxygen and carbon supply and (ii) cellular mass and energy balance. Later, models that adopt conventional regulatory mechanisms of acetate metabolism [14], such as oxygen limitation, carbon source availability and tight regulation of cofactor pools were evaluated [19], with an attempt to explain the metabolic shifts from a fully aerobic mode to the aerobic acetate fermentation (overflow). More recently, constraint-based metabolic models [20] were established to analyse the optimal cellular growth strategy, incorporating principles of (i) limitation in the cellular resource on the maximal attainable growth rate, such as the maximum cytoplasmic density adopted by FBAwMC [21, 22] and the finite amount of resource to be allocated between metabolic network and ribosomes, as applied in RBA [23,24,25], (ii) metabolic regulation based on enzyme kinetic information, such as mechanistically detailed descriptions of gene expression and the synthesis of functional macromolecules used in ME-Model [26] and (iii) membrane occupancy-derived competition between glucose transporters and respiration chain (an extension of ME-Model) [27]. The major target of these models is to predict the maximum cellular growth rate. Predictions were validated quantitatively by the experimental data, while the overflow metabolism in fast-growing phase was mostly captured in a qualitative way. In addition, it was pointed out [16, 28] that cell volumes were empirically found to vary widely with virtually constant densities across different growth conditions [29], which suggests that the cytoplasmic density-based constraint might not be fully justified.
Inspired by a recent experimental work studying the proteomic cost of the core metabolic pathways of E. coli [16], a model named constrained allocation flux balance analysis (CAFBA) [28] managed to predict the rates of acetate production in the overflow metabolism for different E. coli strains, with good quantitative agreement with experimental data. However, the proteomic costs adopted in CAFBA were applied to individual metabolic reactions, without focusing on the exploration of the critical role played by specific metabolic modules such as energy biogenesis pathways.
In this work, we attempt to depict the overflow metabolism in various E. coli strains with quantitative accuracy, i.e. predicting aerobic steady-state rates of acetate production at different growth rates and validating the model with experimental data in the literature. In particular, we adopt a concise proteome allocation constraint as identified by Basan et al. [16], referred to as the Proteome Allocation Theory (PAT) in this work. The PAT suggests that the choice of energy biogenesis pathways under different growth conditions results from the discrepancy of proteomic efficiencies between fermentation and respiration. E. coli cells tend to use the more protein-efficient fermentation pathway to generate energy in order to accommodate the high proteomic demand in biosynthesis under rapid growth. The key concepts of PAT are fully embedded and realised in our model. With a parsimonious, PAT-based metabolic model capable of accurately capturing the overflow metabolism, we further analyse the interdependency between pathway-level proteomic cost parameters, the disparity in these parameters between different E. coli strains, and the impact of cellular energy demand on the accuracy of the co-prediction of the overflow metabolism and the biomass yield on substrate.
Formulation of the PAT constraint
Following Basan et al. [16], the fractions of three proteome sectors in the entire proteome (i.e. the total protein content) of the cell sum to unity:
$$ {\phi}_f+{\phi}_r+{\phi}_{BM}=1 $$ (1)
where ϕf and ϕr are the fractions of the fermentation- and respiration-affiliated enzymes, respectively, which enable the fluxes for energy generation; ϕBM represents the fraction of the remaining part of the proteome enabling other cellular activities, broadly referred to as the sector of biomass synthesis [16, 28].
More specifically, ϕf represents the mass abundance of the enzymes that carry fermentation fluxes involved in glycolysis (glucose to acetyl-CoA), oxidative phosphorylation and acetate synthesis pathways (phosphotransacetylase and acetate kinase). ϕr comprises all the enzymes that catalyse the respiration-associated reactions in glycolysis, tricarboxylic acid (TCA) cycle and oxidative phosphorylation system. Same as in Basan et al. [16], in this work we adopt the linear dependences assumed in Hui et al. [30] to relate ϕf and ϕr with the fermentation and respiration fluxes respectively,
$$ {\phi}_f={w}_f{v}_f $$ (2a)
$$ {\phi}_r={w}_r{v}_r $$ (2b)
where vf (vr) is the fermentation (respiration) pathway flux, which in this work is represented by the enzymatic reaction "acetate kinase ACKr" ("2-oxoglutarate dehydrogenase AKGDH"); wf (wr) is the pathway-level proteomic cost, denoting the proteome fraction required per unit fermentation (respiration) flux.
For the biomass synthesis sector, ϕBM corresponds to the remaining part of the proteome that is not covered by the fermentation and respiration sectors, including ribosomal proteins and anabolic enzymes (the major part, referred to as biomass synthesis), catabolic enzymes and cellular maintenance proteins. Motivated by the observed linear dependency between growth rate and the proteome fraction for biomass synthesis [30,31,32], the following linear relationship is assumed:
$$ {\phi}_{BM}={\phi}_0+b\uplambda $$ (3)
where bλ is the growth rate-associated component with λ being the specific growth rate and the constant b quantifying the proteome fraction required per unit growth rate. In Basan et al. [16], ϕ0 was considered as a growth rate independent constant.
Combining Eqs. (1)–(3), we have
$$ {w}_f{v}_f+{w}_r{v}_r+b\uplambda =1-{\phi}_0 $$ (4)
Equation (4) implies that the sum of the three proteomic cost terms on the left-hand side remains constant. However, when the growth rate (and hence the fermentation and respiration fluxes) becomes very low, it is difficult to envisage numerically how this sum could still remain at a constant level. In fact, in Basan et al. [16] (see its Supplementary Information), it was acknowledged that at growth rates lower than that corresponding to the onset of the overflow phenomenon, the proteome sectors would no longer be constrained by the equality indicated by Eq. (4). This suggests that across the entire range of possible growth rates, ϕ0 is unlikely a growth rate independent constant: it may remain at a constant (and minimum) level in the overflow region where the proteomic resource is stretched, but become growth-rate dependent (and larger) at lower growth rates outside the overflow region, i.e. ϕ0, min ≤ ϕ0 ≤ 1, where ϕ0, min is a true constant. Defining ϕmax ≡ 1 − ϕ0, min, Eq. (4) then becomes
$$ {w}_f{v}_f+{w}_r{v}_r+b\uplambda =1-{\phi}_0\le 1-{\phi}_{0,\min}\equiv {\phi}_{\mathrm{max}} $$ (5)
In Vazquez and Oltvai (2016) [33], ϕ0 was also interpreted as a variable instead of a constant, with a (non-zero) minimum value. For simplicity, both sides of Eq. (5) are divided by ϕmax, leading to the final form of the proteome constraint adopted in this work, referred to as the PAT constraint from this point on:
$$ {w}_f^{\ast }{v}_f+{w}_r^{\ast }{v}_r+{b}^{\ast}\lambda \le 1 $$ (6)
where \( {w}_f^{\ast}\equiv {w}_f/{\phi}_{max} \), \( {w}_r^{\ast}\equiv {w}_r/{\phi}_{max} \) and b∗ ≡ b/ϕmax. \( {w}_f^{\ast } \), \( {w}_r^{\ast } \) and b∗ are referred to as the proteomic cost parameters.
Predicting the acetate flux
Flux balance analysis (FBA) [20] is used to determine the optimal flux distribution under different growth condition, with a set of constraints:
maxfobj, subject to
$$ {\displaystyle \begin{array}{l}\left(\mathrm{i}\right)\kern0.5em \mathrm{Sv}=0\\ {}\left(\mathrm{i}\mathrm{i}\right)\kern0.5em {\mathrm{v}}^{\mathrm{L}}\le \mathrm{v}\le {\mathrm{v}}^{\mathrm{U}}\\ {}\left(\mathrm{i}\mathrm{i}\mathrm{i}\right)\kern0.5em {w}_f^{\ast }{v}_f+{w}_r^{\ast }{v}_r+{b}^{\ast}\lambda \le 1\end{array}} $$ (7)
where fobj is the assumed cellular objective. We specified minimizing substrate uptake as the objective function because in this study the commonly used objective 'growth rate' was used as the model input (with acetate production as the model output). S is the stoichiometric matrix defined by the metabolic model; v is a column vector comprising the reactions/fluxes described in the metabolic network; vL and vU represent the lower and upper limits of the reactions, respectively. The inequality constraint (iii) is same as Eq. (6) introduced earlier.
The prediction of the extent of overflow metabolism (rate of acetate production) requires the values of the parameters \( {w}_f^{\ast } \), \( {w}_r^{\ast } \) and b∗ in the third constraint (the PAT constraint) of Eq. (7). We show in the next section that these three parameters cannot be uniquely determined by the experimentally measured growth rate-acetate production profile alone. A set of values for these parameters was randomly chosen from mathematically equivalent sets (see Additional file 1: Table S1). The PAT-based FBA was run at different growth rates under aerobic-glucose conditions. FBA was carried out using the core E. coli metabolic model [34], referred to as the core model in the rest of the paper. The optimal flux distribution was solved via the COBRA toolbox [35] in MATLAB (R2016a). The LP solution was determined by Gurobi 6.0. Detailed model descriptions such as uptake bounds and flux regulations are given in Additional file 2, sections 1 and 2.
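For readers who prefer a scripted illustration, the sketch below reproduces the construction of Eq. (7) in COBRApy with the E. coli core ('textbook') model rather than the MATLAB COBRA toolbox used in this study. Reaction identifiers may differ between model versions, the proteomic cost values are arbitrary placeholders, and, for simplicity, the acetate exchange flux is used as the fermentation proxy instead of ACKr.

# PAT-constrained FBA sketch with COBRApy (illustrative parameter values, not the fitted ones).
import cobra

model = cobra.io.load_model("textbook")                   # E. coli core model
v_f = model.reactions.get_by_id("EX_ac_e")                # fermentation proxy (acetate secretion)
v_r = model.reactions.get_by_id("AKGDH")                  # respiration proxy (2-oxoglutarate DH)
growth = model.reactions.get_by_id("Biomass_Ecoli_core")  # growth rate

w_f, w_r, b = 0.03, 0.15, 0.5                             # illustrative proteomic cost parameters
pat = model.problem.Constraint(                           # constraint (iii) of Eq. (7)
    w_f * v_f.flux_expression + w_r * v_r.flux_expression + b * growth.flux_expression,
    ub=1.0)
model.add_cons_vars(pat)

growth.bounds = (0.7, 0.7)                                # fix the growth rate (model input), 1/h
model.reactions.get_by_id("EX_glc__D_e").lower_bound = -1000   # do not pre-limit glucose uptake
model.objective = model.reactions.get_by_id("EX_glc__D_e")
model.objective_direction = "max"                         # uptake is negative, so maximising minimises uptake
sol = model.optimize()
print("glucose uptake:", -sol.fluxes["EX_glc__D_e"], "acetate secretion:", sol.fluxes["EX_ac_e"])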
Interdependency of proteomic cost parameters
A linear relationship was previously shown to hold between the fermentation (or respiration) flux and steady state growth rates in the overflow region [16]:
$$ {v}_f={k}_f\lambda +{v}_{f,0} $$ (8)
$$ {v}_r={k}_r\lambda +{v}_{r,0} $$ (9)
where, as introduced earlier, vf is the fermentation flux (referred to as "acetate line" in [16]); vr is the respiration flux. kf (kr) and vf, 0 (vr, 0) are constants representing the slope and intercept of the fermentation (respiration) line. Substituting Eq. (8) and Eq. (9) into Eq. (6), with the equal sign held for the overflow condition:
$$ \left({w}_f^{\ast }{k}_f+{w}_r^{\ast }{k}_r+{b}^{\ast}\right)\lambda =1-{w}_f^{\ast }{v}_{f,0}-{w}_r^{\ast }{v}_{r,0} $$ (10)
Equation (10) holds for any growth rate (λ) in the overflow region, which requires
$$ {w}_f^{\ast }{k}_f+{w}_r^{\ast }{k}_r+{b}^{\ast }=0 $$ (11)
$$ 1-{w}_f^{\ast }{v}_{f,0}-{w}_r^{\ast }{v}_{r,0}=0 $$ (12)
Equations (11) and (12) indicate that (i) there is a linear relationship between the fermentation and respiration proteomic cost parameters \( {w}_r^{\ast } \) and \( {w}_f^{\ast } \), and (ii) the third growth-rate dependent proteomic cost parameter b∗ is a linear combination of \( {w}_r^{\ast } \) and \( {w}_f^{\ast } \), thus b∗ also possesses a linear relationship with \( {w}_f^{\ast } \) (or \( {w}_r^{\ast } \)).
If experimental data exist that allow for both the fermentation line and the respiration line to be plotted (such as the steady state growth rate – acetate excretion and growth rate – CO2 evolution data given in [16]), their slopes and intercepts, appearing in Eqs. (11) and (12), can be obtained. However, the three proteomic cost parameters cannot be uniquely determined by the two equations, although specific values of similar parameters have previously been derived from measured cellular protein compositions [16].
In this work, kf and vf, 0 were directly determined from the experimentally measured growth rate-acetate excretion profile (data sources are shown in Fig. 1). To our knowledge, no direct experimental data were available for the rate of intracellular respiration. Instead, we took the growth rate-acetate profile as the input of FBA (setting the objective function to the minimisation of glucose uptake) to estimate the respiration flux at each data point, which was subsequently used to determine kr and vr, 0. Flux variability analysis (FVA) was conducted, which confirmed that all the relevant fluxes used in the model were uniquely determined. After obtaining kf (kr) and vf, 0 (vr, 0), a set of values of \( {w}_f^{\ast } \), \( {w}_r^{\ast } \) and b∗ can be determined by arbitrarily specifying the value for one of the parameters. In this work, we took \( {w}_f^{\ast } \) to be specified, in a range of [0, 0.11] for MG1655 and [0, 0.07] for ML308 and NCM3722. This thus allows us to present the parameter estimation results in the form of \( {w}_f^{\ast } \)-\( {w}_r^{\ast } \) and \( {w}_f^{\ast } \)-b∗ plots. Simulation results presented in this paper were obtained with a randomly chosen value of \( {w}_f^{\ast } \) within the ranges mentioned above and the correspondingly determined values of \( {w}_r^{\ast } \) and b∗. Note that different values chosen for \( {w}_f^{\ast } \) yielded identical simulation results (Additional file 2: Figures S9-S17).
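The linear-algebraic consequence of Eqs. (11) and (12) can be illustrated with a short script: once the slopes and intercepts of the acetate and respiration lines are fixed, choosing any admissible value of wf* determines wr* and b*. All numbers below are illustrative, not the values fitted for the strains studied here.

# Parameter interdependency of Eqs. (11)-(12) (illustrative numbers only).
import numpy as np

growth = np.array([0.75, 0.85, 0.95, 1.05])       # 1/h, overflow region
acetate = np.array([1.2, 3.0, 4.9, 6.7])          # mmol/gDW/h
k_f, v_f0 = np.polyfit(growth, acetate, 1)        # slope and intercept of the acetate ("fermentation") line

k_r, v_r0 = -5.0, 9.0                             # respiration line, e.g. from FBA-estimated fluxes

w_f = 0.04                                        # arbitrarily specified within its admissible range
w_r = (1 - w_f * v_f0) / v_r0                     # Eq. (12)
b = -(w_f * k_f + w_r * k_r)                      # Eq. (11)
print(f"k_f={k_f:.1f}, v_f0={v_f0:.1f}, w_r*={w_r:.3f}, b*={b:.3f}")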
Model predictions of overflow metabolism for MG, NCM and ML at nominal energy demand. The extent of overflow metabolism is represented by the acetate flux. Simulation results of the respiration flux are drawn to show the switch between fully-respiration and respiration-fermentation mode. Comparison is made between model predictions and experimental data for the rates of acetate production. uPPP% was set to 35% according to the flux measurement [41]. Other uPPP% values render similar results (see Additional file 1: Table S1 and Additional file 2, section 3). Experimental data were obtained from different sources [3, 16, 41]. Data points from [16] were converted using 1 mM A600nm− 1 h− 1 = 2 mmol gDW− 1 h− 1 according to [28]. "-nom" refers to nominal, the default energy demand specified in the core model. ac – acetate flux, vr – respiration flux, simu – simulation results, exp. – experimental data
Adjustment of cellular energy demand
We observed a discrepancy in biomass yield between model predictions and the experimental data, especially for ML308 (Fig. 5), and hypothesised that the reason was the inaccuracy of the cellular energy demand assumed by the core model when applied to this particular strain growing under non-overflow and overflow conditions. To test this hypothesis, we collected the growth data in the overflow region for ML308 (Table 7 in [3]) and found that the reported steady state ATP production rate was lower than what the core model suggested. Therefore, we decided to remodel the cellular energy demand by subtracting the surplus portion, reducing it to the strain-specific values reported in the literature (Additional file 1: Table S3).
The original cellular energy demand embedded in the core model is quantified by
$$ r_{ATP,nominal} = \mathrm{ATPM} + \sigma\lambda + v_{GLNS} + v_{PFK} \tag{13} $$
where rATP, nominal is the overall ATP consumption rate (equivalent to the ATP production rate at steady state); subscript "nominal" indicates the default specification of the core model; vGLNS and vPFK are the fluxes of the enzymatic reactions glutamine synthetase and phosphofructokinase (one mole ATP is required per mole flux of each reaction); (ATPM + σλ) denotes the maintenance energy required for non-metabolic processes, where ATPM corresponds to the non-growth-associated maintenance (NGAM) and σ to the growth-associated maintenance (GAM) [36].
The adjusted cellular energy demand is formulated as
$$ r_{ATP,new} = r_{ATP,nominal} - S\left(\lambda\right) \tag{14} $$
where S(λ) is the offset energy, i.e. the amount of energy over-predicted by the core model. The mathematical analysis of the growth data of ML308 suggested that, in the overflow region, the offset energy is linearly related to the growth rate (R2 = 0.9998):
$$ S\left(\lambda\right) = k\lambda + c \tag{15} $$
where k and c are constants. Substituting Eqs. (13) and (15) into Eq. (14), we have
$$ r_{ATP,new} = \left(\mathrm{ATPM} - c\right) + \left(\sigma - k\right)\lambda + v_{GLNS} + v_{PFK} \tag{16} $$
For simplicity, we define M ≡ ATPM − c and N ≡ σ − k; M and N are referred to as the (adjusted) maintenance parameters. The adjustment of the cellular energy demand in our FBA is achieved by manipulating the maintenance energy, more specifically through the maintenance parameters. Note that where growth energetics data are not available (i.e. for MG1655 and NCM3722), such an adjustment was not possible; therefore, the default maintenance parameter values were used (see Table 1).
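As a rough illustration of the energy-demand adjustment (Eqs. (13)–(16)), the sketch below fits the offset energy S(λ) = kλ + c to hypothetical growth-energetics data and then applies the adjusted maintenance parameters to the core model via the ATPM bound (NGAM) and the ATP-related coefficients of the biomass reaction (GAM). It assumes numpy, cobrapy and BiGG e_coli_core identifiers, with made-up numbers rather than the ML308 data used in this study.

```python
# Minimal sketch of the maintenance-energy adjustment, assuming numpy and cobrapy.
# The growth-energetics values below are hypothetical placeholders.
import numpy as np
import cobra

# Hypothetical steady-state growth rates (h-1) and offset energy S(lambda)
# (surplus ATP predicted by the core model, mmol gDW-1 h-1)
growth_rates = np.array([0.6, 0.8, 1.0, 1.2])
offset_energy = np.array([5.1, 7.0, 8.9, 10.8])

k, c = np.polyfit(growth_rates, offset_energy, 1)  # Eq. (15): S(lambda) = k*lambda + c

model = cobra.io.read_sbml_model("e_coli_core.xml")

# Adjusted NGAM: M = ATPM - c, applied through the ATPM reaction's lower bound
atpm = model.reactions.get_by_id("ATPM")
atpm.lower_bound = atpm.lower_bound - c

# Adjusted GAM: N = sigma - k, applied by shifting the ATP-hydrolysis coefficients of the
# biomass reaction by k (less ATP/H2O consumed, correspondingly less ADP/Pi/H+ produced)
biomass = model.reactions.get_by_id("BIOMASS_Ecoli_core_w_GAM")
biomass.add_metabolites({
    model.metabolites.get_by_id("atp_c"): k,
    model.metabolites.get_by_id("h2o_c"): k,
    model.metabolites.get_by_id("adp_c"): -k,
    model.metabolites.get_by_id("h_c"): -k,
    model.metabolites.get_by_id("pi_c"): -k,
}, combine=True)
```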
Table 1 Values of the maintenance parameters
Alternative pathways in central metabolism
The model constructed in this work considers only the central metabolism of E. coli as detailed in the E. coli core model. The energy biogenesis pathways in the model consist of glycolysis (the EMP pathway), the TCA cycle, the acetate pathway (PTA-ACKA) and the terminal oxidative phosphorylation system. However, we noted the existence of alternative pathways in the central carbon metabolism, which include the Entner-Doudoroff (ED) pathway, the pentose phosphate (PP) pathway and the more recently explored PEP-glyoxylate cycle [37].
ED pathway
The ED pathway was found to be three to five-fold more protein-efficient than the EMP pathway in achieving the same glycolytic flux [38], which provides a clear rationale for the utilisation of the ED pathway in a number of bacteria, e.g. Sinorhizobium meliloti, Rhodobacter sphaeroides, Zymomonas mobilis, and Paracoccus versutus [39]. However, Flamholz et al. acknowledged that E. coli, which is capable of using both the ED and the EMP pathways, tends to use the latter. Flux measurements also suggest that the usage of the ED pathway by E. coli K-12 is minimal: only about 2% of glucose catabolism proceeds by means of the ED pathway in batch cultures [40] and about 6% in mini-scale chemostats [41]. Furthermore, the activity of the ED pathway was detected only under slow- to mild-growing conditions [40, 41]. To our knowledge, no activity of the ED pathway in E. coli has been reported under fast-growing scenarios.
PEP-glyoxylate cycle
As for the PEP-glyoxylate cycle, similar to the ED pathway, its usage was identified to be significant only under slow-growing conditions. Not even a trace activity of the PEP-glyoxylate cycle was found in wild-type batch cultures or more rapidly growing chemostats [37]. Furthermore, the flux comparison between the aceA-pckA knockout strain and the sucC knockout strain [16] verifies that compared to the TCA cycle, the alternative PEP-glyoxylate cycle plays a less significant role in glucose-limited fast-growing cultures of E. coli.
Based on the above literature evidence, this work, which focuses on the overflow metabolism occurring in fast-growing cultures of E. coli with relatively sufficient substrate availability, assumes that the use of the alternative ED pathway and PEP-glyoxylate cycle is negligible compared with glycolysis (i.e. the EMP pathway) and the TCA cycle.
PP pathway
The PP pathway, on the other hand, can function as a significant alternative to the upper part of glycolysis for carbon catabolism in E. coli. Previous studies showed that the carbon flow through the PP pathway could reach 20–35% of the total carbon intake and can vary with the growth rate [40, 41]; hence, neither is a constant portion of carbon diverted into the PP pathway, nor is this portion of the carbon flux negligible. The uncertainty embedded in the PP pathway flux motivated us to study the impact of different portions of substrate carbon allocated between the upper part of the EMP pathway and the PP pathway on the proteomic cost parameters and the model predictions. More details can be found in the Additional file 2, section 3.
We define the PP pathway ratio (PPP%) as the ratio of the substrate carbon directed into the PP pathway to the total carbon intake:
$$ PPP\% = \frac{PGL}{EX\_glc(e)} \times 100\% \tag{17} $$
where 6-phosphogluconolactonase (PGL) is chosen to represent the PP pathway flux, as it is a major and also the first enzymatic reaction in the pentose phosphate shunt; EX_glc(e) is the exchange reaction denoting the glucose uptake rate. In our simulation, PPP% was controlled by setting the upper bound of the portion of carbon that is directed into the PP pathway, denoted as uPPP%, with the aid of an auxiliary term DM_PPP_RATIO:
$$ uPPP\% \times EX\_glc(e) - PGL = DM\_PPP\_RATIO \ge 0 \tag{18} $$
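A hedged cobrapy sketch of Eq. (18) is shown below: the uPPP% cap is added as a single linear constraint over the PGL and glucose-exchange fluxes. The reaction identifiers follow the BiGG e_coli_core naming, the constraint name is arbitrary, and the sign handling accounts for glucose uptake being a negative exchange flux; the original study used the MATLAB COBRA toolbox with an auxiliary DM_PPP_RATIO term.

```python
# Illustrative sketch of imposing the uPPP% cap (Eq. (18)) in cobrapy; IDs are assumptions.
import cobra

model = cobra.io.read_sbml_model("e_coli_core.xml")
u_ppp = 0.35  # upper bound of the PP pathway ratio (e.g. 35%)

pgl = model.reactions.get_by_id("PGL")
glc = model.reactions.get_by_id("EX_glc__D_e")

# Glucose uptake is the negative of the exchange flux, so Eq. (18),
# uPPP% * uptake - PGL >= 0, becomes  -uPPP% * EX_glc__D_e - PGL >= 0
ppp_cap = model.problem.Constraint(
    -u_ppp * glc.flux_expression - pgl.flux_expression,
    lb=0.0,
    name="uPPP_cap",
)
model.add_cons_vars(ppp_cap)

solution = model.optimize()
uptake = -solution.fluxes["EX_glc__D_e"]
print("PPP% =", 100.0 * solution.fluxes["PGL"] / uptake)
```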
E. coli MG1655, NCM3722 and ML308 were selected as the model strains in this work and are referred to as MG, NCM and ML, respectively, from this point on. In this section, the simulated acetate excretion pattern is presented against experimental data to demonstrate the accuracy of the model prediction. Subsequently, we elucidate the linear interdependency of the proteomic cost parameters. In particular, we reveal the similarities and differences between the three E. coli strains. With respect to the PP pathway ratio (PPP%), previous studies on the slow-growing strain MG show that the portion of carbon that goes into the PP pathway can be approximately 20% of the total carbon intake [40, 41]. In this work, we set the upper bound of the carbon flowing into the PP pathway (uPPP%) to 25, 35 and 40% to investigate the potential effect of the change in PPP% on the proteomic cost parameters and model prediction (more justification is provided in Additional file 2, section 3). On cellular energy demand, we refer to the original energy demand specified in the core model [34] as the nominal energy demand, and first present the set of results generated on this basis. Subsequently, we show how an adjusted energy demand (particularly applied to ML, referred to as ML-new) affects the patterns of the estimated proteomic cost parameters and the accuracy of biomass yield prediction.
Model prediction of overflow metabolism with nominal energy demand
Here, the accuracy of the predicted acetate excretion rate is compared with experimental data. Variation in the proteomic cost parameters with the changed carbon level diverted into the PP pathway is also presented.
Model prediction of acetate production
Figure 1 shows that the model prediction of the pattern of acetate excretion is in good agreement with the experimental observations for the three different E. coli strains. The onset of acetate production is concomitant with the drop in the respiratory flux, indicating a switch from the fully-respiration mode to the respiration-fermentation mode. As the growth rate further increases, the acetate flux becomes dominant while the extent of respiration gradually diminishes. It is worth noting that zero acetate production was commonly observed at low growth rates for the different strains [3, 16, 41, 42]; to emphasise the (strain-specific) acetate production pattern, we only collected the data with non-zero acetate production. For all the strains, data involving growth rates lower than those presented in Fig. 1 are associated with non-detectable acetate excretion and hence are not shown here.
Linear relationships between proteomic cost parameters
When the nominal energy demand is adopted (indicated by "-nom"), the change of uPPP% leads to insignificant changes in the \( {w}_r^{\ast }-{w}_f^{\ast } \) line for each strain (Fig. 2). Between different strains, MG and NCM share nearly identical lines. The lines of ML-nom deviate from those of the former two, but not significantly (although this closeness will be altered with the adjusted energy demand, see Fig. 2 ML-new and the section below). In any case, \( {w}_r^{\ast } \) is clearly higher than the corresponding \( {w}_f^{\ast } \), implying that respiration has a higher (lower) proteomic cost (efficiency) than fermentation for energy production, which is consistent with what was derived from protein abundance data for comparable parameters in [16].
Fig. 2 \( {w}_r^{\ast }-{w}_f^{\ast } \) relationship for MG, NCM and ML with nominal energy demand and for ML with new energy demand. "-nom" refers to nominal, the default energy demand specified in the core model. "-new" refers to the adjusted energy demand. uPPP% was set to 25, 35 and 40% for each strain
To inspect the insignificant disparity in the \( {w}_r^{\ast }-{w}_f^{\ast } \) lines when all strains use the nominal energy demand, Eq. (12) is re-arranged to
$$ w_r^{\ast} = \frac{v_{f,0}}{v_{r,0}} w_f^{\ast} + \frac{1}{v_{r,0}} \tag{19} $$
The slope and intercept of the \( {w}_r^{\ast }-{w}_f^{\ast } \) line are dictated by \( \frac{v_{f,0}}{v_{r,0}} \) and \( \frac{1}{v_{r,0}} \), respectively. vf, 0 can be determined directly by the experimental measurement of acetate production. vr, 0, on the other hand, is a result of the combination of (measured) rates of acetate production and the mass and energy balance structure of the metabolic model.
For a specific strain, vf, 0 only depends on the pattern of acetate excretion and is not affected by the assumed level of uPPP%. Therefore, the impact of uPPP% on the \( {w}_r^{\ast }-{w}_f^{\ast } \) line is through its effect on the value of vr, 0, which turns out to be rather moderate. Between different strains, the ratio of vf, 0 to vr, 0 and the value of vr, 0 are nearly identical for MG and NCM, regardless of the level of uPPP% adopted, resulting in the largely overlapping \( {w}_r^{\ast }-{w}_f^{\ast } \) relationship between MG and NCM. For ML, the value of the slope is slightly smaller than for MG and NCM, while the intercept is about 25% larger (as shown in Additional file 1: Table S2). Figure 2 also suggests that the proteomic cost (efficiency) of respiration pathways for ML is higher (lower) than that for MG and NCM, regardless of the modification in the energy demand.
Compared to the \( {w}_r^{\ast }-{w}_f^{\ast } \) relationship, that of \( {b}^{\ast }-{w}_f^{\ast } \) appears to be affected by the level of uPPP% more visibly (Fig. 3). Between different strains, the difference is also more pronounced, with the two fast-growing strains NCM and ML remaining close to each other (as presented in Additional file 2: Figure S1).
Fig. 3 \( {b}^{\ast }-{w}_f^{\ast } \) relationship for MG, NCM and ML strains with nominal energy demand. "-nom" refers to nominal, the default energy demand specified in the core model. uPPP% was set to 25, 35 and 40% for each strain. The blue arrow shows the increase in b∗ with the increase of uPPP% at fixed \( {w}_f^{\ast } \); the yellow arrow shows the right-shifting trend of the \( {b}^{\ast }-{w}_f^{\ast } \) line with the increase of uPPP%
For a specific strain, the increase of uPPP% gradually moves the \( {b}^{\ast }-{w}_f^{\ast } \) line to the right (yellow arrow, Fig. 3), corresponding to an increase in b∗ (blue arrow, Fig. 3). This trend can be explained by inspecting a re-arrangement of Eq. (11):
$$ b^{\ast} = \left(k_r \frac{v_{f,0}}{v_{r,0}} - k_f\right) w_f^{\ast} + \frac{k_r}{v_{r,0}} \tag{20} $$
Equation (20) suggests that the shift of the \( {b}^{\ast }-{w}_f^{\ast } \) line results from the change in the respiratory flux (note the intercept, \( \frac{k_r}{v_{r,0}} \)). In E. coli, the PP pathway and the TCA cycle are two major sources for the production of NADPH [40]. At a given growth rate, the amount of NADPH needed for cell growth is fixed based on the mass and energy balance. As uPPP% increases, more carbon is predicted to enter the PP pathway. In the model simulation, an increase in the amount of NADPH produced via the PP pathway forces a drop of the flux into the TCA cycle in order to maintain the constant total production rate of NADPH. The reduction in the TCA flux in turn manifests in a lower vr. In the overflow region, Eq. (6) holds with equality. As shown earlier, the level of uPPP% has a negligible impact (when the nominal energy demand is adopted) on the \( {w}_r^{\ast }-{w}_f^{\ast } \) line. Also recall that the relation between the rate of acetate excretion and the steady state growth rate is fixed by the experimentally measured growth data. With all the other quantities (\( {w}_f^{\ast } \), vf, \( {w}_r^{\ast } \) and λ) fixed in Eq. (6), the drop in vr due to the increase in uPPP% must therefore be accompanied by an increase in b∗.
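For readers who wish to reproduce the parameter lines, the following minimal sketch evaluates Eqs. (19) and (20) over a range of \( {w}_f^{\ast } \) values (cf. Figs. 2 and 3). The numerical values of kf, vf, 0, kr and vr, 0 are hypothetical placeholders, not the fitted values used in this work.

```python
# Minimal sketch of generating the w_r*-w_f* and b*-w_f* lines from Eqs. (19) and (20),
# assuming the linear flux relations v_f = k_f*lambda + v_f0 and v_r = k_r*lambda + v_r0
# used in Eq. (10). All four constants below are hypothetical placeholders.
import numpy as np

k_f, v_f0 = 12.0, -3.0   # hypothetical fermentation-line slope and intercept
k_r, v_r0 = -8.0, 14.0   # hypothetical respiration-line slope and intercept

w_f = np.linspace(0.0, 0.11, 50)                       # specified range of w_f*
w_r = (v_f0 / v_r0) * w_f + 1.0 / v_r0                 # Eq. (19)
b_star = (k_r * v_f0 / v_r0 - k_f) * w_f + k_r / v_r0  # Eq. (20)
```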
Between different strains, b∗ varies significantly. In particular, b∗ for MG is remarkably larger than that of NCM and ML (as presented in Additional file 2: Figure S1). This disparity can again be explained by Eq. (6). For a certain value of \( {w}_f^{\ast } \), \( {w}_r^{\ast } \) is rather similar among different strains (with nominal energy demand), as shown by Fig. 2. In the overflow region, the respiration flux vr of MG is much smaller than that of the others (see Fig. 1), which thus leads to a lower value of the \( {w}_r^{\ast }{v}_r \) term for MG than for NCM and ML. As the value of the \( {w}_f^{\ast }{v}_f \) term (for any selected value of \( {w}_r^{\ast } \)) is similar between these strains, due to their similarity in the relationship between \( {w}_r^{\ast } \) and \( {w}_f^{\ast } \), the value of the remaining term on the left-hand side of Eq. (6), b∗λ, must be higher for MG than for the other two strains. On the other hand, in the overflow region and at the same acetate excretion rate vf, the growth rate of MG has been shown to be much lower than that of NCM and ML. A higher value of b∗λ coupled with a lower value of λ thus leads to a higher value of b∗ for MG, compared to the other two strains.
The above mathematical explanation in fact coincides with the known biological fact that the inverse of b∗ is proportional to the rate of protein synthesis [31]: the slower the rate of protein synthesis, the higher the value of b∗. Thus the slow-growing strain MG is expected to have a higher value of b∗ compared to the fast-growing strains NCM and ML.
Predicted evolution of PP pathway flux
The results presented above show a rather moderate impact of the upper limit of the PP pathway ratio (uPPP%) on the linear interdependency of the proteomic cost parameters. With an interest in the FBA solution of the flux distribution in the PP pathway (at different growth rates), simulation results were recorded for the three strains with uPPP% set to 35%; other uPPP% levels displayed a similar trend (as presented in Additional file 2: Figures S4 and S5). Flux variability analysis (FVA) [43] was performed to confirm that the trend of PPP% presented here was unique.
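A brief sketch of the FVA check referred to above is given below, assuming cobrapy and the BiGG e_coli_core identifiers; equal minimum and maximum values for a reaction indicate that its optimal flux (and hence the predicted PPP%) is uniquely determined.

```python
# Brief sketch of flux variability analysis for the PP pathway and respiration fluxes.
import cobra
from cobra.flux_analysis import flux_variability_analysis

model = cobra.io.read_sbml_model("e_coli_core.xml")
fva = flux_variability_analysis(model, reaction_list=["PGL", "AKGDH"],
                                fraction_of_optimum=1.0)
print(fva)  # identical minimum and maximum indicate a uniquely determined flux
```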
In general, PPP% gradually increases with the growth rate. Two turning points can be observed, which divide the whole curve into three distinct phases (Fig. 4a). A close inspection of the model simulation revealed that the variation of the predicted PP pathway ratio was correlated particularly with three fluxes, namely NAD transhydrogenase (NADTRHD), transketolase (TKT2) and NADP transhydrogenase (THD2).
Fig. 4 a Simulation results of PPP%, NADTRHD, TKT2 and THD2 against growth rates at nominal energy demand. b Comparison between the predicted trend of PPP% and experimental data. The PP pathway ratio (PPP%) is divided by ten (0.1*PPP%) to unify the order of magnitude between different data types. Experimental data were obtained from [41]. uPPP% was set to 35%. Simulation was based on MG1655. NADTRHD – NAD transhydrogenase, TKT2 – transketolase, THD2 – NADP transhydrogenase, simu – simulation results, exp. – experimental data
In phase I, only NADTRHD is active, with zero fluxes for both TKT2 and THD2. The enzymatic reaction NADTRHD functions to convert NADPH into NADH. Thus in phase I, it is likely that the amount of NADPH produced exceeds the required amount for biosynthesis; NADTRHD is thus activated to consume the surplus NADPH.
In phase II, an on/off swap occurs between NADTRHD and TKT2 while THD2 still remains silent. We infer that in this phase, NADPH produced satisfies the demand, but the amount of carbon flowing into the PP pathway surpasses the rate of the carbon withdrawal (for the synthesis of biomass precursors). Therefore, TKT2 is activated to direct the extra amount of four-carbon and five-carbon compounds back to the glycolysis.
In phase III, THD2 is finally switched on and becomes significantly active in the high-growth-rate region. TKT2 increases progressively while NADTRHD remains silent. It is presumed that in this phase, as the growth rate becomes higher, more NADPH is required for biomass synthesis. NADP transhydrogenase (THD2) is activated to produce the NADPH needed for rapid growth. The surplus carbon flux in the PP pathway, which might result from the high glucose uptake rate at a high growth rate, is directed back to glycolysis via TKT2.
It would be desirable to verify the theoretical prediction of the evolution of PPP% with experimental measurements, which unfortunately have not been widely reported in the literature. Nevertheless, Fig. 4b shows a comparison with one set of experimental observations available [42], which suggests a good degree of qualitative similarity.
Adjusting cellular energy demand improves the prediction of biomass yield
Although combining the PAT constraint with the core model succeeded in predicting the rates of acetate production, the accuracy of the biomass yield prediction varied and was especially unsatisfactory for the ML strain (Fig. 5). A similar deficiency in yield prediction was also reported in [28]. Focusing on the yield, two features can be observed: (i) in the overflow region, for a fixed growth rate (associated with an acetate excretion rate) the biomass yield for ML is higher than for MG and NCM; and (ii) the rate of the drop in yield (i.e. the slope) of ML is sharper than for the other two strains.
Fig. 5 Comparison of the biomass yield between model predictions with nominal energy demand and the experimental data. "-nom" refers to nominal, the default energy demand specified in the core model. Biomass yield is calculated as gram biomass produced per gram substrate consumed. uPPP% was set to 35% for all strains. Experimental data were obtained from the same sources [3, 16, 41] as the acetate data shown in Fig. 1. simu – simulation results, exp. – experimental data
Intuitively, feature (i) suggests that in ML, the amount of energy required per unit mass of biomass formation should be less than NCM or MG. Therefore we collected the growth data of ML and remodelled the cellular energy demand (see Methods).
It is worth noting that for ML, the negative value of M (Table 1) clearly indicates a constrained applicable range of the maintenance parameters, i.e. they are valid only within the overflow region. As the growth rate decreases, if M stays unchanged, the overall energy consumption (Eq. (16)) will drop to a negative value, which is clearly not biologically feasible. This then implies a certain degree of nonlinearity in the global relationship between (total or maintenance) energy requirement and growth rate. Such a proposition was previously referred to as "varied non-growth-associated maintenance" [3]. Non-linearity in energy consumption manifesting before and after the onset of the overflow metabolism has also been observed and discussed in a recent work [44].
Model prediction of biomass yield with adjusted energy demand
We first re-estimated the set of proteomic cost parameters for ML with the adjusted energy demand (see Table 1 and Methods). Applying updated values of \( {w}_f^{\ast } \), \( {w}_r^{\ast } \) and b∗ together with the adjusted maintenance energy, our model is now able to effectively capture the unique trend of biomass yield for ML, without any compromise in the accuracy of predicting acetate excretion (Fig. 6). Simulation results for ML with adjusted energy demand are referred to as "ML-new".
Fig. 6 Model prediction of acetate production and biomass yield for ML with adjusted energy demand compared with experimental data. "-new" refers to the adjusted energy demand. Simulation was done with the adjusted energy demand (Table 1, M, N for ML) and updated proteomic cost parameters. Biomass yields are shown as ten times the original value to unify the order of magnitude between different types of data. uPPP% was set to 35%. Experimental data were obtained from Table 7 in [3]. ac – acetate flux, Yxs – biomass yield, simu – simulation results, exp. – experimental data
It is worth noting that our model also succeeds in matching the elevated reduction in the yield of ML in the overflow region as the growth rate increases. This captured trend appears to originate from the low energy demand of ML. Approximately, the yield reduction rate can be considered to be proportional to the ratio of the increase in acetate excretion (ac2 − ac1) to the increase in glucose uptake rate (glc2 − glc1), while the growth rate rises from λ1 to λ2:
$$ \text{yield reduction rate} \propto \frac{ac_2 - ac_1}{glc_2 - glc_1}, \quad \text{for } \lambda_1 \to \lambda_2\ \left(\lambda_2 > \lambda_1\right) \tag{21} $$
NCM and ML exhibit similar acetate excretion rates, hence a similar value in "ac2 − ac1". However, the energy demand per unit growth of ML is much lower than NCM, which means that with a similar increase in acetate production, the increase in substrate intake (i.e. glc2 − glc1) for ML will be lower than NCM to achieve a given increment in the growth rate. According to Eq. (21), the yield reduction rate of ML will thus be higher than NCM.
Impact of the adjusted energy demand on \( {w}_r^{\ast }-{w}_f^{\ast } \) and \( {b}^{\ast }-{w}_f^{\ast } \) relationships
To investigate the impact of the change in cellular energy demand on the linear relationships of \( {w}_f^{\ast } \), \( {w}_r^{\ast } \) and b∗ of ML-new, we recalculated constants kf, vf, 0, kr and vr, 0 at different uPPP% values (25, 35 and 40%) to update the linear equations describing \( {w}_r^{\ast }-{w}_f^{\ast } \) line and \( {b}^{\ast }-{w}_f^{\ast } \) (Eqs. (11) and (12)). The resulting \( {w}_r^{\ast }-{w}_f^{\ast } \) lines for ML-new are plotted in Fig. 2, together with the results obtained earlier for MG/NCM/ML with nominal energy demand.
The switch to the adjusted energy demand makes the \( {w}_r^{\ast } \) value for ML-new much higher than that of ML-nom, the latter being rather close to those of the MG-nom and NCM-nom. This implies that the adjustment of the energy demand of ML leads to an enlarged gap in the proteomic efficiency between respiration and fermentation.
The similarity among MG-/NCM−/ML-nom has already been discussed in the previous section. Here we mainly focus on the discrepancy with ML-new. We found that both the slope and intercept of \( {w}_r^{\ast }-{w}_f^{\ast } \) line for ML-new are about three times larger than ML-nom (as shown in Additional file 1: Table S4). The dramatic changes in the slope and intercept of ML-new predominantly result from the reduction in the respiratory flux vr when applying the adjusted energy demand (see Fig. 7 and Additional file 2: Figure S3).
Fig. 7 Comparison of the predicted respiration and acetate fluxes between ML-new and ML-nom. "-nom" refers to nominal, the default energy demand specified in the core model. "-new" refers to the adjusted energy demand. The predicted rates of acetate production for ML-nom and ML-new are completely overlapped with each other. uPPP% was set to 35% for both strains. Data source of acetate excretion is shown in Fig. 1. ac – acetate flux, vr – respiration flux, simu – simulation results, exp. – experimental data
The link between the drop in vr and the increase in \( {w}_r^{\ast } \) has been discussed in the previous section. The results presented herein indicate that it is the energy demand that plays a major role in distinguishing the \( {w}_r^{\ast }-{w}_f^{\ast } \) relationship between different strains, not the uPPP% or the acetate excretion pattern.
Applying the adjusted energy demand also has an impact on the relationship between b∗ and \( {w}_f^{\ast } \). As shown in Fig. 8, the \( {b}^{\ast }-{w}_f^{\ast } \) lines are significantly right-shifted when the model is changed from ML-nom to ML-new (i.e. the red lines lie much further to the right than the yellow lines). Given the identical pattern of acetate excretion between ML-nom and ML-new (as both predicted the same set of experimental data), the amount of energy produced through fermentation remains unchanged. For ML-new, as the energy demand per unit of growth is much lower than that of the nominal strain, the respiratory flux vr must decrease significantly to avoid energy overproduction, as confirmed in Fig. 7. Although the value of \( {w}_r^{\ast } \) for ML-new is higher than that for ML-nom (for a given value of \( {w}_f^{\ast } \), Fig. 2), the value of the product \( {w}_r^{\ast }{v}_r \) for ML-new still becomes lower (as the increase in \( {w}_r^{\ast } \) is not able to compensate for the sharp drop in vr). With no change in \( {w}_f^{\ast }{v}_f \) between ML-new and ML-nom (for a given \( {w}_f^{\ast } \)), Eq. (6) again dictates that b∗ becomes higher for ML-new than for ML-nom, hence the right-shifting of the \( {b}^{\ast }-{w}_f^{\ast } \) lines.
Fig. 8 Comparison of the \( {b}^{\ast }-{w}_f^{\ast } \) relationship between ML-nom and ML-new at different uPPP% levels. "-nom" refers to nominal, the default energy demand specified in the core model. "-new" refers to the adjusted energy demand. uPPP% was set to 25, 35 and 40% for both strains
In the case of ML-nom, it was shown earlier in Fig. 3 that the increase in uPPP% would lead to a reduction of vr, which in turn would lead to an increase in b∗ or right-shifting of the \( {b}^{\ast }-{w}_f^{\ast } \) line. Now for ML-new, an enlarged gap between the \( {b}^{\ast }-{w}_f^{\ast } \) lines at different uPPP% levels is observed compared to the case of ML-nom. This implies that the effect of vr reduction due to the increase in uPPP% is more pronounced with the adjusted energy demand.
Comparison with relevant models
In the study by Basan et al. [16], which has been the basis of the PAT constraint formulated in our model, the application of a constraint on proteome fractions (similar to Eq. (4)) with parameters derived from measured protein abundances was able to accurately predict the patterns of acetate excretion for E. coli under different growth conditions, when coupled with a simple energy balance equation. In this work, we have embedded the PAT into the core metabolic model of E. coli, taking advantage of the latter in offering more rigorous modelling of intracellular mass and energy balances. Furthermore, the constraint-based metabolic model allows prediction of detailed metabolic fluxes as opposed to merely acetate production, which could provide more insights about metabolic pathways in connection with the overflow metabolism and pave the way for investigating acetate excretion in conjunction with possible manipulations of the metabolic network.
In addition, the respiratory flux in Basan's work was associated with the carbon dioxide produced in respiration, termed JCO2, r, whose value was deduced by subtracting the fermentation-dependent CO2 and the growth-dependent CO2 from the total CO2 production. As such, JCO2, r could not directly correspond to a specific flux in the metabolic network. In our model, the respiratory flux directly refers to a specific flux within the TCA cycle (AKGDH), which appears to be a convenient choice when the PAT is embedded into FBA. Using a constraint-based metabolic model that includes the TCA cycle with a reasonable level of detail, the respiration flux can be directly resolved via FBA, without the need for multi-step calculation along with different levels of assumptions and uncertainties.
Another important comparison we would like to make is with the recently developed model – CAFBA. It was mentioned in the Background section that unlike CAFBA which considers the proteomic cost of every individual reaction in metabolic network, the PAT constraint in our model follows the treatment of Basan et al.'s work and quantifies the proteomic costs at the pathway level. This simplification allows the model to explicitly incorporate the differential proteomic efficiencies of fermentation and respiration that are proven (in both Basan et al. and this work) to govern the flux split between the two pathways. This simplicity comes at the cost of the limited utility of our model: it is intended to be used only for predicting the interplay between the excretion of acetate (or other fermentation products) and the growth rate during overflow metabolism, not other effects of stressed resource allocation.
The proteome allocation constraint in CAFBA includes a C-sector (via a term expressed as wcvc) representing the proteome requirement for carbon-source uptake, which is not explicitly considered in this work. We have ignored this sector because carbon overflow occurs only in the high-growth-rate region, where wc (the proteomic cost of the C-sector) approaches zero at high substrate uptake rates, as shown in CAFBA [28]. In this region, the low value of wc makes the C-sector negligible compared with other proteome sectors. In the low-growth-rate region, the value of wc becomes significant; however, no acetate is excreted in this region, where the equality relationship in our PAT constraint becomes inactive, so the significance of the C-sector becomes irrelevant.
As for the prediction of biomass yield, CAFBA noticed the difficulty in predicting the biomass yield of ML308. In this work, we have found that it is the cellular energy demand that significantly affects the FBA prediction of biomass yield. After replacing the default energy demand with data reported specifically for ML308 strain, our model was able to produce an accurate prediction. Therefore, we consider that it is important to carry out necessary adjustment to the cellular energy demand when applying such a constraint-based modelling approach to specific strains.
Parameterisation of the proteome allocation constraint
The modelling approach proposed in this work can be considered as "halfway" between the coarse-grained proteome allocation model of Basan et al. [16] and the FBA models that incorporate reaction-level resource allocation constraints such as CAFBA [28] and FBAwMC [21]. In CAFBA, the proteome constraint involves ~ 1000 proteomic cost parameters (wi) for a genome-scale model. Similarly, in FBAwMC, a large number of crowding coefficients need to be specified. In both cases, the existence of numerous cost parameters originates from associating the resource cost with individual reactions. These parameters conceptually have a clear biological meaning and in principle can be determined experimentally by e.g. proteome measurements or extensive enzyme assays. However, in practice, it has appeared to be difficult to reliably obtain precise values for all the parameters, especially for different strains growing at different growth rates or conditions. In fact, instead of pursuing the exact values for all the individual parameters, CAFBA focused on applying the average value of the proteome fraction invested per unit flux, termed 〈w〉, to capture the key flux pattern, along with evaluating the impact of possible heterogeneous values of the proteome parameter wi on the model prediction. Similarly, FBAwMC [21, 22] also appears to encounter a certain degree of "randomness" of its crowding coefficients due to the unknown enzyme kinetics and/or turnover numbers. Subsequently, molecular-crowding-based modelling normally treats this "randomness" as noise, where the crowding coefficients are chosen randomly from a distribution of crowding coefficients [45] or the majority of the crowding coefficients are estimated from a limited number of known enzyme turnover (kcat) values [11].
In this work, we have intended to formulate a constraint with a greatly reduced number of proteomic cost parameters, while still capturing the essence of constrained cellular resource allocation. This is achieved by formulating the proteome allocation constraint at the pathway (as opposed to reaction) level. The proposal is the concise Eq. (6) (\( {w}_f^{\ast }{v}_f+{w}_r^{\ast }{v}_r+{b}^{\ast}\lambda \le 1 \)), involving only three proteomic cost parameters (representing the proteomic efficiencies of the fermentation, respiration and biomass synthesis pathways). In principle, these parameters can be obtained through the direct measurement of protein abundances, following an approach similar to that adopted by Basan et al. [16]. However, in the current study, we attempted to parameterise this constraint using widely available growth data from cell culturing experiments, in particular growth rate and acetate production rate. It should be noted that cell culturing experiments often yield relatively simple data sets with measurements of a few process variables. It is infeasible to use such data sets to determine the large number of proteomic cost parameters encountered in a proteome constraint expressed at the individual reaction level. Even with the pathway-level constraint adopted in this work, our results show that cell growth and acetate production measurements (alone) cannot uniquely determine the three parameters, but two linear relationships between these parameters can be derived (Eqs. (11) and (12)).
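To make the pathway-level constraint concrete, the sketch below embeds Eq. (6) into FBA using cobrapy. The acetate exchange, the AKGDH respiration proxy and the biomass reaction follow the BiGG e_coli_core naming, and the three proteomic cost parameter values are hypothetical; as discussed above, it is their linear relationships rather than their absolute values that matter for reproducing the overflow behaviour.

```python
# Illustrative sketch of embedding Eq. (6): w_f* v_f + w_r* v_r + b* lambda <= 1 into FBA.
# Parameter values and IDs are assumptions, not the fitted values of this study.
import cobra

model = cobra.io.read_sbml_model("e_coli_core.xml")

w_f, w_r, b_star = 0.05, 0.10, 0.30   # hypothetical proteomic cost parameters (w_r > w_f)

v_f = model.reactions.get_by_id("EX_ac_e").flux_expression                   # fermentation flux
v_r = model.reactions.get_by_id("AKGDH").flux_expression                     # respiration flux
lam = model.reactions.get_by_id("BIOMASS_Ecoli_core_w_GAM").flux_expression  # growth rate

pat = model.problem.Constraint(w_f * v_f + w_r * v_r + b_star * lam,
                               ub=1.0, name="PAT")
model.add_cons_vars(pat)

model.reactions.EX_glc__D_e.lower_bound = -15.0  # ample glucose so the PAT limit can bind
solution = model.optimize()                      # default objective: maximise growth
print(solution.fluxes[["EX_ac_e", "AKGDH"]])
```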
Furthermore, our model shows that it is the two linear relationships (rather than the absolute values) of the proteomic cost parameters that allow an accurate prediction of the overflow metabolism. We thus speculate that for an FBA-based model, the ability to capture the overflow behaviour is rendered by (i) an extra constraint representing the constrained proteomic resources and (ii) certain relations or relative magnitudes of the proteomic cost parameters embedded in the proteome constraint. In reality, the proteomic efficiencies of the metabolic pathways may vary (within a certain range) at different points in time or between cells in a population, which often exhibits heterogeneity [28]. However, as long as the specific relations or relative magnitudes of these efficiencies are maintained, one can expect that the overflow behaviour will emerge.
Applicability of the linear formulation of the proteomic cost
As indicated in the earlier section, our formulation of the proteomic cost (Eqs. (2a) and (2b)) reflects the observed linear dependency between proteome fraction and growth rate [30,31,32]. Combining this linear dependency with the assumption that the flux processed by a proteome sector i is proportional to the growth rate-dependent component of the associated proteome fraction [30], we have related ϕi linearly with the flux it carries. A similar model is also adopted in CAFBA [28], where the linear proteome-flux relation is derived on the assumption that the substrate concentration is proportional to the flux.
Note that our model is intended specifically for predicting the steady-state overflow metabolism in E. coli under glucose-limited conditions. In some other circumstances, observations not conforming to this relatively simple model have been reported. For example, Goel and co-workers found hardly any changes in protein levels in anaerobic slow-growing Lactococcus lactis chemostats when the cells shifted from a high-yield metabolic mode to a low-yield metabolic mode with an increased growth rate [46]. In this case, although the metabolic shift in L. lactis is similar to the overflow metabolism observed in E. coli, proteome allocation did not seem to accompany the changes in metabolic fluxes. In a study on yeast's transient transcript, enzyme and metabolite responses under metabolic perturbation, it was revealed that the reaction rates are jointly regulated by enzyme capacity and metabolite concentration, owing to the cell's tendency to sacrifice local metabolite homeostasis in order to maintain fluxes and global metabolite homeostasis upon enzyme perturbation [47]. Another yeast-based study also suggests that changes in individual fluxes are predominantly regulated by the levels of metabolites, not enzymes [48].
The above-mentioned experimental observations suggest that the linear proteome-flux relationship modelled in this work might not be applicable to those circumstances. We hypothesise that this might be at least partially due to the differences between E. coli (the target organism of our model) and the organisms with which those observations were made. Besides, our model has been developed to describe the relationship between (i) the observed steady-state global proteome configuration and (ii) the growth rate or the corresponding flux, drawing on evidence and hypotheses from several previous studies [16, 28, 30, 32]. Such a model, being global and coarse-grained, is not intended to represent the delicate regulatory mechanisms responsible for transient metabolic changes that maintain cellular homeostasis under perturbations, and might not be suitable for revealing local regulatory insights into the key factors dictating individual reaction rates.
With three different E. coli strains, we have evaluated a new model that integrates a previously proposed proteome allocation theory (PAT) into the constraint-based modelling approach – flux balance analysis (FBA), which predicts the distribution of carbon fluxes between fermentation and respiration due to the differential proteomic efficiencies of the two energy biogenesis pathways. Using a simple proteome allocation constraint, our model allows the accurate prediction of acetate production at different steady state growth rates during overflow conditions (with sufficient oxygen and glucose). The model involves three pathway-level proteomic cost parameters linearly interrelated by two equations, which is the consequence of (i) the assumed linear dependency of proteomic costs and the growth rate and (ii) the experimentally observed linear correlation between the fermentation or respiration flux and the growth rate. The non-unique optimal values of the three parameters, or the two linear relationships between them, could be obtained by fitting the model to experimentally measured acetate excretion rates at specific growth rates.
The linear relationships between the parameters were shown to be affected, to varying degrees, by (i) the acetate excretion pattern, (ii) the assumed upper limit of the substrate carbon diverted into the PP pathway and (iii) the cellular energy demand. The proteomic cost of the fermentation pathway was always estimated to be lower than that of the respiration pathway, i.e. \( {w}_f^{\ast }<{w}_r^{\ast } \). The proteomic cost of the biomass synthesis sector was estimated to be higher in a slow-growing strain that excretes acetate at a lower growth rate, in comparison with the other two fast-growing strains, i.e. \( {b}_{\mathrm{MG}}^{\ast }>{b}_{\mathrm{NCM}/\mathrm{ML}}^{\ast } \). The estimated values of \( {w}_f^{\ast } \) and b∗ both qualitatively meet the expectation from a biological point of view. Furthermore, the relationship between the proteomic efficiencies of fermentation and respiration, i.e. the \( {w}_f^{\ast }-{w}_r^{\ast } \) line, was shown to change between different strains most significantly with the cellular energy demand rather than with the pattern of acetate excretion. This \( {w}_f^{\ast }-{w}_r^{\ast } \) relationship remained relatively stable when the upper bound of the portion of substrate carbon flowing into the PP pathway (uPPP%) was varied in the modelling studies. On the other hand, the increase of uPPP% was shown to lead to a visible increase in the estimated proteomic cost of the biomass synthesis sector b∗, which mathematically results from a reduction in the predicted respiration flux.
Finally, and as a general point for constraint-based models, cellular energy demand appeared to have a major impact on the predicted biomass yield; tuning the default energy demand with strain-specific data was shown to be critical in making simultaneously accurate predictions of biomass yield and overflow metabolism.
Overall, this work demonstrates the potential of combining a detailed metabolic model with a coarse-grained, pathway-level resource allocation constraint in producing quantitatively accurate predictions of the overflow phenomenon in E. coli; similar modelling approaches that feature this type of combination may prove suitable for other applications as well.
Abbreviations
FBA: Flux balance analysis
GAM: Growth-associated maintenance
NADTRHD: NAD transhydrogenase
NGAM: Non-growth-associated maintenance
PAT: Proteome allocation theory
PPP%: Pentose phosphate pathway ratio
THD2: NAD(P) transhydrogenase
TKT2: Transketolase
uPPP%: Upper bound of the pentose phosphate pathway ratio
Eiteman MA, Altman E. Overcoming acetate in Escherichia coli recombinant protein fermentations. Trends Biotechnol. 2006;24:530–6.
Farmer WR, Liao JC. Reduction of aerobic acetate production by Escherichia coli. Appl Environ Microbiol. 1997;63:3205–10.
Holms H. Flux analysis and control of the central metabolic pathways in Escherichia coli. FEMS Microbiol Rev. 1996;19:85–116.
Pan JG, Rhee JS, Lebeault JM. Physiological constraints in increasing biomass concentration of Escherichia coli B in fed-batch culture. Biotechnol Lett. 1987;9:89–94.
Xu B, Jahic M, Enfors SO. Modeling of overflow metabolism in batch and fed-batch cultures of Escherichia coli. Biotechnol Prog. 1999;15:81–90.
Lee SY. High cell-density culture of Escherichia coli. Trends Biotechnol. 1996;14:98–105.
Holmes WH. The central metabolic pathways of Escherichia coli: relationship between flux and control at a branch point, efficiency of conversion to biomass, and excretion of acetate. Curr Top Cell Regul. 1986;28:69–105.
Luli GW, Strohl WR. Comparison of growth, acetate production, and acetate inhibition of Escherichia coli strains in batch and fed-batch fermentations. Appl Environ Microbiol. 1990;56:1004–11.
Zhou K, Qiao K, Edgar S, Stephanopoulos G. Distributing a metabolic pathway among a microbial consortium enhances production of natural products. Nat Biotechnol. 2015;33:377–83. https://doi.org/10.1038/nbt.3095.
Schulz TJ, Thierbach R, Voigt A, Drewes G, Mietzner B, Steinberg P, et al. Induction of oxidative metabolism by mitochondrial frataxin inhibits cancer growth: Otto Warburg revisited. J Biol Chem. 2006;281:977–81.
Shlomi T, Benyamini T, Gottlieb E, Sharan R, Ruppin E. Genome-scale metabolic modeling elucidates the role of proliferative adaptation in causing the Warburg effect. PLoS Comput Biol. 2011;7. https://doi.org/10.1371/journal.pcbi.1002018.
Vazquez A, Liu J, Zhou Y, Oltvai ZN. Catabolic efficiency of aerobic glycolysis: the Warburg effect revisited. BMC Syst Biol. 2010;4. https://doi.org/10.1186/1752-0509-4-58.
Schuster S, Boley D, Möller P, Stark H, Kaleta C. Mathematical models for explaining the Warburg effect: a review focussed on ATP and biomass production. Biochem Soc Trans. 2015;43:1187–94. https://doi.org/10.1042/BST20150153.
Wolfe AJ. The acetate switch. Microbiol Mol Biol Rev. 2005;69:12–50. https://doi.org/10.1128/MMBR.69.1.12-50.2005.
Molenaar D, van Berlo R, de Ridder D, Teusink B. Shifts in growth strategies reflect tradeoffs in cellular economics. Mol Syst Biol. 2009;5. https://doi.org/10.1038/msb.2009.82.
Basan M, Hui S, Okano H, Zhang Z, Shen Y, Williamson JR, et al. Overflow metabolism in Escherichia coli results from efficient proteome allocation. Nature. 2015;528:99–104. https://doi.org/10.1038/nature15765.
Enjalbert B, Millard P, Dinclaux M, Portais JC, Létisse F. Acetate fluxes in Escherichia coli are determined by the thermodynamic control of the Pta-AckA pathway. Sci Rep. 2017;7:42135. https://doi.org/10.1038/srep42135.
Anane E, López CDC, Neubauer P, Cruz Bournazou MN. Modelling overflow metabolism in Escherichia coli by acetate cycling. Biochem Eng J. 2017;125:23–30. https://doi.org/10.1016/j.bej.2017.05.013.
Goel A, Wortel MT, Molenaar D, Teusink B. Metabolic shifts: a fitness perspective for microbial cell factories. Biotechnol Lett. 2012;34:2147–60.
Orth JD, Thiele I, Palsson BØ. What is flux balance analysis? Nat Biotechnol. 2010;28:245–8. https://doi.org/10.1038/nbt.1614.
Beg QK, Vazquez A, Ernst J, de Menezes MA, Bar-Joseph Z, Barabasi AL, et al. Intracellular crowding defines the mode and sequence of substrate uptake by Escherichia coli and constrains its metabolic activity. P Natl Acad Sci USA. 2007;104. https://doi.org/10.1073/pnas.0609845104.
Vazquez A, Beg QK, Demenezes MA, Ernst J, Bar-Joseph Z, Barabasi AL, et al. Impact of the solvent capacity constraint on E. coli metabolism. BMC Syst Biol. 2008;2. https://doi.org/10.1186/1752-0509-2-7.
Goelzer A, Fromion V, Scorletti G. Cell design in bacteria as a convex optimization problem. Automatica. 2011;47:1210–8.
Goelzer A, Fromion V. Bacterial growth rate reflects a bottleneck in resource allocation. Biochim Biophys Acta (BBA)-General Subj. 2011;1810:978–88.
Goelzer A, Fromion V. Resource allocation in living organisms. Biochem Soc Trans. 2017. https://doi.org/10.1042/BST20160436.
O'Brien EJ, Lerman JA, Chang RL, Hyduke DR, Palsson BO. Genome-scale models of metabolism and gene expression extend and refine growth phenotype prediction. Mol Syst Biol. 2014;9:693. https://doi.org/10.1038/msb.2013.52.
Zhuang K, Vemuri GN, Mahadevan R. Economics of membrane occupancy and respiro-fermentation. Mol Syst Biol. 2011;7. https://doi.org/10.1038/msb.2011.34.
Mori M, Hwa T, Martin OC, De Martino A, Marinari E. Constrained allocation flux balance analysis. PLoS Comput Biol. 2016;12(6):e1004913.
Woldringh CL, Binnerts JS, Mans A. Variation in Escherichia coli buoyant density measured in Percoll gradients. J Bacteriol. 1981;148:58–63.
Hui S, Silverman JM, Chen SS, Erickson DW, Basan M, Wang J, et al. Quantitative proteomic analysis reveals a simple strategy of global resource allocation in bacteria. Mol Syst Biol. 2015;11. https://doi.org/10.15252/msb.20145697.
Scott M, Gunderson CW, Mateescu EM, Zhang Z, Hwa T. Interdependence of cell growth and gene expression: origins and consequences. Science. 2010;330:1099–102. https://doi.org/10.1126/science.1192588.
You C, Okano H, Hui S, Zhang Z, Kim M, Gunderson CW, et al. Coordination of bacterial proteome with metabolism by cyclic AMP signalling. Nature. 2013;500:301–6. https://doi.org/10.1038/nature12446.
Vazquez A, Oltvai ZN. Macromolecular crowding explains overflow metabolism in cells. Sci Rep. 2016;6:31007. https://doi.org/10.1038/srep31007.
Orth JD, Palsson BØ, Fleming RMT. Reconstruction and use of microbial metabolic networks: the Core Escherichia coli metabolic model as an educational guide. EcoSal Plus. 2010;4. https://doi.org/10.1128/ecosalplus.10.2.1.
Schellenberger J, Que R, Fleming RMT, Thiele I, Orth JD, Feist AM, et al. Quantitative prediction of cellular metabolism with constraint-based models: the COBRA toolbox v2. 0. Nat Protoc. 2011;6:1290.
Feist AM, Henry CS, Reed JL, Krummenacker M, Joyce AR, Karp PD, et al. A genome-scale metabolic reconstruction for Escherichia coli K-12 MG1655 that accounts for 1260 ORFs and thermodynamic information. Mol Syst Biol. 2007;3. https://doi.org/10.1038/msb4100155.
Fischer E, Sauer U. A novel metabolic cycle catalyzes glucose oxidation and anaplerosis in hungry Escherichia coli. J Biol Chem. 2003;278:46446–51.
Flamholz A, Noor E, Bar-Even A, Liebermeister W, Milo R. Glycolytic strategy as a tradeoff between energy yield and protein cost. Proc Natl Acad Sci. 2013;110:10039–44.
Fuhrer T, Fischer E, Sauer U. Experimental identification and quantification of glucose metabolism in seven bacterial species. J Bacteriol. 2005;187. https://doi.org/10.1128/JB.187.5.1581-1590.2005.
Sauer U, Canonaco F, Heri S, Perrenoud A, Fischer E. The soluble and membrane-bound transhydrogenases UdhA and PntAB have divergent functions in NADPH metabolism of Escherichia coli. J Biol Chem. 2004;279:6613–9.
Nanchen A, Schicker A, Sauer U. Nonlinear dependency of intracellular fluxes on growth rate in miniaturized continuous cultures of Escherichia coli. Appl Environ Microbiol. 2006;72:1164–72.
Folsom JP, Carlson RP. Physiological, biomass elemental composition and proteomic analyses of Escherichia coli ammonium-limited chemostat growth, and comparison with iron-and glucose-limited chemostat growth. Microbiology. 2015;161:1659–70.
Becker SA, Feist AM, Mo ML, Hannum G, Palsson BØ, Herrgard MJ. Quantitative prediction of cellular metabolism with constraint-based models: the COBRA toolbox. Nat Protoc. 2007;2. https://doi.org/10.1038/nprot.2007.99.
Kayser A, Weber J, Hecht V, Rinas U. Metabolic flux analysis of Escherichia coli in glucose-limited continuous culture. I. Growth-rate-dependent metabolic efficiency at steady state. Microbiology. 2005;151:693–706 http://mic.microbiologyresearch.org/content/journal/micro/10.1099/mic.0.27481-0.
van Hoek MJ, Merks RM. Redox balance is key to explaining full vs. partial switching to low-yield metabolism. BMC Syst Biol. 2012;6:22. https://doi.org/10.1186/1752-0509-6-22.
Goel A, Eckhardt TH, Puri P, Jong A, dos Santos F, Giera M, et al. Protein costs do not explain evolution of metabolic strategies and regulation of ribosomal content: does protein investment explain an anaerobic bacterial Crabtree effect? Mol Microbiol. 2015;97:77–92.
Fendt S-M, Buescher JM, Rudroff F, Picotti P, Zamboni N, Sauer U. Tradeoff between enzyme and metabolite efficiency maintains metabolic homeostasis upon perturbations in enzyme capacity. Mol Syst Biol. 2010;6:356.
Hackett SR, Zanotelli VRT, Xu W, Goya J, Park JO, Perlman DH, et al. Systems-level analysis of mechanisms regulating yeast metabolic flux. Science. 2016;354:aaf2786.
We would like to thank the two anonymous reviewers for their constructive suggestions and comments on the original manuscript.
HZ is supported by the China Scholarship Council through a PhD scholarship. The funding body played no role in the design, analysis, data interpretation or manuscript writing of this study.
All model equations, parameter values are included in the main text or in the Supporting Information (Additional files 1 and 2). The MATLAB code for running simulations and the generated datasets in this study are available upon request from the corresponding author.
Hong Zeng & Aidong Yang
HZ performed the modelling work, analysed and interpreted the results, and wrote the manuscript. AY designed and supervised the study, analysed and interpreted the results, and revised the manuscript. Both authors have read and approved the manuscript.
Correspondence to Aidong Yang.
Additional file 1: Supplementary Tables S1-S6. (XLSX 26 kb)
Additional file 2: Supplementary text and Supplementary Figures S1-S17. (DOCX 229 kb)
Zeng, H., Yang, A. Modelling overflow metabolism in Escherichia coli with flux balance analysis incorporating differential proteomic efficiencies of energy pathways. BMC Syst Biol 13, 3 (2019). https://doi.org/10.1186/s12918-018-0677-4
Proteomic efficiency
Overflow metabolism
Acetate production
Biomass yield | CommonCrawl |
In vitro evaluation of nano zinc oxide (nZnO) on mitigation of gaseous emissions
Niloy Chandra Sarker1,
Faithe Keomanivong2,
Md. Borhan1,
Shafiqur Rahman ORCID: orcid.org/0000-0002-9737-58311 &
Kendall Swanson2
Journal of Animal Science and Technology volume 60, Article number: 27 (2018) Cite this article
Enteric methane (CH4) accounts for about 70% of total CH4 emissions from ruminant animals. Researchers are exploring ways to mitigate enteric CH4 emissions from ruminants. Recently, nano zinc oxide (nZnO) has shown potential for reducing CH4 and hydrogen sulfide (H2S) production from liquid manure under anaerobic storage conditions. Four different levels of nZnO and two types of feed were mixed with rumen fluid to investigate the efficacy of nZnO in mitigating gaseous production.
All experiments with four replicates were conducted in batches in 250 mL glass bottles paired with the ANKOM RF wireless gas production monitoring system. Gas production was monitored continuously for 72 h at a constant temperature of 39 ± 1 °C in a water bath. Headspace gas samples were collected using gas-tight syringes from the Tedlar bags connected to the glass bottles and analyzed for greenhouse gases (CH4 and carbon dioxide-CO2) and H2S concentrations. CH4 and CO2 gas concentrations were analyzed using an SRI-8610 gas chromatograph and H2S concentrations were measured using a Jerome 631X meter. At the same time, substrate (i.e. mixed rumen fluid + NP treatment + feed composite) samples were collected from the glass bottles at the beginning and at the end of an experiment for bacterial counts and volatile fatty acid (VFA) analysis.
Compared with the control treatment, the H2S and GHG concentration reductions after 72 h for the tested nZnO levels varied between 4.89 and 53.65%. Additionally, a 0.47 to 22.21% reduction in the microbial population was observed for the applied nZnO treatments. Application of nZnO at a rate of 1000 μg g− 1 exhibited the greatest concentration reductions for all three gases and the largest reduction in the microbial population.
Results suggest that both 500 and 1000 μg g− 1 nZnO application levels have the potential to reduce GHG and H2S concentrations.
The agricultural sector is recognized as one of the greatest sources of methane (CH4) and other gaseous emissions, contributing approximately 250 million metric tons CO2 eq. of CH4 emissions per year [1, 2]. Most of the CH4 emissions from the agricultural sector are from the livestock industry and manure management. Almost 70% of the agricultural sector's CH4 emissions are from enteric fermentation [3]. Enteric fermentation includes fermentation in the rumen and hindgut paired with digestive hydrogen (H2) metabolism by microbial catalysts [1]. During enteric fermentation, CH4 and carbon dioxide (CO2) are the two main greenhouse gases (GHGs) emitted and contribute to global warming [1]. Hydrogen sulfide (H2S) is another pollutant gas generated during enteric fermentation, although its amount is not significant compared with CH4 and CO2. Hydrogen sulfide might be a potential health hazard to livestock and workers depending on the concentration level [4]. Hence, the reduction of these gas emissions without altering animal productivity is a challenge for a healthy environment and sustainable livestock industries.
Fermentation of carbohydrates in the reticulorumen supplies hydrogen during volatile fatty acid (VFA) production, which eventually leads to CH4 production [5,6,7,8,9]. Additionally, fermentation and the neutralization of hydrogen ions (H+) and bicarbonate ions (HCO3−) entering the rumen across the ruminal wall during VFA absorption contribute to CO2 production in the rumen [10, 11]. Similarly, sulfur-containing amino acids and sulfates are the main sources of H2S within the rumen; H2S generation depends on the microbial degradation of amino acids and sulfates [11,12,13].
Since all of these gaseous emissions pose potential environmental and safety concerns, scientists are striving to mitigate the production of these gases. Management of feeding strategy, application of biotechnology, and the introduction of additives are a few of the most common approaches that researchers are working on for abating enteric gaseous emissions [14]. Similarly, changes in forage species, good forage processing, reduction of forage maturity, and increased feeding frequency are a few noteworthy gas mitigation strategies [14,15,16,17,18,19,20,21]. However, all of these approaches achieve only a small reduction in gaseous emissions, and in most cases the mitigation strategy focused on the reduction of CH4 only. It is therefore important to develop a new approach that can reduce multiple gaseous emissions without compromising animal health and productivity.
In recent years, nanotechnology has received attention for improving livestock production [22]. In the U.S., only 26 of 160 agri-food nanotechnology research and development projects were relevant to livestock facilities [22]. Animal health, veterinary medicine, and other animal production facilities are a few of the livestock-related sectors in which nanoparticles (NPs) have shown promise [23,24,25]. For example, silver and zinc NPs have been added to animal feed to control microbial proliferation and promote animal growth, respectively. Similarly, nano zinc oxide (nZnO) is used to enhance growth and feed efficiency in piglets and poultry [26]. However, application of nanotechnology in mitigating gaseous emissions from livestock facilities is still limited. Swain et al. [26] reported that nZnO changes rumen fermentation kinetics in ruminants and can alter volatile fatty acid production; therefore, it may affect enteric CH4 production. Similarly, the application level of NPs may also alter the microbial population, and thus other gaseous emissions. Among the few studies performed on GHG mitigation, nZnO was reported to have an inhibitory action towards CH4, CO2 and H2S emissions from anaerobic storage of manure [27, 28]. Therefore, the objective of this study was to evaluate the efficacy of four different application rates (100, 200, 500, and 1000 μg g−1 of feed) of nZnO in mitigating CO2, CH4, and H2S emissions from rumen fluid under anaerobic storage conditions. Other than the application rate of 1000 μg g−1, the nZnO application rates were within the general dietary guideline of the maximum tolerable level of Zn mineral concentration provided by the National Academies of Sciences [29]. The specific objective was to characterize the changes in the rumen fluid properties and identify the gaseous reduction mechanisms, such as reduction of the bacterial population.
Ruminal fluid collection, processing and experimental setup
Ruminal fluid was collected from two ruminally-fistulated mature steers, predominately of Angus breeding, on a limit-fed grass hay-based diet fed to maintain body weight. Two hours after morning feeding, approximately one liter of ruminal fluid was collected from each steer. To ensure uniform representation of the liquid and fiber phases, random grab samples were collected from both the ventral and dorsal ruminal sacs. Prior to mixing with McDougall's buffer [30], ruminal fluid from each steer was combined and strained through four layers of cheesecloth to remove the large particulate matter. Five treatments consisting of a control (no nZnO) and four levels of nZnO (100, 200, 500, and 1000 μg g−1 of feed), with two different feeds (alfalfa and maize silage; Table 1), were used. Nutrient compositions of the two base diets are shown in Table 1. Levels of nZnO were selected based on the maximum allowable zinc (Zn) concentration (30 to 500 μg g−1) in feed recommended by the National Academies of Sciences [29]. The 1000 μg g−1 nZnO level was added to investigate the effect of a high nZnO application level on ruminal gaseous emission. The nZnO application levels were weighed on a Sartorius CP2P microbalance (Sartorius Corporation, NY, USA) with an accuracy of 1 μg using small aluminum pans (DSC Consumables, Inc., AU, USA). The nZnO (US Research Nanomaterials, Inc., Texas, USA; particle size = 35–45 nm and 99.5% purity) was mixed with the two feeds (i.e., alfalfa and maize silage) separately. In each ANKOMRF gas bottle, 1.5 g of ground alfalfa or maize silage (3 to 5 mm size) feed was added. Thereafter, 37.5 mL of the combined rumen fluid and 150 mL of McDougall's buffer were added to each bottle, and a sub-sample of the mixed ruminal fluid was stored in the freezer for characterization. Treatment bottles were purged with CO2 to create an anaerobic environment and sealed with the ANKOMRF pressure monitor cap. Thus, in total, twenty (5 treatments × 4 replications) bottles were used for each feed type.
Table 1 Composition of the feeds (dry matter basis)
Ruminal pH and redox determination
The pH and redox potential of the mixed ruminal fluid were determined before and after the ruminal fluid was treated with nZnO, using a HANNA HI 4522 dual channel benchtop meter (VWR, TX, USA). Both probes were calibrated following the manufacturer's standard protocols. The reading of each probe was also confirmed with the respective standard solutions before each measurement to ensure accurate readings.
Gas production measurement and monitoring system
All experiments were conducted using 250 mL ANKOMRF gas glass bottles and under the same conditions. After proper flushing and sealing of the bottles, they were placed in a water bath (SWBR17 shaking water bath, Atkinson NH, USA) that oscillated at 125 rpm and was heated to 39 ± 1 °C. Once they were placed in the water bath, a wireless gas production measurement system (ANKOM Technology Corp., Macedon, NY, USA) was used for monitoring and measuring gas production data. Data obtained from this system were converted from pressure units (kPa) to volume units (mL) using the ideal gas law as follows:
$$n = \frac{pV}{RT} \qquad \text{(Eqn 1)}$$
$$\text{Gas produced (mL)} = n \times 22.4 \times 1000 \qquad \text{(Eqn 2)}$$
where n = gas produced in moles (mol), p = pressure in kilopascals (kPa), V = head-space volume in the glass bottle in liters (L), T = temperature in kelvin (K), and R = gas constant (8.314472 L·kPa·K−1·mol−1).
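For illustration, the conversion in Eqns 1 and 2 can be scripted as follows; the head-space volume and pressure value used here are assumed for demonstration and are not measurements from the study.

```python
# Sketch of the pressure-to-volume conversion (Eqns 1 and 2).
R = 8.314472            # gas constant, L·kPa·K^-1·mol^-1
T = 39.0 + 273.15       # incubation temperature, K
V_HEADSPACE = 0.0625    # assumed head-space volume, L (250 mL bottle - ~187.5 mL liquid)

def gas_volume_ml(pressure_kpa, v_headspace=V_HEADSPACE, temp_k=T):
    """Cumulative gas produced (mL) for a recorded head-space pressure rise."""
    n = pressure_kpa * v_headspace / (R * temp_k)   # moles of gas (Eqn 1)
    return n * 22.4 * 1000                          # volume at standard molar volume (Eqn 2)

print(round(gas_volume_ml(35.0), 1), "mL")          # e.g. a 35 kPa pressure rise
```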
Throughout the experimental period, each bottle was connected to a Tedlar bag, and once the gas pressure inside a bottle reached a set limit in the RF pressure sensor module and was recorded by the ANKOMRF system, the headspace gas was released into the connected Tedlar bag. A typical in vitro study lasts for 24 h; however, in the present study it was continued for 72 h to examine the effects of nZnO on long-term in vitro fermentation. After the 72 h experimental period, gas samples from the Tedlar bags were drawn using a gas-tight syringe (5 mL, Luer-LokTM Tip Syringe, Franklin Lakes, NJ, USA) and analyzed for GHG (CH4 and CO2) and H2S concentrations. A Jerome meter (Jerome 631X, Arizona Instrument LLC, Arizona, USA) was used to measure H2S concentration, and a gas chromatograph (GC, 8610C, SRI Instrument, California, USA) equipped with a flame ionization detector (FID) and an electron capture detector (ECD) was used to measure CH4 and CO2 concentrations. Based on previous trials, the collected gas was diluted 100-fold with pure nitrogen to keep the concentrations in the measurable range of the analytical instruments, and two measurements per bottle were taken for each of the CH4, CO2, and H2S concentrations. Nitrogen at 20 psi with a flow rate of 250 mL min−1 was supplied to the GC as a carrier gas. Additionally, a built-in air compressor and an external hydrogen generator were used to supply air and hydrogen to the GC. Temperatures of 300 and 350 °C were maintained on the FID and ECD detectors, respectively, before insertion of any sample gas into the GC sample loop [31]. Calibration gases were used to check the proper functioning of the instruments, and blank samples were used to check for any contamination within the instruments from previous measurements [32].
Analysis of microbial populations
Rumen fluid samples (~5 mL) were collected at the beginning (just before the experiment) and at the end of the experiment (after 72 h) and analyzed for coliforms, i.e. potential pathogens (particularly Escherichia coli), as recommended by the American Public Health Association (APHA) and the Environmental Protection Agency (EPA). Microbial population (coliform) density was analyzed by counting total coliform bacteria in terms of colony forming units (CFUs) following the plate count method [33]. All reagents, labware, and Petri dishes used for microbial analysis were handled carefully, and the whole experimental preparation was conducted in a sterile environment. One milliliter of rumen fluid was collected from each treatment replication and serially diluted (dilution factors of 10, 100, 1000, 10,000 and 100,000) to find the optimum dilution for clear visibility of the CFUs. Later on, all treatments were replicated three times at the optimum dilution. A 2 mL M-Endo broth ampule (P/N: 23735–50, HACH LANCH GmbH, Willstatterstrasse 11, Dusseldorf, Germany) was used as the growth medium to culture the bacteria in an incubator. The growth medium was poured evenly over a gridded sterile membrane filter attached to an absorbent pad (47 mm diameter, 0.45 μm pore size, WCN type, Whatman Limited, Maidstone, England, UK) that was placed in a sterile Petri dish (anaerobic, sterile Petri dishes, 60 mm diameter and 15 mm height, VWR, Radnor, PA, USA). Subsequently, 100 μL of the diluted rumen fluid samples were added to the absorbent pad and smeared evenly over the pad using a small sterile glass rod. The Petri dishes with the growth medium and bacterial culture were then incubated for 24 h at 35 ± 0.5 °C in an incubator (Lab Companion IB-01E Incubator, San Diego, CA, USA). After 24 h of incubation, CFUs were counted using a manual dark-field colony counter with 1.5X magnification (Reichert, Inc., Depew, NY, USA).
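The study reports raw plate counts; if a coliform density per millilitre of undiluted rumen fluid were needed, it could be back-calculated from the colony count, dilution factor and plated volume, as in the short sketch below (the count and dilution shown are hypothetical).

```python
# Back-calculation of coliform density from membrane-filtration plate counts.
def cfu_per_ml(colony_count, dilution_factor, plated_volume_ml=0.1):
    """CFU per mL of the undiluted sample (0.1 mL = 100 uL plated, as in the protocol)."""
    return colony_count * dilution_factor / plated_volume_ml

# e.g. 46 colonies counted on the 1000-fold dilution plate
print(f"{cfu_per_ml(46, 1_000):.2e} CFU/mL")
```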
Volatile fatty acids (VFAs) analysis
At the end of the experimental period, Whirl-Pak bags (Nasco, Fort Atkinson, WI and Modesto, CA, USA; 532 mL) were used to collect and store the rumen fluid subsamples at −20 °C until further analysis. Thereafter, samples were composited and mixed using a vortex (Cat: 10153–842, VWR® digital vortex mixer, Radnor, PA, USA) and centrifuged (Clinical 100 laboratory centrifuge, VWR, Radnor, PA, USA) at 2000×g for 20 min. They were filtered through a 0.45 μm pore size filter to separate out the supernatant and analyzed for VFAs using an Agilent 6890 N gas chromatograph (Agilent Technologies, Inc., Wilmington, DE, USA) equipped with an FID and a fused silica column (Supelco brand, NUKOL 15 m × 0.53 mm × 0.5 μm, Sigma-Aldrich Co., MO, USA), and a 7683 series auto-injector, following a widely used method [34].
The data were analyzed as a 2 × 2 factorial experiment using PROC GLM (SAS Inst. Inc., Cary, NC), which calculates the statistics for general linear models. The two feed types and five levels of nZnO were used as fixed effects. Means were declared statistically significant at P ≤ 0.05 using Duncan's multiple range test.
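A rough open-source analogue of this analysis is sketched below as a two-way ANOVA in Python; it is not the SAS PROC GLM/Duncan workflow used in the study, and the file and column names are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical layout: one row per bottle with feed type, nZnO level and response.
df = pd.read_csv("rumen_invitro_results.csv")   # columns: feed, zn_level, ch4_conc

model = ols("ch4_conc ~ C(feed) * C(zn_level)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))          # main effects and feed x nZnO interaction
```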
Effect of nZnO application levels on ruminal pH and redox
The pH of the rumen fluid incubated for 72 h with alfalfa ranged from 7.20 to 7.25, whereas the pH of the maize silage-based rumen fluid ranged from 6.92 to 6.96 (Table 2). The alfalfa-based rumen fluid showed a significantly higher pH than that of maize silage (P < 0.0001). No interaction was found between feed types and nZnO levels (P = 0.401). Additionally, none of the zinc levels (100 to 1000 μg g−1) produced a significant difference in pH values (P = 0.644). The redox potential among the treated rumen fluid and two feed combinations ranged from −296 to −307 mV (Table 2), which is the preferred range for producing CH4 and CO2 anaerobically [35]. The rumen fluid redox potential did not differ significantly between the two feed types (P = 0.748). Additionally, similar to the pH, no interaction between the feed types and nZnO levels was found for the rumen fluid redox potential (P = 0.217), and no significant difference was found among the nZnO levels (P = 0.947).
Table 2 Effect of nZnO levels on ruminal pH and redox (after 72 h of incubation)
Effect of nZnO application levels on ruminal VFA production
Among the four nZnO levels and the control treatment, the total VFA (TVFA) concentration ranged from 136.52 to 194.16 mM for the alfalfa-based rumen fluid, while it ranged from 161.36 to 192.8 mM for the maize silage-based rumen fluid. Compared with the other treatments (nZnO levels), after 72 h of the experimental period the control treatments exhibited the highest TVFA (Table 3). For acetic acid, no significant difference was found between the feed types (P = 0.832), and no significant interaction between feed types and nZnO levels (P = 0.172) was found. Moreover, no significant interaction between feed type and nZnO concentration (P = 0.688) was found for propionic acid. However, rumen fluid with alfalfa had a significantly lower propionic acid concentration than rumen fluid with maize silage (P < 0.001). Propionic acid was also found to be affected by nZnO levels (Table 3), although no definite trend was observed. The propionic acid to acetic acid (P/A) ratio was higher for the maize silage-based fermentation than for the alfalfa-based fermentation. The P/A ratio of the alfalfa-based rumen fermentation varied from 0.29 to 0.32, whereas it varied from 0.38 to 0.55 for the maize silage-based fermentation.
Table 3 Effect of nZnO levels on the rumen fluid VFA (n = 4 observations/treatment)
Effect of nZnO application levels on ruminal gaseous emission and CH4, CO2, and H2S concentrations
Table 4 presents the amount of total gas produced and the gas concentrations in the ANKOMRF bottles over 72 h of incubation with the four nZnO application levels and two feed types. Total gas produced from the maize silage fermentation was two times higher than that from the alfalfa fermentation (P < .0001). However, no significant difference in total gas production among the applied nZnO levels was found (P = 0.875). Moreover, no significant interaction between feed types and zinc levels was evident (P = 0.542).
Table 4 Effect of nZnO levels on cumulative gas volume and gas concentrations (n = 4 observations/treatment)
Measured total gas volume from the maize silage-based rumen fluid was significantly higher than that of the alfalfa-based rumen fluid (P < .0001), although the maize silage-based rumen fluid produced lower CH4, CO2, and H2S gas concentrations than the alfalfa-based rumen fluid. However, all of the nZnO levels, irrespective of feed type, showed a similar reduction trend for both CH4 and CO2 concentrations. Regardless of feed and nZnO level, CO2 concentrations were around five times higher than CH4 concentrations. In contrast, although a significant interaction between feed types and zinc levels (P < .0001) was found for H2S concentration (Table 4), the reduction trends were similar to those of CH4 and CO2. The H2S concentration from the maize silage was ~60% less than that of alfalfa. Regardless of the feed used, and compared to the control treatment, higher nZnO application levels produced larger reductions in CH4, CO2, and H2S concentrations. Compared to the control, the pooled averages of the gas concentrations showed that the applied levels of nZnO reduced CH4, CO2, and H2S concentrations by 9.14 to 46.85%, 4.89 to 42.79%, and 9.33 to 53.65%, respectively. Among the treatments, the 1000 μg g−1 nZnO application level produced the highest reduction in CH4, CO2, and H2S concentrations (P < .0001). Additionally, both the 500 and 1000 μg g−1 nZnO levels reduced CO2 and H2S concentrations significantly (P < .0001) compared to the other treatments (Table 4). However, no significant interaction between feed and zinc level was found for CH4 (P = 0.479) or CO2 (P = 0.948).
Effect of nZnO application levels on ruminal microbial population
Plate counts were done in terms of CFUs from pre- and post-treatment rumen fluid samples to determine the effects of the applied nZnO on coliforms (Table 5). The average initial CFU count was 88.4 with the alfalfa-based rumen fluid and 85.2 with the maize silage-based rumen fluid. Initial CFUs were similar regardless of feed type (P = 0.231) or nZnO inclusion (P = 0.998). In contrast, final CFU numbers exhibited a different trend than the initial counts (Table 5). At the end of the 72 h experimental period, CFU numbers increased by ~98% for all of the treatments including the control, ending with averages of 4630 and 5155 counts for the alfalfa and maize silage feeds, respectively. Irrespective of the nZnO application level, final CFU counts were higher with maize silage than with alfalfa. Lower application levels of nZnO exhibited very small CFU reduction efficiency compared with the higher levels, and the greatest reduction in microbial population was observed at the highest nZnO level.
Table 5 Effect of nZnO levels on ruminal microbial populations (n = 4 observations/treatment)
The lower pH of the rumen fluid incubated with the maize silage-based treatments might affect or inhibit the acidogenic bacteria that are responsible for anaerobic digestion. In contrast, the higher pH in the alfalfa-based treatments might increase the rate of fermentation and contribute to the growth of spoilage microbes [36,37,38]. Moreover, the higher pH in the post-treatment alfalfa-based rumen fluid would likely produce a higher amount of soluble protein, carbohydrate, and volatile fatty acids [39]. Hence, higher concentrations of all three gases (CH4, CO2, and H2S) were likely from the alfalfa-based treatments compared with their counterpart. The consistent redox potential observed among the treatments is preferred for anaerobic fermentation [40,41,42,43]. Additionally, the redox potential among the treated rumen fluid and two feed combinations was in the preferred range for producing CH4, CO2, and H2S anaerobically [44].
Volatile fatty acids are considered one of the most important parameters for ensuring anaerobic fermentation. The higher TVFA amount in maize silage-based rumen fluid compared with the alfalfa forage type might be an indication of the higher amount of digestible carbohydrate in the maize silage [45]. Subsequently, a higher amount of cumulative gas production from the maize silage-based fermentation was likely. The resulting pooled average P/A ratio in the present study was 0.304 and 0.459 for the alfalfa and maize silage, respectively. The P/A ratio from the maize silage was 26% higher than the previously reported value, while the P/A ratio for the alfalfa was identical to the reported value (Ghimire, 2015). A higher P/A ratio might be an indication of imbalanced anaerobic fermentation in the maize silage-based rumen fluid fermentation [42]. Application of nZnO was hypothesized to affect hydrolysis, acetogenesis, fermentation, methanogenesis or a combination of these processes. In some cases, the bactericidal action of the higher applied nZnO levels might kill a larger proportion of methanogens, and hence a higher amount of unconverted TVFA was likely. Furthermore, increased energy utilization followed by ruminal microbial protein synthesis by the microbes in the early stages of fermentation might have increased the TVFA at the higher applied nZnO levels, as indicated by others [46].
Higher gas production from the maize silage fermentation might be due to a probable higher carbohydrate content and subsequent higher fermentability of maize silage compared to alfalfa. None of the applied nZnO application levels reduced total gas volume significantly; even 1000 μg g−1 of nZnO was not enough to reduce cumulative gas production by a significant amount. Therefore, nZnO at this application rate does not appear to decrease the digestibility of feed by the animal, and therefore should not decrease productivity or growth. However, further studies are needed to understand the process and to verify that productivity is sustained when nZnO is included in the diet.
It is noteworthy that CH4 concentrations with alfalfa were higher than those of maize silage (Table 4), although higher cumulative gas production was observed in the maize silage-based fermentation (Table 4). This was likely due to the appropriate P/A ratio and subsequent balanced fermentation with the alfalfa-based rumen fluid, which might prompt higher CO2 and H2S concentrations as well [42]. Generally, a group of archaea belonging to the phylum Euryarchaeota, collectively known as methanogens, are responsible for CH4 production within the animal rumen and hindgut [47]. Reduction of the CH4 concentration from rumen fluid at the highest application level of nZnO was likely due to the impact of the excessive nZnO application rate (almost twice the allowable limit recommended by the NAS for feed) specifically on methanogens [26]. As mentioned previously, the highest application rate (1000 μg g−1) of nZnO did not affect total gas production, but likely reduced the enteric CH4 concentration through inhibitory action on the CH4-producing methanogenic microbial community. Additionally, adsorption of the produced methane on the NP surfaces might also contribute to the reduction in CH4 when nZnO is added to the rumen fluid. This situation warrants further study to investigate the effect of higher levels of zinc as a feed additive on animal growth and productivity.
The CO2 concentration was five times higher than that of CH4, which might be an indication of biocidal action of nZnO on methanogenic archaea. During the anaerobic digestion process, methanogenic archaea utilize CO2 and H2 to produce CH4. Nano zinc oxide might leave only a small fraction of the methanogenic archaea active, and thus a higher amount of unconverted CO2 was likely. Furthermore, CO2 emission from the rumen is directly related to the degradation of the organic constituents present in the feed; hence, the decreasing trend in CO2 concentration is likely to indicate a lower degradation rate of the organic matter in the rumen. Application of NPs might have an adverse impact on the microbial community, and as a consequence lower degradation of organic compounds might occur. However, additional microbial studies are needed to understand the in-depth process.
The higher H2S concentration from the alfalfa-based feed compared with the maize silage was likely an indication of higher microbial activity. In the absence of oxygen (O2), sulfate-reducing bacteria utilize sulfate to oxidize organic compounds present in the feed, producing H2S as a byproduct; hence, the decreasing trend in H2S concentration might be due to reduced activity of the sulfate-reducing bacteria [48]. However, the concentration reduction mechanism needs to be explored to investigate the adverse effect of the nZnO on the microbial community.
Initial CFUs were measured right after the application of the nZnO in the system; therefore, the nZnO levels had little or no effect on the initial CFUs. In this circumstance, irrespective of the nZnO application levels, the initial counts most likely represented the microbial population originally present in the rumen fluid. In contrast, the addition of fresh feed most likely contributed to the increase in final CFUs. Compared with the control (final), the lower CFU numbers in the nZnO-treated samples were most likely due to the biocidal effect of nZnO. The insignificant CFU reduction in the treatments with lower nZnO application levels might indicate a lower amount of available biocide. In contrast, a higher reduction in CFUs was observed with higher application levels of nZnO, and the reduction was significant only at the 1000 μg g−1 inclusion level. Furthermore, the higher CFU counts in the maize silage-based treatments were consistent with the higher gas production from those treatments, and vice versa. Additional studies at different application levels and with different feed types are needed to understand the CFU reduction chemistry of nZnO in depth.
Within the same feed type, application of nZnO had no impact on rumen fluid pH or redox potential. Compared with the control treatment, the higher nZnO application levels (500 and 1000 μg g−1) reduced CH4, CO2 and H2S concentrations significantly (by 21.85 to 53.65%). Similarly, the 1000 μg g−1 inclusion level significantly reduced the microbial population for both feeds (by 22.21%) compared to the control treatment. Based on this study, the inclusion of 500 or 1000 μg g−1 nZnO may reduce enteric fermentation, resulting in lower enteric GHG emission from grass-fed beef. However, additional microbial studies are necessary to determine the mode of action. Additionally, further work is needed to assess the effect of nZnO inclusion on animal performance when cattle are fed ingredients commonly used in beef feedlot diets.
Moss AR, Jouany JP, Newbold J, editors. Methane production by ruminants: its contribution to global warming. Ann Zootech EDP Sciences. 2000;231–253. doi: https://doi.org/10.1051/animres:2000119.
EPA 430-P-18-001. Draft inventory of us greenhouse gas emissions and sinks: 1990-2016. 2009. https://www.epa.gov/sites/production/files/2018-01/documents/2018_complete_report.pdf. Accessed: February 8, 2018.
EIA. Emissions of greenhouse gases in the U. S. 2009. https://www.eia.gov/environment/emissions/ghg_report/notes_sources.php. Report number: doe/eia-0573(2009). Accessed: January 11, 2018.
Hughes MN, Centelles MN, Moore KP. Making and working with hydrogen sulfide: the chemistry and generation of hydrogen sulfide in vitro and its measurement in vivo: a review. Free Radic Biol Med. 2009;47(10):1346–53. https://doi.org/10.1016/j.freeradbiomed.2009.09.018.
Johnson KA, Johnson DE. Methane emissions from cattle. J Anim Sci. 1995;73(8):2483–92.
Hogan KB. Anthropogenic methane emissions in the United States, estimates for 1990. 1993. https://nepis.epa.gov/. Accessed 2 Nov 2018.
Wolin M, Miller T. Microbe interactions in the rumen microbial ecosystem. The rumen ecosystem (ed PN Hobson). 1988;343–59.
Bauchop T, Mountfort DO. Cellulose fermentation by a rumen anaerobic fungus in both the absence and the presence of rumen methanogens. Appl Environ Microbiol. 1981;42(6):1103–10.
Ushida K, Jouany J. Methane production associated with rumen-ciliated protozoa and its effect on protozoan activity. Lett Appl Microbiol. 1996;23(2):129–32. https://doi.org/10.1111/j.1472-765X.1996.tb00047.x.
Hristov A, Oh J, Lee C, Meinen R, Montes F, Ott T, et al. Mitigation of greenhouse gas emissions in livestock production: A review of technical options for non-CO2 emissions. FAO Animal Production and Health Paper No. 2013;177:1–206. doi: https://doi.org/10.1017/S1751731113000876.
Dehority BA. Rumen microbiology. Nottingham: Nottingham University Press; 2003.
Drewnoski M, Beitz DC, Loy DD, Hansen SL, Ensley SM. Factors affecting ruminal hydrogen sulfide concentration of cattle. Anim Ind Rep. 2011;657(1):11.
Morine S, Drewnoski M, Hansen S. Increasing dietary neutral detergent fiber concentration decreases ruminal hydrogen sulfide concentrations in steers fed high-sulfur diets based on ethanol coproducts. J Anim Sci. 2014;92(7):3035–41. https://doi.org/10.2527/jas.2013-7339.
Martin C, Morgavi D, Doreau M. Methane mitigation in ruminants: from microbe to the farm scale. Animal. 2010;4(03):351–65. https://doi.org/10.1017/S1751731109990620.
Boadi D, Benchaar C, Chiquette J, Massé D. Mitigation strategies to reduce enteric methane emissions from dairy cows: update review. Can J Anim Sci. 2004;84(3):319–35. https://doi.org/10.4141/A03-109.
Benchaar C, Pomar C, Chiquette J. Evaluation of dietary strategies to reduce methane production in ruminants: a modelling approach. Can J Anim Sci. 2001;81(4):563–74. https://doi.org/10.4141/A00-119.
Robertson L, Waghorn G, editors. Dairy industry perspectives on methane emissions and production from cattle fed pasture or total mixed rations in New Zealand. Proceedings of the New Zealand Society of Animal Production; 2002.
Dong Y, Bae H, McAllister T, Mathison G, Cheng K. Lipid-induced depression of methane production and digestibility in the artificial rumen system (rusitec). Can J Anim Sci. 1997;77(2):269–78. https://doi.org/10.4141/A96-078.
Dohme F, Machmüller A, Wasserfallen A, Kreuzer M. Comparative efficiency of various fats rich in medium-chain fatty acids to suppress ruminal methanogenesis as measured with rusitec. Can J Anim Sci. 2000;80(3):473–84. https://doi.org/10.4141/A99-113.
Machmüller A, Kreuzer M. Methane suppression by coconut oil and associated effects on nutrient and energy balance in sheep. Can J Anim Sci. 1999;79(1):65–72. https://doi.org/10.4141/A98-079.
Wright A, Kennedy P, O'Neill C, Toovey A, Popovski S, Rea S, et al. Reducing methane emissions in sheep by immunization against rumen methanogens. Vaccine. 2004;22(29):3976–85. https://doi.org/10.1016/j.vaccine.2004.03.053.
Kuzma J, VerHage P. Nanotechnology in agriculture and food production: anticipated applications: project on emerging nanotechnologies; 2006.
Bollo E. Nanotechnologies applied to veterinary diagnostics. Vet Res Commun. 2007;31:145–7. https://doi.org/10.1007/s11259-007-0080-x.
Scott N. Nanotechnology and animal health. Revue Scientifique Et Technique-Office International Des Epizooties. 2005;24(1):425.
Narducci D. An introduction to nanotechnologies: What's in it for us? Vet Res Commun. 2007;31:131–7.
Swain PS, Rao SB, Rajendran D, Dominic G, Selvaraju S. Nano zinc, an alternative to conventional zinc as animal feed supplement: A review. Anim Nutri. 2016;2(3):134–41. https://doi.org/10.1016/j.aninu.2016.06.003.
Mu H, Chen Y, Xiao N. Effects of metal oxide nanoparticles (TiO 2, Al 2 O 3, SiO 2 and ZnO) on waste activated sludge anaerobic digestion. Bioresour Technol. 2011;102(22):10305–11. https://doi.org/10.1016/j.biortech.2011.08.100.
Luna-delRisco M, Orupõld K, Dubourguier H-C. Particle-size effect of CuO and ZnO on biogas and methane production during anaerobic digestion. J Hazard Mater. 2011;189(1):603–8. https://doi.org/10.1016/j.jhazmat.2011.02.085.
National Academies of Sciences, Engineering, and Medicine. Nutrient requirements of beef cattle. Washington DC: National Academies Press; 2016.
McDougall E. Studies on ruminant saliva. The composition and output of sheep's saliva. Biochem J. 1948;43(1):99.
Borhan MS, Capareda SC, Mukhtar S, Faulkner WB, McGee R, Parnell CB. Greenhouse gas emissions from ground level area sources in dairy and cattle feedyard operations. Atmosphere. 2011;2(3):303–29. https://doi.org/10.3390/atmos2030303.
Rahman S, Lin D, Zhu J. Greenhouse gas (GHG) emissions from mechanically ventilated deep pit swine gestation operation. J Civil Environ Eng. 2012;2:104. https://doi.org/10.4172/2165-784X.1000104.
Sarker NC, Rahman S, Borhan MS, Rajasekaran P, Santra S, Ozcan A. Nanoparticles in mitigating gaseous emissions from liquid dairy manure stored under anaerobic condition. J Environ Sci. 2018 (in press). doi: https://doi.org/10.1016/j.jes.2018.03.014.
Goetsch A, Galyean M. Influence of feeding frequency on passage of fluid and particulate markers in steers fed a concentrate diet. Can J Anim Sci. 1983;63(3):727–30. https://doi.org/10.4141/cjas83-084.
Sigg L. Redox potential measurements in natural waters: significance, concepts and problems. Redox: Springer; 2000. p. 1–12.
Nutrition L. A. Target pH levels in silage. Dairy Herd Management 2016 http://www.dairyherd.com/quality-silage/target-ph-levels-silage. Accessed 12 Jan 2018.
Bhandari S, Ominski K, Wittenberg K, Plaizier J. Effects of chop length of alfalfa and corn silage on milk production and rumen fermentation of dairy cows. J Dairy Sci. 2007;90(5):2355–66. https://doi.org/10.3168/jds.2006-609.
Grant R, Mertens D. Influence of buffer pH and raw corn starch addition on in vitro fiber digestion kinetics. J Dairy Sci. 1992;75(10):2762–8. https://doi.org/10.3168/jds.S0022-0302(92)78039-4.
Wu H, Yang D, Zhou Q, Song Z. The effect of pH on anaerobic fermentation of primary sludge at room temperature. J Hazard Mater. 2009;172(1):196–201. https://doi.org/10.1016/j.jhazmat.2009.06.146.
Shete S, Tomar S. Ruminating Over Methane Emissions. NISCAIR-CSIR. 2010;31–32.
Colmenarejo M, Sánchez E, Bustos A, Garcıa G, Borja R. A pilot-scale study of total volatile fatty acids production by anaerobic fermentation of sewage in fixed-bed and suspended biomass reactors. Proc Biochem. 2004;39(10):1257–67. https://doi.org/10.1016/S0032-9592(03)00253-X.
Lee SJ. Relationship between oxidation reduction potential (ORP) and volatile fatty acid (VFA) production in the acid-phase anaerobic digestion process. 2008. doi: http://hdl.handle.net/10092/1262.
Blanc FC, Molof AH. Electrode potential monitoring and electrolytic control in anaerobic digestion. J Water Pollut Control Fed. 1973;45(4):655–67.
Environmental Y. ORP Management in wastewater as an indicator of process efficiency. YSI, Yellow Springs, OH. 2008. https://www.ysi.com/File%20Library/Documents/Application%20Notes/A567-ORP-Management-in-Wastewater-as-an-Indicator-of-Process-Efficiency.pdf. Accessed 2 Nov 2018.
Moran J. Tropical dairy farming: feeding management for small holder dairy farmers in the humid tropics: Csiro publishing; 2005.
Zhisheng C. Effect of nano-zinc oxide supplementation on rumen fermentation in vitro. Chinese J Anim Nutr. 2011;8:023. https://doi.org/10.14202/vetworld.2015.888-891.
Hook SE, Wright A-DG, McBride BW. Methanogens: methane producers of the rumen and mitigation strategies. Archaea. 2010;2010. https://doi.org/10.1155/2010/945785.
Pouliquen F, Blanc C, Arretz E, Labat I, Tournier-Lasserve J, Ladousse A, et al. Ullmann's encyclopedia of industrial chemistry. 1985.
Thanks to Debra Baer, Technical Communication Specialist, Agricultural and Biosystems Engineering Department, NDSU, USA, for reviewing the manuscript.
This study was conducted using the discretionary funding of the corresponding author. No specific funding was involved.
The data generated or analyzed during the current study are available upon a reasonable request to the corresponding author.
Agricultural and Biosystems Engineering Department, North Dakota State University, Fargo, ND, 58108, USA
Niloy Chandra Sarker, Md. Borhan & Shafiqur Rahman
Animal Sciences Department, North Dakota State University, Fargo, ND, 58108, USA
Faithe Keomanivong & Kendall Swanson
Shafiqur Rahman was the PI for the project and designed the experiment. Niloy Chandra Sarker performed the experiment, drafted and wrote this manuscript, and carried out the statistical analysis. Faithe Keomanivong, Md. Borhan, Shafiqur Rahman, and Kendall Swanson helped to set up the experiment and with data collection. All the authors read and approved the final manuscript.
Correspondence to Shafiqur Rahman.
The authors have an IACUC approval specifically for doing the in vitro digestion studies at North Dakota State University (NDSU) and the protocol approval number is A15038.
All authors agreed to submit the manuscript to this journal.
The authors declare that they have no competing interest.
Sarker, N.C., Keomanivong, F., Borhan, M. et al. In vitro evaluation of nano zinc oxide (nZnO) on mitigation of gaseous emissions. J Anim Sci Technol 60, 27 (2018) doi:10.1186/s40781-018-0185-5
Rumen
Nanoparticle
2.5 Chapter summary (ESBM3)
The normal force, \(\vec{N}\), is the force exerted by a surface on an object in contact with it. The normal force is perpendicular to the surface.
Frictional force is the force that opposes the motion of an object in contact with a surface and it acts parallel to the surface the object is in contact with. The magnitude of friction is proportional to the normal force.
For every surface we can determine a constant factor, the coefficient of friction, that allows us to calculate what the frictional force would be if we know the magnitude of the normal force. We know that static friction and kinetic friction have different magnitudes so we have different coefficients for the two types of friction:
\(\mu_s\) is the coefficient of static friction
\(\mu_k\) is the coefficient of kinetic friction
The components of the force due to gravity, \(\vec{F}_g\), parallel (\(x\)-direction) and perpendicular (\(y\)-direction) to a slope are given by: \begin{align*} {F}_{gx} & = {F}_g\sin(\theta)\\ {F}_{gy} & = {F}_g\cos(\theta) \end{align*}
Newton's first law: An object continues in a state of rest or uniform motion (motion with a constant velocity) unless it is acted on by an unbalanced (net or resultant) force.
Newton's second law: If a resultant force acts on a body, it will cause the body to accelerate in the direction of the resultant force. The acceleration of the body will be directly proportional to the resultant force and inversely proportional to the mass of the body. The mathematical representation is:\[\vec{F}_{net} = m\vec{a}\]
Newton's third law: If body A exerts a force on body B, then body B exerts a force of equal magnitude on body A, but in the opposite direction.
Newton's law of universal gravitation: Every point mass attracts every other point mass by a force directed along the line connecting the two. This force is proportional to the product of the masses and inversely proportional to the square of the distance between them.\[F=G\frac{m_1m_2}{d^2}\]
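For example (illustrative values only), the attraction between two \(\text{70}\) \(\text{kg}\) people standing \(\text{1}\) \(\text{m}\) apart is:\[F=G\frac{m_1m_2}{d^2}=\left(6.67\times 10^{-11}\right)\frac{(70)(70)}{(1)^{2}}\approx 3.3\times 10^{-7}\text{ N},\]far too small to be noticed.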
Physical Quantities
Quantity Unit name Unit symbol
Acceleration (\(a\)) metres per second squared \(\text{m·s$^{-2}$}\)
Distance (\(d\)) metre \(\text{m}\)
Force (\(F\)) Newton \(\text{N}\)
Mass (\(m\)) kilogram \(\text{kg}\)
Tension (\(T\)) Newton \(\text{N}\)
Weight (\(N\)) Newton \(\text{N}\)
Table 2.1: Units used in Newton's laws
August 2017, Volume 45, Issue 8, pp 1852–1864
Design, Analysis and Testing of a Novel Mitral Valve for Transcatheter Implantation
Selim Bozkurt
Georgia L. Preston-Maher
Ryo Torii
Gaetano Burriesci
First Online: 03 April 2017
Mitral regurgitation is a common mitral valve dysfunction which may lead to heart failure. Because of the rapid aging of the population, conventional surgical repair and replacement of the pathological valve are often unsuitable for about half of symptomatic patients, who are judged high-risk. Transcatheter valve implantation could represent an effective solution. However, currently available aortic valve devices are inapt for the mitral position. This paper presents the design, development and hydrodynamic assessment of a novel bi-leaflet mitral valve suitable for transcatheter implantation. The device consists of two leaflets and a sealing component made from bovine pericardium, supported by a self-expanding wireframe made from superelastic NiTi alloy. A parametric design procedure based on numerical simulations was implemented to identify design parameters providing acceptable stress levels and maximum coaptation area for the leaflets. The wireframe was designed to host the leaflets and was optimised numerically to minimise the stresses for crimping in an 8 mm sheath for percutaneous delivery. Prototypes were built and their hydrodynamic performances were tested on a cardiac pulse duplicator, in compliance with the ISO5840-3:2013 standard. The numerical results and hydrodynamic tests show the feasibility of the device to be adopted as a transcatheter valve implant for treating mitral regurgitation.
Transcatheter mitral valve implantation (TMVI) Heart valve development Heart valve assessment Mitral valve Bioprosthetic bi-leaflet valve
Associate Editor Umberto Morbiducci oversaw the review of this article.
Selim Bozkurt and Georgia L. Preston-Maher share first authorship.
Mitral regurgitation is one of the major mitral valve pathologies leading to heart failure.27 It is a result of primary anatomical changes affecting the mitral valve leaflets, or left ventricular remodelling which may lead to dislocation of papillary muscles.15 Although mild and moderate mitral regurgitation may be tolerated and do not require surgical intervention, patients with severe symptomatic mitral regurgitation have a very low survival rate in the absence of interventions40 which restore the coaptation of the mitral valve leaflets,11 or replace the mitral valve with a prosthetic device.30 While non-randomised reports suggest that repairing techniques have significantly lower mortality rates,54 randomised studies indicate no significant difference in the mortality rates3 between replacement and repair20 in ischemia-related mitral regurgitation. Whenever practicable, surgical repair remains the best option for the treatment of degenerative mitral regurgitation.19,20 Nevertheless, in elderly patients surgical intervention is often associated with comorbidities such as diabetes, pulmonary disease, perioperative hemodialysis and low ejection fraction, which increase considerably the risk of operative mortality.5,49 As a result, only a small portion of patients suffering from functional mitral regurgitation and approximately half of those suffering from degenerative mitral regurgitation currently undergo surgery.7 Minimally invasive transcatheter implantation can reduce the risks in these patients and offer an alternative to surgical therapies for mitral valve diseases.34
Transcatheter techniques to treat mitral regurgitation can be classified as leaflet and chordae repair; indirect annuloplasty; left ventricular remodelling; and replacement.25 Leaflet and chordae repair techniques can be effective and durable in a wide variety of pathologies, even without annuloplasty in selected patients.21,36 Indirect annuloplasty releases devices which support remodelling of the annulus in the coronary sinus, improving leaflet coaptation. However, this procedure is associated with adverse cardiovascular events, such as myocardial infarction and coronary sinus rupture,24,47 and data available on the short- and long-term outcome are still limited.32,37 Left ventricular remodelling is applied to reduce a dilated left ventricle diameter which may tether the mitral valve leaflets.22 Although initial attempts demonstrated benefits, this technique is not commercially available at the moment.
Although these transcatheter techniques can successfully reduce mitral regurgitation, a valve replacement would allow the unidirectional blood flow to be restored in a wider anatomical selection of patients. Transcatheter mitral valve (TMV) replacements, which attempt to conjugate the lessons from surgical mitral valve interventions35,42 with the successful transcatheter aortic valve (TAV) experience, are still in developmental stages. A number of TMVs have been proposed, and are at different stages of evaluation.1,23,41 These are typically adapted from TAVs,41 and adopt the same three-leaflet circular configuration. Possible issues that may arise with these devices include suboptimal placement in the native mitral position, due to the irregular non-circular shape of the mitral annulus, and recurrence of paravalvular leakage.30 This is known to reduce the survival rates after TAV replacement, and is a more critical problem for mitral valve implants, where the implantation sizes and the peak transvalvular pressures are higher.25
In this paper, a novel mitral valve device suitable for transcatheter implantation, based on a bi-leaflet configuration with D-shaped orifice, is presented. In particular, the development of the proposed valve, in terms of design optimisation and in vivo hydrodynamic assessment is described.
Leaflet Design Optimisation and Manufacturing
Leaflets were designed to minimise structural and functional failure. Structural failure typically occurs due to excessive stresses, with the locations of structural failure in explanted bioprosthetic heart valves often associated with the peak regions of maximum principal stress.9 Design optimisation was performed using parametrically-varied CAD models by means of finite element analysis for both structural and functional criteria.
Leaflets were designed to lie, in their unstressed open configuration, on a ruled surface characterised by a D-shaped orifice cross section with a ratio between the antero-posterior and the inter-commissural diameters equal to 3:4 (Fig. 1). Similarly to the healthy native mitral valve,58 leaflets were designed with a conical shape, reducing their cross section linearly from the inlet to the outlet. This solution was preferred to minimise the risk of ventricular outflow tract obstruction, by decreasing the tendency of the leaflets to diverge from their design configuration, especially when the valve is placed in annuli significantly smaller than the nominal valve dimension. Also, shorter free edges were observed to reduce the leaflets fluttering during diastole, which is typically associated with increased calcification, haemolysis, regurgitation and early fatigue failure.6 A scale factor (SF), defined as the ratio between the outlet (DV) and inlet (DA) intertrigonal dimensions of the device (Fig. 1a), was introduced to quantify the leaflets conicity in the free unloaded configuration. A set of five scale factors of 0.745, 0.798, 0.852, 0.906 and 0.960 were chosen for investigation, with the smallest corresponding to a maximum reduction of the D-shape cross sectional area from the base to the edge of the leaflets equal to 60%. A coaptation height parameter, CH, was defined, referring to the vertical distance from the arris between the aortic and mural leaflets to the middle of the leaflets free edge. This allows adjustment of the leaflets' free edge and avoids excess redundant material, which results in localised buckling, commonly associated with failure of pericardial leaflets.50 Five evenly spaced coaptation heights were chosen for investigation, from 0 to 30% of the leaflet height. The combination of five scale factors and coaptation heights resulted in twenty-five incrementally different bi-leaflet CAD models.
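For reference, the resulting design grid can be enumerated in a few lines of code; the coaptation-height values below are the evenly spaced fractions implied by the 0–30% range described above.

```python
# Parametric design grid: 5 scale factors x 5 coaptation heights = 25 leaflet geometries.
from itertools import product

scale_factors = [0.745, 0.798, 0.852, 0.906, 0.960]
coaptation_heights = [0.000, 0.075, 0.150, 0.225, 0.300]  # fraction of leaflet height

designs = list(product(scale_factors, coaptation_heights))
print(len(designs), "candidate designs")  # -> 25
```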
(a) Sketch of the leaflets design: CH represents the coaptation height, D V and D A are the dimensions used to define scale factor (SF) in the design. (b) Experimental data points describing the constitutive behavior of the used pericardium, and fitted curve with the adopted Ogden model.
The leaflets were designed in their assembled configuration as surfaces using 3D CAD software Rhinoceros 4.0 (Robert McNeel & Associates), using an inter-trigonal dimension equal to 26 mm. Numerical analyses of structural mechanics were performed using an explicit solver in LS-DYNA (Livermore Software Technology Corporation). The analysis of the twenty-five initial designs provided coaptation area and peak maximum principal stress data for hypertensive systolic loading conditions, i.e. when they are fully closed and a peak of transmitral pressure equal to 200 mmHg is applied.
Glutaraldehyde fixed bovine pericardium was selected as material for the leaflets, due to its long clinical use in bioprosthetic heart valves and favorable hemodynamic performance.26 Calf pericardial sacs were obtained from a local abattoir, and fixed in a 0.5% solution of glutaraldehyde for 48 h, after removing the fat and parietal pericardium by hand.26 Three sets of leaflets were obtained from visually homogeneous regions of the pericardial sac of thickness in the range of 400 μm ±10% (measured using a thickness gauge - Mitutoyo Corporation, Tokyo, Japan). One dumbbell-shaped sample of 4 mm width and 16 mm gauge length was extracted from the unused portion of each patch, using a die cutter.
Specimens were conditioned with uniaxial tensile cycles from 0 to 6 N with 20 mm/min rate until stabilisation, using a ZwickiLine testing machine (Zwick/Roell, Germany) equipped with a media container maintaining 40 °C, and used to determine the representative mechanical properties for the used material. The constitutive behaviour observed for the treated pericardium was modeled in the numerical analyses using a four parameters Ogden equation:
$$W = \frac{\mu_{1}}{\alpha_{1}}\left(\lambda_{1}^{\alpha_{1}} + \lambda_{2}^{\alpha_{1}} + \lambda_{3}^{\alpha_{1}} - 3\right) + \frac{\mu_{2}}{\alpha_{2}}\left(\lambda_{1}^{\alpha_{2}} + \lambda_{2}^{\alpha_{2}} + \lambda_{3}^{\alpha_{2}} - 3\right)$$
where the strain energy density W is expressed in terms of the principal stretches λ1, λ2 and λ3, and the four material constants μ1, μ2, α1 and α2. The material constants best fitting the average stress–strain curve obtained from the experiments were: μ1 = 7.6 × 10−6; μ2 = 5.7 × 10−4; α1 = α2 = 26.26 (R2 = 0.981). The experimental data points and fitted curve are reported in the graph in Fig. 1b.
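A minimal sketch of how such a fit could be reproduced is given below, assuming incompressibility under uniaxial tension (λ2 = λ3 = λ1−1/2); the data file name and starting values are placeholders, and this is an illustrative approach rather than the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def ogden_uniaxial(stretch, mu1, mu2, alpha1, alpha2):
    """Nominal stress for a two-term incompressible Ogden model in uniaxial tension."""
    s = np.asarray(stretch, dtype=float)
    return (mu1 * (s**(alpha1 - 1) - s**(-alpha1 / 2 - 1))
            + mu2 * (s**(alpha2 - 1) - s**(-alpha2 / 2 - 1)))

# Hypothetical two-column file: stretch ratio, nominal stress
stretch, stress = np.loadtxt("pericardium_uniaxial.txt", unpack=True)
p0 = [1e-5, 1e-3, 20.0, 20.0]                         # rough starting guess
params, _ = curve_fit(ogden_uniaxial, stretch, stress, p0=p0, maxfev=20000)
print("mu1, mu2, alpha1, alpha2 =", params)
```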
The coaptation of the leaflets was modelled using a frictionless master-slave contact condition.9 The effect of the inertia of blood in reducing system oscillations was reproduced by using a damping coefficient of 0.9965, consistent with that identified in previous works based on similar simulations.9 Each leaflet was discretised with approximately 1820 quadrilateral 2D constant strain Belytschko-Lin-Tsay shell elements with 5 points of integration across the thickness. The leaflet thickness was set to 0.4 mm, approximating the value selected for the patches used for the valve manufacturing. To simulate leaflet closure, a uniformly distributed opening pressure of 4 mmHg was initially applied to the leaflets, starting from their unloaded position, and then reverted and ramped to a closing pressure of 115 mmHg. This corresponds to the typical mean transmitral systolic pressure difference obtained by testing the valve prototypes in the pulse duplicator, for a cardiac output of 5 L/min, a frequency of 70 beats per minute (with 65% of diastolic time) and a normotensive aortic pressure of 100 mmHg. A minimum safety factor of 3, based on the strength reported for glutaraldehyde-fixed bovine pericardial tissue,4 was accepted for the predicted peaks of stress.
Frame Design and Optimisation
The TMV frame is designed to match and support the two leaflets along their constrained edge and provide their anchoring. Its structure is obtained from super elastic NiTi wires of 0.6 mm diameter.
The valve anchoring to the host anatomy is provided by the counteracting action of a set of proximal smoothly arched ribs, expanding into the atrium (portions 7 and 8 in Fig. 2a), and two petal-like structures protruding into the ventricle between the native mitral leaflets (portions 3 and 4 in Fig. 2a). The portion of the petals engaging with the anterior native leaflet (portions 4 in Fig. 2a) is designed to keep this in tension by expanding its antero-lateral and posterior-medial parts12 laterally, in an attempt to reduce its systolic motion without pushing it markedly into a subaortic position and to minimise the risk of left ventricular outflow tract obstruction.59
(a) Sketch of the valve wireframe; and (b) schematic representation of the implanted prosthetic valve.
The distal margin of the ventricular structures includes distal loops (portions 1 and 2 in Fig. 2) which act as torsion springs, reducing the levels of stress in the crimped frame and dampening the load experienced by the leaflets during the operating cycles. The loops are also used to host control tethers which allow the valve recollapse into a delivery sheath by adopting the same approach described in Rahmani et al. 45
3D solid models of the wireframe (Fig. 2) were developed using the NX CAD (Siemens PLM Software) program. Each solid model was discretised with approximately 110,000 tetrahedral elements of maximum edge size equal to 0.2 mm. The wireframe was modeled as a NiTi shape memory alloy, using an austenitic Young's modulus (EA) of 50 GPa, a martensitic Young's modulus (EM) of 25 GPa, and 0.3 for the Poisson's ratio of both the austenitic and martensitic (νA, νM) phases.56 The transformation stresses of the NiTi wire for the austenite start (σas,s), austenite finish (σas,f), martensite start (σsa,s) and martensite finish (σsa,f) were 380, 400, 250 and 220 MPa, respectively.56 The sleeves were modeled as stainless steel, using a Young's modulus of 210 kN/mm2 and a Poisson's ratio of 0.3, and were connected to the wireframe by applying a stress-free projected glued contact to their surfaces. The relative motion between the TMV and catheter during crimping was simulated by fixing the displacement of the top of the loops.
The wireframe geometry was optimised to maintain the maximum von Mises stress below the martensitic yield stress, when crimped to an 8 mm (24 French) diameter. Simulations were performed using the FEA software MSC.Marc/Mentat and an implicit solver utilizing a single-step Houbolt time integration algorithm, by gradually reducing the diameter of a surrounding cylindrical contact surface. Critical regions subjected to the highest levels of stress during crimping were identified in the initial geometry and optimised iteratively, using the approach described in Burriesci et al.10 For each portion indicated in Fig. 2, the length, curvature and angle values were updated in each simulation in order to obtain a parameter set minimising the crimping stress on the wireframe.
Valve Prototypes
Prototypes of the wireframe structure were manufactured by thermomechanical processing of nitinol wires, mechanically joined at specific locations by means of stainless steel crimping sleeves. The leaflets and the sealing cuff made from bovine pericardium were sutured to the inner portions of the frame extensions (portions 5 and 6 in Fig. 2) using polypropylene surgical sutures. The skirt, made from a polyester mesh (Surgical Mesh PETKM2004, Textile Development Associates, USA), was included to gently distribute the anchoring force over the annulus (between portions 5, 6 and 7 in Fig. 2). The nominal valve size of the prototypes, defined based on the inter-trigonal dimension of the designed leaflets, was equal to 26 mm. This is suitable for preclinical in vivo evaluation in large animal models.
Hydrodynamic Tests
The hydrodynamic performances of the three valve prototypes were assessed on a hydro-mechanical cardiovascular pulse duplicator system (ViVitro Superpump SP3891, ViVitro, BC) (Fig. 3). The flow through the heart valves is measured with two electromagnetic flow probes and two Carolina Medical flow meters (Carolina Medical Electronics, USA), and the pressures in the aorta, left ventricle and left atrium are acquired using Millar Mikro-Cath pressure transducers. The working fluid was phosphate buffered saline solution at 37 °C. Hydrodynamic assessment of the prototypes was performed at a 70 bpm heart rate, 5 L/min mean cardiac output and 100 mmHg mean aortic pressure, in compliance with the ISO 5840-3:2013 standard. The pulse duplicator was operated to simulate a systole/diastole ratio of 35/65 over a cardiac cycle, and a bileaflet mechanical heart valve (Sorin Bicarbon, size 25) was used to represent the aortic valve. Silicone models of the mitral annulus and native leaflets were built, based on the geometry previously described in Lau et al.,33 with inter-trigonal diameters ranging from 21 to 25 mm, and used to house the test valves. This dimensional range, at least one millimeter smaller than the nominal size of the test valve, was selected to allow some anchoring force and to verify the valve securing and hydrodynamic performance over a large anatomical range.
Experimental set-up for the hydrodynamic assessment of the proposed device: (a) pulse duplicator; (b) picture of the valve prototype indicating the leaflets, the sealing cuff and the anchoring skirt (top); and picture of the device after positioning in the valve holder (bottom).
Hydrodynamic performances of the prototypes were assessed by calculating the effective orifice area (EOA), regurgitant fraction and mean transmitral diastolic pressure. The effective orifice area was estimated using the Gorlin Equation (Eq. 2), as described in the ISO 5840.
$$\text{EOA} = \frac{Q_{\text{mv,rms}}}{51.6\sqrt{\Delta p_{\text{mv}}/\rho}}$$
where Qmv,rms represents the root mean square of the flow rate through the mitral valve, Δpmv is the mean positive differential pressure across the mitral valve and ρ is the density of the circulating fluid. The regurgitant fraction is calculated as the ratio of the measured closing regurgitant volume (back flow during valve closure) plus the leakage volume (leaking flow after closure) to the forward flow volume during ventricular filling.
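For illustration, the two performance indices can be computed directly from the pulse-duplicator measurements as sketched below; the numerical values are placeholders rather than the reported test data.

```python
import math

def effective_orifice_area(q_mv_rms, dp_mv, rho):
    """EOA (cm^2): q_mv_rms in mL/s, dp_mv in mmHg, rho in g/cm^3 (Eq. 2)."""
    return q_mv_rms / (51.6 * math.sqrt(dp_mv / rho))

def regurgitant_fraction(closing_vol, leakage_vol, forward_vol):
    """Regurgitant fraction (%) = (closing + leakage volume) / forward volume."""
    return 100.0 * (closing_vol + leakage_vol) / forward_vol

print(f"EOA = {effective_orifice_area(180.0, 5.0, 1.0):.2f} cm^2")
print(f"RF  = {regurgitant_fraction(3.0, 4.0, 70.0):.1f} %")
```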
Seventeen of the twenty-five bi-leaflet designs simulated numerically were functionally patent, and all had an acceptable peak of maximum principal stress below 5 MPa.61 Due to the need to ensure adequate valve function for a wide range of possible expansion sizes and shapes, the design providing the maximum coaptation area was selected (Fig. 4), and the wireframe was subsequently made to fit this.
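The selection rule can be expressed as a small filter over the simulated candidates; the field names below are assumptions used only to illustrate the criterion (patency, stress limit, maximum coaptation), not the study's data structures.

```python
def select_leaflet_design(candidates, stress_limit_mpa=5.0):
    """Keep functionally patent designs whose peak maximum principal stress is below
    the limit, then return the one with the largest coaptation area.
    Each candidate is a dict with keys 'patent', 'peak_stress_mpa', 'coaptation_cm2'."""
    admissible = [c for c in candidates
                  if c["patent"] and c["peak_stress_mpa"] < stress_limit_mpa]
    return max(admissible, key=lambda c: c["coaptation_cm2"])
```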
Maximum principal stress distribution for the optimal transcatheter mitral valve leaflets in their critical loading mode when fully closed, peak value 3.51 N/mm2 at the arris between the leaflets.
The selected design, characterised by a coaptation area of 1.8 cm2, met the peak maximum principal stress design criteria, with an estimated peak value below 5 MPa (3.51 MPa), located at the arris between the leaflets. The resulting stress distributions for the optimal geometry of the crimped wireframe are shown in Fig. 5. The critical points of maximum stress during crimping occurred around the sleeves. The highest stress, as expected, occurred at the maximum collapse diameter of 8 mm, and was 835 N/mm2. This remains below the yield stress reported for martensite in superelastic Nitinol, at the operating range of temperature.46
Stress distributions for the optimal geometry of the transcatheter mitral valve wireframe, crimped to different diameter sizes.
The optimised wireframe geometry was closely replicated physically by thermomechanical processing of Nitinol wire, and mechanical crimping with stainless steel sleeves. Comparison between the free and crimped TMV wireframe geometries for the numerical model and prototype are given in Fig. 6.
The transcatheter mitral valve wireframe: (a) solid model; (b) numerical model crimped in a 8 mm diameter cylinder; (c) manufactured prototype; (d) prototype crimped in a 8 mm diameter tube
Elastic deformation of the wireframe in an 8 mm diameter tube shows that the portions functioning as springs (Fig. 2a: portions 3 and 4) and the portions holding the mitral valve leaflets (Fig. 2a: portions 5 and 6) do not intersect with each other, leaving sufficient space for the leaflets and sealing cuff when crimped. Additionally, the geometry of the crimped wireframe was in good agreement with the numerical prediction.
Diagrams of the effective orifice area, regurgitant fraction and mean diastolic transmitral pressure difference for the prototypes in the different annulus sizes are represented in Fig. 7. The estimated EOA increased with the size of the host valve, with the mean for the three prototypes rising from 1.26 to 1.70 cm2 when moving from the 21 to the 25 mm annulus. All valves exceeded the effective orifice area required by the ISO 5840-3:2013 standard for the different implantation sizes (larger than 1.05 cm2 and 1.25 cm2 for mitral annuluses of size 23 and 25 mm, respectively).
Hydrodynamic assessment results for the three tested prototypes (P1, P2, and P3; M represents the mean of the three tests) in six different annulus sizes: (a) effective orifice area; (b) regurgitation fraction; and (c) mean transmitral pressure difference during diastole. Minimum performance requirements for 23 and 25 mm, as per ISO 5840-3:2013, are indicated by the asterisk symbol, with the arrows pointing to the allowed region.
Regurgitant fractions did not show a clear pattern with the implantation size, and ranged from 8.2 to 17.8%. However, all prototypes met the minimum performance requirements in the ISO5840-3:2013 standard (regurgitant flow fraction ≤20% for both 23 and 25 mm annuli—no specifications for smaller sizes). The mean diastolic transmitral pressure difference decreased in the larger annuluses and reached a maximum value of about 9 mmHg in the 21 mm annulus, reducing to 5 mmHg in the 25 mm annulus.
A sequence of snapshot images of one of the prototypes acquired during the forward mitral valve flow for 23 mm implantation size with 29 fps frame rate are shown in Fig. 8a. The valve leaflets fully opened at the beginning of the left ventricular filling. The anterior leaflet remained fully open during the forward mitral valve flow while the posterior leaflet was fluttering. Duration of the leaflet open phase was approximately 60% of the entire cardiac cycle.
Sequence of snapshot images of one of the tested prototypes during the forward mitral valve flow for 23 mm implantation (a–o). The anterior and posterior leaflets are on the left and right side, respectively. For the test in the sequence are also reported: (p) left ventricular, left atrial and aortic pressure signals (plv, pla and pao, respectively); (q) transmitral pressure difference signal (Δp mv); and (r) flow rate signal through the TMV (Q mv)
The peak (systolic) transmitral pressure difference was 125 mmHg, while the maximum diastolic opening pressure was about 45 mmHg. Regurgitant flow was observed over the ventricular systole, primarily due to paravalvular leakage between the mitral annulus and the device. The closing regurgitation (due to closure of the mitral valve leaflets) was higher in the larger annuluses. Anchoring was adequate for all tests, and no valve migration was observed for any of the test conditions. Typical pressure and flow rate diagrams through the valve, obtained for one of the three prototypes in an annulus of 23 mm over a cardiac cycle, are provided in Fig. 8.
Currently, no device specifically designed for TMV implantation has been approved for the European or American market. However, a number of solutions have been proposed, with many already at the stage of clinical trial (these include the CardiAQ51,52 and Fortis,2,8 Edwards Lifescience; the Tendyne,39 Tendyne Holdings Inc., Roseville MN, USA; the Tiara,14 Neovasc, Richmond, Canada; the NaviGate, NaviGate Cardiac Structures Inc., Lake Forest, CA, USA; and the Intrepid, Medtronic, Dublin, Ireland).31 Despite the reduced number of patients involved in the trials and the large 30-day mortality rate, justified by the compassionate ground of the implants, this early experience has confirmed the potential benefit of the treatment and the ability of transcatheter solutions to successfully replace the mitral valve function.31 All devices under investigation are based on three occluding leaflets, replicating the configuration and function of semilunar valves. These are supported by self-expanding stents, obtained from laser-cut nitinol tubes, mechanically deformed and thermoset.41 The stents bulge or expand into a flange covered with a fabric material, designed to apply pressure on the atrial inflow portion, and used to minimise paravalvular leakage while counteracting the ventricular anchoring force that secures the valve. From a technical point of view, a major distinction between the devices currently under investigation is represented by the method they use to generate the ventricular anchoring force, which can be based on ventricular tethers (e.g. Tendyne), native valve anchors (e.g. CardiAQ, Fortis, Tiara and NaviGate) or dual stent structures with barbs.38
The device presented in this paper introduces a number of novel concepts, providing new and alternative features. Contrary to competing TMVs, the proposed solution is based on two asymmetric flexible leaflets, describing a D-shaped cross section designed to better conform to the irregular anatomy of the valve annulus and minimise the disturbance to the sub-valvular apparatus. This makes it possible to maximise the geometrical orifice area of the prosthesis without interfering with the aortic valve anatomy and function. The leaflets are sutured onto a self-expanding frame, obtained from a nitinol wire, thermo-mechanically formed and mechanically crimped at five locations. This defines a set of arched ribs expanding into the atrium and two petal-like structures protruding into the ventricle between the native mitral leaflets, whose counteracting action generates the anchoring force, whilst limiting the systolic motion of the native anterior leaflet and the associated risk of left ventricular outflow tract obstruction. The wireframe configuration results in minimum metallic material, and relies on a skirt made from polymeric mesh (allowing integration from the host tissues), tensed between the atrial petals and the leaflets, to gently distribute the contact pressure over the annulus region. Paravalvular sealing is provided by a pericardial cuff extending around the entire framework of the valve, which inflates during systole as an effect of the transvalvular closing pressure. The valve, designed in the presented version for transapical implantation, can be retrieved into the delivery system after complete expansion, using a similar mechanism to that described by the authors for a TAVI device.44
The structural numerical analyses, though inherently limited in their ability to represent the physics involved in heart valve leaflet closure, were adequate to predict the systolic function of the leaflets. In particular, this approximation does not take into account the interaction between the working fluid and the structural components, which determines the flow patterns and the pressure differences acting under real physiological conditions. Fluid-structure interaction modelling would be more accurate for the simulation of the opening and closing leaflet dynamics. However, the peak of stress in the leaflets during the cardiac cycle is essentially governed by the closing transvalvular pressure load,33 so that neglecting the local pressure variation and fluid shear stresses due to blood flow can still yield sufficiently accurate results for the design evaluation stage.10
The valve wireframe optimisation was carried out until an optimal geometry was obtained with stresses below the NiTi yield limit. Portions 5 and 6 in Fig. 2a were imposed by the leaflet geometry and kept unchanged for all wireframe models. The geometry of the wireframe is relatively complex, and includes a number of geometric parameters which needed to be optimised to obtain a suitable design. Each section was iteratively modified to minimise local stresses, resulting in a final geometry which fits adequately into the host mitral anatomy, maintaining acceptable levels of stress in the crimped configuration. The finite element analyses of a wireframe crimped to a diameter of 8 mm resulted in a maximum stress of less than 900 MPa, which corresponds to a typical yield stress for Nitinol.46 The stress concentrations were predicted in the vicinity of the crimping sleeves, with local maxima around 600 MPa. Therefore, plastic deformation is not expected in the crimped wireframe, and this was confirmed by loading and unloading the physical prototype in an 8 mm diameter tube multiple times, without observable changes in shape. In addition, the presented version of the wireframe is designed to be ideally implantable via the transapical route, which tolerates the use of larger sheath profiles (up to 33 French, 11 mm), resulting in a further reduction of the stresses on the NiTi wireframe.60 Crimping of the TMV wireframe was simulated by gradually shrinking a cylindrical contact surface surrounding the prosthesis along its entire length. In the current application, the valve distal loops (Fig. 2a, portions 1 and 2) are engaged by a set of tethers, used to pull the valve into the catheter from the side at the outflow.45 Nevertheless, the geometry of the crimped wireframe obtained in the numerical simulations was visually accurate.
The valve design and prototypes were of a nominal size equal to 26 mm, corresponding to the largest inter-trigonal dimension of the prosthetic leaflets. This is suitable for patients' annuli with inter-trigonal diameters equal to or lower than 25 mm. Though this range is smaller than the average size in adult humans, it is more suitable for preclinical in vivo evaluation in ovine models,43 which is expected to be one of the next developmental steps. The prototypes were tested in mock host annuli of inter-trigonal diameters ranging from 21 to 25 mm. As expected, the diastolic transmitral pressure difference rose nonlinearly as the dimensions of the host annulus reduced, increasing from about 5 mmHg for the 25 mm annulus to about 9 mmHg for the 21 mm annulus. A high peak in the initial diastolic transmitral pressure drop was measured in the tests (up to 45 mmHg). This is often observed in tests performed on hydro-mechanical pulse duplicators,16,28,29,48,53,55 and could be due to the non-physiological ventricular compliance, which may determine steeper flow waves and higher pressure gradients associated with early passive filling during ventricular relaxation. The calculated EOA reflected well the variation in the area of the implantation annulus, varying proportionally with it. The regurgitant fraction did not show a clear pattern associated with the implantation size for the different prototypes, although the mean value reduced progressively from 21 to 24 mm, inverting the trend at 25 mm. The reduction with the size may be associated with the different length of the mock native leaflets, which were designed proportional to the annulus size and, therefore, provided different covering of the sealing cuff of the prosthetic valves. On the other hand, the increased regurgitant fraction in the 25 mm annulus may be justified by the presence of gaps between the device and the mitral annulus. Globally, the device met the hydrodynamic requirements specified for transcatheter mitral valves in the standard ISO 5840-3:2013 for all implantation sizes. Direct comparison of the hydrodynamic performance with competing solutions is not possible, as these are not available in the market and no in vitro data quantifying their diastolic and systolic efficiency have been published. However, measured values of transmitral diastolic pressure drops are consistent with those reported for transcatheter mitral implantation of off-label TAVI devices in failed mitral valve bioprostheses or annuloplasty rings, and in severe calcific mitral stenosis.13,18 Regurgitant fractions were inferior to those previously measured on the same system for commercially available TAVI devices.45 This is very encouraging, in consideration of the fact that, for the mitral position, closure is associated with a higher transvalvular pressure drop and longer durations with respect to the cardiac cycle.
In terms of anchoring, no migration was observed for any of the test configurations, covering host annuli with inter-trigonal diameters between 21 and 25 mm. However, it needs to be taken into account that the mock host valves did not model the physiological contraction, and chordae tendineae and papillary muscles were absent. Ex vivo isolated beating heart or pressurised animal heart platforms17,57 and acute animal trials could provide more reliable insights on the fitting and performance of a transcatheter valve.44 These studies would also be essential to verify the efficacy of the anchoring mechanism to avoid left ventricular outflow tract obstruction by preventing the systolic motion of the native anterior leaflet.
A novel TMV was developed, consisting of two bovine pericardial leaflets designed to ensure proper functionality across a range of implantation configurations and a sealing cuff, supported by a wireframe optimised to minimise stresses whilst crimped. The device exceeded the minimum performance requirements of the international standards, thereby proving its feasibility as a mitral valve substitute to treat mitral regurgitation. In vitro durability assessment of the valve by means of accelerated cyclic tests is now being conducted, with the aim of verifying that the solution guarantees a durability equal to or greater than the requirement for flexible leaflet heart valves (200 × 106 cycles). The next steps in the development will include in vivo preclinical evaluation by means of implants in animals (possibly complemented by ex vivo studies), to validate the design principles and the efficacy of the device.
If these confirm the predicted performance, the proposed device could provide a viable alternative to transcatheter repair techniques and, due to its geometric similarity to the human mitral valve anatomy, may prove to be a more appropriate option compared to the other TMVs in development.
This work was supported by the British Heart Foundation (PG/13/78/30400). Authors wish also to acknowledge Dr Benyamin Rahmani and Dr Michael Mullen for their assistance and advices, and Lithotech Medical for their support in the frames manufacturing.
The authors do not have any conflict of interest to declare.
Supplementary material 1 (MOV 21234 kb)
Abdul-Jawad Altisent, O., E. Dumont, F. Dagenais, M. Sénéchal, M. Bernier, K. O'Connor, S. Bilodeau, J. M. Paradis, F. Campelo-Parada, R. Puri, M. Del Trigo, and J. Rodés-Cabau. Initial experience of transcatheter mitral valve replacement with a novel transcatheter mitral valve: procedural and 6-month follow-up results. J. Am. Coll. Cardiol. 66:1011–1019, 2015.CrossRefPubMedGoogle Scholar
Abdul-Jawad Altisent, O., E. Dumont, F. Dagenais, M. Sénéchal, M. Bernier, K. O'Connor, J.-M. Paradis, S. Bilodeau, S. Pasian, and J. Rodés-Cabau. Transcatheter mitral valve implantation With the FORTIS device: insights into the evaluation of device success. JACC Cardiovasc. Interv. 8:994–995, 2015.CrossRefPubMedGoogle Scholar
Acker, M. A., M. K. Parides, L. P. Perrault, A. J. Moskowitz, A. C. Gelijns, P. Voisine, P. K. Smith, J. W. Hung, E. H. Blackstone, J. D. Puskas, M. Argenziano, J. S. Gammie, M. Mack, D. D. Ascheim, E. E. Bagiella, E. G. Moquete, T. B. Ferguson, K. A. Horvath, N. L. Geller, M. A. Miller, Y. J. Woo, D. A. D'Alessandro, G. Ailawadi, F. Dagenais, T. J. Gardner, P. T. O'Gara, R. E. Michler, I. L. Kron, and CTSN. Mitral-valve repair versus replacement for severe ischemic mitral regurgitation. N. Engl. J. Med. 370:23–32, 2014.CrossRefPubMedGoogle Scholar
Aguiari, P., M. Fiorese, L. Iop, G. Gerosa, and A. Bagno. Mechanical testing of pericardium for manufacturing prosthetic heart valves. Interact. Cardiovasc. Thorac. Surg. 2015. doi: 10.1093/icvts/ivv282.PubMedGoogle Scholar
Andalib, A., S. Mamane, I. Schiller, A. Zakem, D. Mylotte, G. Martucci, P. Lauzier, W. Alharbi, R. Cecere, M. Dorfmeister, R. Lange, J. Brophy, and N. Piazza. A systematic review and meta-analysis of surgical outcomes following mitral valve surgery in octogenarians: implications for transcatheter mitral valve interventions. EuroIntervention J. Eur. Collab. Work. Group Interv. Cardiol. Eur. Soc. Cardiol. 9:1225–1234, 2014.Google Scholar
Avelar, A. H. D. F., J. A. Canestri, C. Bim, M. G. M. Silva, R. Huebner, and M. Pinotti. Quantification and analysis of leaflet flutter on biological prosthetic cardiac valves. Artif. Organs 2016. doi: 10.1111/aor.12856.PubMedGoogle Scholar
Bach, D. S., M. Awais, H. S. Gurm, and S. Kohnstamm. Failure of guideline adherence for intervention in patients with severe mitral regurgitation. J. Am. Coll. Cardiol. 54:860–865, 2009.CrossRefPubMedGoogle Scholar
Bapat, V., L. Buellesfeld, M. D. Peterson, J. Hancock, D. Reineke, C. Buller, T. Carrel, F. Praz, R. Rajani, N. Fam, H. Kim, S. Redwood, C. Young, C. Munns, S. Windecker, and M. Thomas. Transcatheter mitral valve implantation (TMVI) using the Edwards FORTIS device. EuroIntervention J. Eur. Collab. Work. Group Interv. Cardiol. Eur. Soc. Cardiol. 10:U120–U128, 2014.Google Scholar
Burriesci, G., I. C. Howard, and E. A. Patterson. Influence of anisotropy on the mechanical behaviour of bioprosthetic heart valves. J. Med. Eng. Technol. 23:203–215, 1999.CrossRefPubMedGoogle Scholar
Burriesci, G., F. C. Marincola, and C. Zervides. Design of a novel polymeric heart valve. J. Med. Eng. Technol. 34:7–22, 2010.CrossRefPubMedGoogle Scholar
Calafiore, A. M., S. Gallina, A. L. Iacò, M. Contini, A. Bivona, M. Gagliardi, P. Bosco, and M. Di Mauro. Mitral valve surgery for functional mitral regurgitation: should moderate-or-more tricuspid regurgitation be treated? a propensity score analysis. Ann. Thorac. Surg. 87:698–703, 2009.CrossRefPubMedGoogle Scholar
Carpentier, A. Cardiac valve surgery–the "French correction". J. Thorac. Cardiovasc. Surg. 86:323–337, 1983.PubMedGoogle Scholar
Cheung, A. W., R. Gurvitch, J. Ye, D. Wood, S. V. Lichtenstein, C. Thompson, and J. G. Webb. Transcatheter transapical mitral valve-in-valve implantations for a failed bioprosthesis: a case series. J. Thorac. Cardiovasc. Surg. 141:711–715, 2011.CrossRefPubMedGoogle Scholar
Cheung, A., D. Stub, R. Moss, R. H. Boone, J. Leipsic, S. Verheye, S. Banai, and J. Webb. Transcatheter mitral valve implantation with Tiara bioprosthesis. EuroIntervention J. Eur. Collab. Work. Group Interv. Cardiol. Eur. Soc. Cardiol. 10:U115–U119, 2014.Google Scholar
De Bonis, M., F. Maisano, G. La Canna, and O. Alfieri. Treatment and management of mitral regurgitation. Nat. Rev. Cardiol. 9:133–146, 2012.CrossRefGoogle Scholar
De Gaetano, F., M. Serrani, P. Bagnoli, J. Brubert, J. Stasiak, G. D. Moggridge, and M. L. Costantino. Fluid dynamic characterization of a polymeric heart valve prototype (Poli-Valve) tested under continuous and pulsatile flow conditions. Int. J. Artif. Organs 38:600–606, 2015.CrossRefPubMedPubMedCentralGoogle Scholar
de Hart, J., A. de Weger, S. van Tuijl, J. M. A. Stijnen, C. N. van den Broek, M. C. M. Rutten, and B. A. de Mol. An ex vivo platform to simulate cardiac physiology: a new dimension for therapy development and assessment. Int. J. Artif. Organs 34:495–505, 2011.CrossRefPubMedGoogle Scholar
Eleid, M. F., A. K. Cabalka, M. R. Williams, B. K. Whisenant, O. O. Alli, N. Fam, P. M. Pollak, F. Barrow, J. F. Malouf, R. A. Nishimura, L. D. Joyce, J. A. Dearani, and C. S. Rihal. Percutaneous transvenous transseptal transcatheter valve implantation in failed bioprosthetic mitral valves, ring annuloplasty, and severe mitral annular calcification. JACC Cardiovasc. Interv. 9:1161–1174, 2016.CrossRefPubMedGoogle Scholar
Enriquez-Sarano, M., A. J. Tajik, H. V. Schaff, T. A. Orszulak, K. R. Bailey, and R. L. Frye. Echocardiographic prediction of survival after surgical correction of organic mitral regurgitation. Circulation 90:830–837, 1994.CrossRefPubMedGoogle Scholar
Gillinov, A. M., E. H. Blackstone, E. R. Nowicki, W. Slisatkorn, G. Al-Dossari, D. R. Johnston, K. M. George, P. L. Houghtaling, B. Griffin, J. F. Sabik, and L. G. Svensson. Valve repair versus valve replacement for degenerative mitral valve disease. J. Thorac. Cardiovasc. Surg. 135:885–893, 2008.CrossRefPubMedGoogle Scholar
Goar, F. G. S., J. I. Fann, J. Komtebedde, E. Foster, M. C. Oz, T. J. Fogarty, T. Feldman, and P. C. Block. Endovascular edge-to-edge mitral valve repair short-term results in a porcine model. Circulation 108:1990–1993, 2003.CrossRefGoogle Scholar
Grossi, E. A., N. Patel, Y. J. Woo, J. D. Goldberg, C. F. Schwartz, V. Subramanian, T. Feldman, R. Bourge, N. Baumgartner, C. Genco, S. Goldman, M. Zenati, J. A. Wolfe, Y. K. Mishra, N. Trehan, S. Mittal, S. Shang, T. J. Mortier, C. J. Schweich, and RESTOR-MV Study Group. Outcomes of the RESTOR-MV Trial (Randomized Evaluation of a Surgical Treatment for Off-Pump Repair of the Mitral Valve). J. Am. Coll. Cardiol. 56:1984–1993, 2010.CrossRefPubMedGoogle Scholar
Guerrero, M., A. B. Greenbaum, and W. O'neill. Early experience with transcatheter mitral valve replacement. Card. Interv. Today 2015:61–67, 2015.Google Scholar
Harnek, J., J. G. Webb, K.-H. Kuck, C. Tschope, A. Vahanian, C. E. Buller, S. K. James, C. P. Tiefenbacher, and G. W. Stone. Transcatheter implantation of the MONARC coronary sinus device for mitral regurgitation: 1-year results from the EVOLUTION phase I study (Clinical Evaluation of the Edwards Lifesciences Percutaneous Mitral Annuloplasty System for the Treatment of Mitral Regurgitation). JACC Cardiovasc. Interv. 4:115–122, 2011.CrossRefPubMedGoogle Scholar
Herrmann, H. C., and F. Maisano. Transcatheter therapy of mitral regurgitation. Circulation 130:1712–1722, 2014.CrossRefPubMedGoogle Scholar
Hülsmann, J., K. Grün, S. El Amouri, M. Barth, K. Hornung, C. Holzfuß, A. Lichtenberg, and P. Akhyari. Transplantation material bovine pericardium: biomechanical and immunogenic characteristics after decellularization vs. glutaraldehyde-fixing. Xenotransplantation 19:286–297, 2012.CrossRefPubMedGoogle Scholar
Irvine, T., X. Li, D. Sahn, and A. Kenny. Assessment of mitral regurgitation. Heart 88:iv11–iv19, 2002.CrossRefPubMedPubMedCentralGoogle Scholar
Jensen, M. Ø. J., A. A. Fontaine, and A. P. Yoganathan. Improved in vitro quantification of the force exerted by the papillary muscle on the left ventricular wall: three-dimensional force vector measurement system. Ann. Biomed. Eng. 29:406–413, 2001.CrossRefPubMedGoogle Scholar
Jun, B. H., N. Saikrishnan, S. Arjunon, B. M. Yun, and A. P. Yoganathan. Effect of hinge gap width of a St. Jude medical bileaflet mechanical heart valve on blood damage potential—an in vitro micro particle image velocimetry study. J. Biomech. Eng. 136:091008, 2014.CrossRefPubMedGoogle Scholar
Kheradvar, A., E. M. Groves, C. A. Simmons, B. Griffith, S. H. Alavi, R. Tranquillo, L. P. Dasi, A. Falahatpisheh, K. J. Grande-Allen, C. J. Goergen, M. R. K. Mofrad, F. Baaijens, S. Canic, and S. H. Little. Emerging trends in heart valve engineering: part III. Novel technologies for mitral valve repair and replacement. Ann. Biomed. Eng. 43:858–870, 2015.CrossRefPubMedGoogle Scholar
Krishnaswamy, A., S. Mick, J. Navia, A. M. Gillinov, E. M. Tuzcu, and S. R. Kapadia. Transcatheter mitral valve replacement: a frontier in cardiac intervention. Cleve. Clin. J. Med. 83:S10–S17, 2016.CrossRefPubMedGoogle Scholar
Langer, F., M. A. Borger, M. Czesla, F. L. Shannon, M. Sakwa, N. Doll, J. T. Cremer, F. W. Mohr, and H.-J. Schäfers. Dynamic annuloplasty for mitral regurgitation. J. Thorac. Cardiovasc. Surg. 145:425–429, 2013.CrossRefPubMedGoogle Scholar
Lau, K. D., V. Diaz, P. Scambler, and G. Burriesci. Mitral valve dynamics in structural and fluid–structure interaction models. Med. Eng. Phys. 32:1057–1064, 2010.CrossRefPubMedPubMedCentralGoogle Scholar
Maisano, F., N. Buzzatti, M. Taramasso, and O. Alfieri. Mitral Transcatheter Technologies. Rambam Maimonides Med. J. 4:, 2013.Google Scholar
Maisano, F., O. Alfieri, S. Banai, M. Buchbinder, A. Colombo, V. Falk, T. Feldman, O. Franzen, H. Herrmann, S. Kar, K.-H. Kuck, G. Lutter, M. Mack, G. Nickenig, N. Piazza, M. Reisman, C. E. Ruiz, J. Schofer, L. Søndergaard, G. W. Stone, M. Taramasso, M. Thomas, A. Vahanian, J. Webb, S. Windecker, and M. B. Leon. The future of transcatheter mitral valve interventions: competitive or complementary role of repair vs. replacement? Eur. Heart J. 36:1651–1659, 2015.CrossRefPubMedGoogle Scholar
Maisano, F., A. Caldarola, A. Blasio, M. De Bonis, G. La Canna, and O. Alfieri. Midterm results of edge-to-edge mitral valve repair without annuloplasty. J. Thorac. Cardiovasc. Surg. 126:1987–1997, 2003.CrossRefPubMedGoogle Scholar
Maisano, F., V. Falk, M. A. Borger, H. Vanermen, O. Alfieri, J. Seeburger, S. Jacobs, M. Mack, and F. W. Mohr. Improving mitral valve coaptation with adjustable rings: outcomes from a European multicentre feasibility study with a new-generation adjustable annuloplasty ring system. Eur. J. Cardio-Thorac. Surg. Off. J. Eur. Assoc. Cardio-Thorac. Surg. 44:913–918, 2013.CrossRefGoogle Scholar
Meredith, I., V. Bapat, J. Morriss, M. McLean, and B. Prendergast. Intrepid transcatheter mitral valve replacement system: technical and product description. EuroIntervention J. Eur. Collab. Work. Group Interv. Cardiol. Eur. Soc. Cardiol. 12:Y78–Y80, 2016.Google Scholar
Muller, D. W. M., R. S. Farivar, P. Jansz, R. Bae, D. Walters, A. Clarke, P. A. Grayburn, R. C. Stoler, G. Dahle, K. A. Rein, M. Shaw, G. M. Scalia, M. Guerrero, P. Pearson, S. Kapadia, M. Gillinov, A. Pichard, P. Corso, J. Popma, M. Chuang, P. Blanke, J. Leipsic, P. Sorajja, and Tendyne Global Feasibility Trial Investigators. Transcatheter mitral valve replacement for patients with symptomatic mitral regurgitation: a global feasibility trial. J. Am. Coll. Cardiol. 69:381–391, 2017.CrossRefPubMedGoogle Scholar
Otto, C. M. Evaluation and Management of chronic mitral regurgitation. N. Engl. J. Med. 345:740–746, 2001.CrossRefPubMedGoogle Scholar
Preston-Maher, G. L., R. Torii, and G. Burriesci. A technical review of minimally invasive mitral valve replacements. Cardiovasc. Eng. Technol. 6:174–184, 2015.CrossRefPubMedGoogle Scholar
Puri, R., O. Abdul-Jawad Altisent, M. Del Trigo, F. Campelo-Parada, A. Regueiro, H. Barbosa Ribeiro, R. DeLarochellière, J.-M. Paradis, E. Dumont, and J. Rodés-Cabau. Transcatheter mitral valve implantation for inoperable severely calcified native mitral valve disease: a systematic review. Catheter. Cardiovasc. Interv. Off. J. Soc. Card. Angiogr. Interv. 87:540–548, 2016.CrossRefGoogle Scholar
Quill, J. L., A. J. Hill, and P. A. Iaizzo. Comparative anatomy of aortic and mitral valves in human, ovine, canine and swine hearts. J. Card. Fail. 12:S24, 2006.CrossRefGoogle Scholar
Rahmani, B., S. Tzamtzis, R. Sheridan, M. J. Mullen, J. Yap, A. M. Seifalian, and G. Burriesci. A new transcatheter heart valve concept (the TRISKELE): feasibility in an acute preclinical model. EuroIntervention J. Eur. Collab. Work. Group Interv. Cardiol. Eur. Soc. Cardiol. 12:901–908, 2016.Google Scholar
Rahmani, B., S. Tzamtzis, R. Sheridan, M. J. Mullen, J. Yap, A. M. Seifalian, and G. Burriesci. In vitro hydrodynamic assessment of a new transcatheter heart valve concept (the TRISKELE). J Cardiovasc. Transl. Res. 2016. doi: 10.1007/s12265-016-9722-0.PubMedPubMedCentralGoogle Scholar
Robertson, S. W., A. R. Pelton, and R. O. Ritchie. Mechanical fatigue and fracture of Nitinol. Int. Mater. Rev. 57:1–37, 2012.CrossRefGoogle Scholar
Sack, S., P. Kahlert, L. Bilodeau, L. A. Pièrard, P. Lancellotti, V. Legrand, J. Bartunek, M. Vanderheyden, R. Hoffmann, P. Schauerte, T. Shiota, D. S. Marks, R. Erbel, and S. G. Ellis. Percutaneous transvenous mitral annuloplasty: initial human experience with a novel coronary sinus implant device. Circ. Cardiovasc. Interv. 2:277–284, 2009.CrossRefPubMedGoogle Scholar
Schampaert, S., K. A. M. A. Pennings, M. J. G. van de Molengraft, N. H. J. Pijls, F. N. van de Vosse, and M. C. M. Rutten. A mock circulation model for cardiovascular device evaluation. Physiol. Meas. 35:687, 2014.CrossRefPubMedGoogle Scholar
Seeburger, J., V. Falk, J. Garbade, T. Noack, P. Kiefer, M. Vollroth, F. W. Mohr, and M. Misfeld. Mitral valve surgical procedures in the elderly. Ann. Thorac. Surg. 94:1999–2003, 2012.CrossRefPubMedGoogle Scholar
Shah, S. R., and N. R. Vyavahare. The effect of glycosaminoglycan stabilization on tissue buckling in bioprosthetic heart valves. Biomaterials 29:1645–1653, 2008.CrossRefPubMedPubMedCentralGoogle Scholar
Sondergaard, L., M. Brooks, N. Ihlemann, A. Jonsson, S. Holme, M. Tang, K. Terp, and A. Quadri. Transcatheter mitral valve implantation via transapical approach: an early experience. Eur. J. Cardio-Thorac. Surg. Off. J. Eur. Assoc. Cardio-Thorac. Surg. 48:873–877, 2015; (discussion 877–878).CrossRefGoogle Scholar
Søndergaard, L., O. De Backer, O. W. Franzen, S. J. Holme, N. Ihlemann, N. G. Vejlstrup, P. B. Hansen, and A. Quadri. First-in-human case of transfemoral CardiAQ mitral valve implantation. Circ. Cardiovasc. Interv. 8:e002135, 2015.CrossRefPubMedGoogle Scholar
Tanné, D., E. Bertrand, L. Kadem, P. Pibarot, and R. Rieu. Assessment of left heart and pulmonary circulation flow dynamics by a new pulsed mock circulatory system. Exp. Fluids 48:837–850, 2010.CrossRefGoogle Scholar
Thourani, V. H., W. S. Weintraub, R. A. Guyton, E. L. Jones, W. H. Williams, S. Elkabbani, and J. M. Craver. Outcomes and long-term survival for patients undergoing mitral valve repair versus replacement: effect of age and concomitant coronary artery bypass grafting. Circulation 108:298–304, 2003.CrossRefPubMedGoogle Scholar
Toma, M., M. Ø. Jensen, D. R. Einstein, A. P. Yoganathan, R. P. Cochran, and K. S. Kunzelman. Fluid-structure interaction analysis of papillary muscle forces using a comprehensive mitral valve model with 3D chordal structure. Ann. Biomed. Eng. 44:942–953, 2016.CrossRefPubMedGoogle Scholar
Tzamtzis, S., J. Viquerat, J. Yap, M. J. Mullen, and G. Burriesci. Numerical analysis of the radial force produced by the Medtronic-CoreValve and Edwards-SAPIEN after transcatheter aortic valve implantation (TAVI). Med. Eng. Phys. 35:125–130, 2013.CrossRefPubMedGoogle Scholar
Vismara, R., A. M. Leopaldi, M. Piola, C. Asselta, M. Lemma, C. Antona, A. Redaelli, F. van de Vosse, M. Rutten, and G. B. Fiore. In vitro assessment of mitral valve function in cyclically pressurized porcine hearts. Med. Eng. Phys. 38:346–353, 2016.CrossRefPubMedGoogle Scholar
Votta, E., E. Caiani, F. Veronesi, M. Soncini, F. M. Montevecchi, and A. Redaelli. Mitral valve finite-element modelling from ultrasound data: a pilot study for a new approach to understand mitral function and clinical scenarios. Philos. Transact. A 366:3411–3434, 2008.CrossRefGoogle Scholar
Walker, C. M., G. P. Reddy, T.-L. H. Mohammed, and J. H. Chung. Systolic anterior motion of the mitral valve. J. Thorac. Imaging 27:W87, 2012.CrossRefPubMedGoogle Scholar
Walther, T., V. Falk, J. Kempfert, M. A. Borger, J. Fassl, M. W. A. Chu, G. Schuler, and F. W. Mohr. Transapical minimally invasive aortic valve implantation; the initial 50 patients. Eur. J. Cardio-Thorac. Surg. Off. J. Eur. Assoc. Cardio-Thorac. Surg. 33:983–988, 2008.CrossRefGoogle Scholar
Xuan, Y., Y. Moghaddam, K. Krishnan, D. Dvir, J. Ye, M. Hope, L. Ge, and E. Tseng. Impact of size of transcatheter aortic valves on stent and leaflet stresses. Book of Abstracts EuroPCR 2016, 2016, n. Euro16A-POS0558.Google Scholar
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. UCL Mechanical Engineering, Cardiovascular Engineering Laboratory, University College London, London, UK
2. Ri.MED Foundation, Bioengineering Group, Palermo, Italy
Bozkurt, S., Preston-Maher, G.L., Torii, R. et al. Ann Biomed Eng (2017) 45: 1852. https://doi.org/10.1007/s10439-017-1828-2
Accepted 25 March 2017
First Online 03 April 2017
Publisher Name Springer US
Biomedical Engineering Society (BMES)
The effect of increasing the coinsurance rate on outpatient utilization of healthcare services in South Korea
Hyo Jung Lee1, 2,
Sung-In Jang2, 3 and
Eun-Cheol Park2, 3
Accepted: 7 February 2017
The Korean healthcare system is composed of costly and inefficient structures that fail to adequately divide the functions and roles of medical care organizations. To resolve this matter, the government reformed the cost-sharing policy in November of 2011 for the management of outpatients visiting general or tertiary hospitals with comparatively mild diseases. The purpose of the present study was to examine the impact of increasing the coinsurance rate of prescription drug costs for 52 mild diseases at general or tertiary hospitals on outpatient healthcare service utilization.
The present study used health insurance claim data collected from 2010 to 2013. The study population consisted of 505,691 outpatients and was defined as those aged 20–64 years who had visited medical care organizations for the treatment of 52 diseases both before and after the program began. To examine the effect of the cost-sharing policy on outpatient healthcare service utilization (percentage of general or tertiary hospital utilization, number of outpatient visits, and outpatient medical costs), a segmented regression analysis was performed.
After the policy to increase the coinsurance rate on prescription drug costs was implemented, the number of outpatient visits at general or tertiary hospitals decreased (β = −0.0114, p < 0.0001); however, the number increased at hospitals and clinics (β = 0.0580, p < 0.0001). Eventually, the number of outpatient visits to hospitals and clinics began to decrease after policy initiation (β = −0.0018, p < 0.0001). Outpatient medical costs decreased for both medical care organizations (general or tertiary hospitals: β = −2913.4, P < 0.0001; hospitals or clinics: β = −591.35, p < 0.0001), and this decreasing trend continued with time.
It is not clear that decreased utilization of general or tertiary hospitals has transferred to that of clinics or hospitals due to the increased cost-sharing policy of prescription drug costs. This result indicates the cost-sharing policy, intended to change patient behaviors for healthcare service utilization, has had limited effects on rebuilding the healthcare system and the function of medical care organizations.
Increased coinsurance rate
Prescription drug cost policy
Since South Korea established a national health insurance program in 1989, health expenditures and accessibility to healthcare and medical needs have rapidly increased [1]. In South Korea, total health expenditures accounted for 7.6% of gross domestic product in 2012 [2]. This total health expenditure is lower than the Organization for Economic Co-operation and Development (OECD) average of 9.3%. However, the growth rate in health expenditure per person from 2001 to 2011 is higher than that of most OECD countries (South Korea: 9.3%; OECD average: 4.0%). Thus, achieving control over the constantly increasing health expenditure has become a key healthcare reform concern in South Korea [3].
There have been various discussions surrounding the financial stability of health insurance in South Korea. One of the proposed strategies is to build a sustainable healthcare system. To improve healthcare systems, simultaneous pursuit of three aims—improving the experience of care, improving the health of populations, and reducing per capita costs of health care—is required. The role of the South Korean government, as an integrator that accepts responsibility for these three aims, includes the redesign of primary care, population health management, and financial management [4]. In particular, the South Korean government attempted to improve primary health care and manage finances efficiently by assigning an appropriate role for medical care organizations according to size and function [5]. In South Korea, medical care is divided between clinical and hospital organizations by function, and hospitals are further divided into specialist and general hospitals according to structural characteristics. Medical law defines clinics (less than 30 beds) as centers treating outpatients and hospitals (more than 30 beds) as treating inpatients. General hospitals have more than 100 beds and at least 7 medical departments, including essential medical departments designated by medical law. Additionally, the Minister of Health and Welfare is able to specify tertiary hospitals as more specialist hospitals treating severe diseases compared to general hospitals with several requirements such as manpower, facilities, and equipment.
However, despite the use of this classification, individuals are able to choose any medical care organization, from clinics in their community to hospitals [6]. Thus, the South Korean healthcare system contains inefficient structures that fail to adequately divide the functions and roles of medical care organizations. Accordingly, their functions overlap and all medical care organizations compete with each other, regardless of hospital type [7]. In addition, patients are concentrated in general hospitals in metropolitan areas despite having mild diseases and have to pay more for hospital services compared to clinic services. When evaluating diagnostic codes of outpatients according to hospital type, hypertension, diabetes mellitus, and acute upper respiratory tract infections, which are treatable in the primary healthcare setting, are the most frequent diseases treated in all hospital types. In addition, 44 tertiary hospitals (0.07% of medical care organizations) account for 23% of health insurance expenditure, and this percentage is increasing [8]. Thus, in October 2011, the government reformed the policy for the management of outpatients visiting general hospitals for the care of comparatively mild diseases. This policy raised the existing 30% coinsurance rate on prescription drug costs for 52 types of diseases to 50% in tertiary hospitals and 40% in general hospitals [5, 9].
In general, cost sharing, including coinsurance, copayment, and deductible, refers to any financing arrangement where the cost of the services used is supported in part by the user. The main objective is to prevent unnecessary utilization of health services and to stabilize insurance finances [10]. A further objective is to shift health care expenditures from public to private resources and secure additional finances to sustain the functioning of health services [11]. There have been many previous studies on cost sharing. The findings of the RAND Health Insurance Experiment and other studies of non-elderly insured populations reported that cost sharing reduced total health care spending and utilization without harming the health of individuals [12, 13]. However, some studies have reported higher cost sharing to be associated with adverse outcomes, particularly among vulnerable populations such as elderly and poor patients [14–16]. In studies that were not limited to patients with certain chronic illnesses, increased cost sharing was not found to be associated with increased number of outpatient visits, emergency department visits, or hospitalizations [17].
Previous research on cost sharing performed in South Korea found that low-income patients were more sensitive to cost sharing than high-income patients, and users of general hospitals were less sensitive to cost sharing than users of clinics [18]. Furthermore, another study suggested that cost sharing among the elderly had little effect on controlling health care utilization [10]. However, some studies have demonstrated that cost sharing decreases medical costs and visit days per outpatient [19, 20]. The results of studies regarding the effect of increasing the coinsurance rate of prescription drug costs—our policy of interest—were inconsistent [9, 21, 22]. In addition, few studies have considered individual characteristics, such as sex, age, and income, even though individual characteristics are important factors for healthcare utilization, particularly in South Korea due to free choice of medical care organizations and a payment system based on fee-for-service. Therefore, the purpose of the present study was to examine the impact of changing the coinsurance rate of prescription drug costs for 52 mild diseases on outpatient healthcare service utilization using nationally representative data.
The present study used National Sample Cohort data, including all medical claims, from 2010 to 2013 released by the National Health Insurance Service (NHIS), which consists of details of patient healthcare utilization. The data included approximately 100 million people sampled by sex, age, employment status (employed or self-employed), income, and individual total medical costs. Our study population was defined as outpatients aged 20–64 years who had visited medical care organizations more than once, both before and after the policy change, for the treatment of 52 diseases, including acute bronchitis, gastritis, duodenitis, and hypertension. These 52 diseases are classified according to the International Classification of Diseases groupings (ICD-10), and details regarding each type and its description are presented in the Additional file 1. Additionally, the present study included only National Health Insurance (NHI) beneficiaries, who were enrolled in health insurance provided by the public sector. Health insurance in South Korea is classified into either NHI or Medical Aid. Individuals whose single-family household income is less than $600 per month qualify for Medical Aid, while all others must join the NHI. Since NHI and Medical Aid have slightly differing copayment systems, we included NHI beneficiaries only. The present study was approved by the Institutional Review Board, Yonsei University Graduate School of Public Health (2014-239). The requirement for informed consent from patients was waived as patient information was anonymized prior to the study analysis.
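As a rough illustration of this cohort definition, a selection step along the following lines could be applied to a visit-level claims table; the column names (patient_id, age, insurance_type, visit_date, icd10) are assumptions about the extract layout, not the actual NHIS schema.

```python
import pandas as pd

POLICY_DATE = pd.Timestamp("2011-10-01")  # reform of the coinsurance rate

def build_cohort(claims: pd.DataFrame, mild_disease_codes: set) -> pd.DataFrame:
    """Keep NHI outpatients aged 20-64 treated for one of the 52 mild diseases,
    with at least one qualifying visit both before and after the policy change."""
    visits = claims[
        claims["age"].between(20, 64)
        & (claims["insurance_type"] == "NHI")        # exclude Medical Aid beneficiaries
        & claims["icd10"].isin(mild_disease_codes)   # 52 target diseases (Additional file 1)
    ]
    span = visits.groupby("patient_id")["visit_date"].agg(["min", "max"])
    eligible = span[(span["min"] < POLICY_DATE) & (span["max"] >= POLICY_DATE)].index
    return visits[visits["patient_id"].isin(eligible)]
```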
We used the proportion of general or tertiary hospital utilization, number of outpatient visits, and medical costs as dependent variables to reflect the shift of outpatients into hospitals or clinics from general or tertiary hospitals. All dependent variables were calculated in units of person–month. General or tertiary hospital utilization was defined as the proportion of general or tertiary hospital utilization among total healthcare utilization. The proportion of general or tertiary hospital utilization per month was calculated as (the number of outpatient visits into general or tertiary hospitals per person–month/the number of outpatient visits into total healthcare utilization per person–month) × 100. The numbers of outpatient visits and medical costs per person–month were analyzed by categorizing costs into general or tertiary hospital and hospital or clinic. Medical costs indicated the total costs of visiting physicians and prescription drugs. The monetary unit of medical costs was KRW, with 1000 KRW corresponding to approximately 1 US$.
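The per person-month aggregation described above can be sketched as follows, continuing the assumed column names from the previous snippet, with hospital_type and total_cost_krw likewise being illustrative names and visit_date assumed to be a datetime column.

```python
import pandas as pd

GENERAL_OR_TERTIARY = {"general", "tertiary"}

def person_month_outcomes(visits: pd.DataFrame) -> pd.DataFrame:
    """Per person-month: number of outpatient visits and medical costs (visit plus
    prescription drug costs, KRW) split by organization group, and the percentage
    of visits made to general or tertiary hospitals."""
    df = visits.copy()
    df["month"] = df["visit_date"].dt.to_period("M")
    df["is_big"] = df["hospital_type"].isin(GENERAL_OR_TERTIARY)
    df["cost_big"] = df["total_cost_krw"].where(df["is_big"], 0.0)
    df["cost_small"] = df["total_cost_krw"].where(~df["is_big"], 0.0)
    out = df.groupby(["patient_id", "month"]).agg(
        visits_big=("is_big", "sum"),
        visits_total=("is_big", "size"),
        cost_big=("cost_big", "sum"),
        cost_small=("cost_small", "sum"),
    )
    out["visits_small"] = out["visits_total"] - out["visits_big"]
    out["pct_big"] = 100.0 * out["visits_big"] / out["visits_total"]
    return out.reset_index()
```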
For the analysis of the relationship between the introduction of the policy and healthcare utilization, we adjusted for individual characteristics. Individual characteristics included age, sex, income, residence region, Charlson Comorbidity Index (CCI), and all-cause admission during the previous year. Demographic factors, including age, sex, income, and residence region, are known to be associated with health care utilization [23–25]. Further, health-related factors such as the CCI and a recent history of admission may affect the pattern of health care utilization [26–28]. Age in years was classified into five groups as follows: 20–29, 30–39, 40–49, 50–59, and 60–64. Regions were categorized into urban and rural. Income level was estimated using the average monthly health insurance premium. Individuals with NHI provided by their employer paid a monthly insurance premium according to annual salary, and those who were self-employed paid a premium according to their property value. Low-income was defined as the bottom 20 percentiles of health insurance premiums, middle-income was defined as the 20–80 percentiles of the premiums, and high-income was defined as the top 20 percentiles of premiums. The CCI was used to account for the effects of comorbid disorders or diseases. The CCI was calculated monthly according to Quan's methods [29]. Nineteen diseases were classified into scores of 1, 2, 3, and 6 [30]. The CCI was calculated from the sum of all scores and given extra scores in accordance with age. In the present study, the CCI was grouped as scores of 0, 1, 2, and 3 or over.
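A small sketch of this covariate coding is given below; the percentile cut-offs and group boundaries follow the text, while the function and column names are assumptions, and the Quan comorbidity weights themselves are taken as already summed into a monthly CCI score.

```python
import pandas as pd

def income_group(premium: pd.Series) -> pd.Series:
    """Bottom 20% of monthly premiums = low, 20-80% = middle, top 20% = high."""
    low, high = premium.quantile(0.20), premium.quantile(0.80)
    return pd.cut(premium, bins=[-float("inf"), low, high, float("inf")],
                  labels=["low", "middle", "high"])

def cci_group(cci_score: pd.Series) -> pd.Series:
    """Collapse the monthly Charlson Comorbidity Index into 0, 1, 2 and 3+."""
    return pd.cut(cci_score, bins=[-1, 0, 1, 2, float("inf")],
                  labels=["0", "1", "2", "3+"])

def age_group(age: pd.Series) -> pd.Series:
    """Five age bands used in the analysis (20-29 through 60-64)."""
    return pd.cut(age, bins=[19, 29, 39, 49, 59, 64],
                  labels=["20-29", "30-39", "40-49", "50-59", "60-64"])
```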
We examined the distribution of individual characteristics by analyzing their frequencies. Student's t-test was performed for the dependent variables (proportion of general or tertiary hospital utilization, number of outpatient visits, and medical costs) both before and after the introduction of the program. Segmented regression analysis of an interrupted time series was used to examine policy effects [31]. Our segmented regression analysis equation was:
$$Y_{it} = \beta_0 + \beta_1 \times \text{time}_t + \beta_2 \times \text{2011 policy} + \beta_3 \times \text{time after 2011 policy} + \beta_4 \times \text{season}_t + X_{it} + e_{it}$$
where
Y_it : dependent variables for each individual i in month t
time_t : a continuous variable beginning in January 2010
2011 policy : changing the coinsurance rate on prescription drug costs in November 2011, a binary variable (0 before; 1 after)
time after 2011 policy : a continuous variable beginning in November 2011
season_t : seasonality (spring, summer, fall, winter)
X_it : independent variables
e_it : error term
In the present analysis, the 2011 policy was treated as beginning in November 2011, allowing for a 1-month-lagged effect after implementation of the policy. For the segmented regression analysis, the Generalized Estimating Equation (GEE) approach was used. PROC GENMOD was used for the GEE with an identity link, normal distribution, and type = AR(1). Repeated measures were considered and the unit of analysis was the person-month. Subgroup analyses by income and sex were also performed. All statistical analyses were performed using SAS statistical software version 9.2. All calculated p-values were two-sided and considered statistically significant at p < 0.05.
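For readers who do not use SAS, the same model specification can be re-expressed, for illustration only, with Python's statsmodels GEE implementation; df is assumed to hold one row per person-month with the outcome, the three segmented-regression terms, season dummies and the individual covariates, and all variable names are assumptions mirroring the equation above.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_segmented_gee(df, outcome):
    """Segmented regression via GEE: identity link, normal (Gaussian) family,
    first-order autoregressive working correlation, clustered on individuals."""
    formula = (
        f"{outcome} ~ time + policy_2011 + time_after_policy + C(season)"
        " + C(age_group) + C(sex) + C(income_group) + C(region)"
        " + C(cci_group) + C(disability) + prior_admission"
    )
    model = smf.gee(
        formula,
        groups="patient_id",
        data=df,
        time=df["time"],                              # month index, used by the AR structure
        family=sm.families.Gaussian(),                # identity link, normal distribution
        cov_struct=sm.cov_struct.Autoregressive(),    # type = AR(1)
    )
    return model.fit()
```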
Table 1 shows the general characteristics of the study population. A total of 505,691 outpatients were included in the analysis. The highest proportion was in the 50–59 years old group at 131,556 (26.0%). There were 230,371 (45.6%) men and 275,320 (54.4%) women. More than half (59.7%) were middle-income earners, lived in an urban area (71.2%), and had 0 points on the CCI (64.4%). The majority of the study population had no disability (96.4%). A total of 48,922 (9.7%) outpatients were admitted during the previous year. Based on the number of outpatients, the most common disease was acute bronchitis, unspecified (ICD-10: J20.9, 335,686 outpatients, 66.4%). When viewed in terms of the total numbers of visits, patients with essential hypertension (ICD-10: I10) had the most visits to medical care organizations (1,638,083 cases, 13.6%).
General characteristics of the study population (Table 1). Characteristics were reported by income level (low, 0–20%; middle, 20.1–80%; high, 80.1–100% of health insurance premiums), Charlson Comorbidity Index, disability status (no, mild, or severe disability), all-cause admission during the previous year (admission or non-admission), and most frequent disease (ICD-10 code): acute bronchitis, unspecified (J20.9); acute tonsillitis (J03.0–J03.9); gastritis and duodenitis (K29.0–K29.9); acute upper respiratory infections of multiple and unspecified sites (J06.0–J06.9); and allergic contact dermatitis due to other agents or unspecified cause (L23.8, L23.9).
The trends of each dependent variable before and after the policy are shown in Table 2. The proportion of general or tertiary hospital utilization was 5.9% before the 2011 policy and 5.4% after the 2011 policy. The number of outpatient visits decreased in terms of general or tertiary hospital utilization per month but increased in hospital or clinic utilization after the 2011 policy (general or tertiary hospital utilization: 0.099 to 0.092, p < 0.0001; hospital or clinic utilization: 1.576 to 1.617, p < 0.0001). Outpatient medical costs also decreased for general or tertiary hospital utilization per month but increased in hospital or clinic utilization after the 2011 policy (general or tertiary hospital utilization: 9273.8 to 6316.4, p < 0.0001; hospital or clinic utilization: 44,935.1 to 46,206.1, p < 0.0001). The trends of each dependent variable per month are shown in Fig. 1.
The trends of each dependent variable before and after the 2011 policy (Table 2; unit: mean ± SD). Before the intervention (January 2010–September 2011), the percentage of general and tertiary hospital utilization was 5.921 ± 0.390, outpatient medical costs in general and tertiary hospitals were 9273.8 ± 542.8, and outpatient medical costs in hospitals and clinics were 44,935.1 ± 1242.6; after the intervention (October 2011–December 2013), outpatient medical costs in hospitals and clinics were 46,206.1 ± 976.2.
The trends of each dependent variable per month (Fig. 1)
Table 3 shows the results of the segmented regression analysis. The 2011 policy decreased the percentage of general or tertiary hospital utilization (β = −1.6184, p < 0.0001). After the 2011 policy, the number of outpatient visits decreased for general or tertiary hospitals (β = −0.0114, p < 0.0001) and increased for hospitals or clinics (β = 0.0580, p < 0.0001). However, the number of outpatient visits in hospitals or clinics exhibited a downward trend after the policy. Outpatient medical costs decreased in both medical care organizations (general or tertiary hospitals: β = −2913.4, p < 0.0001; and hospital or clinic utilization: β = −591.35, p <0.0001). This trend continued to decrease over time. The percentage of general or tertiary hospital utilization by outpatients with acute bronchitis recovered (β = 0.0167, p = 0.0010), while that by outpatients with essential hypertension decreased (β = −0.0154, p = 0.0029) over time after the 2011 policy began.
The results of the segmented regression analysis (Table 3). Estimates are reported for the 2011 policy and for the time after the 2011 policy, for the percentage of general or tertiary hospital utilization, the number of outpatient visits and outpatient medical costs in general or tertiary hospitals and in hospitals and clinics, and for specific diseases including essential hypertension (I10.0, I10.9).
*Adjusted for age, sex, income, region, CCI, disability, and all cause admission during the previous year
After the policy increasing the coinsurance rate began, the number of outpatient visits and outpatient medical costs in both types of medical care organizations showed the same trends in the subgroups as in the total population, regardless of income and sex (Table 4). General or tertiary hospital utilization, including both the number of outpatient visits and outpatient medical costs, decreased; however, hospital or clinic utilization did not increase to the same extent as the decrease in general or tertiary hospital utilization. Rather, outpatient medical costs decreased. In contrast to the subgroup analysis by income, the analysis by sex showed that the absolute values of the coefficients for time after the 2011 policy were higher in women than in men, indicating that women were more likely to be sensitive to the 2011 policy.
The results of the segmented regression analysis by income (Table 4). Estimates are reported separately for low-income, middle-income, and high-income outpatients, for general or tertiary hospital utilization and for hospital or clinic utilization.
*Adjusted for age, sex, region, CCI, disability, and all cause admission at previous year
+Adjusted for age, income, region, CCI, disability, and all cause admission at previous year
Medical costs per visit for the same diagnostic code are higher in larger hospitals and are 3–4 times higher in tertiary hospitals than in clinics [8]. Thus, if outpatients are concentrated in tertiary hospitals, health insurance finances come under an economic burden. In addition, since patients are less likely to visit clinics and small hospitals, the quality of care in clinics and small hospitals is reduced, which may lead to patient distrust of clinics and small hospitals. Patients will then be much less likely to visit clinics and small hospitals. Once this vicious cycle is repeated, financial difficulties, both in health insurance and in clinics or small hospitals, will further increase [32]. Tertiary hospitals may not be able to adequately perform their primary function of treating severe diseases. Accordingly, requiring outpatients presenting with the 52 mild diseases to pay different shares of prescription drug costs depending on the hospital type represents a potential method of resolving the above matters.
The findings of the present study indicate that changing the coinsurance rate on prescription drug costs was associated with changes in outpatient healthcare service utilization. The introduction of the 2011 policy decreased the number of outpatient visits in general or tertiary hospitals. The number of outpatient visits in hospitals or clinics increased with the introduction of the 2011 policy; however, it decreased over time after the 2011 policy. In addition, outpatient medical costs decreased in both general or tertiary hospitals and hospitals or clinics. Therefore, the 2011 policy, changing the coinsurance rate on prescription drug costs, partially shifted visits from general hospitals or tertiary hospitals to clinics.
Studies have reported that cost sharing reduces the use of various health services and the burden on health insurance [17]. The slower increase in outpatient medical costs for hospitals or clinics, despite the increase in the number of outpatient visits immediately after the 2011 policy, may be attributable to decreased pharmaceutical use, which could be caused by a reduction in unnecessary prescription drug treatment (i.e., lower treatment intensity), although the decision-making process on treatment intensity needs to be examined more carefully [33]. However, the effect of cost sharing is likely to involve side effects, as our results demonstrate that the number of outpatient visits for general or tertiary hospitals tended to increase while that for hospitals or clinics decreased over time after the 2011 policy [21]. There are several possible explanations for these results. First, increasing the coinsurance rate by 10 to 20 percentage points may have been insufficient to prevent patients from making excess visits to general or tertiary hospitals, and the effect of the cost-sharing policy may not have been maintained for a substantial period of time [34]. Second, increased cost sharing may be related to adverse events such as hospitalization and worsening clinical outcomes due to the decline in access to general or tertiary hospitals [33, 35]. Thus, the increase in hospitalization may have led to a decrease in outpatients, causing the number of outpatient visits to hospitals or clinics to decrease over time. Additionally, the present study demonstrates that both the number of outpatient visits and medical costs associated with hypertension decreased compared to other diseases after the 2011 policy in all medical care organizations. This observation may be explained in two ways. First, the policy increases the coinsurance rate for patients with specific diagnostic codes. Thus, outpatients with hypertension are able to continue visiting general and tertiary hospitals using other diagnostic codes, such as hypertensive heart disease (ICD-10: I11), rather than essential hypertension (ICD-10: I10). Alternatively, it is possible that hospitalization for hypertension increased over the study period [21].
We also examined, via subgroup analysis, whether the effect of the 2011 policy on healthcare utilization differed by income and sex, income being an important factor for cost-sharing policies. We observed that women were more sensitive to the 2011 policy, since the absolute values of the coefficients for time after its implementation were higher in women than in men; this result is consistent with previous studies [36, 37]. However, our study did not identify any direct evidence of a difference in healthcare utilization according to income level, despite previous reports that patients with lower income are more sensitive to cost-sharing policies [18, 38, 39]. In the present study, the increased coinsurance rate did not have a consistent differential effect across income groups, so we were unable to evaluate price elasticity or moral hazard.
As a result, the 2011 policy did not control healthcare utilization or health insurance finances in the long term. It will be important to monitor the effect of the increased coinsurance rate on prescription drug costs for the 52 diseases in the future. In addition, criteria for determining the primary diagnosis should be established to prevent substitution of diagnostic codes, and follow-up investigations are needed to continuously monitor patients and hospitals.
The present study has some strengths compared to previous studies. First, we used data from a nationally representative large sample, which reflects the overall medical information of South Koreans. Such data are especially helpful in establishing evidence-based health policies. Second, to our knowledge, few previous studies from South Korea have analyzed the association between policy introduction and healthcare utilization while accounting for individual characteristics. Although some studies have assessed policy effects, they have used the total sum of outpatient visits and medical costs per month rather than per person-month. Thus, we are able to provide more detailed information on the policy related to the coinsurance rate of prescription drug costs.
The present study also had some limitations related to data availability and methodology. First, there may have been other external factors, not considered in our study, that affected healthcare utilization. For example, in the case of hypertension, the South Korean government reformed the prices of existing drugs in April 2012 and revised guidelines restricting the prescription of antihypertensive drugs in January 2013. In addition, some individuals prefer general or tertiary hospitals over clinics irrespective of the coinsurance rate. Thus, our results require careful interpretation. Second, because the most frequently treated of the 52 diseases are influenced by seasonal, socio-economic and demographic characteristics and by personal health status, healthcare utilization may have been affected by these factors [18, 40, 41]. Third, detailed covariates specific to each disease were not adjusted for, since we analyzed all 52 diseases together, including both chronic and acute diseases; as each disease has different characteristics, additional covariates may be needed. Fourth, we could not assess disease severity directly, as the relevant information was not available in the present study, although we did include the CCI as a partial proxy for severity. Last, hospital characteristics, including the quality of the services offered, were not captured in our study, and hospital-level effects such as quality may have influenced healthcare utilization.
Our findings demonstrate that the introduction of the 2011 policy increasing the coinsurance rate on prescription drug costs decreased outpatient visits to general or tertiary hospitals, while outpatient medical costs decreased in all medical care organizations. Because we did not consider other external factors related to healthcare utilization in our analysis, it is not clear whether the decreased utilization of general or tertiary hospitals under the 2011 policy translated into increased demand for care at hospitals or clinics. This result indicates that a price policy intended to change healthcare service utilization behavior had limited effects on rebuilding the healthcare system or the functions of medical care organizations.
GEE: Generalized estimating equation
ICD: International Classification of Diseases
NHI: National Health Insurance
NHIS: National Health Insurance Service
OECD: Organisation for Economic Co-operation and Development
Both the earlier and the revised versions of this article were reviewed by a professional medical editing and proofreading service.
This study did not receive any external funding.
Data are available from the Korean National Health Insurance Service (NHIS); however, access is granted only to researchers who propose a study subject and plan using a standardized proposal form and are approved by the NHIS review committee. Details of the data release process are available at http://nhiss.nhis.or.kr/bd/ab/bdaba000eng.do. The authors received permission from the NHIS to access and use their records containing the health insurance claim data.
All authors had full access to all of the data in the study and took responsibility for the integrity of the data and the accuracy of the data analysis. HJL designed the study, researched data, performed statistical analyses and wrote the manuscript. SIJ and ECP contributed to the discussion and reviewed and edited the manuscript. ECP is the guarantor of this work and gave critical revision for important intellectual content. All authors read and approved the final manuscript.
This study was approved by the Institutional Review Board, Yonsei University Graduate School of Public Health (2014–239), and the use of National Sample Cohort data was approved by the Korean National Health Insurance Service (NHIS) review committee. Informed consent from patients was not required because the datasets are completely anonymized and contain no personal information of participants.
Additional file 1: 52 diseases to apply the cost-sharing policy. (DOC 119 kb)
Department of Public Health, Graduate School, Yonsei University, Seoul, Republic of Korea
Institute of Health Services Research, Yonsei University College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul, 120-752, Republic of Korea
Department of Preventive Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
1. Kwon S. Thirty years of national health insurance in South Korea: lessons for achieving universal health care coverage. Health Policy Plan. 2009;24(1):63–71.
2. OECD. OECD Health Statistics: How does Korea compare? OECD Publishing; 2014. http://www.oecd.org/els/health-systems/Briefing-Note-KOREA-2014.pdf.
3. OECD. Health at a Glance: OECD Indicators. OECD Publishing; 2013. http://dx.doi.org/10.1787/health_glance-2013-en.
4. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood). 2008;27(3):759–69.
5. Ministry of Health and Welfare. Plan on the re-establishment of functions of health care institutions. Seoul; 2011.
6. Lee J-H, Choi Y-J, Lee SH, Sung N-J, Kim S-Y, Hong JY. Association of the length of doctor-patient relationship with primary care quality in seven family practices in Korea. J Korean Med Sci. 2013;28(4):508–15.
7. Korea Institute for Health and Social Affairs. Trends and issues for improvement of Korea healthcare delivery system. 2014. http://repository.kihasa.re.kr:8080/handle/201002/13872.
8. Research Institute for Healthcare Policy. Study for the reformation healthcare delivery system - focusing on the role of tertiary hospitals. 2010. http://www.dbpia.co.kr/Journal/ArticleDetail/NODE02471088.
9. Lee PH. Outpatient clinical trends in mild disease in the last five years. Health Insurance Review & Assessment Service Policy Brief. 2013;7(6):67–79.
10. Kwon MKS. The effect of outpatient cost sharing on health care utilization of the elderly. J Prev Med Public Health. 2010;43(6):496–504.
11. Ros CC, Groenewegen PP, Delnoij DM. All rights reserved, or can we just copy? Cost sharing arrangements and characteristics of health care systems. Health Policy. 2000;52(1):1–13.
12. Hsu J, Price M, Brand R, Ray GT, Fireman B, Newhouse JP, Selby JV. Cost-sharing for emergency care and unfavorable clinical events: findings from the safety and financial ramifications of ED copayments study. Health Serv Res. 2006;41(5):1801–20.
13. Goodell S, Swartz K. Cost-sharing: effects on spending and outcomes. Policy Brief. 2010;20:42–5.
14. Zeber JE, Grazier KL, Valenstein M, Blow FC, Lantz PM. Effect of a medication copayment increase in veterans with schizophrenia. Am J Manag Care. 2007;13(6):335–47.
15. Trivedi AN, Moloo H, Mor V. Increased ambulatory care copayments and hospitalizations among the elderly. N Engl J Med. 2010;362(4):320–8.
16. Hartung DM, Carlson MJ, Kraemer DF, Haxby DG, Ketchum KL, Greenlick MR. Impact of a Medicaid copayment policy on prescription drug and health services utilization in a fee-for-service Medicaid population. Med Care. 2008;46(6):565–72.
17. Goldman DP, Joyce GF, Zheng Y. Prescription drug cost sharing: associations with medication and medical utilization and spending and health. JAMA. 2007;298(1):61–9.
18. Kim J, Ko S, Yang B. The effects of patient cost sharing on ambulatory utilization in South Korea. Health Policy. 2005;72(3):293–300.
19. Hong S-W. The effects of copayments on health services utilization in the type I medicaid beneficiaries. J Korean Acad Nurs Adm. 2009;15(1):136–46.
20. Ko S, Kim J, Yang B. The effect of out-of-pocket price on ambulatory utilization. Korean J Health Econ Policy. 2002;8(1):1–27.
21. Byeon J, Ghang H, Lee H. Differential cost-sharing and utilization of outpatients care by types of medical institutions. Korea Social Policy Review. 2014;21(2):35–55.
22. Kim H-J, Kim Y-H, Kim H-S, Woo J-S, Oh S-J. The impact of outpatient coinsurance rate increase on outpatient healthcare service utilization in tertiary and general hospital. Health Policy and Management. 2013;23(1):19–34.
23. Kim C-W, Lee S-Y, Hong S-C. Equity in utilization of cancer inpatient services by income classes. Health Policy. 2005;72(2):187–200.
24. Bertakis KD, Azari R, Helms LJ, Callahan EJ, Robbins JA. Gender differences in the utilization of health care services. J Fam Pract. 2000;49(2):147.
25. Schappert SM, Rechsteiner EA. Ambulatory medical care utilization estimates for 2006. Natl Health Stat Report. 2008;6(8):1–29.
26. Young BA, Lin E, Von Korff M, Simon G, Ciechanowski P, Ludman EJ, Everson-Stewart S, Kinder L, Oliver M, Boyko EJ. Diabetes complications severity index and risk of mortality, hospitalization, and healthcare utilization. Am J Manag Care. 2008;14(1):15–23.
27. Struijs JN, Baan CA, Schellevis FG, Westert GP, van den Bos GA. Comorbidity in patients with diabetes mellitus: impact on medical health care utilization. BMC Health Serv Res. 2006;6(1):84.
28. Valderas JM, Starfield B, Sibbald B, Salisbury C, Roland M. Defining comorbidity: implications for understanding health and health services. Ann Fam Med. 2009;7(4):357–63.
29. Quan H, Sundararajan V, Halfon P, Fong A, Burnand B, Luthi J-C, Saunders LD, Beck CA, Feasby TE, Ghali WA. Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. Med Care. 2005:1130–39.
30. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373–83.
31. Wagner AK, Soumerai SB, Zhang F, Ross-Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27(4):299–309.
32. Noh HK. Win-win growth of the medical profession: urgent tasks that can not be postponed for a moment. Vol. 11. Research Institute for Healthcare Policy, Korean Medical Association; 2013.
33. Kan M, Suzuki W. Effects of cost sharing on the demand for physician services in Japan: evidence from a natural experiment. Japan and the World Economy. 2010;22(1):1–12.
34. Schreyögg J, Grabka MM. Copayments for ambulatory care in Germany: a natural experiment using a difference-in-difference approach. Eur J Health Econ. 2010;11(3):331–41.
35. Heisler M, Langa KM, Eby EL, Fendrick AM, Kabeto MU, Piette JD. The health effects of restricting prescription medication use because of cost. Med Care. 2004;42(7):626–34.
36. Jang Y, Kim G, Chiriboga DA. Health, healthcare utilization, and satisfaction with service: barriers and facilitators for older Korean Americans. J Am Geriatr Soc. 2005;53(9):1613–7.
37. Lee S-YD, Tsai T-I, Tsai Y-W, Kuo KN. Health literacy, health status, and healthcare utilization of Taiwanese adults: results from a national survey. BMC Public Health. 2010;10(1):614.
38. Lostao L, Regidor E, Geyer S, Aïach P. Patient cost sharing and social inequalities in access to health care in three western European countries. Soc Sci Med. 2007;65(2):367–76.
39. Kiil A, Houlberg K. How does copayment for health care services affect demand, health and redistribution? A systematic review of the empirical evidence from 1990 to 2011. Eur J Health Econ. 2014;15(8):813–28.
40. Skriabikova O, Pavlova M, Groot W. Empirical models of demand for out-patient physician services and their relevance to the assessment of patient payment policies: a critical review of the literature. Int J Environ Res Public Health. 2010;7(6):2708–25.
41. Remler DK, Greene J. Cost-sharing: a blunt instrument. Annu Rev Public Health. 2009;30:293–311.
Space Exploration Stack Exchange is a question and answer site for spacecraft operators, scientists, engineers, and enthusiasts.
Are sun-synchronous orbits always North to South?
So someone told me that weather satellites orbit from north to south generally because that helps them to obtain a sun-synchronous orbit (his explanation for why this was true was too complicated for me to understand, something to do with inclinations, something retrograde, and the earth's relation to the sun, and more). At least, that's how I heard their comment.
But does this make any sense given that once a satellite reaches the South Pole it will just begin ascending in a south to north pattern? Was that person right and I just don't understand something about how satellites are described? Any explanation here is much appreciated. Thanks!
artificial-satellite orbit low-earth-orbit sun-synchronous
appleLover
Welcome to Space! "Explain X to me" questions should contain some evidence of research for several reasons. One is that it helps those who would consider answering have a better idea at what level to write and which specific issues to address, so that their answer doesn't have to be a book chapter in length. You might check Wikipedia, or just search this site by typing "sun-synchronous" in the search bar. Once you find something specific you'd like explained, you can edit your question and describe it.
You are correct in your understanding: once the satellite reaches the vicinity of the South Pole, specifically the southernmost point in its orbit, it will start going north.
The reason they said north to south is to differentiate it from the orbits going west to east (which by the way stay west to east).
This is also very neatly demonstrated by the satellite's ground track: The satellite alternates between going south and north.
(Source: tornado.sfsu.edu)
The north-south motion (or south-north) itself doesn't produce a sun-synchronous orbit. It's actually the deviation from straight north-south, coupled with the primary's (the planet or other body the satellite is orbiting) oblateness, that allows sun-synchronous orbits.
The wikipedia article on nodal precession is a good general source on this topic.
If a planet is located far from any significant gravitating object (such as the sun) and isn't rotating, gravity makes it assume a spherical shape. Orbits around a spherically symmetric object are very simple: they don't change shape, they don't change orientation, etc. But no planets are exactly spherical, and a planet's deviation from sphericity can make for interesting evolution of orbits around it.
If a planet is rotating, centrifugal force makes the equator bulge. That situation can be viewed as a somewhat smaller spherical mass, with the bulge being "extra mass" (enough to make the planet's actual mass) centered around the rotating planet's equator. If an object is orbiting with an inclination of, say, 45°, when over the northern hemisphere, that extra mass gently pulls the satellite toward the south. Eventually it gets to the equator, where it "crosses a node". Nodes are the places where the orbit intersects the equatorial plane. The ascending node is where the satellite crosses from the southern to the northern hemisphere, and the descending node is the opposite. Because of that southerly pull while over the northern hemisphere, the satellite reaches the descending node a bit earlier, farther to the west, than it would have were the planet purely spherical. The plane of its orbit has rotated westward!
While over the southern hemisphere, the pull of the bulge is northward, making it arrive at the ascending node even earlier, so the orbit plane has rotated even farther, in the same direction. The bulge (and the torque on the satellite arising from it) causes precession of the orbit plane.
The equation that relates the rate of that precession to the planet's and orbit's characteristics is $$\omega_p = -\frac{3}{2} J_2\frac{R_p^2}{p^2}\omega \cos i$$ where $\omega_p$ is the precession rate in radians per second, $J_2$ is a parameter describing the gravity field's deviation from spherical resulting from the bulge, $R_p$ is the planet's average radius, $p$ is the orbit's "semi-latus rectum", a parameter related to the orbit's size and eccentricity, $\omega$ is the orbiting object's average angular velocity around the primary ($2\pi$ radians divided by the orbit period), and $i$ is the orbit's inclination. The Wikipedia article gives a slightly different version of this equation, but they are equivalent.
For a qualitative description of precession you don't need to pay attention to most of that equation. If the orbit's eccentricity and size remain fixed, then everything to the left of $\cos i$ is constant. When $\cos i$ is positive (a prograde orbit, with $i$ less than 90°), the ascending node migrates westward (the "negative" direction), as in the qualitative example above.
But as Earth orbits around the sun, the direction from the center of Earth to the sun migrates eastward in a reference frame fixed to the stars (an "inertial" frame). To establish a sun-synchronous orbit, the inclination has to make $\cos i$ negative, reversing that westward precession direction. To make $\cos i$ negative, $i$ must be larger than 90° ($\frac{\pi}{2}$ radians), or retrograde—but only slightly.
If you plug all the parameter values into the equation and assume an object in low circular Earth orbit (circular LEO), $i$ winds up being somewhere around 97-98°, depending on the precise orbit altitude. This is only 7-8° away from straight north-south, so it is generally referred to as a polar orbit. But that 7-8° of retrograde component, the deviation from exactly polar, is critical for being sun-synchronous. Indeed, if the orbit is exactly polar, $i$ is 90° so $\cos i$ is zero, and no precession occurs.
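For a rough numerical cross-check, here is a minimal sketch of that calculation (the constants are standard published values for Earth, the 700 km circular orbit is just an example, and $p$ reduces to the semi-major axis $a$ for $e = 0$):

import math

MU   = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_E  = 6378.137e3          # Earth's equatorial radius, m
J2   = 1.08263e-3          # oblateness coefficient of Earth's gravity field
YEAR = 365.2422 * 86400.0  # one year, in seconds

def sun_sync_inclination(altitude_m):
    """Inclination (degrees) that makes a circular orbit at this altitude
    precess eastward once per year, i.e. sun-synchronous."""
    a = R_E + altitude_m                 # semi-major axis; p = a for a circular orbit
    n = math.sqrt(MU / a**3)             # mean motion (the omega in the formula), rad/s
    omega_p = 2.0 * math.pi / YEAR       # required nodal precession rate, rad/s
    cos_i = -omega_p / (1.5 * J2 * (R_E / a)**2 * n)
    return math.degrees(math.acos(cos_i))

print(round(sun_sync_inclination(700e3), 1))   # ~98.2 deg: roughly 8 deg past polar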
For orbits at higher altitudes $\omega$ is smaller, so to maintain $\omega_p$ at the sun-synchronous value $\cos i$ must have a larger negative value. This means its orbit inclination must be farther from exactly polar.
Tom Spilker
your equations are better looking than my equations ;-) space.stackexchange.com/a/34558/12102
great answer, but doesn't answer the question asked
@JCRM Read the first paragraph for context.
– Tom Spilker
I did. That sentence could equally apply to two classes of orbits, the north-south orbits and the south-north orbits.
This is an excellent description of sun-synchronous orbits both in the qualitative and quantitative sense. @JCRM is right that it doesn't answer the question since technically the question is just asking what is meant by a "north-south orbit." The correct answer is alluded to in your first sentence by essentially implying north-south and south-north are equivalent.
In addition to Hans' answer, these are the terms used in the spaceflight community to describe the orbits you're referring to.
Satellites that orbit "north to south" are called polar orbits. These are all orbits that place the satellite over the poles. These orbits have an inclination (angle between the orbit's plane and the equator) of about 90°.
A sun-synchronous orbit belongs to the group of polar orbits. Its inclination is slightly more than 90° (depending on the orbit's altitude). This ensures that the orbit stays in the same position relative to the sun. This means the satellite will pass over a given spot on Earth at the same time of day every day, which is valuable for some applications.
Hobbes
Activities by year
Each year, ICTP organizes more than 60 international conferences, workshops, and numerous seminars and colloquia.
Interested in attending an activity? Complete an online application form: Events in the scientific calendar that have an "smr" number require participants to apply online. Click on the activity and complete the online application form.
Have a question about an "smr" activity? Please contact the organizers at smrXXXX@ictp.it, where XXXX is the 4-digit number of the activity.
Wondering about the status of your application to an "smr" activity? Access your profile in the ICTP portal to check.
CALL FOR 2021 ACTIVITIES: Want to propose a conference, school or workshop? See our guidelines.
External organizations can pay for and organize their own high-level scientific and cultural events at ICTP. Details for these "Hosted Activities" are available in the Logistic Guidelines for Hosted Activities (pdf download).
Travel fellowships for ICTP conferences and workshops are available.
Download a pdf of the 2020 Scientific Calendar.
Understanding virulence remains a central problem in human health, pest control, disease ecology and evolutionary biology. Bacterial virulence is typically quantified by phenomenological indicators such as the LT50 (i.e. the time taken to kill 50% of an infected population). However, virulence emerges as a result of complex processes that occur at different stages: the pathogen needs to breach the primary host defenses, find a suitable environment to replicate, and finally express the virulence factors that cause lethality. It is well-known that pathogens exhibit a very broad spectrum of strategies to accomplish these three tasks, yet phenomenological indicators such as the LT50 cannot distinguish the ability of the pathogen to invade the host from its ability to kill the host. Here, we propose a physical host-pathogen theory that shows how to disentangle colonization, growth, and pathogen lethality from the survival kinetics of a host population. Experimental data from Caenorhabditis elegans nematodes exposed to various human pathogens shows that host mortality becomes severe only once the pathogen population has reached its carrying capacity within the host. In the talk, I will present our theory and compare predictions against experimental data.
@ ICTP
QLS Seminar: Disentangling bacterial invasiveness from lethality in an experimental host-pathogen system
Room: Central Area, 2nd floor, old SISSA building, Via Beirut
Speaker(s): Tommaso Biancalani - Massachusetts Institute of Technology
Support E-Mail: [email protected]
Seminar Start Time: 11:30
Strongly correlated quantum systems exhibit a wide range of phases with unconventional behavior. These phases are characterized by non-trivial global entanglement patterns and cannot be described within the Landau paradigm due to their lack of local order parameters. In my talk, I will discuss how quantum information theory allows us to describe such systems in a way which reconciles their global entanglement with a local description, based on the framework of tensor networks. I will show how tensor networks allow to capture both the structure of the physical interactions as well as global topological entanglement within a unified local description, and how this allows us to build a comprehensive framework to study topologically ordered systems and their excitations. I will then discuss applications of this framework: First, I will show how it allows to characterize the precise nature of topological spin liquids; and second, I will discuss how it can be used to explain topological phase transitions driven by anyon condensation through phases in their entanglement, allowing us to devise measurable order parameters for anyon condensation and thus to study topological phase transitions at a microscopic level.
Joint ICTP/SISSA Statistical Physics Seminar - Topological Order and Tensor Networks: A Local Perspective on Global Entanglement
Room: Leonardo Building - Luigi Stasi Seminar Room
Speaker(s): Norbert SCHUCH (MPI fuer Quantenoptik, Garching, Germany)
The laws of thermodynamics can be extended to the nanoscale, where fluxes are fluctuating quantities. Little is known about extreme-value statistics of thermodynamic fluxes characterising the most extreme deviations from the average behaviours. Using Martingale theory, we study statistics of the negative records of stochastic entropy production in nonequilibrium steady states, and derive universal inequalities for such distributions of records. Furthermore, we explore the implications of our results in two non-equilibrium nanoscopic systems: single-electron transistors and molecular motors. We report on the experimental measurement of stochastic entropy production and of records of negative entropy in a metallic double dot under a constant external DC bias. Experimental results on the double dot confirm our theory and reveal a novel bound for the maximal heat that a mesoscopic system can absorb from its environment. We also explore our results in active biological processes and find predictions for the maximal excursion of a molecular motor against the direction of an external force.
QLS Seminar: Records of Entropy Production at the Nanoscale
Speaker(s): Edgar Roldan, Max Planck Institute for the Physics of Complex Systems, Dresden, Germany
Fitness is a macroscopic quantity that characterizes the potential success of a growing population of organisms and cells. Fitness is also a major target or objective for controlling growth and expansion of pathological bacteria and tumors by perturbing properties of individual cells such as replication rate, death rate and so on. Better understanding of the nature of growing populations and the controllability over them requires us to clarify how the fitness is determined by these single-cell or single-organism level properties. To this end, we investigate a multi-type age-structured population model in which the inter-division interval and phenotypic states of cells are accounted for. By using a path integral formulation of the model and its contraction onto the pairwise empirical quantities, we derive a response relation of the fitness to changes in several single-cell-level parameters. This work is an extension of our previous work with Dr. Yuki Sughiyama [1].
[1] Y. Sughiyama, T.J. Kobayashi, et al., Pathwise thermodynamic structure in population dynamics, Phys. Rev. E, 2015.
QLS Seminar: Fitness response relation of growing population: an application of large deviation theory and semi-Markov processes
Address: Via Beirut
Room: Central Area, 2nd floor, old SISSA building
Speaker(s): Tetsuya J. Kobayashi - Institute of Industrial Science, University of Tokyo
Abstract: In an ecological system pathogens often need to share their host with other pathogens, and therefore compete for the resources with different spreading strategies. They can interact in different manners: for a short period, cooperation between pathogens can lead to faster and larger host occupation [1-7]. Spanish Flu and HIV are examples of such cases. The cooperation, however, can lead to death of the host population and consequently also pathogens' death. Therefore, in the long run, the cooperation strategy is not necessarily the best. We propose and study an evolutionary game model in order to understand the co-evolutionary dynamics of two co-infecting pathogens [8]. They have a common host and the host does not evolve on the same time scale as the pathogens. We consider two kinds of disease species, each of which has two different strategies: cooperation and defection. Agents (pathogens) accumulate a payoff based on the history of their contagion records. This gives rise to two main scenarios. The first is when the disease infects an empty host: in this scenario, the pathogen does not meet any resistance, and all the host resources are available to it. But when the host is already occupied by another disease, things become a bit more complicated. More specifically, in the second scenario there are four possible combinations of pairs of strategies: (C, C), (C, D), (D, C) and (D, D), corresponding to different payoffs. A cooperator pathogen does not show any resistance to the contagion by another disease and will share the host resources with it. However, a defector entering a host populated by a cooperator will seize the majority of available resources. Considering the Hawk-Dove game, in a mean-field approximation, we first show under which conditions cooperation may or may not be a meaningful strategy. Then we show how evolution affects spreading dynamics, and if and how any strategy can win. Finally, we show how underlying transmission and contact networks may promote both the spreading and the emergence of cooperation. Moreover, we show non-trivial dynamical effects of cooperation and competition in an ecological framework [9, 10]; we study two strains competing with each other for host resources in the presence of a third pathogen cooperating with both of them. We treat dynamics in a homogeneously mixed population by means of mean-field theory and stability analysis, and also on complex transmission networks. We study the impact of cooperation on the outcome of the two-pathogen competition, which can be quantified in terms of dominance of one competing pathogen or the co-circulation of both of them. We show that the presence of a third cooperating pathogen can alter the outcome of competition as it may favor the more cooperative pathogen over the more infectious one.
[1] L. Chen, F. Ghanbarnejad, W. Cai, P. Grassberger, EPL 104, 5 (2013).
[2] W. Cai, L. Chen, F. Ghanbarnejad, P. Grassberger, Nature Physics 11, 936 (2015).
[3] P. Grassberger, L. Chen, F. Ghanbarnejad, W. Cai, Physical Review E 93, 042316 (2016).
[4] J. P. Rodríguez, F. Ghanbarnejad, V. M. Eguíluz, Frontiers in Physics, V 5, P 46 (2017).
[5] S. Sajjadi, F. Karimi, M. R. Ejtehadi, F. Zarei, S. Moghimi-Araghi, F. Ghanbarnejad, "Coinfection in different time scales", manuscript in preparation.
[6] L. Chen, F. Ghanbarnejad, D. Brockmann, New J. Phys. 19, 103041 (2017).
[7] J. P. Rodríguez, F. Ghanbarnejad, V. M. Eguíluz, "How mobility affect cooperative spreading diseases", manuscript in preparation.
[8] F. Ghanbarnejad, K. Seegers, A. Cardillo, P. Hoevel, "Evolutionary cooperation, yes or no?", manuscript in preparation.
[9] F. Pinotti, F. Ghanbarnejad, P. Hoevel, C. Poletto, "How to compete in presence of a cooperative agent", manuscript in preparation.
[10] S. Meloni, F. Ghanbarnejad, Y. Moreno, "Coinfection on multilayer networks", manuscript in preparation.
QLS Seminar: Cooperation vs. Competition in an Evolutionary Ecological Framework
Room: Central Area, 2nd floor, SISSA Building
Speaker(s): Fakhteh Ghanbarnejad - Institute of Theoretical Physics, Technical University of Berlin, Berlin, Germany
Recent experiments on large chains of Rydberg atoms [H. Bernien et al., arXiv:1707.04344] have demonstrated the possibility of realizing 1D systems with locally constrained Hilbert spaces, along with some surprising signatures of non-ergodic dynamics, such as persistent oscillations following a quench from the Neel product state. I will argue that this phenomenon is a manifestation of a "quantum many-body scar", i.e., a concentration of extensively many eigenstates of the system around special many-body states. The special states are analogs of unstable classical periodic orbits in the single-particle quantum scars. I will present a model based on a single particle hopping on the Hilbert space graph, which quantitatively captures the scarred wave functions up to large systems of 32 atoms. These results suggest that scarred many-body bands give rise to a new universality class of quantum dynamics, which opens up opportunities for creating and manipulating novel states with long-lived coherence in systems that are now amenable to experimental study.
Joint ICTP/SISSA Statistical Physics Seminar: Quantum Many-body Scars and Non-ergodic Dynamics in the Fibonacci Chain
Speaker(s): Zlatko PAPIC (University of Leeds, School of Physics & Astronomy, Leeds, U.K.)
Abstract: In this talk we will discuss questions about the boundedness and continuity of classical and fractional maximal operators acting on Sobolev spaces and spaces of functions of bounded variation.
On the regularity of maximal operators
Speaker(s): Jose Ramon Madrid Padilla, ICTP
We propose to realize one-dimensional topological phases protected by SU(N) symmetry using alkali or alkaline-earth atoms loaded into a bichromatic optical lattice. We derive a realistic model for this system and investigate it theoretically. Depending on the parity of N, two different classes of symmetry-protected topological (SPT) phases are stabilized at half-filling for physical parameters of the model. For even N, the celebrated spin-1 Haldane phase and its generalization to SU(N) are obtained with no local symmetry breaking. In stark contrast, at least for N=3, a new class of SPT phases, dubbed chiral Haldane phases, that spontaneously break inversion symmetry, emerge with a two-fold ground-state degeneracy. The latter ground states with open-boundary conditions are characterized by different left and right boundary spins which are related by conjugation. Our results show that topological phases are within close reach of the latest experiments on cold fermions in optical lattices. arXiv:1709.10409
ICTP Seminar Series in Condensed Matter and Statistical Physics: "Haldane" Phases with Ultracold Fermionic Atoms in Double-Well Optical Lattices
Speaker(s): Pierre M.A. FROMHOLZ (Dept. de Physique, Univ. de Cergy-Pontoise, France)
Abstract. In this talk I will discuss the non-linear infrared effects that plague Eulerian Perturbation Theory for Large Scale Structure. Understanding and resumming these contributions is essential to achieve a detailed description of galaxy correlation functions around the BAO scale. In particular I will present a resummation scheme called IR-resummation and show how it works for the 2- and 3-point correlation functions of matter.
IR-Resummation in the Effective Field Theory of Large Scale Structure
Room: Leonardo Building - Euler Lecture Hall
Speaker(s): Gabriele TREVISAN (CCPP, New York University)
In this talk I will motivate the interest for studying SU(N) quantum magnetism, and present three recent results on: i) a microscopic model exhibiting SU(N) chiral spin liquids and their characterization, ii) the phase diagram of SU(N) two-leg spin ladders and iii) finite temperature "phase diagrams" of SU(N) Heisenberg models on two-dimensional lattices.
Joint ICTP/SISSA Statistical Physics Seminar: SU(N) Quantum Magnetism in 1D and 2D
Speaker(s): Andreas LAUCHLI (Inst. for Theoretical Physics, Univ. Innsbruck, Austria)
Abstract: Let γ : M × [0,T) → R^{n+1} be a family of smooth immersions of an n-dimensional manifold M evolving by mean curvature flow. When M is compact, it is well known that the mean curvature flow is defined up to a finite singular time T at which the curvature of the hypersurface becomes unbounded. For instance, if the initial surface is convex, the evolving surfaces become spherical and contract to a point. On the other hand, if the initial surface is not convex, the evolution may become singular without shrinking. The study of singularities for the mean curvature flow is of great interest, but in the last decades several methods have also been developed to continue the flow beyond the first singular time. Huisken and Sinestrari considered a new approach based on a surgery procedure to extend the mean curvature flow after singularities. This construction was inspired by a procedure originally introduced by Hamilton for the Ricci flow, which played a fundamental role in the work by Perelman to prove the Poincaré and Geometrization conjectures. The results about singularities and surgery for the mean curvature flow were obtained for manifolds of codimension 1. In this case the comparison principle is fundamental, which, among other things, ensures that embeddedness is preserved. In contrast, the comparison principle cannot be applied for higher codimension manifolds, making the study of such evolution more difficult. In this talk I will introduce the types of singularities as well as the procedure of surgery for mean curvature flow. We are going to discuss some new results concerning the simplest case in higher codimension, space curves.
Singularities of mean curvature flow and surgery
Speaker(s): Karen CORRALES ESCALONA, ICTP
Strings and Higher Dimensional Theories
Joint ICTP-SISSA String Seminars: Constraining Particle Physics from Quantum Gravity Conjectures
Speaker(s): Irene VALENZUELA (Utrecht University, Netherlands)
Joint ICTP-SISSA String Seminars: Large Field Inflation and the Weak Gravity Conjecture
Speaker(s): Arthur HEBECKER (Heidelberg University)
The main objective of the Forum is to bring together the main partners for realising international laboratories in South East Europe, combining capacity building with bringing nations together following the CERN model. Among them are designers of the projects, potential future users from the region, scientists and engineers from universities and national laboratories, representatives of industry, and of government agencies. In introductory presentations the importance of international cooperation for science and training in the region will be illuminated. Two large projects from which the region could benefit particularly will be presented. Their concepts have been worked out by two groups of international experts and they are based on most modern technological developments enabling cutting-edge research in many domains relevant to society. One is a synchrotron light source providing radiation research possibilities in many domains from physics to biology, material science, environmental science and even archaeology and medicine. The users' community would come mainly from university faculties and also industry. The second project is a facility for cancer research and treatment with heavy particles (protons and carbon nuclei), which would also enable biomolecular research including experiments with small animals. For both facilities a number of options will be presented, and it is hoped that the discussions will provide indications as to which choices would be most interesting for the region. Training of scientists, engineers and technicians is considered to be important from the beginning. Since the realisation of these projects will require several years, this time will be extensively used to train experts in order to build up expertise, i.e. to form a sufficient critical mass of staff members for operation of the machines as well as to create users' communities. Thus these opportunities will be mainly for the benefit of the young generation, although they will also serve to reverse the brain drain. Since the general know-how and technology transfer will be of particular interest for the region, several presentations will consider how this can be achieved in the most efficient way. Two special activities instigated by the projects may trigger developments for the whole region. These would be the development of powerful digital networks and the large data handling necessary to transfer data from the central laboratory to the users. Setting up such a network for oncologists might be of particular interest. Another activity reaching beyond the particular projects considered would be the installation of solar power panels to reduce the operating cost for electricity of the facilities, boosting the general development of alternative energy production in the region.
GRANTS: a limited number of grants are available to support the attendance of selected participants, with priority given to participants from South East Europe. There is no registration fee. Requests for contributions to the session "Short presentations of interest to participate in IIST (International Institute for Sustainable Technologies)", on the morning of 26 January, should be made in the online form (comment section), indicating also the title of the contribution.
Visit to the Elettra synchrotron light source facility and the FERMI free electron laser facility in Trieste (afternoon of January 26): participants interested in the visit to the Elettra and FERMI facilities are also invited to indicate it in the online form (comment section).
DEADLINES FOR REQUESTING PARTICIPATION: 15 December 2017 (firm deadline if financial support is required); 14 January 2018 (if no financial support is required).
OUTCOME OF THE SELECTION: Applicants requesting financial support will be informed via individual e-mail/letter by 22 December. Applicants not requesting financial support will be informed as soon as possible via individual e-mail/letter. The ICTP will remain closed during the Winter break from 23 December 2017 to 7 January 2018, inclusive.
Forum on New International Research Facilities for South East Europe | (smr 3251)
Address: Via Grignano, 9 I - 34151 Trieste (Italy)
Room: Kastler Lecture Hall (AGH)
Organizer(s): Organising Committee: Herwig Schopper (Chairman, former DG of CERN), Fernando Ferroni (President of INFN), Christoph Quitmann (Director of MAXIV, Sweden), Nicholas Sammut (Deputy Dean, University of Malta), Hans J. Specht (Heidelberg Univ., former DG of GSI), Ruediger Voss (President of EPS); Local Organisers: Nadia Binggeli, Saša Ivanovic (MNA)
Cosponsor(s): Ministry of Science of Montenegro, United Nations Educational, Scientific and Cultural Organization (UNESCO), International Atomic Energy Agency (IAEA), European Physical Society (EPS), Fondazione Internazionale Trieste
Secretary: Stanka Tanaskovic
Support E-Mail: [email protected]
A new algorithm to solve the TDDFT equations in the space of the density fitting auxiliary basis set has been developed and implemented in ADF [1]. The TDDFT equations are recast to a non-homogeneous linear system, whose size is much smaller than in the Casida formulation, allowing to calculate a wide portion of the absorption spectrum for large systems. The method extracts the spectrum from the imaginary part of the polarizability at any given photon energy, avoiding the bottleneck of Davidson diagonalization. The original idea which made the present scheme very efficient consists in the simplification of the double sum over occupied-virtual pairs in the definition of the dielectric susceptibility, which allows an easy calculation of such matrix as a linear combination of constant matrices with photon energy dependent coefficients. The method has been applied to very different systems in nature and size (from H2 to [Au309]-) [2,3]. In all cases, the maximum deviations found for the excitation energies with respect to the Casida approach are below 0.2 eV. The new algorithm has the merit to calculate the spectrum at whichever photon energy but also to allow a deep analysis of the results, in terms of Transition Contribution Maps [4], plasmon scaling factor analysis [5], induced density analysis, and with a fragment projection analysis [6]. Circular Dichroism of large systems becomes also affordable [7].
References
1. O. Baseggio, G. Fronzoni and M. Stener, J. Chem. Phys., 2015, 143, 024106.
2. O. Baseggio, M. De Vetta, G. Fronzoni, M. Stener, L. Sementa, A. Fortunelli and A. Calzolari, J. Phys. Chem. C, 2016, 120, 12773.
3. A. Fortunelli, L. Sementa, V. Thanthirige, T. Jones, M. Stener, K. Gagnon, A. Dass, G. Ramakrishna, J. Phys. Chem. Lett., 2017, 8, 457.
4. S. Malola, L. Lehtovaara, J. Enkovaara and H. Häkkinen, ACS Nano 2013, 7, 10263.
5. S. Bernadotte, F. Evers, C. R. Jacob, J. Phys. Chem. C 2013, 117, 1863.
6. L. Sementa, G. Barcaro, O. Baseggio, M. De Vetta, A. Dass, E. Aprà, M. Stener, A. Fortunelli, J. Phys. Chem. C, submitted.
7. O. Baseggio, D. Toffoli, G. Fronzoni, M. Stener, L. Sementa, A. Fortunelli, J. Phys. Chem. C, 2016, 120, 24335.
ICTP Seminar Series in Condensed Matter and Statistical Physics: A New Efficient Time Dependent Density Functional Algorithm for Large Systems: Theory, Implementation and Plasmonics Applications
Speaker(s): Mauro STENER (Dipt. Scienze Chimiche e Farmaceutiche, Univ. di Trieste)
By the eigenstate thermalization hypothesis (ETH), a highly excited energy eigenstate behaves like a thermal state. It is related to the black hole information paradox by the AdS/CFT correspondence. I will talk about ETH in two-dimensional large central charge CFT and compare the excited state of a primary operator with the thermal state. To define ETH precisely, one needs to know how similar, or equivalently dissimilar, the excited state and thermal state are. I will talk about short interval expansions of the entanglement entropy, relative entropy, Jensen-Shannon divergence. For the canonical ensemble, the excited state and thermal state are the same at the leading order of large central charge and are different at the next-to-leading order. I will also discuss briefly ETH for generalized Gibbs ensemble, and ETH for the descendant excited states.
@ SISSA, Via Bonomea 265, Room 128
Joint ICTP/SISSA Statistical Physics Seminar: Eigenstate Thermalization Hypothesis in Two-Dimensional Large Central Charge CFT
Speaker(s): Jia-Ju ZHANG (University of Milano Bicocca, Italy)
An ICTP PREPARATORY SCHOOL (29 January - 2 February 2018) will be organized the week before the Winter College (5-16 February 2018) for a limited number of selected participants. The Preparatory School will provide background tutorials and exercises designed to help the participants in following the College lectures. The aim of the Winter College is to offer Ph.D. students a broad training on extreme light science: from ultra short and ultra intense laser pulse generation, to attosecond and Free Electron Laser (FEL) technology, focusing on applications of attosecond pulse generation in atomic and molecular physics, photo-chemistry and nanoscience, and the application of extreme light sources to matter-radiation interactions in general. Lectures on Free Electron Lasers (FELs) will be accompanied by visits to the FERMI FEL and the Elettra Synchrotron Facility located in Trieste. In addition to lectures, there will be tutorials on simulation tools for ultrashort pulse and nonlinear optics modelling as well as hands-on laboratory sessions. Posters are encouraged from all participants and prizes will be sponsored by the International Society for Optics and Photonics (SPIE).
Topics:
• Ultrashort laser pulse generation and characterization
• Ultrafast non-linear optics and ultrabroadband parametric amplification
• Novel low-cost mode-locked laser systems
• Principles of attosecond technology: generation and metrology
• XUV optics for attosecond beamlines
• Attosecond electron dynamics in atoms and molecules
• Attosecond spectroscopy of nanoparticles and nanostructures
• Strong field physics phenomena and high-intensity laser technologies
• Free Electron Lasers: physics, properties and recent advances
HOW TO APPLY: To apply to the Preparatory School and Winter College, please click on the "Apply here" link in the left menu of this page. Please note that it is highly recommended that only candidates NOT holding a PhD or without an appropriate background should apply to the Preparatory School. Acceptance will be at the discretion of the Organizers. To apply to the Winter College only, please click on the "Apply here" link of the following page: http://indico.ictp.it/event/8295/
DEADLINE: 20 October 2017
Preparatory School and Winter College on Extreme Non-linear Optics, Attosecond Science and High-field Physics | (smr 3186)
Room: Giambiagi Lecture Hall (AGH)
Secretary: Federica Delconte
Organizer(s): Francesca Calegari (IFN-CNR, Milan - Italy and DESY, Hamburg - Germany), David Blaschke (University of Wroclaw - Poland and NRNU MEPhI Moscow - Russia), Miltcho Danailov (Elettra-Sincrotrone, Trieste – Italy), Zhiyi Wei (Institute of Physics, Chinese Academy of Sciences, Beijing - China), Local Organiser: Joseph Niemela
Materials are crucial to scientific and technological advances and industrial competitiveness, and to tackle key societal challenges, from energy and environment to healthcare, information and communications, manufacturing, safety and transportation. Computational science, with the current accuracy and predictive power, can play a very relevant role in this respect by boosting materials design and discovery. In this scenario, scientific objectives and computational technological perspectives are nowadays more intertwined than ever. The MaX Centre (Materials Design at the Exascale), one of the nine Centres of Excellence for computing applications funded by the EU since 2015, has been acting along two core lines, namely, high-performance computing (HPC), in view of the transition to exascale architecture, and high-throughput computing (HTC), with the final aim of building a materials modelling ecosystem. The MaX Centre takes the opportunity of its general conference to share with the scientific community its current efforts and future perspectives, as well as to underline the state of the art of both the scientific and the technological innovation which can disruptively impact on material discovery in the next decade. This conference aims at gathering scientists active in the field of materials modelling, together with HPC and HTC experts, to discuss the most recent advancements in the field, including, but not limited to:
• Advances in high performance computing for materials science
• New avenues from data analytics/artificial intelligence in materials science
• High throughput computing for materials discovery
• Trends in high performance computing and codesign towards exascale
• Novel algorithms for first principles simulations
Confirmed Invited Speakers include: Stefano Baroni, SISSA (IT); Luca Benini, Università di Bologna (IT) & ETH (CH); Stephan Bluegel, FZ-Juelich (DE); Pietro Bonfà, CINECA (IT); Luigi Brochard, Lenovo HPC (FR); Roberto Car, Princeton University, NJ (USA); Ivan Carnimeo, SISSA (IT); Juan Carrasquilla, Perimeter Institute (CA); Carlo Cavazzoni, CINECA (IT); Alessandro Curioni, IBM (CH); Thierry Deutsch, CEA-Inac (FR); Massimiliano Fatica, NVIDIA (USA); Luca Grisanti, SISSA (IT); Geoffroy Hautier, UCL-NAPS (BE); Juerg Hutter, University of Zurich (CH); Karsten Jacobsen, Technical University of Denmark (DK); Anton Kohzenvnikov, CSCS (CH); Boris Kozinsky, Harvard University (USA); Lin Lin, University of California, Berkeley, CA (USA); Stephan Mohr, BSC (ES); Pablo Ordejon, icn2 (ES); Dirk Pleiter, Juelich Supercomputing Center (DE); Deborah Prezzi, CNR Nano (IT); Maria Clelia Righi, Unimore (IT); Davide Sangalli, CNR-ISM (IT); Stefano Sanvito, Trinity College Dublin (IE); Marivi Fernandez-Serra, Stony Brook University (USA); Filippo Spiga, ARM (USA); Thomas Sterling, Indiana University (USA); Leopold Talirz, EPFL (CH); David Wilkins, EPFL (CH).
MaX Conference on the Materials Design Ecosystem at the Exascale: High-Performance and High-Throughput Computing | (smr 3161)
Address: Strada Costiera, 11 I - 34151 Trieste (Italy)
Room: Budinich Lecture Hall (LB)
Secretary: Erica Sarnataro
Organizer(s): Stefano Baroni (SISSA), Elisa Molinari (CNR-NANO), Sandro Scandolo (ICTP), Local Organisers: Ralph Gebauer, Ivan Girotto
Cosponsor(s): MAX EU Centre of Excellence
ICTP's Salam Distinguished Lecture Series is an annual presentation of talks by renowned, active scientists. The aim is to showcase important research developments as well as provide a visionary forward view. The lecture series is supported by the Kuwait Programme at ICTP.
Alan Guth is the Victor F. Weisskopf Professor of Physics and a Margaret MacVicar Faculty Fellow at the Massachusetts Institute of Technology. Trained in particle theory at MIT, Guth held postdoc positions at Princeton, Columbia, Cornell, and SLAC (the Stanford Linear Accelerator Center) before returning to MIT as a faculty member in 1980. His work in cosmology began at Cornell, when fellow postdoc Henry Tye persuaded him to study the production of magnetic monopoles in the early universe. Using standard assumptions, they found that far too many would be produced. Continuing this work at SLAC, Guth discovered that the magnetic monopole glut could be avoided by a new proposal which he called the inflationary universe. Guth's honors include ICTP's Dirac Prize, the Breakthrough Prize in Fundamental Physics, and the 2014 Kavli Prize in Astrophysics. Guth is still busy exploring the consequences of inflation. He has also written a popular-level book called "The Inflationary Universe: The Quest for a New Theory of Cosmic Origins" (Addison-Wesley/Perseus Books, 1997).
There will be 3 lectures, on 29, 30 and 31 January 2018, start time: 17.00 hrs.
Lecture I: "Inflationary Cosmology: Is Our Universe Part of a Multiverse?"
Abstract: Inflationary cosmology gives a plausible explanation for many observed features of the universe, including its uniformity, its mass density, and the patterns of the ripples that are observed in the cosmic microwave background. Beyond what we can observe, most versions of inflation imply that our universe is not unique, but is part of a possibly infinite multiverse. I will describe the workings of inflation, the evidence for inflation, and why I believe that the possibility of a multiverse should be taken seriously.
Lecture II: "Eternal Inflation and its Implications"
Abstract: This lecture will further explore the connection between inflation and the multiverse. I will describe the mechanism of inflation in more detail, showing why most versions lead to eternal inflation: once inflation starts, it never completely stops, but instead the inflating region grows forever, producing "pocket universes" ad infinitum. Eternal inflation is in some ways very attractive, because, for example, it offers a possible explanation for why the energy density of the vacuum is so incredibly small. But it also leads to the "measure problem": how does one define probabilities in an infinite system in which any allowed event is expected to occur an infinite number of times?
Lecture III: "Infinite Phase Space and the Two-Headed Arrow of Time"
Abstract: One of the unsolved mysteries of physics is the arrow of time: the laws of physics make no distinction between the future and the past, but in our experience they are entirely different. The arrow of time can be identified with the growth of entropy, but what caused the entropy to be lower in the past? I will describe a speculative picture which shows how an arrow of time can develop naturally, provided that the available phase space is infinite, even in a system with time-reversible laws of physics, and with no special initial conditions. I will also discuss the alternative possibility that the phase space available to the universe is finite, arguing that this assumption leads to serious cosmological problems.
Salam Distinguished Lectures 2018: Inflationary Cosmology: Is Our Universe Part of a Multiverse?
Address: Strada Costiera 11, 34151 Trieste Italy
Room: Leonardo Building - Budinich Lecture Hall
Speaker(s): Prof. Alan H. Guth, Center for Theoretical Physics, Massachusetts Institute of Technology, USA
Europe/Rome We present a new method to compute Renyi entropies in one-dimensional critical systems, using the mapping of the Nth Renyi entropy to a correlation function involving twist fields in a ZN cyclic orbifold. When the CFT describing the universality class of the critical system is rational, so is the corresponding cyclic orbifold. It follows that the twist fields are degenerate : they have null vectors. From these null vectors a Fuchsian differential equation is derived, although this step can be rather involved since the null-vector conditions generically involve fractional modes of the orbifold algebra. The last step is to solve this differential equation and build a monodromy invariant correlation function, which is done using standard bootstrap methods. This method is applicable in a variety of situations where no other method is available, for instance when the subsystem A is not connected (e.g. two-intervals EE). SISSA, Via Bonomea 265, room 128 ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: Entanglement Entropies of 1d Critical Systems, Orbifold and Null-vectors
Speaker(s): Benoit ESTIENNE (UPMC Paris, France)
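For reference (standard conventions, not taken from the speaker's abstract): the Nth Rényi entropy of a subsystem A is defined as
  S_N(A) = \frac{1}{1-N} \log \mathrm{Tr}\, \rho_A^N ,
and for an interval A = [u,v] in a 1+1d CFT the replicated trace maps to a two-point function of twist fields in the Z_N cyclic orbifold,
  \mathrm{Tr}\, \rho_A^N \;\propto\; \langle \mathcal{T}_N(u)\, \bar{\mathcal{T}}_N(v) \rangle ,
where the twist fields have scaling dimension \Delta_N = \frac{c}{12}\left(N - \frac{1}{N}\right).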
Europe/Rome Abstract. Current and upcoming experimental efforts to unveil the details of the inflationary epoch have primordial non-Gaussianity as one of the main targets. Indeed, detecting a deviation from Gaussianity in the statistics of primordial perturbations would shed light on the interactions between the degrees of freedom active during inflation. Deviations of the CMB spectrum from a black-body are a powerful probe of the three-point function of curvature perturbations. Dissipation of acoustic waves in the photon- electron-baryon fluid heats the plasma: the heating is not balanced by an appropriate change in photon number, and a Bose-Einstein spectrum is formed. A non-zero three-point function of curvature perturbations makes the heating rate spatially dependent, so that the observed chemical potential in the sky will be anisotropic and correlated with large-scale temperature fluctuations. In this talk I discuss how the angular correlation of temperature anisotropies and spectral distortions is insensitive to contamination from late-time projection effects (unlike other observables like the CMB temperature bispectrum), making it an excellent, albeit futuristic, probe of primordial non- Gaussianity. ICTP ICTP [email protected]
CMB Spectral Distortions and Primordial non-Gaussianity
Speaker(s): Giovanni CABASS (Max-Planck Institute for Astrophysics, Garching, Germany)
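As background to the abstract above (standard definitions, not part of the announcement): local-type primordial non-Gaussianity is usually parametrized as
  \zeta = \zeta_g + \frac{3}{5} f_{NL} \left( \zeta_g^2 - \langle \zeta_g^2 \rangle \right),
which gives the curvature bispectrum
  B_\zeta(k_1,k_2,k_3) = \frac{6}{5} f_{NL} \left[ P_\zeta(k_1) P_\zeta(k_2) + \text{2 perms} \right];
the \mu T cross-correlation discussed in the talk probes this bispectrum in the squeezed limit k_3 \ll k_1 \approx k_2.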
Europe/Rome Abstract: Let K be a number field and A/K an abelian surface. By the Mordell-Weil theorem, the group of K-rational points on A is finitely generated and as for elliptic curves, its rank is predicted by the Birch and Swinnerton-Dyer conjecture. A basic consequence of this conjecture is the parity conjecture: the sign of the functional equation of the L-series determines the parity of the rank of A/K. Under suitable local constraints and finiteness of the Shafarevich-Tate group, we prove the parity conjecture for principally polarized abelian surfaces. We also prove analogous unconditional results for Selmer groups. ICTP ICTP [email protected]
Parity of ranks of abelian surfaces
Speaker(s): Céline MAISTRET, University of Bristol
Europe/Rome The aim of the Winter College is to offer Ph.D. students a broad training on extreme light science: from ultra short and ultra intense laser pulse generation, to attosecond and Free Electron Laser (FEL) technology, focusing on applications of attosecond pulse generation in atomic and molecular physics, photo-chemistry and nanoscience, and the application of extreme light sources to matter-radiation interactions in general. Free Electron Lasers (FELs) will be accompanied by visits to the FERMI FEL and the Elettra Syncrotron Facility located in Trieste. In addition to lectures, there will be tutorials on simulation tools for ultrashort pulse and nonlinear optics modelling as well as hands-on laboratory sessions. An ICTP PREPARATORY SCHOOL will be organized the week before the College (from 29 January to 2 February 2017) for a limited number of selected participants. Posters are encouraged from all participants. Prizes sponsored by: • the International Commission for Optics (ICO) • the International Society for Optics and Photonics (SPIE) • the Optical Society (OSA) Topics: • Ultrashort laser pulse generation and characterization • Ultrafast non-linear optics and ultrabroadband parametric amplification • Novel low-cost mode-locked laser systems • Principles of attosecond technology: generation and metrology • XUV optics for attosecond beamlines • Attosecond electron dynamics in atoms and molecules • Attosecond spectroscopy of nanoparticles and nanostructures • Strong field physics phenomena and high-intensity laser technologies • Free Electron Lasers: physics, properties and recent advances Lecturers: 1) Orazio Svelto (Politecnico di Milano, Milan, Italy) 2) Cristian Manzoni (IFN-CNR, Milan, Italy) 3) Marc Vrakking (MBI, Berlin, Germany) 4) Fernando Martin (Universidad Autonoma de Madrid, Madrid, Spain) 5) Luca Poletto (IFN-CNR, Padova, Italy) 6) Katalin Varjú ( Extreme Light Infrastructure, Szeged, Hungary) 7) Luca Giannessi (ENEA-Frascati and Elettra-Sincrotrone Trieste, Italy) 8) Claudio Maschiovecchio (Elettra-Sincrotrone Trieste, Italy) 9) Kevin Prince (Elettra-Sincrotrone Trieste, Italy) 10) Martina dell'Angela (CNR-IOM, Trieste, Italy) 11) Hirofumi Yanagisawa (MPQ, Garching, Germany) 12) Antonino Di Piazza (MPIK, Heidelberg, Germany) 13) Felix Karbstein (Helmholtz Institute Jena, Jena, Germany) 14) Gihan Kamel (SESAME, Jordan) 15) Thomas Cowan (Helmholtzzentrum Dresden-Rossendorf, Germany) 16) Caterina Vozzi (INF-CNR, Milan, Italy) Additional Information from the Marie Curie Library: OSA and SPIE are granting free access to their full text online databases to all ICTP Winter College on Optics participants. To gain access you need to log-in on ICTP intranet wireless line (credentials in your badge) and visit: - OSA Publishing https://www.osapublishing.org/ - SPIE Digital Library http://spiedigitallibrary.org/ (in case of connection problems please check SETUP REQUIREMENTS) The Marie Curie Library is available for any further assistance in consulting these or other resources. http://library.ictp.it/ ICTP ICTP [email protected]
5 Feb 2018 - 16 Feb 2018
Winter College on Extreme Non-linear Optics, Attosecond Science and High-field Physics | (smr 3184)
Organizer(s): Francesca Calegari (IFN-CNR, Milan - Italy and DESY, Hamburg - Germany), David Blaschke (University of Wroclaw - Poland; JINR Dubna and NRNU Moscow - Russia), Miltcho Danailov (Elettra-Sincrotrone, Trieste – Italy), Zhiyi Wei (Institute of Physics, Chinese Academy of Sciences, Beijing - China), Local Organiser: Joseph Niemela
Cosponsor(s): Optical Society of America (OSA), International Society for Optics and Photonics (SPIE), International Commission for Optics (ICO), European Optical Society (EOS), Società Italiana di Ottica e Fotonica (SIOF), International Society on Optics Within Life Sciences (OWLS)
Europe/Rome Irreversibility, which is usually quantified by the entropy production, is one of the most fundamental concepts in thermodynamics, with deep scientific and technological consequences. It is also an emergent concept, that stems from the complex interactions between a system and its environment. However, as will be discussed in this talk, the standard theory of entropy production breaks down in the quantum case, in particular in the limit of zero temperature. Motivated by this, I will present recent results which overcome these difficulties using the idea of phase space entropy measures for bosonic systems. As I will show, our theory not only overcomes the zero temperature limitations but also allows one to extend the results to deal with non-equilibrium reservoirs. As an application, we will consider squeezed thermal baths, which are instance of a grand-canonical Generalized Gibbs Ensemble and therefore allow us to construct an Onsager transport theory, akin to the theory of thermoelectricity. Finally, I will also discuss how entropy production emerges from the perspective of the environment and the system environment correlations. SISSA, Via Bonomea 265, room 128 ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: Measures of Irreversibility in Quantum Phase Space
Speaker(s): G. LANDI (University of Sao Paulo, Brazil)
Abstract: see file below.
Gerstenhaber structure of a class of special biserial algebras
Speaker(s): Andrea Solotar (Universidad de Buenos Aires)
A standard textbook picture of photovoltaic/thermoelectric/fuel cells (PTF) and biological engines (e.g. proton pumps) assumes a direct transformation of light, heat or chemical energy into electric current. However, this scheme is inconsistent with the basic principles of electrodynamics and thermodynamics. To solve this problem, the mechanism of collective electric-charge self-oscillations fed by a constant energy supply has been proposed. A simple analog system - a steam engine used to propel the so-called "putt-putt boat" - is used to illustrate the physics of work generation in PTF. The main new prediction of the proposed theory is the emission of electromagnetic radiation by PTF in the THz or IR region, or conversely, resonant stimulation of PTF by electromagnetic oscillations. Remarkably, both phenomena have recently been observed in photovoltaic devices based on organic materials, but were treated only as auxiliary effects enhancing their efficiency.
Condensed Matter Seminar: Self-Oscillations in Photovoltaic/Thermoelectric/Fuel Cells and Biological Engines
Speaker(s): Robert ALICKI (Institute of Theoretical Physics and Astrophysics, Univ. of Gdansk, Poland)
Europe/Rome Abstract: Hochschild (co)homology spaces are homological invariants of associative algebras that are useful to describe several properties of the concerned algebra. In this talk we will focus on geometric regularity. If the algebra is commutative and finitely generated, then Hochschild homology can be used to decide whether the associated algebraic variety is regular or not. In the non-commutative case, the meaning of all this is not so clear. In this talk I will first describe Hochschild (co)homology and the results that are known in the commutative setting, and I will comment afterwards on the non-commutative case. ICTP ICTP [email protected]
BASIC NOTIONS SEMINAR - Hochschild (co)homology and geometric regularity
Europe/Rome The School has the goal of teaching participating scientists about modern computer hardware and programming to provide a foundation for future computational research using High Performance Computing (HPC). Participants will go through an intensive programme with a focus on practical skills. School participants will learn to improve the efficiency of their research codes, and to parallelize them. Lectures on a selection of technical aspects of modern HPC hardware will be mixed with introductions to widely used parallel programming tools and libraries. The hands-on sessions will allow participants to practice on small example problems of general scientific interest. Example topics will cover numerical methods and parallel strategies, as well as data management. The programme specifically addresses the needs of scientists using, writing, or modifying HPC applications, and will not assume, require, or provide significant IT and HPC resource management skills. It will be mainly based on fundamental HPC-relevant features in widely used scientific software for high-performance computing: • Computer architectures for HPC and how to optimize for them • Parallel programming tools (MPI & OpenMP) • Portable, flexible and parallel I/O (HDF5) • Parallel programming best practices • Floating-point math • High-performance libraries for the solution of common math problems NOTES: • The School is organized in collaboration with the CADING-Red Group. • Accommodation and meals are covered at the campus for all selected participants. DEADLINE FOR APPLICATION: expired on 26 November 2017 -------------------------------------------------------------- ICTP Secretariat contact: [email protected] - Mexico ICTP [email protected]
@ - Mexico
Latin American Introductory School on Parallel Programming and Parallel Architecture for High Performance Computing | (smr 3187)
Address: CINVESTAV and ININ Carretera México-Toluca Km 38.5, Salazar, Ocoyoacac, 52740 Estado de México, MEXICO
Secretary: Nicoletta Ivanissevich
Organizer(s): Jaime Klapp (ININ and Cinvestav-Abacus), Isidoro Gitler (Cinvestav-Abacus), Leonardo Sigalotti (UAM-Azcapotzalco), Elí Santos Rodríguez (Mesoamerican Centre for Theoretical Physics), Marcela Cruchaga (Universidad de Santiago de Chile), ICTP Scientific Contact: Ivan Girotto
Cosponsor(s): EPCC, Cinvestav, Abacus Cinvestav, ININ, UAM-Azcapotzalco, MCTP, CONACYT, CYTED, CADING
**DEADLINE: 26/11/2017**
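As a flavour of the MPI-based parallel programming covered by the school, the following minimal sketch (illustrative only, not taken from the school material; it assumes the mpi4py package on top of an MPI installation) sums the integers 0..999 across several processes:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this process
size = comm.Get_size()   # total number of processes

# Each rank sums a strided slice of the range; rank 0 collects the total.
local_sum = sum(range(rank, 1000, size))
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"total computed by {size} ranks: {total}")

# run with, e.g.:  mpirun -np 4 python sum_example.py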
New analytic approaches for the conformal bootstrap
Speaker(s): Aninda SINHA (Centre for High Energy Physics, IISC, Bangalore, India)
--- Please note last minute change in venue!!! ---
In this talk I will discuss the motion of a tracer particle driven by an external constant force through a quiescent lattice gas. Due to the interaction between the tracer and the bath particles, here modelled as an exclusion process, the driven tracer reaches a steady state when the external force and the friction exerted by the bath balance each other. The steady state is characterised by a broad non-equilibrium inhomogeneity of the bath density surrounding the driven tracer, yielding a rich variety of behaviours. I show that, depending on the effective dimension of the lattice, the driven tracer exhibits transport ranging from sub-diffusive to strongly super-diffusive in the limit of a high density of bath particles. Moreover, when more than one driven tracer is present, the external and friction forces mediate an anisotropic attractive interaction between the tracers, leading to the formation of clusters. I will show through numerical results that this scenario extends to continuous-space and continuous-time dynamics. SISSA, Via Bonomea 265, room 128
Joint ICTP/SISSA Statistical Physics Seminar: 'Driven Tracer in Quiescent Baths: Anomalous Diffusion and Induced-Interaction'
Speaker(s): Carlos MEJIA MONASTERIO (Univ. Politecnica de Madrid, Spain)
Phenomenology of Particle Physics
Hawking Genesis
Speaker(s): John MARCH-RUSSELL (Rudolf Peierls Centre for Theoretical Physics, Oxford, UK)
Abstract: Estimates of the capacity of compact sets in geometric terms are of great interest and appear naturally both in geometry and physics (e.g. in electrostatics and in physical descriptions of flows where the Laplace equation is used). I will discuss capacity inequalities involving the total mean curvature of hypersurfaces with boundary in convex cones and the mass of asymptotically flat manifolds with non-compact boundary. In fact, I will present analogues of Pólya-Szegő, Alexandrov-Fenchel and Penrose type inequalities in this setting.
Capacity estimates and rigidity of domains with corners
Speaker(s): Tiarlos NOGUEIRA, Universidade Federal de Alagoas
Abstract: In this talk we will discuss the L¹-Liouville property for positive, superharmonic functions, providing evidence that its validity relies on geometric conditions localized on large enough portions of the space. Along the way, we present examples in any dimension showing that the L¹-Liouville property is strictly weaker than the stochastic completeness of the manifold. The main tool in our investigation is the potential theory of a manifold with boundary subject to Dirichlet boundary conditions. This is joint work with Professors Stefano Pigola and Alberto G. Setti (Università degli Studi dell'Insubria - Italia).
On the Dirichlet Parabolicity and the L¹-Liouville Property
Speaker(s): Leandro PESSOA, Universidade Federal do Piauí
Europe/Rome Anna Gagliardo earned a degree in Biology and a PhD in Animal Behaviour at the University of Pisa (Italy) studying the role of olfaction in pigeon navigation. She is currently researcher at the Biology Department (University of Pisa), where she studies the mechanisms underlying birds' navigation, with a particular interest on the role of olfaction in avian navigation. Her research interest also focuses on the neural basis of pigeon navigation and the cognitive processes underlying spatial behaviours in birds. Abstract: Forty years ago Papi and colleagues observed that anosmic pigeons failed to find their way home when released from unfamiliar locations. They explained the dramatic impact on homing by developing the olfactory navigation hypothesis. Pigeons at the home loft learn to associate different odour profiles with the wind direction they arrive from. Once at a release site, they determine the direction of displacement on the basis of the odours perceived locally, and based on that information, compute a homeward bearing. Experimental evidence pointed out that the olfactory map becomes redundant within a familiar area, where visual cues constitute an alternative source of information allowing navigation in olfactory deprived pigeons. Some older research hinted at an important role of olfactory cues for the navigation of nesting swifts and starlings, that displayed homing impairments when made anosmic. More recently, new satellite technologies allowed us to track wild birds subjected to olfactory manipulation after displacement from their breeding site or from their migratory route. Tracking experiments in wild species provided evidence consistent with olfactory navigation and visual landmark-based navigation in unfamiliar and familiar areas, respectively. The talk will be livestreamed from the ICTP website (ictp.it/livestream). Light refreshments will be served after the event. ICTP ICTP [email protected]
ICTP Colloquium on Olfactory navigation and familiar visual landmark-based navigation in birds: from homing pigeons to wild species
Address: Strada Costiera 11 34151 Trieste Italy
Speaker(s): Prof. Anna Gagliardo, Department of Biology, University of Pisa, Italy
Colloquium Start Time: 16:30
Scaling laws in ecology are recurrent and pervasive patterns observed in ecosystems, understood both as functional relationships among ecologically relevant quantities and as the probability distributions that characterize their occurrence. Well-known examples include the Species-Area relationship (SAR), quantifying the increase of biodiversity with ecosystem area, and Kleiber's law, the allometric relation between organismic size and metabolic rate. The interest in these laws lies in their intrinsic predictive power: how many species would go extinct if the ecosystem shrinks to half its size? What is the mass of the largest organisms inhabiting ecosystems of different extent? Are there more large-sized or small-sized organisms and species? Scaling laws observed empirically often conform to power laws of the form y = B x^a, where a is the scaling exponent. Although their functional form appears to be ubiquitous, empirical scaling exponents may vary with ecosystem type and resource supply rate. While ecological laws have often been studied independently, simple heuristic reasonings show that they are linked. Such reasonings, however, do not allow accounting for finite-size effects, which restrict the range of power-law behavior in finite ecosystems due to ecological or biological constraints on organismic size, or for other deviations from pure power laws. These limitations demand a different approach. The ubiquity of power laws and the presence of finite-size constraints suggest finite-size scaling theory as a useful tool in this context. A scaling hypothesis for the joint probability distribution of abundance and body mass of species inhabiting an ecosystem of finite size is proposed and used to derive macroecological patterns. The hypothesis is supported by a broad class of stochastic models of resource-limited community dynamics. Precise linkages among ecological laws were derived from the proposed scaling hypothesis, in the form of algebraic relationships among scaling exponents. Such relationships rationalize the observed variability of ecological exponents across ecosystems, clarifying how changes in one ecological pattern affect the remaining ones. Predicted covariations were verified on empirical data. This model-free approach allows investigating the effects of different ecological or biological assumptions on the covariation of scaling exponents.
QLS Seminar: A Finite-size Scaling Framework Uncovers the Covariations of Ecological Scaling Laws
Speaker(s): Silvia Zaoli - École polytechnique fédérale de Lausanne (EPFL)
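For orientation (textbook forms, not taken from the abstract): the two scaling laws cited above are commonly written as the species-area relationship S \propto A^{z} (S = number of species, A = ecosystem area) and Kleiber's allometric law B \propto M^{3/4} (B = metabolic rate, M = body mass), with the caveat, stressed in the talk, that empirical exponents vary across ecosystems.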
Europe/Rome ICTP and SISSA, in collaboration with the partner institutions of the Master in the Physics of Complex Systems (http://www.polito.it/pcs) will organize the Spring College on the Physics of Complex Systems from 19 February to 16 March 2018. Many complex systems in physics, biology, engineering and economics are characterized by a large number of interacting degrees of freedom giving rise to a non-trivial collective behavior. The theoretical and computational tools for a quantitative analysis of complex systems are often rooted in modern (statistical, quantum) physics. The Spring College on the Physics of Complex Systems aims to give students the opportunity to get in touch with a selection of topics at the forefront of research during an intensive 4-week program. It consists of 5 courses of 9 lectures each, followed by final written tests. Invited Lecturers: Antonio Celani (ICTP, Trieste) Maurizio Fagotti (ENS, Paris) Chris Mathys (SISSA, Trieste) Mario Nicodemi (U. of Naples Federico II) Angelo Rosa (SISSA, Trieste) Gregory Schehr (LPTMS, Orsay Cedex) Applicants from all countries that are members of the United Nations, UNESCO or IAEA may attend. Participants should have an adequate working knowledge of English. Candidates are required to solicit recommendation letters to be sent by their supervisors in support of their application. As a rule, travel and subsistence expenses of the participants should be borne by their home institutions. However, limited funds are available to partially support some participants, who are nationals of, and working in, a developing country, and who are not more than 45 years old. Such support is limited and candidates should make every effort to secure support for their travel, at least partial. Attendance of lectures and completion of the final written tests are mandatory for those who are granted ICTP funding. There is no registration fee. ICTP ICTP [email protected]
Spring College on the Physics of Complex Systems | (smr 3189)
Organizer(s): Andrea Gambassi (SISSA), Silvio Franz (LPTMS Orsay), Alessandro Pelizzola (Politecnico di Torino), Local Organiser: Matteo Marsili
Cosponsor(s): International Master, Physics of Complex Systems (i-PCS), Politecnico di Torino, SISSA, Université Pierre et Marie Curie, Université Paris Diderot, Laboratory of excellence Physics: Atoms, Light, Matter (PALM), Université Franco Italienne, Université Paris-Saclay
Europe/Rome The Tan's contact is an ubiquitous quantity in systems with zero-range interactions: it corresponds for example to the average interaction energy, to the weight of the tails of the momentum distribution function at large momenta, to the inelastic two-body loss rate, just to cite a few. We focus on strongly interacting one-dimensional bosons at finite temperature under harmonic confinement. As it is associated to short-distance correlations, the calculation of the Tan's contact cannot be obtained within the Luttinger-liquid formalism. We derive the Tan's contact by employing an exact solution at infinite interactions, as well as a local-density approximation on the Bethe Ansatz solution for the homogeneous system and numerical ab initio calculations for finite interactions. In the limit of infinite interactions, we demonstrate its universal properties, associated to the scale invariance of the model. We then obtain the full scaling function for arbitrary interactions. ICTP ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: Tan's Contact for a Strongly Interacting One-dimensional Bose Gas in Harmonic Confinement: Universal Properties and Scaling Functions
Speaker(s): Anna MINGUZZI (LPMMC, Univ. Grenoble-Alpes and CNRS, Grenoble, France)
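For reference (standard definition; normalization conventions vary): the Tan's contact C is the weight of the universal large-momentum tail of the momentum distribution,
  n(k) \;\to\; \frac{C}{k^4} \quad (k \to \infty),
which is why it simultaneously controls the short-distance correlations, the interaction energy and the two-body loss rates mentioned in the abstract.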
(Joint work with Wael Bahsoun and Chris Bose)
Quenched decay of correlations for slowly mixing systems
Speaker(s): Marks RUZIBOEV, Loughborough University
Hosted Activity
@ Trieste - Italy
SIDA ANNUAL REVIEW MEETINGS | (smr H552)
Organizer(s): TWAS - OWSD
Gromov's conjecture on non-free isometric immersions
Speaker(s): Mahuya DATTA, Indian Statistical Institute, Kolkata
Europe/Rome Jennifer Thomson (PhD Rhodes) is Emeritus Professor in the Department of Molecular and Cell Biology at the University of Cape Town. She held a post-doctoral fellowship at Harvard, was Associate Professor in Genetics at the University of the Witwatersrand, visiting scientist at MIT, and Director of the Laboratory for Molecular and Cell Biology for the CSIR, before becoming Head of the Department of Microbiology at UCT in 1988. She won the L'Oreal/UNESCO prize for Women in Science for Africa in 2004 and has an Honorary Doctorate from the Sorbonne University. Her research field is the development of genetically modified maize resistant to the African endemic maize streak virus and tolerant to drought. She has published three books on Genetically Modified Organisms: Genes for Africa, Seeds for the Future, and Food for Africa, and is a frequent speaker at international meetings, including the World Economic Forum and the United Nations. She is a member of the board (previously Chair) of the African Agricultural Technology Foundation (AATF), based in Nairobi and Vice-Chair of ISAAA (International Service for the Acquisition of AgriBiotech Applications). She serves on the National Advisory Council on Innovation of the South African Minister of Science and Technology. She is the President of the Organisation for Women in Science for the Developing World (OWSD) and chairs the South African chapter. She is a newly elected fellow of TWAS. Abstract: The year 2015 marked the 20th year of the commercialisation of genetically modified (GM) crops. During the period from 1996 to 2014, the global hectarage of these crops increased 100-fold, making it the fastest adopted crop technology in recent times. The overall economic gains from these crops have been estimated to be USD133.4 billion over the period from 1996 to 2013, and have been divided roughly 50% each to farmers in developed and developing countries. The environmental benefits include contributing to the practice of minimal till agriculture and a decrease in the use of pesticides. But what are the downsides of this technology? I will look at some of the problems related to weeds becoming resistant to glyphosate (the main ingredient that is used on herbicide tolerant crops), how these can be overcome and whether glyphosate can cause cancer. I will also discuss the problem of insects becoming resistant to the toxins that are used in insect resistant crops and how these are being addressed. I will then consider GM crops that are in the pipeline of benefit to developing countries and whether any of these are likely to be commercialised in the foreseeable future. The talk will be livestreamed from the ICTP website (ictp.it/livestream). Light refreshments will be served after the event. ICTP ICTP [email protected]
ICTP Colloquium on the Pros and Cons of Genetically Modified Crops
Speaker(s): Prof. Jennifer Thomson, Department of Molecular and Cell Biology, University of Cape Town, South Africa
Europe/Rome Abstract. I will present two topics of research in our group related to synthetic topological quantum matter [1]: (i) topological phases in 3D optical lattices, more specifically a proposal for experimental realization of Weyl semimetals in ultracold atomic gases [2], and (ii) anyons [3,4]. I will present one possible route to engineer anyons in a 2D electron gas in a strong magnetic field sandwiched between materials with high magnetic permeability, which induce electron-electron vector interactions to engineer charged flux-tube composites [3]. I will also discuss intriguing concepts related to extracting observables from anyonic wavefunctions [4]: one can show that the momentum distribution is not a proper observable for a system of anyons [4], even though this observable was crucial for the experimental demonstration of Bose-Einsten condensation or ultracold fermions in time of flight measurements. I will show how time of flight measurements can be used to extract anyonic statistics [4]. [1] N. Goldman, G. Juzeliunas, P. Ohberg, I. B. Spielman, Rep. Prog. Phys. 77, 126401 (2014). [2] Tena Dubček, Colin J. Kennedy, Ling Lu, Wolfgang Ketterle, Marin Soljačić, Hrvoje Buljan, Weyl points in three-dimensional optical lattices: Synthetic magnetic monopoles in momentum space, Phys. Rev. Lett. 114, 225301 (2015). [3] M. Todorić, D. Jukić, D. Radić, M. Soljačić, and H. Buljan, The Quantum Hall Effect with Wilczek's charged magnetic flux tubes instead of electrons, arXiv:1710.10108 [cond-mat.str-el] [4] Tena Dubček, Bruno Klajn, Robert Pezer, Hrvoje Buljan, Dario Jukić, Quasimomentum distribution and expansion of an anyonic gas, Phys. Rev. A 97, 011601(R) (2018)- ICTP ICTP [email protected]
Condensed Matter and Statistical Physics Seminar: Engineering Synthetic Gauge Fields, Weyl Semimetals, and Anyons
Speaker(s): Hrvoje BULJAN (Dept. of Physics, University of Zagreb, Croatia)
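As background (standard textbook statement, not from the abstract): in two dimensions the exchange of two identical particles may produce any phase,
  \psi(r_2, r_1) = e^{i\theta}\, \psi(r_1, r_2),
with \theta = 0 for bosons, \theta = \pi for fermions, and intermediate values of \theta defining anyons; the flux-tube construction of [3] attaches magnetic flux to charges to realise such statistics.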
Europe/Rome Abstract. Reliable estimates of the allowed range for axion couplings to photons, nucleons and electrons are of major importance for determining the viable axion mass window as well as to focus experimental axion searches. We show that in a class of generalized DFSZ axion models with generation dependent Peccei-Quinn charges the axion couplings to nucleons and electrons can be simultaneously suppressed. Astrophysical limits from the SN1987A burst duration and from white dwarf cooling can therefore be relaxed, and as a consequence for such an astrophobic axion a mass window up to O(0.1) eV remains open. Since the axion-photon coupling remains sizeable, the proposed IAXO helioscope will become crucial to search for axions of this type. An unavoidable consequence of astrophobia are flavor off-diagonal axion couplings at tree-level, so that experimental limits on flavor-violating processes can also provide a powerful tool to constrain this scenario. The astrophobic axion can be a viable dark matter candidate in the heavy mass window, and can also account for anomalous energy loss in stars. ICTP ICTP [email protected]
The Astrophobic Axion
Speaker(s): Enrico NARDI (INFN, Laboratori Nazionali di Frascati)
Europe/Rome We study the XXZ spin chain in the presence of a slowly varying magnetic field gradient. First, it is shown that a local density approximation perfectly captures the ground-state magnetization profile. Furthermore, we demonstrate how the recently introduced technique of curved-spacetime CFT yields a very good approximation of the entanglement profile. Finally, the front dynamics is also studied after the gradient field has been switched off. SISSA, Via Bonomea 265, room 128 ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: Entanglement in the XXZ Chain with a Gradient
Speaker(s): Viktor EISLER (Institute of Theoretical and Computational Physics, TU Graz, Austria)
Europe/Rome There has been much progress recently in proving single-letter formulas for the mutual information (or "free energy") in high-dimensional estimation and learning problems. Computing the mutual information is important in order to locate the various "phase transitions" occurring in such problems when the noise increases or the data becomes too scarce. It is also key in computing various optimal achievable errors. Unfortunately all existing methods are highly involved, difficult to generalize and restricted in their applicability. In this talk I'll present a new method, called "adaptive interpolation method", that eliminates these barriers all at once: It is much simpler, very generic and able to tackle problems that were resisting until now. I will illustrate the method on a paradigmatic model of high-dimensional estimation, namely the "Wigner spiked model" (or "low-rank matrix factorization"). I will also briefly review some models that are now under full rigorous control thanks to this approach, as well as very recent extensions to physics models such as the "ferromagnetic p-spin model on sparse random graphs", an open problem for decades for reasons that I'll mention. ICTP ICTP [email protected]
Joint MATH-QLS Seminar - A simple tool for complex problems: The adaptive interpolation method for the Wigner spiked model
Speaker(s): Jean BARBIER, Communication Theory Laboratory, EPFL, Lausanne
Room: SISSA building
Speaker(s): Jean BARBIER, EPFL, Lausanne
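For concreteness (a common formulation of the model named in the abstract; the notation is not the speaker's): in the rank-one Wigner spiked model one observes
  Y = \sqrt{\tfrac{\lambda}{N}}\, x x^{\top} + Z,
where x \in \mathbb{R}^N is the unknown signal, Z is a symmetric Gaussian noise matrix and \lambda is the signal-to-noise ratio; the single-letter formulas referred to above give the limit of the mutual information \tfrac{1}{N} I(x; Y) as N \to \infty, from which phase transitions and optimal errors can be located.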
What's inside a black hole?
Speaker(s): Ramy BRUSTEIN (Ben Gurion University of the Negev, Israel)
In August 1859 the young and still little-known Bernhard Riemann presented a paper to the Berlin Academy titled 'On the number of primes less than a given quantity'. In the middle of that paper, Riemann made a guess - a remark, or conjecture - on the zeros of the analytic function which controls the growth of the primes. Mathematics has never been the same since. The seminar presents the captivating story behind this problem, which still remains one of the most famous unsolved problems in mathematics, and shows how progress can be made on its solution by employing ideas and methods which come from physics, i.e. from the stochastic world of random walks and the like. SISSA, Via Bonomea 265, rm 128
@ SISSA, Via Bonomea 265, rm 128
Joint ICTP/SISSA Statistical Physics Seminar: The Riemann Conjecture
Speaker(s): Giuseppe MUSSARDO (SISSA, Trieste, Italy)
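For reference (standard statement of the problem discussed in the seminar): the Riemann zeta function is defined for \Re(s) > 1 by
  \zeta(s) = \sum_{n=1}^{\infty} n^{-s}
and extended to the complex plane by analytic continuation; Riemann's conjecture - the Riemann hypothesis - asserts that all non-trivial zeros lie on the critical line \Re(s) = 1/2, and through the explicit formula these zeros control the fluctuations of the prime-counting function \pi(x).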
Europe/Rome Abstract: Einstein predicted that gravitational waves exist when he discovered the theory of general relativity more than a century ago. Einstein, however, never believed that it would be possible to detect such waves and he did not believe that black holes exist. Yet, the last Nobel Prize in physics was awarded for the detection of gravitational waves from colliding black holes. I will explain how such waves are generated in black hole mergers, how they were detected against all odds by using laser interference technology and what they could teach us about gravity, black holes and the universe. ICTP ICTP [email protected]
BASIC NOTIONS SEMINAR - Gravitational waves as predicted by Einstein: signals from black holes collisions in the distant universe
Speaker(s): Ramy Brustein (Ben-Gurion University, Israel)
Carleson measure Problem for Hardy spaces on tube domains over symmetric cones
Speaker(s): Edgar TCHOUNDJA, University of Yaoundé I
Europe/Rome Abstract. Lovelock and Horndeski theories are natural generalisations of Einstein's theory of General Relativity with second order equations of motion. While their applications in astrophysics and cosmology have been thoroughly explored, the issue of their mathematical consistency has received little attention. In particular, it is not known whether these theories admit a well-posed initial value problem, a necessary requirement for any classical theory to make sense. If this condition were not satisfied, these theories would not constitute a viable alternative to General Relativity. In this talk, I will discuss why the standard method used to establish the local well-posedness of the Einstein equations fails for these theories. ICTP ICTP [email protected]
On the initial value problem in Lovelock and Horndeski theories of gravity
Speaker(s): Giuseppe PAPALLO (DAMTP, Cambridge, UK)
Europe/Rome Abstract. I review recent progress in compactifying (1,0) SCFT's in 6d on curves and discuss the resulting N=1 SCFT's in 4d. ICTP ICTP [email protected]
6d Theories on Curves
Speaker(s): Cumrun VAFA (Harvard University, USA)
Europe/Rome Abstract. In this talk I will consider the problem of defining measures of quantum information in cases where the space spanned by the set of accessible observables does not form an algebra, i.e. it is not closed under products. This setting is relevant for the study of localized quantum information in theories of gravity where the set of approximately-local operators in a region may not be closed under arbitrary products. While one cannot naturally associate a density matrix with a state in this setting, it is still possible to define a modular operator for a state, and distinguish between two states using a relative modular operator. These operators are defined on a 'little Hilbert space', which parameterizes small deformations of the system away from its original state, and they do not depend on the structure of the full Hilbert space of the theory. I will show how a novel class of relative-entropy-like quantities can be defined using the spectrum of these operators. I will also describe some applications of this formalism for studying bulk reconstruction and subregion-dualities in the AdS/CFT correspondence. ICTP ICTP [email protected]
Quantum Information Measures for Restricted Sets of Observables
Speaker(s): Sudip GHOSH (ICTS / TIFR, Bangalore, India)
Europe/Rome Weak perturbations can drive an interacting many-particle system far from its initial equilibrium state if one is able to pump into degrees of freedom approximately protected by conservation laws. We develop a theory of weakly driven integrable systems and show that pumping can induce large spin or heat currents even in the presence of integrability breaking perturbations, since it activates local and quasi-local approximate conserved quantities. The resulting steady state is qualitatively captured by a generalized Gibbs ensemble with Lagrange parameters that depend on the structure but not on the overall amplitude of perturbations nor the initial state. We suggest to use spin-chain materials driven by terahertz radiation to realize integrability-based spin and heat pumps. We also show that time-dependent generalized Gibbs ensembles accurately describe the time evolution of weakly driven, approximately integrable systems. ICTP ICTP [email protected]
ICTP Seminar Series in Condensed Matter and Statistical Physics: Pumping Spin-Chain Materials and the Emergence of Generalized Gibbs Ensembles
Speaker(s): Achim ROSCH (University of Cologne, Germany)
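For reference (standard form, not taken from the abstract): a generalized Gibbs ensemble for an (approximately) integrable system is the density matrix
  \rho_{GGE} = \frac{1}{Z} \exp\Big( - \sum_i \lambda_i Q_i \Big),
where the Q_i are the local and quasi-local conserved charges and the Lagrange parameters \lambda_i are fixed by the expectation values \langle Q_i \rangle; the abstract's point is that for weakly driven systems the \lambda_i are set by the structure of the drive rather than by the initial state.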
Europe/Rome Abstract: Specialization theorems roughly speaking assert that some property valid for a family remains valid for some specialization of the parameters describing the family. We shall recall well-known instances, such as Bertini and Hilbert Irreducibility theorems, and present some new results. We shall also illustrate some new kind of specialization theorems, e.g. for moduli spaces of abelian varieties. ICTP ICTP [email protected]
Some specialization theorems in geometry and number theory
Speaker(s): Umberto ZANNIER, Scuola Normale Superiore, Pisa
Europe/Rome I will discuss several recent results, both numerical and analytical, regarding disordered models in external field, focusing mainly on random field ferromagnetic models and spin glasses in a field. I will mainly treat models with Ising variables, but also some new results on XY models will be presented. Exact analytical results are derived for models defined on random graphs under the Bethe approximation, while numerical results are obtained via large scale Monte Carlo simulations for finite dimensional models and via improved message passing algorithms for models on random graphs. SISSA, Via Bonomea 265, room 128 ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: On the Complex Behavior of Disordered Models in a Field
Speaker(s): Federico RICCI-TERSENGHI (Theoretical and Computational Physics, Univ. of Rome 'La Sapienza')
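As a concrete example of the models mentioned above (standard notation, not the speaker's): the random-field Ising model is defined by the Hamiltonian
  H = -J \sum_{\langle ij \rangle} s_i s_j - \sum_i (H_{ext} + h_i) s_i , \qquad s_i = \pm 1,
with quenched random fields h_i, while an Ising spin glass in a field replaces the uniform coupling J by quenched random couplings J_{ij} of both signs.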
Europe/Rome Abstract. I will review some of the recent attempts to incorporate stringy corrections to low energy effective actions within the framework of GCG. ICTP ICTP [email protected]
Stringy corrections, generalised complex geometry and supersymmetry
Speaker(s): Ruben MINASIAN (IPHT, Saclay, France)
Europe/Rome ICTP's annual Spring School on Superstring Theory and Related Topics provides pedagogical treatment of these subjects through lectures by some of the world's top string theorists. The activity is intended for theoretical physicists or mathematicians with knowledge of quantum field theory, general relativity and string theory. It is organized in collaboration with the Asia Pacific Center for Theoretical Physics (APCTP) and the Italian Institute for Nuclear Physics (INFN). TOPICS: Aspects of 5d and 6d SUSY Theories Dualities and Exact Results in 3d QFT Entanglement in QFT Gravitational Waves Large N Models Primordial Cosmology The Averaged Null Energy Condition in QFT Lecturers include: Francesco BENINI (SISSA) Thomas HARTMAN (Cornell University) Marina HUERTA (Centro Atomico Bariloche) Ken INTRILIGATOR (UC San Diego) Igor KLEBANOV (Princeton University) Guilherme PIMENTEL (University of Amsterdam) Rafael A. PORTO (ICTP-SAIFR) NOTE: There is no registration fee for this activity. ICTP ICTP [email protected]
Strings and Higher Dimensional Theories, High Energy Cosmology and Astroparticle Physics
Spring School on Superstring Theory and Related Topics | (smr 3193)
Secretary: Nadia van Buuren
Organizer(s): A. Dabholkar (ICTP & Université Paris - Sorbonne), E. Gava (INFN, Trieste), V. Hubeny (UC Davis), Z. Komargodski (Weizmann Inst. & SCGP Stony Brook), K.S. Narain (ICTP),
Cosponsor(s): the Asia Pacific Center for Theoretical Physics (APCTP), the Italian Institute for Nuclear Physics (INFN)
Europe/Rome ICTP awarded its 2017 Dirac Medal and Prize to Charles H. Bennett (IBM T. J. Watson Research Center), David Deutsch (University of Oxford) and Peter W. Shor (Massachusetts Institute of Technology) for their pioneering work in applying the fundamental concepts of quantum mechanics to basic problems in computation and communication, thereby bringing together the fields of quantum mechanics, computer science and information, creating the field of quantum information science. The three medallists for 2017 each made key contributions in uncovering how the uniquely quantum characteristics of qubits may be exploited to process and transmit data, thus launching the field of quantum information science. For more information on the Prize winners, see: http://bit.ly/2Es3VzF. The 2017 Dirac Medal Award Ceremony takes place on Wednesday 14 March 2018 at 14.30 hrs with general talks by Peter Zoller and Artur Ekert followed by the Award Ceremony with talks by the three Prize Winners. This year's Dirac Medal Award Ceremony takes part at the opening of the Spring School on Superstring Theory and Related Topics. The Ceremony will be livestreamed from the ICTP website. The presentations are available at the links below. After the Ceremony, a public event "Quantum Technologies: Dawn of a new Industrial Revolution" is being organized at the Savoia Hotel in Trieste. The public event starts at 18.30 with featured panellists: Alessandro Curioni (IBM), Hartmut Neven (Google) and Tommaso Calarco (University of Ulm). See the poster at: http://indico.ictp.it/event/8519/material/0/0.jpg for more information. The panel will be moderated by freelance journalist Simona Regina. ICTP ICTP [email protected]
2017 Dirac Medal Award Ceremony
ICTP's Dirac Medal, first awarded in 1985, is given in honor of P.A.M. Dirac, one of the greatest physicists of the 20th century and a staunch friend of ICTP. It is awarded annually on Dirac's birthday, 8 August, to scientists who have made significant contributions to theoretical physics. The 2017 Awardees are Charles H. Bennett (IBM T. J. Watson Research Center), David Deutsch (University of Oxford) and Peter W. Shor (Massachusetts Institute of Technology). The ceremony, during which the three winners will present lectures on their work, takes place on Wednesday 14 March 2018, starting at 14.30 hrs.
Europe/Rome We review some surprising results on multi orbital Hubbard models, highlighting the role of the Hund's coupling in driving a two-stage Mott localization and an effective decoupling between the orbitals which in turn leads to a remarkable differentiation between different orbitals (orbital selectivity). We show that the normal state of iron-based superconductors can be rationalized and understood in this framework and we address the implications on the phase diagram. ICTP ICTP [email protected]
ICTP Seminar Series in Condensed Matter and Statistical Physics: Multi-orbital Mott Physics and the Iron-based Superconductors
Speaker(s): Massimo CAPONE (SISSA, Trieste, Italy)
Abstract: Crowd activities and behaviors have expanded significantly in our daily life due to the massive increase in population and to technological advances. This growth has forced governments and authorities to pay more attention to these types of scenes, particularly for crowd management and security issues. Therefore, analysis of crowded scenes has become one of the most attractive topics in the computer vision and pattern recognition community. In this talk we introduce crowd scene analysis through an approach that we have developed based on extracting short trajectories (tracklets) and clustering them into coherent groups to form motion pathways.
Automatic crowd scene analysis and anomaly detection from video surveillance cameras
Speaker(s): Walid GOMAA, Egypt-Japan University, Alexandria
Europe/Rome Background and purpose The Institute of Physics (IOP) and the International Centre for Theoretical Physics are organising a weeklong workshop for science and engineering students and graduates from low-to-middle income countries who are interested in developing entrepreneurial skills. Participants will be introduced to the process of innovation, generation and protection of intellectual property, technology transfer and the commercialisation of ideas and inventions. They will benefit also from the international perspectives and insights of leading experts in the field. Topics covered Scientists and engineers as entrepreneurs; opportunity and value assessment; intellectual property (IP); basics of patenting; IP management and global IP protection; business plan fundamentals; technology readiness levels; invention to product; timelines and processes; tools for financial estimations; pitching for cash; networking, communication and presentation skills; case studies and group projects. ICTP ICTP [email protected]
Entrepreneurship Workshop for Scientists and Engineers | (smr 3190)
Secretary: Elizabeth Brancaccio
Organizer(s): Linsey Clark (Institute of Physics), Joe Niemela (ICTP), Surya Raghu (Advanced Fluidics), Local Organiser: Joseph Niemela
Cosponsor(s): Institute of Physics (IOP)
Europe/Rome SCHOOL RELATED MATERIAL: - Video recording, slides of lectures, and hands-on material can be retrieved through the online programme; - Final list of participants (updated actual participation at the end of the School) and Book of Poster Abstracts are available through links at foot; See also: http://epw.org.uk/Documentation/School2018 * * * The School addresses senior PhD students and experienced researchers with prior working knowledge of DFT. Theoretical and hands-on training will focus on ab-initio calculations of many properties relating to the electron-phonon interaction, for applications in condensed matter physics, materials physics, and nanoscience. Starting from an introduction to the background on electron-phonon physics and related materials properties from the point of view of ab-initio calculations, we will show the participants how to perform cutting- edge electron-phonon calculations using a suite of electronic structure codes, including EPW, Wannier90, Quantum ESPRESSO, and ABINIT. Topics: • Density functional perturbation theory • Electron-phonon coupling • Phonon-assisted optical absorption • Maximally-localized Wannier functions • Temperature dependence of the electronic bandstructure • Electronic transport using the Boltzmann transport equation • Phonon-driven superconductivity Lecturers: S. de Gironcoli, SISSA P. Giannozzi, University of Udine F. Giustino, University of Oxford X. Gonze, Univ. Catholique of Louvain E. Kioupakis, University of Michigan E. R. Margine, Binghamton University-SUNY G. Pizzi, EPFL S. Poncé, University of Oxford * * * ICTP Secretariat contact: [email protected] ICTP ICTP [email protected]
Condensed Matter and Statistical Physics, New Research Areas
School on Electron-Phonon Physics from First Principles | (smr 3191)
Organizer(s): Feliciano Giustino (University of Oxford), Samuel Poncé (University of Oxford), Elena Roxana Margine (Binghamton University-SUNY), Local Organisers: Ralph Gebauer, Nicola Seriani
Cosponsor(s): Psi-k, Centre Européen de Calcul Atomique et Moléculaire (CECAM)
Limited participation
Europe/Rome Abstract: This is report on joint work with Martijn Kool. Using arguments from theoretical physics, Vafa and Witten gave a generating function for the Euler numbers of moduli spaces of rank 2 coherent sheaves on algebraic surfaces. Sheaves can be viewed as generalizations of vector bundles. Moduli spaces are algebraic varieties that parametrize the objects in which we are interested. While the Vafa-Witten formula cannot be literally true as a formula for the Euler numbers in general, we give an interpretation of it in terms of virtual topological invariants, and confirm it in many cases. We extend the results to finer topological invariants and also to rank 3. Such virtual invariants occur everywhere in modern enumerative geometry, inspired by physics, like Gromov-Witten invariants and Donaldson Thomas invariants, when attempting to make sense of the predictions from physics. Basically the idea is that the moduli spaces have very bad properties, but they "want to be" smooth, and the virtual invariants are those that they would have if they were smooth. We will try to give elementary explanations of all these concepts and their background, as much as is possible. Most of the time will be spent to just trying to explain the problem and the result. If there is time, some hints will be given about the methods. ICTP ICTP [email protected]
Virtual topological invariants of moduli spaces
Speaker(s): Lothar GOETTSCHE, ICTP
Europe/Rome Systems out of equilibrium are characterized by a continuous exchange of energy and matter with the environment through different mechanisms, thus producing entropy at the macroscopic level. We study the entropy production of a discrete-state system with random transition rates and the injection of an external probability current. At stationarity, the entropy production is shown to be composed by two contributions whose exact distributions are evaluated in the large system size limit and close to equilibrium. The first one is related to a generalized Joule's law, while the second contribution has a Gaussian universal distribution which depends just on two topological parameters. In the second part of the talk we compare the entropy production of a Master Equation system with the one of the corresponding continuum limit, i.e. a Fokker-Planck equation. We demonstrate that the Seifert's formula for the entropy production provides just a lower bound for the exact entropy production. In fact, this formula ought to be corrected keeping into account information about the microscopic transition rates, and this correction survives in the continuum limit. This effect of the coarse-graining has shown to be glaring in the simple example of a n-step random walk. The difference between discrete-state systems and continuous systems in evaluating the entropy production has been addressed also in comparing non-equilibrium steady states, NESS, and stochastic pumping, SP. We show that the entropy production of a system with SP is greater than the one produced by a system in a NESS, generating the same (time-averaged) probability current and exhibiting the same (time-averaged) probability distribution. ICTP ICTP [email protected]
QLS Seminar: Entropy production in non-equilibrium systems
Speaker(s): Daniel M. Busiello - University of Padua
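For reference (the standard master-equation expression; index conventions vary): for a discrete-state system with transition rates w_{ij} from state j to state i and occupation probabilities p_i, the total entropy production rate is
  \dot{S} = \frac{1}{2} \sum_{i,j} \big( w_{ij} p_j - w_{ji} p_i \big) \ln \frac{w_{ij} p_j}{w_{ji} p_i} \;\ge\; 0,
which vanishes only at equilibrium (detailed balance); the talk concerns the statistics of this quantity at stationarity and its coarse-grained (Fokker-Planck) counterpart.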
Europe/Rome In this talk I will describe our work on the simulation of the Schwinger model (i.e. d=1+1 QED) with matrix product states (MPS). I will discuss some systematic aspects of our approach like the truncation of the local infinite bosonic gauge field Hilbert space, or the incorporation of local gauge invariance into the MPS anzats. Furthermore, I will go through some of our results: the simulation of the particle excitations ('mesons' of confined electron/positron pairs), of string breaking for heavy probe charges and last but not least of the real-time evolution that occurs from a background electric field quench (i.e. the full quantum Schwinger effect). ICTP ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: Matrix Product States for Relativistic Quantum Gauge Field Theories
Speaker(s): Karel VAN ACOLEYEN (University of Gent, Belgium)
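For reference (the generic form of the variational class named in the abstract; notation not the speaker's): a matrix product state on L sites with physical configurations s_1 ... s_L is
  |\psi\rangle = \sum_{s_1,\dots,s_L} \mathrm{Tr}\big[ A^{[1] s_1} A^{[2] s_2} \cdots A^{[L] s_L} \big] \, |s_1 s_2 \cdots s_L\rangle ,
where each A^{[n] s_n} is a D x D matrix and the bond dimension D controls the amount of entanglement the ansatz can capture; for the Schwinger model the local gauge-field Hilbert space must additionally be truncated, as mentioned above.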
Europe/Rome Abstract. Superconformal multiplet calculus provides a systematic method to construct higher derivative invariants in Poincare supergravity. In this context, matter multiplets in conformal supergravity are of interest. We will discuss the construction of a 24+24 real scalar multiplet in N=2 conformal supergravity in four dimensions. We will see the reduction of this multiplet to an 8+8 restricted multiplet and a precise embedding of the 8+8 tensor multiplet inside this multiplet. We will then discuss the first steps towards constructing a superconformal action for this multiplet. ICTP ICTP [email protected]
A 24+24 real scalar multiplet in four dimensional N=2 conformal supergravity
Speaker(s): Subramanya HEGDE (IISER, India)
Abstract: We introduce the area functional, namely the map which evaluates the area of the graph of a given C^1 vector-valued function u defined on an open domain in R^n. We discuss the problem of relaxation of the area functional and recall some important properties. Then we consider the problem of triple junction points, in the presence of which the area functional shows nonlocal behaviour. The problem of evaluating the area functional on such maps is related to a problem of minimal surfaces whose solution requires the application of geometric analysis tools.
Some recent progress on the relaxation of the area functional
Speaker(s): Riccardo SCALA, University of Lisbon
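For orientation, the functional referred to in the abstract is the graph-area integral; for a C^1 map u : \Omega \subset R^n \to R^m it takes the standard form (a generic textbook expression, not the relaxed functional analysed in the talk)

\mathcal{A}(u;\Omega) \;=\; \int_{\Omega} \sqrt{\det\!\left(I_n + Du(x)^{\mathsf{T}}\, Du(x)\right)}\; dx ,

which reduces to \int_{\Omega} \sqrt{1 + |\nabla u|^2}\, dx in the scalar case m = 1; the relaxation question then asks for the lower semicontinuous envelope of this map on less regular u.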
Abstract: Auslander's formula suggests that for studying an abelian category, one may study the category of finitely presented additive functors on it, which has nicer homological properties than the category itself, and then translate the results back to the original category. According to H. Lenzing, a considerable part of Auslander's work on the representation theory of finite-dimensional, or more generally Artin, algebras can be connected to this formula. In this talk, we first recall Auslander's proof of the formula and then introduce and study a relative version of it. As an application, some connections between the Morita equivalences of the endomorphism algebras of generators and the Morita equivalences of the original algebras will be provided. The talk is based on joint work with R. Hafezi and M. H. Keshavarz.
Relative Auslander's Formula
Speaker(s): Javad ASADOLLAHI, University of Isfahan and IPM-Isfahan, Iran
Abstract. In this talk I show the linear and nonlinear time evolution of a holographic system possessing a first order phase transition. The initial state is chosen in the spinodal region of the phase diagram, and includes an inhomogeneous perturbation in one of the field theory directions. The final state of the time evolution shows a clear phase separation in the form of domain formation. The results indicate the existence of a very rich class of inhomogeneous black hole solutions.
Holographic first order phase transition
Speaker(s): Hosam SOLTANPANAHI SARABI (IPM, Iran)
Abstract. Effective theories have been very successful in all branches of theoretical physics. There are still a variety of phenomena where we do not have a clear understanding from the standard effective field theory point of view. One such phenomenon is dissipation and information loss in the time evolution of a system. In the open quantum system literature, little has been done for field-theoretic systems. In this talk we will first explain the tools available to deal with effective field theories of open systems. Then we will use these tools to study an open version of \phi^3+\phi^4 theory, discuss a few interesting aspects, and study the renormalisability of this theory.
Renormalisation in open Quantum Field Theory: Scalar Field Theory
Speaker(s): Chandan Kumar JANA (ICTS, Bangalore, India)
The theory of the electronic properties of condensed matter is a major challenge in many-particle quantum mechanics. The exact solution of the Schrödinger equation for more than a few interacting electrons is impossible. Nevertheless, powerful approximation techniques exist, based on underlying theoretical ideas that are themselves exact, including density-functional theory and many-body perturbation theory. To improve the approximations made in the implementation of these theories for ground-state, spectroscopic and time-dependent properties of many-electron systems, I will present results using our iDEA code [1], which allows the exact treatment of electron correlation and localisation for real-space one-dimensional systems of interacting electrons. [1] http://www-users.york.ac.uk/~rwg3/idea.html
ICTP Seminar Series in Condensed Matter and Statistical Physics: "Many-Electron Quantum Simulation of Matter: Insight from Exact Time-Dependent Real-Space Model Systems"
Speaker(s): Rex GODBY (Department of Physics, University of York, U.K.)
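The exact real-space treatment mentioned in the abstract rests on discretising the Schrödinger equation on a one-dimensional grid. Purely as an illustration of that grid approach (this is not the iDEA code, and it treats a single particle in an arbitrarily chosen harmonic well rather than an interacting system), a finite-difference eigensolver in atomic units can be written as:

import numpy as np

# Uniform grid on [-L, L] and an illustrative harmonic confining potential
n, L = 400, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
V = 0.5 * x**2

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V(x), in atomic units
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(np.full(n - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(n - 1, -0.5 / dx**2), -1))

E, psi = np.linalg.eigh(H)
print("lowest eigenvalues:", E[:3])   # close to 0.5, 1.5, 2.5 for this well

Interacting systems are handled in the same spirit, but on a grid for the full many-electron coordinate space, which is what makes the one-dimensional setting tractable.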
Abstract - See document below
SPECIALIZED SEMINAR - Finite determinacy of matrices of power series
Speaker(s): Pham Thuy Huong (Quy Nhon University)
Probing interacting dark radiation with cosmic 21 cm line
Speaker(s): Maxim POSPELOV (Perimeter Institute, Canada)
Europe/Rome The two-week School is designed for young professionals from developing countries, ideally with 1-3 years of experience, working at relevant institution in their home country. Candidates should have a specific career interest in, or knowledge of nuclear security, although their academic and technical background may vary. Candidates with a scientific or technical background in a discipline of relevance to nuclear security, such as nuclear physics, nuclear engineering or political science, and/or in related fields, are especially encouraged to apply. CURRICULUM TOPICS • International legal framework supporting nuclear security • Identification of, and measures to address, threats against nuclear material, facilities, and activities • Instruments and methods for physical protection of associated facilities • Threat and risk assessment, detection architecture, and response plan for material out of regulatory control • Radiation detection instruments and detection strategies and techniques • Transport security for nuclear and other radioactive material • Nuclear forensics and radiological crime scene management • Nuclear security culture, computer and information security, and security at major public events • Measures for systematic nuclear security human resource development at the national level Additionally, there will be practical exercises, designed to incorporate the acquired knowledge into national planning and procedures to protect against threats to nuclear security. PREREQUISITES As a prerequisite to admission to the School, all applicants will be asked to complete IAEA introductory e-learning modules on several topics in nuclear security: - Nuclear Security Threats and Risks - Transport Security - Information and Computer Security - NMAC for Nuclear Security - Radiological Crime Scene Management - Physical Protection - Preventive and Protective Measures against Insider Threat - Use of Radiation Detection Instruments for Front Line Officers - Introduction to Radioactive Sources and Their Applications The modules are available only online: http://elearning.iaea.org/m2/course/index.php?categoryid=48 The estimated time for completing the courses is between 1 and 4.5 hours. Upon successful completion of each module, the system will generate a personalized certificate. Please submit all certificates of completion (as pdf or jpg files) with the online application form for this School. In case of technical problems please contact: [email protected] HOW TO APPLY Fill in the online application form using the link "Apply here" that can be found in this page. Before applying please check the above mentioned prerequisites. (Deadline expired on 2 December 2017) ICTP ICTP [email protected]
9 Apr 2018 - 20 Apr 2018
Joint ICTP-IAEA International School on Nuclear Security | (smr 3194)
Organizer(s): Dmitriy Nikonov (IAEA), Local Organiser: Claudio Tuniz
Cosponsor(s): International Atomic Energy Agency (IAEA), Italian Ministry of Foreign Affairs
Modern radiotherapy has substantially increased the use of small radiation fields like those used in various forms of SRT, SBRT, SRS and IMRT. These treatments are not only performed with specialized, dedicated machines, but also with conventional, non-dedicated accelerators equipped with high resolution multi-leaf collimators. In radiotherapy, accurate doses are essential. Therefore, a key requirement in radiotherapy is consistent reference dosimetry and consistent procedures within a country. For conventional radiotherapy this has been achieved by internationally adopted codes of practice. However, the standard codes of practice are based on the use of a 10 cm x 10 cm reference field that may not be achievable using some modern specialized machines. A joint working group between the IAEA and AAPM has written a new small field code of practice. The aim of this course is to teach participants how to implement the new code of practice in the clinic. PARTICIPATION: This workshop is for clinically qualified medical physicists from United Nations, UNESCO or IAEA member countries who should hold a university degree preferably in medical physics. The candidate should also have at least 3 years working experience in a hospital and must participate in small field clinical techniques. Participants should have an understanding of IAEA TRS-398, 'Absorbed dose determination in external beam radiotherapy'. The workshop is an opportunity for medical physicists from Member States to get first-hand information on the dosimetry of small fields in radiotherapy. It will be beneficial to clinical medical physicists working in radiotherapy modalities using small fields such as Stereotactic Radiotherapy (SRT), Stereotactic Body Radiotherapy (SBRT), Stereotactic Radiosurgery (SRS) and Intensity Modulated Radiation Therapy (IMRT). TOPICS: Physics and challenges of small field megavolt photon beams; description of the new IAEA/AAPM code of practice for the dosimetry of static small photon fields; discussion of small field detectors; absorbed dose to water standards for small fields; machine-specific reference dosimetry; output factors: definition, measurement and correction; and relative dose measurements in small fields. DEADLINE: 12 January 2018
Joint ICTP-IAEA Advanced School on IAEA/AAPM Code of Practice for the Dosimetry of Static and Small Photon Fields | (smr 3196)
Secretary: Suzie Radosic
Organizer(s): Karen Christaki (IAEA), Local Organiser: Renato Padovani
Cosponsor(s): American Association of Physicists in Medicine (AAPM)
Co-sponsored by the AAPM
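For context, the 'output factors' topic above refers to the small-field formalism of Alfonso et al. on which the IAEA/AAPM code of practice builds. In its commonly quoted form (reproduced here from the general literature rather than from the course material), the field output factor relating a clinical field f_clin to the machine-specific reference field f_msr is

\Omega_{Q_{\rm clin},Q_{\rm msr}}^{f_{\rm clin},f_{\rm msr}} \;=\; \frac{M_{Q_{\rm clin}}^{f_{\rm clin}}}{M_{Q_{\rm msr}}^{f_{\rm msr}}}\; k_{Q_{\rm clin},Q_{\rm msr}}^{f_{\rm clin},f_{\rm msr}} ,

where M denotes the detector reading in each field and k is the detector- and field-specific output correction factor that the course addresses under 'definition, measurement and correction'.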
Abstract: They are quite different, and both quite amazing.
Two problems related to the Riemann Hypothesis
Speaker(s): Don B. Zagier (MPI, Bonn/ICTP)
Topology of the Electroweak Vacua
Speaker(s): Ben GRIPAIOS (Cavendish Laboratory, Cambridge, UK)
Europe/Rome DEADLINE for requesting participation: 24 NOVEMBER 2017 Highly qualified persons not requiring financial support may contact the Directors through [email protected] and will be considered for participation as long as space permits. DIRECTORS: B.J. Braams, CWI, Amsterdam, Netherlands H.-K. Chung, Gwangju Institute of Science & Technology, South Korea G. Csányi, University of Cambridge, U.K. A.G. Császár, MTA-ELTE Complex Chemical Systems Research Group, Budapest, Hungary A.I. Krylov, University of Southern California, Los Angeles, U.S.A. LOCAL ORGANIZER: G. Thompson, ICTP, Trieste, Italy The Abdus Salam International Centre for Theoretical Physics (ICTP) and the International Atomic Energy Agency (IAEA) will jointly organize this School and Workshop on Fundamental Methods for Atomic, Molecular and Materials Properties in Plasma Environments. The one-week event at ICTP in Trieste, from 16 to 20 April 2018, will provide training and information exchange for computational scientists working on models and data for atomic, molecular and materials processes relevant to fusion energy research, industrial plasmas, laser-produced plasmas, astrophysical plasmas, and warm and hot dense matter. The training is aimed at advanced Ph.D. students, postdocs and other young researchers. The information exchange will span several disciplines: from molecules to materials and from method developments to data treatments. Topics related to energetic events and electronically excited states are emphasized throughout the programme. The schedule features lectures by international experts, invited and contributed research talks, posters and discussion sessions, with ample time available for interaction and discussions. A scientific contribution (normally a poster, but some may be selected for a talk) is expected from each participant. Applicants are requested to attach a one-page abstract of their contribution in the Research Abstract sub-section under Professional Data when completing the online application form. A number of poster prizes will be awarded, courtesy of the journal Physical Chemistry Chemical Physics (PCCP). TOPICS: Advanced electronic structure approaches. Equation-of-motion and other wavefunction-based methods for excited electronic states. Coupled cluster methods applied to solids. State-of-the art methods for alloys and liquid metals. New developments based on reduced density matrices and the density matrix renormalization group. Nuclear quantum dynamics. Quantum treatments of scattering. Path integral molecular dynamics including treatment of multiple electronic surfaces. Determination of reaction rates beyond Arrhenius scaling. Potential energy and property surfaces. Machine-learning and kernel-based methods. Methods for multiple electronic states and their interactions. Uncertainty assessment and uncertainty propagation. Uncertainty correlations and other topics. Applications. Electronic and atomic collisions in plasmas. Plasma-material interaction and radiation damage of materials. INVITED SPEAKERS: V. Averbuck, Imperial College London, U.K. A.D. Baczewski, Sandia National Laboratory, Albuquerque, NM, U.S.A. K. Burke, UC Irvine, California, U.S.A. A. De Vita, King's College London, U.K. W.M.C. Foulkes, Imperial College London, U.K. S. Fritzsche, GSI and Friedrich Schiller University, Jena, Germany C.H. Greene, Purdue University, Lafayette, IN, U.S.A. M.P. Head-Gordon, UC Berkeley, CA, U.S.A. K. Heinola, University of Helsinki, Finland and CCFE/JET, U.K. T. 
Hickel, MPIE, Düsseldorf, Germany T.-C. Jagau, Ludwig-Maximilians-Universität, Munich, Germany J.R. Kermode, Warwick University, U.K. J. Kohanoff, Queen's University Belfast, U.K. N. Moiseyev, Technion, Haifa, Israel T. Oda, Seoul National University, Seoul, Republic of Korea R. Santra, Center for Free-Electron Laser Science, Hamburg, Germany J.E. Subotnik, University of Pennsylvania, Philadelphia, PA, U.S.A. J. Tennyson, University College London, U.K. U. Von Toussaint, IPP, Garching, Germany D.R. Trinkle, University of Illinois, Urbana-Champaign, IL, U.S.A. G.A. Worth, University College London, U.K. Z. Zeng, CAS Institute for Solid State Physics, Hefei, China P. Zhang, Peking University and IAPCM, Beijing, China Z. Zhao, National University of Defense Technology, Changsha, China B. Ziaja-Motyka, DESY, Hamburg, Germany ICTP ICTP [email protected]
Joint ICTP-IAEA School and Workshop on Fundamental Methods for Atomic, Molecular and Materials Properties in Plasma Environments | (smr 3197)
Secretary: Doreen Sauleek
Organizer(s): Hyun-Kyung Chung (Gwangju Institute of Science & Technology, South Korea), Bastiaan J. Braams (Centrum Wiskunde & Informatica, Amsterdam, Netherlands), Attila G. Császár (Eötvös Loránd University, Budapest, Hungary), Gábor Csányi (University of Cambridge, UK), Anna Krylov (University of Southern California, Los Angeles, CA, USA), Local Organiser: George Thompson
Cosponsor(s): International Atomic Energy Agency (IAEA), Physical Chemistry Chemical Physics (PCCP)
ICTP Mini Workshop on Particle Physics
The grand canonical ensemble lies at the core of quantum and classical statistical mechanics. A small system thermalizes to this ensemble while exchanging heat and particles with a bath. A quantum system may exchange quantities represented by operators that fail to commute. Whether such a system thermalizes and what form the thermal state has are questions about truly quantum thermodynamics. Here we investigate this thermal state from three perspectives. First, we introduce an approximate microcanonical ensemble. If this ensemble characterizes the system-and-bath composite, tracing out the bath yields the system's thermal state. Second, this state is expected to be the equilibrium point, we argue, of typical dynamics. Finally, we define a resource-theory model for thermodynamic exchanges of noncommuting observables. Complete passivity -- the inability to extract work from equilibrium states -- implies the thermal state's form, too. Our work opens new avenues into equilibrium in the presence of quantum noncommutation. [Based on 1512.01189 with N. Yunger Halpern, P. Faist and J. Oppenheim.]
Joint ICTP/SISSA Statistical Physics Seminar: Microcanonical and Resource-Theoretic Derivations of the Grand Canonical Thermal State of a System with Non-Commuting Charges
Speaker(s): Andreas J. WINTER (Universitat Autonoma de Barcelona, Experimental Sciences & Mathematics, Dept. of Physics, Bellaterra, Spain)
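The thermal state whose form is at stake in the abstract is, in the paper cited there (arXiv:1512.01189), a generalized Gibbs form with one chemical potential per conserved charge; schematically,

\gamma \;\propto\; \exp\!\Big[ -\beta \Big( H - \sum_i \mu_i Q_i \Big) \Big] ,

where the charges Q_i need not commute with one another and the proportionality constant is the corresponding partition function. This is quoted only as orientation for the seminar announcement; the talk itself concerns the independent derivations that single out this form.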
Europe/Rome Robin Santra is a professor of theoretical physics jointly appointed by DESY and the University of Hamburg. His accomplishments include the prediction of electronic hole dynamics and alignment in optical strong-field ionization, and predictions that motivated and supported the first experiments at the x-ray free-electron laser LCLS. He has been awarded the IUPAP Young Scientist Prize in Atomic, Molecular, and Optical Physics and has received the U.S. Presidential Early Career Award (PECASE). He is a Fellow of the American Physical Society. Abstract: I will present recent developments at the interface between attosecond science and x-ray science. In this context, I will discuss theoretical models of nonperturbative light-matter interactions and the performance of these models in comparison with experiment. The talk will be livestreamed from the ICTP website (ictp.it/livestream). Light refreshments will be served after the event. ICTP ICTP [email protected]
ICTP Colloquium on Atoms in Intense Light Fields
Address: Strada Costiera 11 34141 Trieste, Italy
Speaker(s): Prof. Robin Santra, Center for Free-Electron Laser Science - DESY, and Department of Physics, University of Hamburg, Germany
Joint ICTP-SISSA String Seminars: Bosonization in 3d and 2d
Speaker(s): David TONG (DAMTP, Cambridge, UK)
Joint ICTP-SISSA String Seminars: JT-bar deformed CFTs and their holographic interpretation
Speaker(s): Monica GUICA (IPhT, Saclay, France)
Computational biology is playing an ever more important role in understanding the atomistic-level details of complex biological processes, rationalizing mechanisms and predicting novel aspects which are inaccessible, yet complementary, to experiments, and in identifying novel drugs to interfere with the biological mechanisms leading to the onset of diseases. This field evolves extremely fast, so that microsecond/millisecond time-scale simulations are nowadays routinely performed to follow the dynamics of biological systems containing up to millions of atoms. In this talk, multi-scale simulations ranging from force field molecular dynamics to hybrid quantum-classical (QM/MM) simulations, in combination with free energy techniques, will elucidate the molecular mechanisms of complex biological processes, ranging from (i) the splicing mechanism in self-splicing ribozymes and the eukaryotic spliceosome [1,2], to (ii) membrane-anchored enzymes metabolizing hormones [3], and finally (iii) signal transmission in nuclear receptors and their polymorphic variants [4]. [1] Casalino L., Palermo G., Rothlisberger U., Magistrato A. 'Who Activates the Nucleophile in Ribozyme Catalysis? An Answer from the Splicing Mechanism of Group II Introns' J. Am. Chem. Soc. (2016) 138 (33), 10374-10377 (Cover Picture) [2] Casalino L., Palermo G., Abdurakhmonova N., Rothlisberger U., Magistrato A. 'Development of Site-specific Mg2+-RNA force field parameters: a Dream or a Reality? Guidelines from combined Molecular Dynamics and Quantum Mechanics Simulations' J. Chem. Theor. Comput. (2017) 13 (1), 340-352 [3] Magistrato A., Sgrignani J., Krause R., Cavalli A. 'Single or multiple access channels to the CYP450s active site? An answer from free energy simulations of the human aromatase enzyme' J. Phys. Chem. Lett. (2017) 8 (9), 2036-2042 [4] Pavlin M., Spinello A., Pennati M., Zaffaroni N., Gobbi S., Bisi A., Colombo G., Magistrato A. 'A Computational Assay of Estrogen Receptor α Antagonists Reveals the Key Common Structural Traits of Drugs Effectively Fighting Refractory Breast Cancers' Scientific Reports (2018) 8 (1), 649
ICTP Seminar Series in Condensed Matter and Statistical Physics: Computational Studies of Biological Systems Related to Human Diseases
Speaker(s): Alessandra MAGISTRATO (CNR-IOM & SISSA, Trieste, Italy)
Europe/Rome Space Weather is the variation in Sun energy emissions, solar wind, magnetosphere, ionosphere and thermosphere, which can influence the performance and reliability of a variety of space borne and ground-based technological systems. As such Space Weather is recognised as the cause of significant errors experienced by Global Satellite Navigation Systems (GNSS), Satellite Based Augmentation Systems (SBAS) and their users. GNSS or SBAS signals, propagating from a satellite to the user receiver, pass through the ionosphere where they are subject to the damaging effects of Space Weather. Under these conditions pseudorange errors and signals scintillations at user receiver level are present. The effects are critical at low latitudes where most of the developing countries are located. The purpose of the proposed workshop is to give theoretical and practical training on the physics of Space Weather and its main effects on the GNSS operations with particular emphasis on the low latitudes ionospheric processes related to Space Weather. This workshop is the number 10 of the series of activities in the field carried out since 2009 by the ICTP T/ICT4D in partnership with the Institute for Scientific Research (ISR) of Boston College done with the collaboration of the International Committee on GNSS of the UN Office of Outer Space Affairs and The Institute of Navigation both in Trieste and in Africa. During the first day of the Workshop a review of the impact of these training efforts on the capacity building in the field in Africa and in general in developing countries will be given. Topics: Impact of the ICTP T/ICT4D Boston College ISR joint training activities on GNSS science and applications in developing countries. (Speakers from Nigeria, Cote d'Ivoire, Ethiopia, Argentina and Malaysia will describe research and academic achievements in their countries as a result of the joint training activities) Introduction to satellite navigation and positioning (Representatives of national and international institutions dealing with GNSS activities will be invited to illustrate the importance of satellite navigation for the developing countries) GNSS: systems and operations Introduction to Space Weather Continuous and transient transport of energy from the Sun to the Earth Ionosphere and its response to Space Weather with particular attention to low latitudes Space Weather effects on GNSS operations Computer Laboratory exercises on the use of Space Weather data for GNSS research and applications The deadline for submitting applications expired on 1 February 2018. ICTP ICTP [email protected]
23 Apr 2018 - 4 May 2018
Workshop on Space Weather Effects on GNSS Operations at Low Latitudes | (smr 3198)
Secretary: Margherita Di Giovannantonio
Organizer(s): Patricia Doherty (Boston College), Sandro M. Radicella (ICTP), Bruno Nava (ICTP), Local Organiser: Bruno Nava
Cosponsor(s): Institute for Scientific Research, Boston College, USA; International Committee on GNSS; Institute of Navigation
Europe/Rome Register in advance for the webinar URL: https://zoom.us/webinar/register/WN_b-GTZrwfSVms5BOlmeG9DA CONTEXT The Mediterranean is a hotspot for climate change and air pollution. Climate change will significantly impact the regional air quality by reinforcing the hot, sunny and dry Mediterranean climate. Mediterranean inhabitants are already regularly exposed to pollutant loads well above WHO (World Health Organization) air quality recommendations standards and will be further exposed. Additional exposures to air pollution and warm conditions will result in an excess of premature deaths, but we still lack quantification of the impact in southern and eastern countries. Thus, there is a need to promote and develop Integrated Health Impact assessment (IHIA) approach by empowering scientists from around the Mediterranean basin. OBJECTIVES The objective of the training school is to strengthen in-country scientists and stakeholders capacity to face the health challenges posed by environmental stressors. It aims at giving early-stage researchers a good understanding of risk and uncertainty matched to a set of practical skills in facing the environmental health issues related to evaluating the health impact of air pollution and climate change. Students will be trained to the practice of exposure assessment, epidemiology and integrated health impact assessment. CONTENTS The school will introduce state of the art knowledge on air pollution and climate change in the Mediterranean region, as well as methodologies of health impact assessment. It will give students knowledge of protocols, tools, sources of data and will make them practice on case studies specifically designed for the school in order to enable them performing air quality and climate health impact studies including economic valuation. The school will also give insights on how to identify public health priorities for research and preventive actions. The teaching will be provided by recognized experts in atmospheric chemistry, climate, epidemiology, toxicology, economics... Participants will be able to interact and communicate with all these experts. SPEAKERS Carla ANCONA (DEP Lazio, Rome, Italy) Isabella ANNESI-MAESANO (INSERM & UPMC, Paris, France) Olivier CHANEL (AMSE, Univ. Of Aix-Marseille, France) Augustin COLETTE (INERIS, Verneuil en Halatte, France) Zeina DAGHER (Univ. Lebanese, Fanar, Lebanon) Kees DE HOOGH (Univ. Of Basel, Basel, Switzerland) Jos LELIEVELD (Max Planck Institute for Chemistry, Mainz, Germany) Konstantinos MAKRIS (Cyprus International Institute for Environmental and Public Health, Limassol, Cyprus) and to be announced ICTP ICTP [email protected]
Integrated Environmental Health Impact Assessment (IEHIA) of Air Pollution and Climate Change in Mediterranean Areas | (smr 3250)
Secretary: Susanne Henningsen
Organizer(s): Carla Ancona (Dept. of Epidemiology, Lazio Reg. Health Service, Rome, Italy), François Dulac (CEA LSCE (France)), Eric Hamonou (CNRS LSCE (France)), Konstantinos Makris (Cyprus International Institute for Environmental and Public Health (Cyprus)), Local Organiser: Filippo Giorgi
Cosponsor(s): Chemistry-Aerosol Mediterranean eXperiment (ChArMEx), ENVI-MED, ARCHIMEDES, The International Union of Geodesy and Geophysics
Europe/Rome LoRa is a long range, low power networking solution recently developed for the Internet of Things (IoT). In the workshop we will learn about the underlying technology and protocols and we will develop scientific instrumentation with LoRa chips. A LoRa network can connect a plethora of tiny battery powered LoRa sensors and devices within a few kilometers and even up to 150 km with line of sight. With complete LoRa modules available from $5 USD the cost of instrumenting huge areas and cities with real-time environmental sensor networks has diminished substantially. In this school the participants will develop and construct instrumentation and sensors equipped or embedded with LoRa chip and they will deploy and analyse the data from a LoRa enabled sensor network across the ICTP campus and in Trieste. The power of the technology to rely on long distance, low power (battery or rechargeable/ renewable) technologies is of considerable interest to the environmental monitoring of cities and agricultural environments. Topics: • LoRa and LoRaWAN protocols; • Prototyping of sensor boards; • Coding of LoRa sensors; • Planning and deployment of a LoRa network; • Collection, analysis and the visualization of the data. Please see also: http://wireless.ictp.it/school_2018/ ICTP ICTP [email protected]
23 Apr 2018 - 11 May 2018
Joint ICTP-IAEA School on LoRa Enabled Radiation and Environmental Monitoring Sensors | (smr 3188)
Address: via Beirut, 7 I - 34151 Trieste (Italy)
Room: SciFabLab (EFB)
Secretary: Petra Krizmancic
Organizer(s): Iain Darby (IAEA NAPC/PH-NSIL), Marco Zennaro (ICTP T/ICT4D), Maria Liz Crespo (ICTP MLab), Local Organiser: Marco Zennaro
Cosponsor(s): International Atomic Energy Agency (IAEA), International Telecommunication Union (ITU), SigComm, ICTP SciFabLab
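As a toy illustration of the 'coding of LoRa sensors' topic listed for this school (this is not course material and does not rely on any particular LoRa library), sensor readings are usually packed into a few bytes before transmission so that they fit LoRa's small payload budget; Python's standard struct module is enough to show the idea:

import struct

def encode_reading(node_id: int, temperature_c: float, dose_rate_usv_h: float) -> bytes:
    # Pack into 6 bytes: node id, temperature in 0.01 C steps, dose rate in 0.01 uSv/h steps.
    # The field layout and scaling are illustrative assumptions, not a standard format.
    return struct.pack("<HhH",
                       node_id,
                       int(round(temperature_c * 100)),
                       int(round(dose_rate_usv_h * 100)))

def decode_reading(payload: bytes):
    node_id, t, d = struct.unpack("<HhH", payload)
    return node_id, t / 100.0, d / 100.0

payload = encode_reading(42, 23.57, 0.12)
print(len(payload), decode_reading(payload))   # 6 (42, 23.57, 0.12)

The same payload bytes would then be handed to whatever LoRa radio driver the chosen hardware provides.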
This talk addresses the low energy physics of the Sachdev-Ye-Kitaev model, a paradigm of strongly interacting (Majorana) quantum matter. A salient feature of this system is its exceptionally high degree of symmetry under reparameterizations of physical time. At low energies this symmetry is spontaneously broken, and the ensuing infinite-dimensional Goldstone mode manifold strongly influences all physical observables. We will discuss the effects of these fluctuations using the example of the so-called out-of-time-ordered correlation functions, diagnostic tools that describe both manifestations of quantum chaos in the system and its conjectured duality to an AdS2 gravitational bulk. While previous work predicts exponential decay of these correlations in time, our main finding is that at large time scales non-perturbative Goldstone mode fluctuations generate a crossover to power law behavior. This phenomenon must have ramifications in the physics of the holographic bulk which, however, we do not understand at present.
Joint ICTP/SISSA Statistical Physics Seminar: Large Conformal Goldstone Mode Fluctuations in the SYK Model
Speaker(s): Alexander ALTLAND (THP Institute for Theoretical Physics, University of Cologne, Germany)
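For readers unfamiliar with the diagnostic named in the abstract, the out-of-time-ordered correlator of two operators V and W at inverse temperature \beta is conventionally defined as (a standard expression from the quantum-chaos literature, not specific to this talk)

F(t) \;=\; \big\langle W^{\dagger}(t)\, V^{\dagger}(0)\, W(t)\, V(0) \big\rangle_{\beta} ,

whose early-time deviation from its initial value grows like e^{\lambda_L t} in chaotic systems, with the Lyapunov exponent subject to the bound \lambda_L \le 2\pi k_B T/\hbar that the SYK model saturates. The talk concerns the late-time behaviour of such correlators once Goldstone-mode fluctuations are taken into account.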
Non-perturbative black hole entropy from quantum foam
Speaker(s): Joao VIEIRA GOMES (University of Amsterdam, The Netherlands)
Abstract: See file below
Studying geometry with arithmetic techniques
Speaker(s): Martin Mereb (University of Buenos Aires)
Axions from Strings
Speaker(s): Marco GORGHETTO (SISSA)
Europe/Rome Writing software has become central to research in many fields of science. This school aims to give early-career scientists an introduction to a variety of topics that help them to write efficient, clean, maintainable and long-lived code that is useful beyond solving an immediate problem. In a mixture of talks and many hands-on sessions, the focus lies on showing best practices and building fundamental skills in creating, extending and collaborating on modular and reusable software. TOPICS: ● Python / shell scripts as glue code ● Mixing programming languages ● Introduction to computer architectures and software optimization ● Modular, reusable software design ● Effective collaborative development with multiple co-authors ● Version control and release cycles ● Automated testing frameworks ● Structured documentation ● Systematic debugging Sharif University of Technology ICTP [email protected]
@ Sharif University of Technology
6th Workshop on Collaborative Scientific Software Development and Management of Open Source Scientific Packages | (smr 3199)
Address: Azadi St. - Tehran - Islamic Republic of Iran
Room: Physics Department
Secretary: Milena Poropat
Organizer(s): D. Grellscheid (Durham University / ICTP), S. Baghram (Sharif University), M.R. Ejtehadi (Sharif University), A. Langari (Sharif University), S. Moghimi-Araghi (Sharif University), ICTP Scientific Contact: I. Girotto
Cosponsor(s): Sharif University of Technology
NEW DEADLINE: 24/01/2018 (original deadline: 14/01/2018)
2 May 2018 - 5 May 2018
EMODnet Biology General meeting | (smr H547)
Organizer(s): Simon Claus (VLIZ - InnovOcean Site), Marina Lipizer (Istituto Nazionale di Oceanografia e di Geofisica Sperimentale - OGS)
Entanglement is at the heart of many interesting phenomena in quantum physics. From determining important properties in many-body physics to practical communication using quantum light, understanding and proving entanglement is a paradigmatic task across many subdisciplines. It is relatively well understood for small Hilbert space dimensions; however, it becomes intractably complex for large dimensions and particle numbers. In this talk I would like to give an introductory overview of state-of-the-art methods in entanglement theory for nonetheless navigating high-dimensional Hilbert spaces and reliably certifying entanglement within them.
ICTP Seminar Series in Condensed Matter and Statistical Physics: Entanglement Beyond Two Qubits
Speaker(s): Marcus HUBER (IQOQI Vienna, AAS, University of Vienna, Austria)
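One of the simplest entanglement certificates in the toolbox the abstract surveys is the negativity obtained from the partial transpose (the PPT criterion). The sketch below applies it to a noisy two-qubit Bell state; it is a generic textbook check, not one of the high-dimensional methods the talk focuses on, and the noise level is an arbitrary choice.

import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    # Transpose the indices of the second subsystem of a bipartite density matrix.
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

# Noisy Bell state: p |Phi+><Phi+| + (1 - p) * I/4, with an illustrative p
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
p = 0.8
rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4

eigs = np.linalg.eigvalsh(partial_transpose(rho))
negativity = sum(abs(e) for e in eigs if e < 0)
print(f"negativity = {negativity:.4f}  (> 0 certifies entanglement)")

For two qubits a nonzero negativity is necessary and sufficient for entanglement; in higher dimensions it is only sufficient, which is part of why the more sophisticated certification methods discussed in the talk are needed.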
Abstract: We prove the well-posedness of scalar wave equations on a spatially flat universe as a background, with nonminimal coupling and the scalar potential turned on, by introducing the k-order linear energy and the corresponding energy norm. In the local case, we show that both the k-order linear energy and the energy norm are bounded for finite time with initial data in H^{k+1} × H^k. In the global case, by contrast, we have to add three assumptions related to the nonminimal coupling constant, the scale factor of the spacetime, and the form of the scalar, which has to be a polynomial with a small positive parameter. Then, we show that the solution does exist globally, with a particular decay estimate that depends on the scale factor of the spacetime. Finally, we provide some physical models that support our general setup.
Global Existence of Solutions to Scalar Equations on Spatially Flat Universe as a Background with Non-minimal Coupling
Speaker(s): Fiki Taufik Akbar, Institut Teknologi Bandung
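As background for the abstract above, the generic form of a nonminimally coupled scalar wave equation on a spatially flat FLRW background ds^2 = -dt^2 + a(t)^2 dx^2 is (a standard expression quoted for orientation; the precise equation and assumptions of the talk may differ)

\Box_g u - \xi R\, u - V'(u) = 0 , \qquad \Box_g u = -\partial_t^2 u - 3\,\frac{\dot a}{a}\,\partial_t u + \frac{1}{a^2}\,\Delta u ,

where \xi is the nonminimal coupling constant, R the scalar curvature of the background, a(t) the scale factor that enters the decay estimates, and V the scalar potential.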
Dynamical Systems and Ergodic Theory constitute a vibrant and exciting field of current research in pure mathematics, with many connections to other areas such as probability theory, differential geometry and ODEs, and potential applications to such diverse areas as natural and biological sciences, economics, engineering, climate and disease modeling and many others. It can be thought of as a modern development of classical theories of Differential Equations, and the fundamental concepts and guiding ideas go back more than a century to pioneering thinkers such as Poincaré, Birkhoff and Kolmogorov. Its interest and importance have only grown in time and several Fields Medals have been awarded in the area. The ICTP-TMU School on Dynamical Systems and Ergodic Theory will consist of introductory mini-courses aimed specifically at masters and beginning PhD students, who are expected to have an excellent general mathematics background but not necessarily any previous experience of the particular topics covered in the courses. The mini-courses will be complemented by a series of research-level seminars, given by internationally renowned scholars in the field, and aimed at more advanced PhD students and local faculty and researchers. All students and researchers are welcome to all the lectures. Scientific Committee: S. LUZZATTO, ICTP, Italy; M. NASSIRI, IPM, Iran; A. TAHZIBI, ICMC, Brazil. Lecturers: C. BONATTI, Dijon, France; P. ESLAMI, Warwick, UK; C. LIVERANI, Rome, Italy. Speakers: J.F. ALVES, Porto, Portugal; P. BERGER, Paris, France; O. BUTTERLEY, ICTP, Italy; S. LUZZATTO, ICTP, Italy; M. POLLICOTT, Warwick, UK; M. TSUJII, Kyushu, Japan. Tutor: L.D. SIMONELLI, ICTP, Italy. Tehran - Islamic Republic of Iran
@ Tehran - Islamic Republic of Iran
TMU-ICTP School and Conference on Dynamical Systems and Ergodic Theory | (smr 3200)
Secretary: Koutou Mabilo
Organizer(s): Khosro Tajbakhsh (Tarbiat Modares University, Tehran, Iran), ICTP Scientific Contact: Stefano Luzzatto
Cosponsor(s): Tarbiat Modares University, Institute for Research in Fundamental Sciences (IPM), IMU-CDC
Europe/Rome In nature, collective behaviour is a widespread phenomenon that spans many systems at different length- and timescales. Cells forming complex tissues, social insects such as ants creating dynamical structures using their own bodies, fish schools and bird flocks with their synchronous and coordinated motion are prototypical examples of the emergent self-organization which arises from local interactions among a large number of individuals. Unravelling the underlying principles and mechanisms through which such a macroscopic complexity is achieved is a fundamental challenge in biological sciences. Collective behaviour has become a unifying concept in a range of disciplines—from the spontaneous ordering of spins in ferromagnetic systems in physics, to the emergence of herding behaviour in economy, to the consensus dynamics in social sciences. Furthermore, enormous potential lies in the field of robotics in which, drawing inspiration from natural swarms, artificial collectives can be created with the abilities of the natural ones. A range of techniques have developed over recent years which allow unprecedented acquisition of quantitative empirical data, together with an enormous influx of theoretical concepts and models of collective behaviour. This upsurge enabled scientists from biology, physics, and robotics to start tackling intensively many of the open issues in this field: How is information transferred through a group? What is the structure of the underlying interaction and communication network? What is the role of the inter-individual differences (physiological, social, etc.) in a group decision-making? What is the relationship between organisms and their habitat? However, except on very rare occasions, different disciplines continue to address these problems along separate lines. Recent rapid progress in the collective behaviour field presents a well-timed opportunity to initiate a fruitful interdisciplinary discussion. The goal of this conference is to bring together experts from different scientific fields—biology, physics and engineering—to help identify new cross-disciplinary challenges, to boost the development of new ideas, and to foster the birth of an interdisciplinary community working in this field. Invited Speakers include: Lucy Aplin (University of Oxford) Nicolas Bredeche (UPMC) Andrea Cavagna (CNR, Rome) Hugues Chaté (CEA) Gonzalo De Polavieja (Champalimaud Foundation) Audrey Dussutour (Université Toulouse III) Ofer Feinerman (Weizmann Institute of Science) Francesco Ginelli (University of Aberdeen) Deborah M. Gordon (Stanford University) Roderich Gross (University of Sheffield) Takashi Ikegami (University of Tokyo) M. Cristina Marchetti (Syracuse University) Patrizio Mariani (Technical University of Denmark) James Marshall (University of Sheffield) Thierry Mora (ENS) Melanie E. Moses (University of New Mexico) Thomas Schmickl (University of Graz) Ariana Strandburg-Peshkin (Max Planck Institute for Ornithology) Guy Theraulaz (Université Toulouse III) Shashi Thutupalli (SCSLM, NCBS, ICTS) Colin Torney (University of Glasgow) Tamás Vicsek (Eötvös University Budapest) Aleksandra Walczak (ENS) Justin Werfel (Harvard University) ICTP ICTP [email protected]
Conference on Collective Behavior | (smr 3201)
Room: Auditorium, SISSA Building
Organizer(s): Iain Couzin (Max Planck Institute for Ornithology), Dario Floreano (EPFL), Irene Giardina (Università di Roma), Asja Jelic (ICTP), Local Organiser: Antonio Celani
Cosponsor(s): Institute for Complex Adaptive Matter (ICAM)
Europe/Rome The School will introduce young scientists to synchrotron and X-ray free electron laser (XFEL) based research with a glance to the future. The School's interdisciplinary character will provide a stimulating environment for discussions with leaders in the field. DESCRIPTION X-ray based analytical methods play an increasingly important role in many domains of fundamental research and applied science. The international cast of lecturers will introduce the basic principles of advanced X-ray spectroscopy and scattering techniques. Emphasis will be given on how these techniques allow accessing the complexity of matter by monitoring the atomic arrangement, electronic, magnetic and chemical properties of individual building blocks and their response to external stimuli like temperature, pressure, electric or magnetic fields and light pulses. The program will be complemented by lectures on recent advances in synchrotron and XFEL sources enabling novel experimental methodologies. Demonstration sessions will be organized at Elettra's synchrotron and XFEL facilities. TOPICS • Fundamentals of synchrotron and FEL radiation and production. • Beamlines: photon transport and handling • X-ray Absorption Spectroscopy • X-ray Crystallography and Powder Diffraction • Inelastic and Elastic Scattering (SAXS, WAXS and RIXS) • Photoelectron spectroscopy • Imaging techniques using X-rays and advantages using coherent X-rays • Spectromicroscopy and imaging using X-rays and IR • Time-resolved approaches using FELs and synchrotrons Grants: a limited number of grants are available to support the attendance of selected participants, with priority given to those from developing countries. Support for attending the School will also be provided by the H2020 funded European Cluster of Advanced Laser Light Sources (EUCALL) and the French research network XFEL-Science: - EUCALL will provide a limited number of travel bursaries for young scientists from European Laboratories. For more details and to apply, see the following web page: https://www.eucall.eu/bursaries/spring_school - The XFEL-Science network will provide a limited number of travel bursaries for students from French Laboratories. Please contact Jan Luning for more information when applying to the School. There is no registration fee. ICTP ICTP [email protected]
Condensed Matter and Statistical Physics, Applied Physics
School on Synchrotron and Free-Electron-Laser Methods for Multidisciplinary Applications | (smr 3202)
Organizer(s): Maya Kiskinova (ELETTRA, Sincrotrone Trieste), Jan Luning (UPMC, Paris), Local Organiser: Nadia Binggeli
Cosponsor(s): GDRi XFEL-Science, EUCALL
Every physicist has a pretty clear idea of how to define equilibrium phases of matter (e.g. using free energy considerations), whether disordered or ordered (and if ordered, a variety of situations can be encountered). By contrast, dynamics-wise, no generic and clear-cut definition of a dynamical phase (disordered, intermittent, uniform, ergodicity-breaking, pattern-forming, etc.) can be found. Instead, one works on a system-to-system basis. I will illustrate, on the simple example of a classical system of mutually excluding particles diffusing on a line, how a robust definition of what a dynamical phase is can be achieved. As I go along, we will see that there may even exist transitions between dynamical phases. On a formal level, these dynamical transitions have everything in common with the quantum phase transitions that appear in hard condensed matter. I will show that, in turn, approaching quantum problems with a classical eye can, even with the simple example I'll discuss, lead to unexpected progress on the quantum side. SISSA, Via Bonomea 265, room 128
Joint ICTP/SISSA Statistical Physics Seminar: Dynamical Phase Transitions
Speaker(s): Frédéric van WIJLAND (Univ. Paris Diderot, Paris, France)
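A standard way to make the notion of a dynamical phase precise, in the spirit of the abstract above, is through the large-deviation (scaled cumulant generating) function of a time-integrated observable Q_t, for instance the particle current in the exclusion process (a generic definition from the dynamical large-deviation literature, not necessarily the speaker's exact formulation):

\psi(s) \;=\; \lim_{t \to \infty} \frac{1}{t} \ln \big\langle e^{-s Q_t} \big\rangle .

Here \psi(s) plays the role of a dynamical free energy, and a dynamical phase transition corresponds to a non-analyticity of \psi or of its Legendre transform, in formal analogy with the quantum phase transitions mentioned in the abstract.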
Abstract. We shall present a phase space description of the unitary matrix model. It is well known that the eigenvalues of unitary matrices are like positions of free fermions. We show that in the large N limit, the number of boxes in the Young diagram corresponding to the dominant representation of SU(N) plays the role of momentum for those fermions. A relation between eigenvalues and the number of boxes allows one to provide a phase space description for different large N phases of the theory. We shall consider Chern-Simons matter theory on $S^2\times S^1$ in particular and discuss how the phase space description forces the corresponding dominant representations to be integrable. We also discuss the level-rank duality between different dominant representations.
From phase space to integrable representations and level-rank duality
Speaker(s): Suvankar DUTTA (IISER, Bhopal, India)
Abstract: One of the most important problems in geometric topology is the "embedding problem". After revisiting embedding problems in various categories, we discuss some recent developments in the smooth and contact categories. The concepts required will be developed in the talk, and the talk will be accessible to an advanced undergraduate audience.
On embeddings of manifolds
Speaker(s): Dishant Pancholi (Chennai Mathematical Institute)
Gluequark Dark Matter
Speaker(s): Roberto CONTINO (Scuola Normale Superiore, Pisa)
Europe/Rome PLEASE NOTE The organizers will select some of the received abstracts for oral presentations. Best Poster Awards are foreseen. No registration fee is required. The organizers will offer welcome reception, coffee breaks, and conference dinner to all participants, without additional costs. Participants wishing to present a poster during the Conference should kindly indicate this on the application by uploading a short abstract with their surname/name/institute address/ title and brief - max one page - description MAIN TOPICS: Novel quantum phenomena in multicomponent superconductors and superfluids Competing orders in multicomponent quantum systems Multicomponent superconductivity and superfluidity in 2D materials and heterostructures Topological effects in multicomponent quantum systems Vortical and skyrmionic states in multicomponent quantum fluids The standard picture of superconductors and superfluids is based on the existence of a single quantum condensate, but recent advances in material science and ultracold atoms opened a paradigm of multicomponent quantum fluids. New directions in this multifacetted and booming field are opened by emergent 2D materials, heterostructures, multiorbital materials, and ultracold atomic mixtures, in which superconductivity and superfluidity are readily realized, and where links between observed quantum effects can be made and knowledge can be cross-fertilized. This MultiSuper conference is aimed to help understand, quantify, and manipulate the effects of hybridization between multiple condensates in a single system, as a pathway to yet unseen quantum phenomena. The key experts will be assembled to present and interpret state-of-the-art experiments and theory, and advance fundamental physics and material science in this field. Confirmed speakers: Saeed Abedinpour (IASBS, Zanjan) Vanderlei S. Bagnato (University of Sao Paulo) Antun Balaz (Institute of Physics, Belgrade) Luis Balicas (Florida State Univ., Tallahassee) Jonas Bekaert (Univ. Antwerpen) Lara Benfatto (CNR and Rome "Sapienza") Marco Cariglia (Fed. Univ. of Ouro Preto) Stefania De Palo (CNR-IOM Democritos & Sissa, Trieste) Lorenzo Del Re (Sissa, Trieste) Mauro M. Doria (IF - UFRJ, Rio de Janeiro) Francois Dubin (INSP, Paris) Stephen D. Edkins (Cornell University, New York) Leonardo Fallani (LENS and University of Florence) Laura Fanfarillo (SISSA, Trieste) Shunsuke Furukawa (Physics, The Univ. of Tokyo) Stefano Giorgini (University of Trento) Isabel Guillamon Gomez (Autonomous Univ. of Madrid) Yukio Hasegawa (ISSP, Univ. Tokyo, Kashiwa) Walter Hofstetter (Goethe-Universität, Frankfurt) Maria Iavarone (Temple Univ., Philadelphia, P) Kim Kee Hoon (Seoul National Univ., Seoul) Svetoslav Kuzmichev (M.V. Lomonosov Moscow State U.) Pablo Lopez Rios (Cavendish Lab., Cambridge Univ.) Allan MacDonald (University of Texas at Austin) Maria Vittoria Mazziotti (RICMASS, Rome) David Neilson (Univ. of Camerino & Univ.of Antwerp) Sebastiano Peotta (Aalto University, Finland) Philip Phillips (Univ. of Illinois at Urbana Champaign, Urbana) Erik Piatti (Politecnico di Torino) Aristeu Pontes Lima (Inst.Ciencias Exatas e da Naturez, Redencao) Gianni Profeta (Univ. dell'Aquila, SPIN CNR, L'Aquila) Nick Proukakis (University of Newcastle, UK) Dimitri Roditchev (INSP, Paris) Carlos Sa de Melo (Georgia Tech, Atlanta) Peter Samuely (Inst. of Experimental Physics, Kosice) Gaetano Senatore (Univ. of Trieste) Marcello Spera (Univ. of Geneva) Daniela Stornaiuolo (Univ. 
of Naples Federico II) Jacques Tempère (University of Antwerp) Christopher Triola (Uppsala University) Alexei Vagov (Bayreuth University) Davide Valentinis (DPMC Ecole de Physique, Univ. Geneva) Angelo Valli (SISSA, Trieste) Andrey Varlamov (CNR-Spin, Rome) ICTP ICTP [email protected]
International Conference on Multi-Condensate Superconductivity and Superfluidity in Solids and Ultra-cold Gases | (smr 3204)
Secretary: Marina de Comelli
Organizer(s): Massimo Capone (SISSA, Trieste, Italy), Milorad Milosevic (University of Antwerp, Belgium), Andrea Perali (University of Camerino, Italy), Local Organiser: Rosario Fazio
Cosponsor(s): MultiSuper Network, Università di Camerino, Research Foundation - Flanders (FWO), Universiteit Antwerpen, International School for Advanced Studies (SISSA), Condensed Matter Journal (sponsor of the Best Poster Award)
Europe/Rome Statistical mechanics and thermodynamics provide nowadays a comprehensive picture of the properties of equilibrium and near-equilibrium systems. Yet most phenomena occur out of equilibrium – for example nonequilibrium is essential for sustaining life or fluid motions – and our understanding of nonequilibrium systems is comparatively poorer and limited. At the same time, the revolution in information technologies has provided a wealth of quantitative data on complex systems whose fundamental laws are yet unknown to us. These two related domains are one of the great frontiers of contemporary science, both from the theoretical standpoint and for a multitude of practical applications. This research area has a strong interdisciplinary flavor and has special relevance for physics, geosciences, and life sciences. The understanding of complex, high-dimensional and non-equilibrium systems raises many challenges: · How does a complex system respond to perturbations? · How to construct and evaluate models across a hierarchical ladder of complexity? · How to deal with multiple scales in space and in time? · How to test theories and models against limited and often noisy observational data? · How to merge effectively models and data? · How to describe and predict large fluctuations and extreme events? · How do patterns emerge in the self-organization of complex systems? · How to study fluctuations in reversible and irreversible systems? The workshop aims to approach these problems taking advantage of results, tools, and ideas coming from a diverse range of scientific disciplines, with the goal of advancing our knowledge and foster cross fertilization. Speakers include: M. Baiesi, Italy L. Biferale, Italy G. Biroli, France A. Celani, Italy S. Ciliberto, France S. Cocco, France L. Cugliandolo, France L. De Cruz, Belgium E. Domany, U.S.A. E. Frey, Germany K. Gawedsky, France A. Gritsun, Russian Federation R. Harries, U.K. A. Jelic, Italy F. Kucharski, Italy T. Kuna, U.K. J. Kurchan, France A. Laio, Italy B. Machta, U.S.A. C. Maes, Belgium C. Mejia-Monasterio, Spain I. Nemenmann, U.S.A. A. Politi, U.K. D. Ruelle, France S. Sarkar, U.S.A. D. Schwab, U.S.A. U. Seifert, Germany T. Sharpee, U.S.A. S. Suweis, Italy M. Transtrum, U.S.A. E. Vanden-Eijnde, U.S.A. S. Vannitsem, Belgium A. Vulpiani, Italy J. Yorke, U.S.A. ICTP ICTP [email protected]
Advanced Workshop on Nonequilibrium Systems in Physics, Geosciences, and Life Sciences | (smr 3203)
Organizer(s): Freddy Bouchet (ENS-Lyon), Giovanni Gallavotti (Uni La Sapienza Rome), Andrea Gambassi (SISSA), Valerio Lucarini (Uni Reading/Uni Hamburg), Stefano Ruffo (SISSA), Local Organiser: Matteo Marsili
Cosponsor(s): University of Reading, SISSA
Abstract. In a QFT or a theory of Gravity, the most interesting objects are the physical amplitudes. As it turns out, the symmetry properties of the theory are not always best understood in the usual field space basis. In this talk, I shall discuss two different choices of basis in which amplitudes are best understood. I shall provide some explicit examples for clarification.
Proper Basis for Studying Amplitudes
Speaker(s): Nabamita BANERJEE (IISER, Pune, India)
Europe/Rome ICTP Campus - Aerial view with Miramare Castle and Park The policy relevance of understanding past, present, and future climate change and variability and the impact, response and feedbacks of natural and managed ecosystems and many different socio-economic sectors is clear. This has motivated the development of a wide spectrum of techniques and coordinated research efforts around the world aimed at generating robust regional climate change scenarios for the assessment of impacts and vulnerability and associated risks. Progress in observational and modeling capabilities, process understanding, and the development of diverse applications within the scientific, policy and practitioner communities has nurtured the scoping of an ambitious assessment of regional climate change information as part of the IPCC Sixth Assessment Report (AR6) of Working Group I (WGI). The WGI report will assess processes of global to regional climate change, weather and climate extremes in a changing climate, and climate change information for regional impact and risk assessment. A regional atlas is also proposed as an annex of the WGI report. The Expert Meeting will address the challenging assessment of regional climate information within the WGI AR6 report that will build on multiple sources of information, including observations, climate model products, downscaling techniques, and understanding of local, regional and large-scale drivers and feedbacks of regional climate and change. It will consider how regional information is used and assessed within the climate change risk assessment framework. It will It will consider the scoping of a regional atlas and the development of mechanisms to strengthen the 'hand shake', or integration of the assessment across the IPCC, in particular of regional impacts, vulnerability and risk undertaken by WGII. The meeting will be organized by a scientific steering committee (SSC) with representation across the three IPCC WGs, supported by the WGI Technical Support Unit. Participation is by invitation. The list of participants will be finalized following the outcome of author selection of the IPCC WG AR6 reports in early February, 2018. Presentations slides Wed 16 May for download, http://indico.ictp.it/event/8458/material/13/ Presentation slides Thu 17 May for download, http://indico.ictp.it/event/8458/material/15/ Presentation slides Fri 18 May for download, http://indico.ictp.it/event/8458/material/18/ Regional programme draft, http://indico.ictp.it/event/8458/material/10/ Scientific Steering Committee Valerie Masson-Delmotte Panmao Zhai Carolina Vera Edvin Aldrian Greg Flato Andreas Fischlin Hans-Otto Portner Debra Roberts Mark Howden Ramon Pichs-Madruga Technical Support Unit Anna Pirani Wilfran Moufouma-Okia Elvira Poloczanska Points of contact Programme: Anna Pirani (IPCC) [email protected] Local: Susanne Henningsen (ICTP) [email protected] For ICTP Colloquium Valerie Masson-Delmotte on 17 May 2018 at 16:00, see link: http://indico.ictp.it/event/8477/ Trieste - Italy ICTP [email protected]
Intergovernmental Panel on Climate Change (IPCC) Expert Meeting on Assessing Climate Information for Regions | (smr H545)
Room: ex-SISSA Main Auditorium
Cosponsor(s): IPCC Working Group I
Europe/Rome Abstract. In this talk, I will present two dark matter frameworks where a mass splitting in the dark sector dramatically alters the expectations for indirect detection rates. In the first case, the presence of a quasi-degenerate metastable state, where the dark matter number is stored, allows for sub-GeV relics with large s-wave annihilation cross section and not excluded by CMB bounds. In the second case, dark matter particles inelastically up-scatter in the interstellar plasma to a quasi-degenerate heavier partner, whose subsequent decays generate X-ray lines with unique spectrum and morphology. ICTP ICTP [email protected]
Cosmic Photons from Mass Splitting in the Dark Sector
Speaker(s): Francesco D'ERAMO (INFN, Padova)
--- Please note unusual venue!! --
Abstract: We begin by exploring a well-known notion for finite dimensional spaces that turns out to be revealed in infinite dimensional spaces as well. This fact will allow us to examine the notion of monotonicity in a more general setting. The presentation will be self-contained, connecting well-known results such as the Lax-Milgram Theorem. Please note unusual venue: LECTURE ROOM D (Leonardo Building, Terrace Level)
Surjectivity Problems for Monotone Operators
Room: Leonardo Building - Lecture Room D
Speaker(s): Claudio H. Morales (University of Alabama in Huntsville)
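Since the abstract cites the Lax-Milgram Theorem as one of the connecting results, its standard statement is recalled here for reference (textbook formulation, not taken from the talk):

```latex
% Lax-Milgram Theorem (standard formulation).
% Let H be a real Hilbert space and a: H x H -> R a bilinear form with
%   |a(u,v)| <= C ||u|| ||v||        (boundedness)
%   a(u,u)   >= alpha ||u||^2, alpha > 0   (coercivity).
\[
\forall\, f \in H^{*}\ \exists!\, u \in H:\quad a(u,v) = f(v)\quad \forall\, v \in H,
\qquad \|u\|_{H} \le \tfrac{1}{\alpha}\,\|f\|_{H^{*}} .
\]
```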
Europe/Rome Dr. Valérie Masson-Delmotte is a senior scientist from Laboratoire des Sciences du Climat et de l'Environnement, Institut Pierre Simon Laplace, Université Paris Saclay / CEA / CNRS, France. She is the Co-chair of IPCC Working Group I for the AR6 cycle. Her research interests are focused on quantifying and understanding past changes in climate and atmospheric water cycle, using analyses from ice cores in Greenland, Antarctica and Tibet, analyses from tree-rings as well as present-day monitoring, and climate modelling for the past and the future. She has worked on issues such as the North Atlantic Oscillation, drought, climate response to volcanic eruptions, polar amplification, climate feedbacks, abrupt climate change and ice sheet vulnerability accross different timescales. She is active in outreach for children and for the general public and has contributed to several books on climate change issues (e.g. Greenland, climate, ecology and society, CNRS editions, 2016; in French). Her research was recognized by several prizes (European Union Descartes Prize for the EPICA project, 2008; Women scientist Irène Joliot Curie Prize, 2013; Tinker-Muse Prize for science and policy in Antarctica, 2015; Highly Cited Researcher since 2014). Abstract: Ice cores provide a wealth of insights into past climatic and environmental changes. Obtaining information on past polar temperature changes is important to document climate variations beyond scarce instrumental records, and to test our quantitative understanding of past climate variations. Water stable isotope ratios in ice core records have commonly been used as qualitative proxies for past changes in polar temperature and moisture source characteristics, but extracting quantitative signals is a major challenge. Initially, spatial relationships between surface snow isotopic composition and surface temperature were used to establish a modern "isotopic thermometer". Simulations performed with climate models equipped with water stable isotopes were subsequently used to assess the validity of this "isotopic thermometer calibration" for different climate states (e.g. glacial, interglacial), assuming that the ice core signal is a precipitation weighted deposition record. I will first present recent findings based on new capability to monitor water vapour isotopic composition in the North Atlantic / Greenland and several Antarctic regions. These new datasets challenge the classical interpretation of ice core records as just precipitation-weighted signals. Moreover, they challenge the ability of atmospheric models equipped with water stable isotopes to fully resolve the initial marine boundary layer isotopic composition spatial patterns. These are key limitations to our quantitative understanding of ice core signals. I will then illustrate major results obtained from water stable isotope records in Greenland and Antarctic ice cores at three time scales : (i) the documentation of polar climate variability during the last thousand years, and the challenge to separate intrinsic, spontaneous climate variability from the response to natural forcings; (ii) the bipolar structure of abrupt changes during the last climatic cycle, and its implications for the interplay between reorganizations in ocean circulation, sea ice extent and polar climate ; (iii) polar temperature trends during the current and last interglacial period, and their relevance for the assessment of ice sheet vulnerability. 
Over these three time scales, I will stress why quantifying past changes is relevant for the evaluation of climate models and for the assessment of future risks. The Colloquium will be livestreamed at ictp.it/livestream. Light refreshments will be served after the talk. ICTP, Trieste, Italy ICTP [email protected]
@ ICTP, Trieste, Italy
ICTP Colloquium: From water molecules to climate, making sense of Greenland and Antarctic ice core records
Speaker(s): Prof. Valerie Masson-Delmotte, Laboratoire des Sciences du Climat et de l'Environnement, Institut Pierre Simon Laplace, France, and Co-chair of IPCC Working Group I
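The "isotopic thermometer" referred to in the abstract is, in its classical form, a linear spatial calibration of snow isotopic composition against surface temperature. As background only (a commonly quoted form, not material from the talk):

```latex
% Classical spatial calibration of the water-isotope "thermometer".
\[
\delta^{18}\mathrm{O} \;\approx\; a\,T_{\mathrm{site}} + b,
\qquad a \approx 0.67\text{--}0.70\ \text{‰}\,{}^{\circ}\mathrm{C}^{-1}
\ \text{(classical Greenland spatial slope, after Dansgaard 1964)} .
\]
```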
OLIFIS "Ad un passo dalle IPhO 2018" | (smr H548)
Room: Denardo Informatics Room (AGH)
Organizer(s): OLIFIS, Prof. Luigi Censi,
Europe/Rome The German Research Foundation (DFG) and the International Center for Theoretical Physics (ICTP) are organizing a Workshop on Global Differential Geometry, to be held at the African Institute of Mathematical Sciences (AIMS) in Mbour, Sénégal, May 21 - 25, 2018. The workshop will focus on recent developments in Global Differential Geometry, in particular on symplectic and Poisson geometry, including foliations and Lie theory, as well as classical differential geometry with its connections to differential topology and global analysis. There are close relations to the index theory of elliptic operators, generalized complex geometry and geometric flows, all of which open the door to applications in mathematical physics. Summer School on Global Differential Geometry (14-18 May) web page: http://home.mathematik.uni-freiburg.de/mathphys/konf/diffgeo-summerschool/index.html Organizing Committee: Hamidou Dathe (Dakar) Bernhard Hanke (Augsburg) Aissa Wade (Penn State and AIMS Mbour) Katrin Wendland (Freiburg) DFG Coordinators: Carsten Balleier Alida Höbener Beate Wilhelm The list of speakers includes: Claudio Arezzo (ICTP) Augustin Banyaga (Penn State) Augustin Tshidibi Batubenge (South Africa) Kai Cieliebak (Augsburg) A. Degeratu (Stuttgart) Hassimou Diallo (Cote d'Ivoire) Abdoul Salam Diallo (Senegal) Cheikh Mbacke Diop* (Senegal) Jonathan Mboyo Esole* (Northeastern) Urs Frauenfelder (Augsburg) Sebastian Goette (Freiburg) Djideme Franck Houenou (Benin) El-Kaioum Mohamed Moutuou* (USF Florida) Karl-Hermann Neeb (Erlangen) Sylvie Paycha (Potsdam) Philippe Rukimbira (FIU Florida) Thomas Schick (Göttingen) Léonard Todjihoundé (Benin) Joel Tossa (Benin) Burkhard Wilking (Münster) *to be confirmed Mbour - Senegal ICTP [email protected]
@ Mbour - Senegal
Workshop on Global Differential Geometry | (smr 3205)
Organizer(s): Hamidou Dathe (Mathematics Institute, Dakar University), Bernhard Hanke (Mathematics Institute, Augsburg University), Aissa Wade (Department of Mathematics, Penn State University, and AIMS Senegal, Mbour), Katrin Wendland (Mathematics Institute, Freiburg University), ICTP Scientific Contact: Claudio Arezzo
Cosponsor(s): DFG, AIMS Sénégal
**DEADLINE: 15/03/2018** For those who applied for financial support to participate in the Workshop on Global Differential Geometry, May 21-25, 2018, at the AIMS Center M'bour, Senegal: If you were planning to participate in the Summer School on Global Differential Geometry, May 14-18, 2018, please be aware that a separate application is necessary, sent directly to the AIMS Center M'bour (by e-mail to [email protected]), where the school will take place. Please follow the application procedure described on the summer school's webpage, cf. http://home.mathematik.uni-freiburg.de/mathphys/konf/diffgeo-summerschool/index.html under the headline "Applications". The deadline for applications for the summer school has been extended to 2nd of March 2018. **DEADLINE: 10/04/2018**
Europe/Rome REGISTRATION & ADMINISTRATIVE FORMALITIES ATTENTION: All Participants paid by the ICTP and staying in the ICTP Guesthouses are automatically registered at the time of check-in at the reception desk. Therefore, on Monday morning, please go directly to the Finance Office at the Enrico Fermi Building from 8.30 to 9.00 am, ground floor, to collect your expenses. Badges, Passports and Travel Receipts are needed. Additional Shuttle Bus service will be provided from the Adriatico Guest House and pick-up/drop-off is available from the carpark area, main entrance of the Adriatico Guest House. Finance office opening times are: Monday, Tuesday and Friday from 8.30 - 12.00 and 13.30 to 14.30 PLEASE NOTE: To all Directors/Conference Speakers and to all Participants NOT staying in the ICTP Guesthouses Please register with the Conference Secretary in office no. 1 Adriatico Guest House, lower level 1 ORGANIZERS: G. Kaminski Schierle, Cambridge University, U.K. A. Painelli, University of Parma, Italy LOCAL ORGANIZERS: L. Grisanti, SISSA/ICTP, Trieste, Italy A. Hassanali, ICTP, Trieste, Italy A Conference for experimentalists and theoreticians investigating the photophysics and photochemistry behind the interaction of light with the organic material that makes up living systems. This Conference will be a venue for scientists from all over the world working in biochemistry, physical chemistry and physics to discuss interactions between light and biological matter or its molecular building blocks. Synergy among experimentalists and theoreticians is crucial to understand the diversity of phenomena and the fundamental processes that govern the complex interaction of light with living matter. The programme will include oral keynote invited presentations on the most recent developments, given both by high-profile senior scientists and junior researchers as well as contributed short talks and poster presentations. CALL FOR CONTRIBUTED ABSTRACTS: In the application form, all applicants are invited to submit an Abstract for a poster presentation. A limited number of contributed Abstracts will be selected by the Organizing Committee for a short oral presentation. TOPICS: • Theoretical advances in modelling electronic excited states in biologically-relevant systems, including charge and energy transfer processes; • Photophysics and photochemistry of soft matter through time-dependent spectroscopic techniques; • State-of-the-art developments and applications of linear and nonlinear optical techniques for the imaging of biological systems in cells and tissues. • Emerging fluorescence and luminescence phenomena in proteins and biological systems. SPEAKERS INCLUDE: M. Barbatti, Marseille, France S. Boxer, Stanford, U.S.A. N. Chawdhury, Sylhet, Bangladesh I. Daidone, L'Aquila, Italy J. Dasgupta, Mumbai, India M. Di Donato, Florence, Italy W. Domcke, Garching, Germany N. Doslic, Zagreb, Croatia M. Garavelli, Bologna, Italy E. Gazit, Tel-Aviv, Israel M. Hariharan, Kerala, India R. Improta, CNR/IBB, Naples, Italy B. Mennucci, Pisa, Italy H. Okur, EPFL Lausanne, Switzerland N. Rega, Naples, Italy B. Rossi, ELETTRA Trieste, Italy F. Santoro, ICCOM-CNR Pisa, Italy Y. Taghipour Azar, Tehran, Iran I. Tavernelli, IBM Zurich, Switzerland E. Vauthey, Geneva, Switzerland R. Venkatramani, Mumbai, India N. Ventosa, ICMAB Barcelona, Spain ICTP ICTP [email protected]
Conference on the Complex Interactions of Light and Biological Matter: Experiments meet Theory | (smr 3120)
Organizer(s): Luca Grisanti (ICTP), Gabi Kaminski (Cambridge University), Anna Painelli (University of Parma), Local Organiser: Ali Hassanali
Europe/Rome Abstract. Although the standard paradigm of Cold Dark Matter (CDM) has been remarkably successful in predicting the large-scale structure of the universe, it has been known to exhibit a number of problems at small scales. A popular solution to these problems involves invoking Warm Dark Matter (WDM) particles with masses of about a keV that erase small-scale power. The mass of these WDM particles can, in principle, be constrained using cosmological observations at high redshift. In this talk, we will explore various observational probes and the corresponding theoretical formalism which can be useful for constraining the nature of dark matter using ongoing and upcoming instruments. ICTP ICTP [email protected]
Nature of Dark Matter using Cosmological Observations
Speaker(s): Tirthankar Roy CHOUDHURY (Tata Institute of Fundamental Research, Pune, India)
Formazione e Scienza per lo Sviluppo Sostenibile | (smr H560)
Organizer(s): TWAS,
Europe/Rome Spontaneous avalanche to plasma splits the core of an ellipsoidal Rydberg gas of nitric oxide. Ambipolar expansion first quenches the electron temperature of this core plasma. Then, long-range, resonant charge transfer from ballistic ions to frozen Rydberg molecules in the wings of the ellipsoid quenches the centre-of-mass ion / Rydberg molecule velocity distribution. This sequence of steps gives rise to a remarkable mechanics of self-assembly, in which the kinetic energy of initially formed hot electrons and ions drives an observed separation of plasma volumes. These dynamics adiabatically sequester energy in a reservoir of mass transport, starting a process that anneals separating volumes to form an apparent glass of strongly coupled ions and electrons. Short-time electron spectroscopy provides experimental evidence for complete ionization. The long lifetime of this system, particularly its stability with respect to recombination and neutral dissociation, suggests that this transformation affords a robust state of arrested relaxation, far from thermal equilibrium. We argue that this state of the quenched ultracold plasma offers an experimental platform for studying quantum many-body physics of disordered systems in the long-time and finite energy-density limits. The qualitative features of the arrested state fail to conform with classical models. Here, we develop a microscopic quantum description for the arrested phase based on an effective many-body spin Hamiltonian that includes both dipole-dipole and van der Waals interactions. This effective model offers a way to envision the quantum disordered non-equilibrium physics of this system. ICTP ICTP [email protected]
Condensed Matter and Statistical Physics Seminar: Possible Manifestations of Quantum Disordered Dynamics in the Arrested Relaxation of a Molecular Ultracold Plasma
Speaker(s): Edward GRANT (Univ. of British Columbia, Dept. of Chemistry, Vancouver, Canada)
Europe/Rome Abstract: I will give a very brief introduction to the concept of computation on a computer, and in particular on a quantum computer. I will try to describe for which kinds of problems we could expect a speed-up due to quantum mechanics and discuss the status of the technology in this direction. ICTP ICTP [email protected]
BASIC NOTIONS SEMINAR: Introduction to Quantum Computation
Speaker(s): Antonello SCARDICCHIO, ICTP
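For orientation, two textbook facts behind the quantum speed-ups alluded to in the abstract (standard results, added here for reference):

```latex
% An n-qubit register is described by 2^n complex amplitudes:
\[
|\psi\rangle \;=\; \sum_{x \in \{0,1\}^{n}} \alpha_x\,|x\rangle,
\qquad \sum_{x} |\alpha_x|^{2} = 1 \quad (2^{n}\ \text{amplitudes}).
\]
% Grover's unstructured search needs O(sqrt(N)) oracle queries versus Theta(N) classically.
\[
\text{Grover search over } N \text{ items: } O(\sqrt{N}) \text{ queries vs. } \Theta(N) \text{ classically.}
\]
```

Shor's factoring algorithm, which runs in polynomial time against super-polynomial best-known classical algorithms, is the other canonical example of such a speed-up.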
Non-relativistic Dark Matter Bound States
Speaker(s): Michele REDI (INFN, Sezione di Firenze)
Europe/Rome Abstract: In 1936 Tsen proved that a 1-dimensional family of hypersurfaces of degree d in projective n-space always admits a section provided that d is less than or equal to n. This simple statement has been generalized in many ways, and still inspires developments in algebraic geometry. In this talk I will survey the history of Tsen's Theorem, mostly from the geometric point of view, and describe current research toward new interpretations and generalizations. ICTP ICTP [email protected]
A Geometric View on Tsen's Theorem
Speaker(s): Carolina Araujo (IMPA)
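The statement quoted in the abstract can be written compactly as follows (standard formulation of Tsen's theorem, added for reference):

```latex
% Tsen's theorem: the function field of a curve over an algebraically closed field is C_1.
\[
X \subset \mathbb{P}^{n}_{K},\quad \deg X = d \le n,\quad
K = k(C),\ k = \bar{k}
\;\Longrightarrow\; X(K) \neq \emptyset ,
\]
```

i.e. a one-parameter family of degree-d hypersurfaces in projective n-space with d ≤ n admits a section, because the function field k(C) is a C1 (quasi-algebraically closed) field.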
Europe/Rome Kip Thorne was born in 1940 in Logan, Utah, USA, and is currently the Feynman Professor of Theoretical Physics, Emeritus at the California Institute of Technology (Caltech). From 1967 to 2009, he led a Caltech research group working in relativistic astrophysics and gravitational physics, with emphasis on relativistic stars, black holes, and especially gravitational waves. Fifty three students received their PhD's under his mentorship, and he mentored roughly sixty postdoctoral students. He co-authored the textbooks Gravitation (1973, with Charles Misner and John Archibald Wheeler) and Modern Classical Physics (2017, with Roger Blandford), and was sole author of Black Holes and Time Warps: Einstein's Outrageous Legacy. Thorne was cofounder (with Rainer Weiss and Ronald Drever) of the LIGO (Laser Interferometer Gravitational Wave Observatory) Project. LIGO - in the hands of a younger generation of physicists - made the breakthrough discovery of gravitational waves arriving at Earth from the distant universe on September 14, 2015. For his contributions to LIGO and to gravitational wave research, Thorne has shared the Nobel Prize in Physics, and other major awards. In 2009 Thorne stepped down from his Caltech professorship to ramp up a new career at the interface between art and science, including the movie Interstellar (which sprang from a Treatment he co-authored, and for which he was Executive Producer and Science Advisor). Abstract: A half century ago, John Wheeler challenged his students and colleagues to explore geometrodynamics by asking, how does the curvature of spacetime behave when roiled in a storm, like a storm at sea with crashing waves? We tried to explore this, and failed. Success eluded us until two new tools became available: computer simulations, and gravitational wave observations. Thorne will describe what these have begun to teach us, and he will offer a vision for the future of geometrodynamics. ICTP, Trieste, Italy ICTP [email protected]
ICTP Colloquium - Geometrodynamics: The Nonlinear Dynamics of Curved Spacetime
Address: Strada Costiera 11, 34151 Trieste, Italy
Speaker(s): Prof. Kip S. Thorne, Feynman Professor of Theoretical Physics, California Institute of Technology
Two new avenues in dark matter indirect detection
Speaker(s): Ranjan LAHA (Mainz University, Germany)
--- Please note unusual time!!! ---
Soft theorem and its classical limit
Speaker(s): Ashoke SEN (Harish-Chandra Research Institute, Allahabad, India)
Europe/Rome Atmospheric neutrino experiments were the first ones to discover 20 years ago the phenomenon of neutrino oscillations, to establish existence of non-zero neutrino mass and large lepton mixing. Studies of atmospheric neutrinos are used to search for new physics beyond the 3-neutrino paradigm, including sterile neutrinos, non-standard neutrino interactions, effects of violation of fundamental symmetries of Nature. Now the field moves to the next phase of high-precision studies, which will enable us to effectively use atmospheric neutrinos as a tool to determine the mass ordering, octant of the 2-3 mixing angle and the Dirac CP-violating phase. In this connection, knowledge of the atmospheric neutrino fluxes at percent level is needed, which requires higher precision determination of both cosmic ray fluxes and neutrino-nucleon cross sections, as well as a better control over systematics. Understanding of atmospheric neutrinos is essential to estimate background for diffuse supernova neutrinos, proton decay, future dark matter direct/indirect detection experiments, and high-energy cosmic neutrinos. The goal of this workshop is to further explore physics potential of atmospheric neutrinos and support the physics case of new experiments on atmospheric neutrinos. We plan to bring together leading experts in both theory and experiment as well as young researchers to assess the state-of-the-art knowledge in this field and to foster further theoretical, phenomenological, and experimental studies in atmospheric neutrino physics. The programme will be mainly composed of the invited talks. Some time will be allocated for the oral presentations selected from submitted abstracts. It will be possible to display posters during whole time of the workshop. Papers which will not be selected for oral presentations can be presented as posters. Ample time will be given for discussion. List of Topics: Results from atmospheric neutrino experiments Analysis of the data and treatment of systematics Cosmic ray fluxes at all energies Neutrino-nucleon and neutrino-nucleus interactions Computation of atmospheric neutrino fluxes Prompt atmospheric neutrinos Oscillations and absorption of neutrinos in the Earth Neutrino tomography of the Earth Determination of mixing angles, mass hierarchy, and CP-phase Searches for sterile neutrinos Searches for non-standard neutrino interactions Tests of fundamental symmetries Future experiments with atmospheric neutrinos Solar atmospheric neutrinos Atmospheric neutrinos as a background for diffuse supernova neutrinos, proton decay, dark matter searches Atmospheric neutrinos and cosmic neutrinos Synergy/Complementarity among atmospheric and LBL experiments Scientific Advisory Committee: John F. Beacom (Ohio State University) Paschal A. Coyle (CPPM, Marseille) Amol S. Dighe (TIFR, Mumbai) Thomas K. Gaisser (University of Delaware) Francis L. Halzen (University of Wisconsin-Madison) Takaaki Kajita (ICRR, University of Tokyo) Edward T. Kearns (Boston University) Eligio Lisi (INFN, Bari) John G. Learned (University of Hawai'i) Naba Kumar Mondal (SINP, Kolkata) Michele Maltoni (IFT, Madrid) Orlando L. G. Peres (Campinas State University) Ina Sarcevic (University of Arizona) Walter Winter (DESY, Zeuthen) The Programme is Preliminary and some minor changes are possible. Invited Speakers do not need to apply online. There is no registration fee for this activity. ICTP ICTP [email protected]
28 May 2018 - 1 Jun 2018
High Energy Cosmology and Astroparticle Physics
Advanced Workshop on Physics of Atmospheric Neutrinos - PANE 2018 | (smr 3207)
Organizer(s): Sanjib Kumar Agarwalla (IOP, Bhubaneswar), Bhupal Dev (Washington University), Antonio Palazzo (University of Bari & INFN), Alexei Smirnov (MPIK Heidelberg & ICTP), Local Organiser: Atish Dabholkar
Cosponsor(s): the Italian Institute for Nuclear Physics (INFN)
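As background to the precision-oscillation goals listed in the workshop description, the textbook two-flavour vacuum oscillation probability reads (standard formula, not specific to this workshop):

```latex
\[
P_{\nu_\alpha \to \nu_\beta}
= \sin^{2} 2\theta \,\sin^{2}\!\left(\frac{\Delta m^{2} L}{4E}\right)
\simeq \sin^{2} 2\theta \,\sin^{2}\!\left(1.27\,
\frac{\Delta m^{2}\,[\mathrm{eV}^{2}]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right).
\]
```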
Europe/Rome ATTENTION: in the 1st week the MORNING lectures are taking place at the Leonardo building, Budinich Lecture Hall. All AFTERNOON sessions and the 2nd week will take place at the Adriatico Guest House! The ICTP regional climate modeling system RegCM4 is currently participating in the CORDEX-CORE initiative, which entails the completion of a new set of downscaled climate projections over most CORDEX domains for two greenhouse gas concentration scenarios (RCP8.5 and RCP2.6) at a horizontal grid spacing of 25 km. These projections are being completed at ICTP as well as other laboratories using the RegCM4 model, as a community effort aimed at producing climate information usable for impact assessment studies and contributing to the activities of the Intergovernmental Panel on Climate Change (IPCC). It is expected that by the date of the workshop a number of new projections will have been completed for multiple domains, and therefore the workshop will be an optimal venue to analyze these projections and exchange experience across different domains and regions. As in previous workshops of this series, this event will provide lectures and extensive hands-on sessions on the theory of regional climate change and regional climate modeling as well as the use of the RegCM modeling system. The focus of the present workshop will be on the analysis and interpretation of the output of regional model projections, addressing issues such as: assessment of model performance, performance metrics and model systematic errors, identification and quantification of added value, study of phenomena relevant to different regions, uncertainty in projections and their dependence on model biases. The workshop will also provide a forum to discuss the production of scientific publications by the RegCM user community participating in the CORDEX-CORE effort, particularly in view of relevant IPCC deadlines. Experience with the RegCM modeling system and with the analysis of regional model output is an important requirement for participation. The workshop is intended for scientists and graduate students working in the areas of Atmospheric Physics and Dynamics, Climatology, Oceanography, Physics and Mathematics. ICTP ICTP [email protected]
Ninth ICTP Workshop on the Theory and Use of Regional Climate Models | (smr 3208)
Organizer(s): Filippo Giorgi (ICTP, Italy), Erika Coppola (ICTP, Italy), Marta Llopart (UNESP, Brazil), Mouhamadou B. Sylla (WASCAL, Burkina Faso),
Regional Workshop on the Accident Analysis code for Severe Accidents | (smr H544)
Organizer(s): N. Hiranuma (IAEA), A.P. Ulses (IAEA),
Europe/Rome The Schur process is in some sense a discrete analogue of a random matrix. Its edge behavior is known to be in the same universality class, described by the Airy kernel and the Tracy-Widom distribution. In this talk we consider two variants of the Schur process: the periodic case introduced by Borodin, and the "free boundary" case recently introduced by us. We are able to compute their correlation functions in a unified manner using the machinery of free fermions. We then investigate the edge asymptotic behavior and show it corresponds to two nontrivial deformations of the Airy kernel and of the Tracy-Widom distribution. Based on joint work with Dan Betea, Peter Nejjar and Mirjana Vuletić. SISSA, Via Bonomea 265, room 128 ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: Edge Behaviour of the Periodic and the Free Boundary Schur Processes
Speaker(s): Jeremie BOUTTIER (IPhT Saclay/ENS Lyon, France)
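For readers unfamiliar with the objects named in the abstract, the Airy kernel and the Tracy-Widom (GUE) distribution that describe the undeformed edge behaviour are (standard definitions, added for reference):

```latex
\[
K_{\mathrm{Ai}}(x,y) \;=\; \int_{0}^{\infty} \mathrm{Ai}(x+u)\,\mathrm{Ai}(y+u)\,du ,
\qquad
F_{2}(s) \;=\; \det\!\left(I - K_{\mathrm{Ai}}\right)_{L^{2}(s,\infty)} .
\]
```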
Europe/Rome Natural Decadal Climate Variability: Societal Impacts addresses natural decadal climate variability (DCV), a phenomenon which has had long-lasting impacts on civilizations, especially on water availability and agriculture. Multiyear to decadal variations since the 1960s are observed in instrument-measured precipitation and temperature, water availability and river flows, crop production, agricultural irrigation, inland water-borne transportation, hydroelectricity generation, and fish and crustacean captures. A longer-term perspective is provided by multi-century data on dry and wet epochs based on tree-ring information, and corroborating evidence from other literature. This work will benefit climate scientists, meteorologists, hydrologists, agronomists, water transportation planners, resource economists, policymakers, professors, and graduate students, and anyone else who has an interest in learning how natural climate phenomena have influenced societies for at least the past 1000 years. ICTP ICTP [email protected]
Natural Decadal Climate Variability and its Societal Impacts
Room: Adriatico Guest House - Kastler Lecture Hall
Speaker(s): Vikram Madhuvadan Mehta, Ph.D., Executive Director, The Center for Research on the Changing Earth System, Maryland, U.S.A.
Anderson localisation in theory space
Speaker(s): Dave SUTHERLAND (University of Santa Barbara, California)
-- Please note unusual time and venue!! --
Europe/Rome Abstract: Tensors are fundamental objects in multilinear algebra, with important applications to the complexity of matrix multiplication, signal processing, phylogenetics and algebraic statistics. In applications, one generally looks for minimal decomposition of tensors as linear combinations of undecomposable tensors. The smallest integer r needed to write a tensor T as a linear combination of r undecomposable tensors is called the rank of T. Determining the rank of a tensor is a problem that has received much attention in recent years, and has a nice geometric interpretation. In this talk I will explain some applications of tensors decomposition and interpret the problem from the point of view of algebraic geometry. In particular, I will present new results about ranks of tensors, in collaboration with Alex Massarenti and Rick Rischter. ICTP ICTP [email protected]
BASIC NOTIONS SEMINAR: The Geometry of Tensors
Europe/Rome The Summer School on Modelling Tools for Sustainable development will provide training on selected open modelling tools for sustainable development pathways. The Summer School involves four weeks of intensive training for government officials from countries participating in, amongst others, UNDESA, UNDP, World Bank Group and related capacity development projects on modelling tools. It will also engage academics and researchers working in the field of sustainable development whose institutions will become regional centres of excellence. As one of several entry points analysts will be exposed to geospatial electricity access (using the OnSSET.org tool) as well as medium to long term energy investment modelling (using the OSeMOSYS.org tool). Participants will then extend this to include integrated Climate-, Land-, Energy- and Water- system (using the OsiMOSYS.org) modelling. The latter will investigate the benefits of resource policy coherence. During the first week of training steering committee meetings of experts dedicated to the development of these tools will also be held (in parallel to the training). The same experts will be involved in introductory presentations. The RGCM and SPEEDY climate models of the ICTP will also be introduced in the first week. During the second to the third week participants will deepen their modelling skills so that they can operate and enhance their national models independently, incorporate new and evolving policy issues and contribute to the practice of providing rigorous evidence for decision making on national sustainable development policies. In the fourth week, presentations (by participants) will be made to high level policy makers. This will be followed by a one day high level policy discussion. The workshop will culminate with an exercise by trainers, trainees, academics, funders and others to enhance the efficacy of global capacity building for sustainability. Participation in the event will be primarily by invitation only. However, limited spaces are available for self-funded participants. The self-funded applicants are limited to academics, researchers and graduate students working on related fields, preferably working in (or with developing countries). The self-funded participants must be from a university that plans to implement teaching activities using OSeMOSYS.org, OnSSET.org or OSiMOSYS.org. A letter of commitment should be obtained from the appropriate person responsible - and submitted with the application. (Note that all teaching material from the course, can be used for the course. There are online discussion fora. Also, note that all software is free and open source.) Applications will be evaluated by the selection committee based on the following criteria: i) Relevance of their academic background to the research field(s) of the Summer School, ii) Excellent language skills iii) Clear potential and ambition to contribute to the adoption of related teaching and training in developing country institutions. The program is open for applications from all countries. Visa support may be provided. Self-funded participants will be required to bring their own laptops with windows 10 installed and full administrator rights. ICTP ICTP [email protected]
4 Jun 2018 - 29 Jun 2018
The Summer School on Modelling Tools for Sustainable Development - OpTIMUS | (smr 3210)
Secretary: Pandora Malchose
Organizer(s): Mark Howells (KTH), Local Organiser: Adrian Tompkins
Cosponsor(s): University of Cambridge, The International Union of Geodesy and Geophysics, UNDP, UNDESA, KTH, The World Bank, OpTIMUS, DFID
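The energy-investment modelling taught at the school (e.g. with OSeMOSYS) is, at its core, least-cost optimisation. The sketch below is not OSeMOSYS code; it is a deliberately minimal, hypothetical two-technology capacity-expansion problem in Python/SciPy, included only to illustrate the type of problem such tools solve, with made-up cost and demand numbers.

```python
# Minimal, hypothetical least-cost capacity-expansion sketch (not OSeMOSYS).
# Decision variables: installed capacity [GW] of two technologies.
# Objective: annualised capital + fixed O&M cost; constraint: cover peak demand.
from scipy.optimize import linprog

cost = [120.0, 75.0]          # annualised cost per GW of tech A and tech B (illustrative)
peak_demand = 10.0            # GW that must be available at peak
availability = [0.3, 0.9]     # assumed capacity credit of each technology at peak

# linprog minimises cost @ x subject to A_ub @ x <= b_ub;
# "meet the peak" (availability . x >= demand) becomes -availability . x <= -demand.
res = linprog(c=cost,
              A_ub=[[-availability[0], -availability[1]]],
              b_ub=[-peak_demand],
              bounds=[(0, None), (0, None)])

print("Installed capacity [GW]:", res.x)
print("Total annualised cost  :", res.fun)
```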
Europe/Rome -------------------------------------------------------------- Organised in partnership with the Clay Mathematics Institute -------------------------------------------------------------- Please note that School related material (abstracts, slides/notes as received by lectureres, and video recording of most School lectures) can be viewed & downloaded directly from this site, see list of links on the left of this page, as follows: - either by clicking on the link "Programme" and scrolling it down (i.e. in chronological order), or - by clicking on the link "Speakers" (i.e. sets of lectures grouped by lecturer). In addition, video recording of lectures is also available on a dedicated playlist on ICTP YouTube Channel: https://www.youtube.com/watch?v=2msDseBIAeI&list=PLLq_gUfXAnknsg-mI5B1OmZfOuqZls0dK * * * The School aims at introducing students both to the basic ideas and to the most recent breakthroughs in the field of extrinsic curvature flows, a fundamental research direction at the intersection of Analysis and Geometry, with deep connections to the theories of minimal surfaces and of diffusive partial differential equations. Extrinsic flows have important applications to the geometry of submanifolds, as their study provides an effective strategy for obtaining topological classification results and for proving sharp geometric inequalities. In a broader context, extrinsic flows arise in describing the dynamics of interfaces in physical and biological sciences, a fact that provides a strong scientific motivation for their mathematical inquiry. Virtually all the different approaches to mean curvature flows will be accounted for, with particular emphasis on the fundamental problem of singularities formation. A major goal of the School will indeed be putting the best students in the position of understanding the many open problems in the field. LIST OF LECTURERS S. Angenent, University of Wisconsin-Madison P. Daskalopoulos, Columbia University G. Huisken, Universität Tübingen C. Mantegazza, Università di Napoli Federico II F. Otto, MPI for Mathematics in the Sciences C. Sinestrari, Università di Roma Tor Vergata T. Souganidis, University of Chicago Y. Tonegawa, Tokyo Institute of Technology Interested candidates should apply online (see link "Apply here" on the left-hand menu). * DEADLINE FOR APPLICATION: EXPIRED * ICTP Secretariat contact: [email protected] ICTP ICTP [email protected]
International School on Extrinsic Curvature Flows | (smr 3209)
Organizer(s): Giovanni Bellettini (University of Siena & ICTP), Francesco Maggi (University of Texas at Austin), Carlo Sinestrari (Tor Vergata Roma University), Local Organiser: Claudio Arezzo
Cosponsor(s): Clay Mathematics Institute
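For context, the prototypical extrinsic flow studied at the school, mean curvature flow, evolves an immersed hypersurface by (standard definition, with the usual sign convention):

```latex
\[
\frac{\partial X}{\partial t} \;=\; \vec{H} \;=\; -H\,\nu \;=\; \Delta_{g(t)} X ,
\qquad X(\cdot,t): M^{n} \to \mathbb{R}^{n+1},
\]
% H: mean curvature, nu: outward unit normal,
% Delta_{g(t)}: Laplace-Beltrami operator of the induced metric.
```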
Europe/Rome Abstract. Localization techniques have proven to be a powerful and invaluable tool for obtaining exact results in rigidly supersymmetric theories. In this talk, we will discuss how this framework can be used in the study of locally supersymmetric theories, i.e. in supergravity. We will discuss the BRST quantization of supergravity theories on spaces with an asymptotic boundary via a suitable background field formalism. When the background is restricted to have a residual isometry group, an equivariant BRST algebra arises as a deformation of the standard nilpotent BRST algebra. This equivariant algebra can then be used to localize supersymmetric partition functions or expectation values. As an illustration of this general formalism, we will revisit the derivation of the exact entropy of certain asymptotically flat BPS black holes in the Quantum Entropy Function formalism. We will also present recent results for the exact entropy of asymptotically AdS BPS black holes and compare with exact results previously obtained in the dual CFT. ICTP ICTP [email protected]
Localization in supergravity: theory and applications
Speaker(s): Valentin REYS (University of Milano, Bicocca)
Europe/Rome It was long believed that no ordered phase can exist in two dimensions (2D) due to strong fluctuations. However, discoveries during recent decades have changed this picture drastically. A very intriguing question arises about the possibility of inducing superconducting correlations in 2D. Besides its fundamental interest, high-temperature superconductivity is known to involve two-dimensional physics, and very recently a 2D superconductor at the interface between certain insulators has been observed. A suitable answer can therefore shed light on the path towards a consistent theory of high-temperature superconductivity and heavy-fermion interface superconductors. In this talk I first review the general properties of 2D superconductors, together with some historical introduction. Then I will focus on our own contribution to the field, concerning the proximity effect in graphene and other 2D systems. In particular I will discuss the exotic proximity effect in MoS2, which we have found to arise from the interplay of an exchange field with the intrinsic spin-orbit interaction. ICTP ICTP [email protected]
Condensed Matter and Statistical Physics: Inducing Superconductivity in Two Dimensional Materials
Speaker(s): Ali GHORBAN ZADEH MOGHADDAM (IASBS Zanjan, Islamic Republic of Iran)
Europe/Rome It is well known that fictitious gauge fields are induced by strain and elastic deformations in graphene. But the interference effects attributed to these gauge fields remain almost unexplored. In this talk, I will show how the supercurrent passing through graphene-based Josephson junctions can be influenced by the gauge fields. In particular we find that the Josephson current is monotonically enhanced in the presence of a constant pseudo-magnetic field induced by arc-shaped strain. On the other hand, when both magnetic and pseudo-magnetic fields are present, Fraunhofer-like oscillations as a function of the real magnetic field flux are found. Intriguingly, the combination of the two kinds of gauge fields results in strong localization of the Josephson current density and also the appearance of large inflated vortex cores. These findings reveal unexpected interference signatures of strain-induced gauge fields in graphene SNS junctions, originating mainly from the time-reversal symmetric property of the pseudo-magnetic fields, and provide unique tools for sensitive probing of the pseudomagnetic fields. ICTP ICTP [email protected]
Condensed Matter and Statistical Physics: Gauge-Field-Induced Anomalous Interference Effects in Graphene SNS Junctions
Speaker(s): Hadi KHANJANI (IASBS Zanjan, Islamic Republic of Iran)
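The "Fraunhofer-like oscillations" mentioned in the abstract refer to deviations from the textbook critical-current pattern of a uniform Josephson junction in a magnetic flux Φ (standard result, added for reference):

```latex
\[
I_{c}(\Phi) \;=\; I_{c}(0)\,
\left|\frac{\sin(\pi\Phi/\Phi_{0})}{\pi\Phi/\Phi_{0}}\right| ,
\qquad \Phi_{0} = \frac{h}{2e}.
\]
```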
Europe/Rome Abstract. In this talk, I will discuss the neutrino oscillation phenomenology with three active and one light sterile neutrinos with a special emphasis on currently running and upcoming long-baseline experiments and the upcoming atmospheric neutrino experiment at the India-based Neutrino Observatory (INO). ICTP ICTP [email protected]
Oscillation with Three Active and One Light Sterile Neutrinos
Speaker(s): Sanjib Kumar AGARWALLA (Institute of Physics, Bhubaneswar)
Europe/Rome Plasmon modes represent a second kind of possible elementary excitation for the Fermi liquid. Basically, plasmon modes involve a cooperative motion of the system, governed by the global interaction between the charge carriers. Plasmon modes in two-dimensional electron liquids exhibit a long-wavelength dispersion that can be captured by classical equations of motion. The dispersion, however, departs from its classical value and becomes sensitive to quantum effects as the plasmon momentum increases. The response of electron systems to electrodynamic fields that change rapidly in space is endowed with unique features, including an exquisite spatial nonlocality. This can reveal much about the materials' electronic structure that is invisible in standard probes that use gradually varying fields. In this talk, we will start by introducing plasmonics in both single- and double-layer graphene and describe the quantum non-locality effects in graphene plasmonics. Our theory involves three types of nonlocal quantum effects: single-particle velocity matching, interaction-enhanced Fermi velocity, and interaction-reduced compressibility. The near-field imaging experiments reveal a parameter-free match with the full quantum description of the massless Dirac electron gas. ICTP ICTP [email protected]
Condensed Matter and Statistical Physics: Quantum Non-Local Effects in Graphene Plasmonics
Speaker(s): Reza ASGARI (Institute for Research in Fundamental Sciences IPM, Tehran, Islamic Republic of Iran)
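The "long-wavelength dispersion captured by classical equations of motion" referred to in the abstract is the familiar square-root law of a two-dimensional electron liquid (standard result, Gaussian units; the quantum corrections discussed in the talk modify it at larger momenta):

```latex
\[
\omega_{\mathrm{2D}}(q) \;\xrightarrow{\;q\to 0\;}\;
\sqrt{\frac{2\pi n e^{2} q}{m}} \;\propto\; \sqrt{q}.
\]
```

For graphene the square-root momentum dependence survives, but the massless Dirac spectrum changes the density dependence of the prefactor to roughly n^(1/4).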
Europe/Rome The Conference will address the physics of disordered quantum many-body systems, with an emphasis on quantum dynamics of systems that are under active experimental investigation. The focus will be on the fundamental physics exhibited by novel systems and on emerging phenomena. Description Quantum dynamics of disordered many-body systems remains a vibrant research area. In particular, recent years have witnessed an outstanding interest in Anderson localization, one of the most fundamental and ubiquitous phenomena in modern condensed matter physics, in the many-body setting. Experimentally, it has become possible to explore the remarkably rich interplay of interaction and localization phenomena in a variety of quantum many-body systems. Concepts of quantum information theory, such as entanglement entropy and spectra, play a prominent role in the characterization of phases of strongly interacting disordered matter. The interplay of disorder and interaction effects becomes particularly intricate in systems with non-trivial topology, such as surfaces and edges of topological insulators. The Conference will bring together theorists and experimentalists to discuss recent progress and future perspectives. Topics Disordered quantum many-body systems, including cold atoms in magneto-optical traps, disordered semiconductors, electron glasses in amorphous systems, graphene, topological insulators, disordered superconducting films and wires, and various implementations of quantum circuits; Many-body localization; Periodically driven systems, time crystals; Quantum information and entanglement; Topological phases; Transport in disordered interacting systems; Superconductor-insulator transitions; Disordered bosons, superfluid-Bose glass transitions; Quantum quenches, far-from-equilibrium phenomena. List of Speakers John Bollinger (University of Colorado Boulder, USA) Piet Brouwer (Freie Universität Berlin, Germany) Pasquale Calabrese (SISSA, Trieste, Italy) Eugene Demler (Harvard University, USA) Jens Eisert (Freie Universität Berlin, Germany) Rosario Fazio (ICTP, Trieste, Italy) Yuval Gefen (Weizmann Institute, Israel) Leonid Glazman (Yale University, USA) Igor Gornyi (KIT Karlsruhe, Germany) Moty Heiblum (Weizmann Institute, Israel) Vedika Khemani (Harvard University, USA) Curt von Keyserlingk (Univ. Birmingham, UK) Nicolas Laflorencie (Paul Sabatier University, Toulouse, France) Leonid Levitov (MIT, USA) Netanel Lindner (Technion, Israel) Mikhail Lukin (Harvard University, USA) Giovanni Modugno (UniFI, Florence, Italy) Laurens Molenkamp (University of Würzburg, Germany) Joel Moore (University of California, Berkeley, USA) Markus Müller (Paul Scherrer Institute, Switzerland) Yuval Oreg (Weizmann Institute, Israel) Guido Pagano (Univ. of Maryland, USA) Frederic Pierre (CNRS / Université Paris-Sud, France) Frank Pollmann (Technische Universität München, Germany) Ivan Protopopov (Geneva, Switzerland) Gil Refael (Caltech, USA) Jörg Schmiedmayer (Technische Universität Wien, Vienna, Austria) Dan Shahar (Weizmann Institute, Israel) Ady Stern (Weizmann Institute, Israel) Lieven Vandersypen (Delft University of Technology, Netherlands) Ali Yazdani (Princeton University, USA) ICTP ICTP [email protected]
Conference on Quantum Dynamics of Disordered Interacting Systems | (smr 3212)
Organizer(s): Alexander Mirlin (Karlsruhe Institute of Technology), Felix von Oppen (Freie Universität Berlin), Local Organiser: Marcello Dalmonte
A lowerbound on the bounce action
Speaker(s): Ryosuke SATO (Weizmann Institute, Rehovot, Israel)
Europe/Rome The emerging field of compartmentalized in vitro evolution, where selection is carried out by differential reproduction in each compartment, is a promising new approach to protein engineering. From a practical point of view, it is important to know the effect of the increase in the average number of genotype-bearing agents per compartment. This effect is also interesting on its own in the context of primordial evolution in the hypothetical RNA world. The question is important as genotypes with different phenotypes in the same compartment share their fitness (the number of produced copies), rendering the selection frequency-dependent. I will show the results of a theoretical investigation of this problem in the context of selection dynamics for a simple model with an infinite population that is periodically redistributed among an infinite number of identical compartments, inside which all molecules are copied without distinction, with a success rate that is a function of the total genomic composition in the compartment. Surprisingly, with a linear selection function, the selection process is slowed down only approximately in inverse proportion to the average number of individuals per compartment. I will also demonstrate exact forms of the governing equations for some nonlinear selection functions. Finally, I will present an intriguing open problem of an apparent phase transition for an exponential selection function seen in numerical experiments, which is missed by the current infinite population theory. ICTP ICTP [email protected]
QLS Seminar: Natural selection in compartmentalized environment with reshuffling
Room: Central Area, 2nd floor, old SISSA building, Via Beirut 2
Speaker(s): Anton Zadorin, LBC, CBI, ESPCI/ParisTech Paris, France
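A minimal numerical sketch of the kind of model described in the abstract (written for this summary as an illustration, not the speaker's code): a large population of two genotypes is repeatedly pooled, Poisson-distributed over compartments with mean occupancy λ, and amplified with a linear selection function whose outcome is shared by all molecules in a compartment.

```python
# Minimal sketch of selection in compartments with shared (frequency-dependent) fitness.
# Illustrative only -- not the speaker's model or code.
import numpy as np

rng = np.random.default_rng(0)

def one_round(x, lam, s, n_compartments=200_000):
    """One pool-redistribute-amplify cycle.
    x   : current frequency of the fitter genotype
    lam : mean number of molecules per compartment (Poisson occupancy)
    s   : advantage in the linear selection function w = 1 + s * (fraction of fit molecules)
    """
    n = rng.poisson(lam, n_compartments)                      # molecules per compartment
    k = rng.binomial(n, x)                                    # fit molecules per compartment
    frac = np.divide(k, n, out=np.zeros(n_compartments), where=n > 0)
    w = 1.0 + s * frac                                        # growth factor shared by the compartment
    return (w * k).sum() / (w * n).sum()                      # new global frequency

for lam in (1.0, 10.0):
    x = 0.01
    for _ in range(60):
        x = one_round(x, lam, s=0.5)
    print(f"mean occupancy {lam:>4}: frequency of fit genotype after 60 rounds = {x:.3f}")
```

Running the sketch shows the qualitative effect stated in the abstract: the larger the mean occupancy, the more the shared fitness dilutes selection and the slower the fit genotype spreads.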
Europe/Rome Chagas disease, caused by the parasite Trypanosoma cruzi, is widespread in Latin America, where the disease remains one of the major public health problems, with an estimated 8 million infected people and 10 000 deaths per year. This condition is mostly transmitted by insects, known as kissing bugs, belonging to the Triatominae family (Hemiptera), which are obligate haematophagous insects throughout their lives. More than 150 species have been described in the world. While the majority lives in the wild, some of them are highly associated with human beings, living in or around dwellings. They typically stay hidden in the wall or roof cracks of homes, going out at night to feed on human blood. The correct determination of the species involved in the transmission of the disease is crucial to develop efficient control strategies. This can be achieved by keys of determination for adult stages, or by molecular techniques. Both techniques are time- and/or money-consuming, showing the need for new identification methods, especially for nymphal instars. In recent years, various publications have demonstrated the potential of infrared spectroscopy in insect taxonomy. Bolivia is a highly endemic country for Chagas disease. The main vector, Triatoma infestans, lives on about 60% of the territory. In total, 17 species of triatomines are reported in the country, among them Triatoma sordida and Triatoma guasayana, which are reported as secondary vectors. These two species are sympatric and morphologically similar, and so they are difficult to discriminate. The goal of this study was to develop a classification model, using living nymphal and adult stages of these three species. 1293 spectra were taken in the invisible and near-infrared range. Different models were built, using different pre-processing methodologies of the spectra, and different types of feature selection. The performance of each model was evaluated for each species. After their comparison, the best model was tested on a different set of specimens, where it showed a global accuracy of 97% (95-99%), an F1 score greater than 0.95 and a specificity greater than 0.94. This result shows that using infrared spectroscopy is a good strategy to predict the triatomine species. It is the first investigation to report the ability to identify juvenile instars, and moreover with a single model covering both juvenile and adult stages. ICTP ICTP [email protected]
QLS Seminar: Vectors of Chagas disease - determination of species by infrared spectroscopy and machine learning
Speaker(s): Stephanie Depickere, Medical Entomology UMSA - IRD - INLASA La Paz, Bolivia
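The figures reported in the abstract (97% accuracy, F1 > 0.95, specificity > 0.94 per species) are standard multi-class classification metrics. The sketch below shows how such per-species scores are typically computed from a confusion matrix in Python/scikit-learn; the labels and predictions are made up for illustration and are not the study's data.

```python
# Illustrative computation of accuracy, per-class F1 and per-class specificity
# for a 3-species classifier. Hypothetical labels -- not data from the study.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

species = ["T. infestans", "T. sordida", "T. guasayana"]
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 0, 1, 2, 2, 0])
y_pred = np.array([0, 0, 1, 1, 2, 2, 2, 0, 1, 2, 1, 0])

print("global accuracy:", accuracy_score(y_true, y_pred))
print("per-class F1   :", f1_score(y_true, y_pred, average=None))

cm = confusion_matrix(y_true, y_pred)        # rows: true class, columns: predicted class
for i, name in enumerate(species):
    tn = cm.sum() - cm[i, :].sum() - cm[:, i].sum() + cm[i, i]   # true negatives for class i
    fp = cm[:, i].sum() - cm[i, i]                               # false positives for class i
    print(f"specificity({name}) = {tn / (tn + fp):.2f}")
```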
Europe/Rome In 2018, Run II of the LHC will be well under way, and it will be important to assess what has been learned, as well as to discuss the directions in which the field might move in the near-term and medium-term future. The school will therefore be largely focused on LHC Physics, but without forgetting other areas that can give complementary information regarding the nature of the physics at the TeV scale. Following the tradition of the Summer Schools on Particle Physics at the ICTP in Trieste, the school aims at giving a detailed overview of particle physics, and covering important areas where recent progress has been made in the field. We will have at most three lectures per day, giving ample time for discussion and problem solving by the students. We will also encourage students to give short presentations on their research. There is no registration fee and limited funds are available for travel and local expenses. First week: Lectures on the Standard Model: Benjamin Grinstein (UCSD, USA) Practical QCD at Colliders: Giulia Zanderighi (University of Oxford, UK) Particle Physics and the Early Universe: Laura Covi (Institute for Theoretical Physics – Göttingen, Germany) Second week: Lectures on Beyond the Standard Model Physics: Alex Pomarol (IFAE, Spain) Dark Matter and Particle Physics: Paddy Fox (FERMILAB, USA) Experimental Elements for Theorists: A Roadmap to the Future: Gustaaf Brooijmans (Columbia University, USA) The link to the activity webpage for registration and further information is: http://www.ictp-saifr.org/?page_id=16149 Sao Paulo - Brazil ICTP [email protected]
@ Sao Paulo - Brazil
1st Joint ICTP-Trieste/ICTP-SAIFR School on Particle Physics | (smr 3220)
Address: Rua Dr. Bento Teobaldo Ferraz 271, Bloco 2 - Barra Funda 01140-070 Sao Paulo, Brazil
Secretary: Rosanna Sain
Organizer(s): Enrico Bertuzzo (USP, Brazil), Eduardo Ponton (ICTP-SAIFR & IFT-UNESP, Brazil), Andrea Romanino (SISSA/ISAS, INFN & ICTP, Trieste), ICTP Scientific Contact: Giovanni Villadoro
Cosponsor(s): FAPESP, UNESP, IFT
FAMU 2018, Fourth collaboration meeting of the experiment | (smr H554)
Room: Euler Lecture Hall (LB)
Organizer(s): Prof. Andrea Vacchi, INFN,
Europe/Rome The purpose of the School is to provide an introduction to the current state of research in cosmology and astroparticle physics. It is intended for beginning graduate students, as well as more senior non-expert researchers that are interested in these fields. TOPICS & LECTURERS: CMB, E. Komatsu (MPA, Garching & Kavli IPMU, Tokyo) DARK ENERGY, C. De Rham (Imperial College) DARK MATTER, F. D'Eramo (Padua University) GRAVITATIONAL WAVES I, S. Babak (APC, Paris) GRAVITATIONAL WAVES II, V. Cardoso (Lisboa University) INFLATION, M. Kleban (NYU) LARGE SCALE STRUCTURE I, R. Sheth (UPenn) LARGE SCALE STRUCTURE II, L. Senatore (Stanford University and SLAC) NOTE: The School will be followed by a 'Conference on Shedding Light on the Dark Universe with Extremely Large Telescopes' (smr3218, 2 - 6 July). There is no registration fee for this activity. ICTP ICTP [email protected]
Cosmology, High Energy Cosmology and Astroparticle Physics
Summer School on Cosmology 2018 | (smr 3213)
Organizer(s): Paolo Creminelli (ICTP), Mehrdad Mirbabayi (ICTP), Ravi Sheth (UPenn),
Europe/Rome Moduli spaces of stable pointed curves play an important role in algebraic geometry. This School will have one course on vector bundles of coinvariants and on conformal blocks and another one on their cohomology classes in relation with those of moduli of abelian varieties. The cohomology of moduli spaces of curves and abelian varieties carries several natural classes. We focus on the tautological classes and the cohomology classes related to spaces of modular forms. The problem of determining relationships between the tautological classes turns out to be particularly interesting. Moduli spaces of curves carry vector bundles of coinvariants and conformal blocks; they are invariants of a curve C attached to a Lie group G that are canonically isomorphic to global sections of an ample line bundle on the moduli stack of certain G-bundles on C. These are generalized theta functions in case C is smooth. In case g=0, the bundles of co-invariants are globally generated, and their first Chern classes are semi-ample line bundles on the moduli of curves, and shed light on its birational geometry. We can also use the moduli space of curves to learn about generalized theta functions. TOPICS: Cohomology classes on moduli of curves and abelian varieties moduli spaces of curves and abelian varieties properties of the moduli spaces and their compactifications natural cohomology classes and their relations tautological classes and cohomology classes related to spaces of modular forms Vector bundles of coinvariants and conformal blocks introduction to moduli spaces of curves open problems and F-conjecture vector bundles of coinvariants and conformal blocks case g=0: global generation and semi-ample divisors any genus: nef divisors Chern classes of bundles of coinvariants Global sections of ample line bundles on Bun_G(C): smooth and nodal case PARTICIPATION: Women are particularly encouraged to apply. Should you come to Trieste with your child(ren), please send an e-mail to [email protected] to describe your family needs and we will do our best to meet them. A limited number of grants are available to support the attendance of selected participants, with priority given to participants from developing countries. There is no registration fee. To apply, please use the link on the left side of this web page. The deadline for submitting applications expired on 15 March 2018. ICTP ICTP [email protected]
Summer School on Geometry of Moduli Spaces of Curves | (smr 3215)
Organizer(s): Valentina Beorchia (Dipartimento di Matematica e Geoscienze, Trieste), Ada Boralevi (Politecnico di Torino), Barbara Fantechi (SISSA, Trieste), Local Organiser: Fernando Rodriguez Villegas
Cosponsor(s): SISSA, GNSAGA INdAM, Foundation Compositio Mathematica
Europe/Rome The partial transpose of density matrices in many-body systems has been known as a good candidate to diagnose quantum entanglement of mixed states. In particular, it can be used to define the (logarithmic) entanglement negativity for bosonic systems. In this talk, I introduce partial time-reversal transformation as an analog of partial transpose for fermions. This definition naturally arises from the spacetime picture of partially transposed density matrices in which partial transpose is equivalent to reversing the arrow of time for one subsystem relative to the other subsystem. I show the success of this definition in capturing the entanglement of fermionic symmetry-protected topological phases as well as conformal field theories in (1+1) dimensions. SISSA, Via Bonomea 265, rm 128 ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: Partial Time-reversal Transformation and Entanglement Negativity in Fermionic System
Speaker(s): Hassan SHAPOURIAN (Univ. of Chicago, U.S.A.)
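For reference, the (logarithmic) entanglement negativity that the partial time-reversal construction generalises is defined through the partial transpose of the reduced density matrix (standard definition; the fermionic variant discussed in the talk replaces the partial transpose by partial time-reversal):

```latex
\[
\mathcal{E} \;=\; \ln\!\big\|\rho^{T_A}\big\|_{1},
\qquad \big\|X\big\|_{1}=\operatorname{Tr}\sqrt{X^{\dagger}X},
\]
% where rho^{T_A} is the partial transpose of the (possibly mixed) density matrix
% with respect to subsystem A.
```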
Europe/Rome Models for active matter have brought a new type of experiment in statistical physics, where the source of nonequilibrium lies within the particles themselves or on their surface. In this talk, I will take the viewpoint of molecular simulations to study chemically-powered nanomotors in settings matching experiments: self-propulsion by symmetry-breaking, chemotaxis, sedimentation and anisotropic nanomotors. I will comment on the design of consistent microscopic models with respect to energy conservation, to chemical kinetics, and to thermal fluctuations. As a perspective, I will discuss enzyme nanomotors. On the one hand, they are elaborate catalytic devices with interesting thermodynamic properties, and on the other hand they might inspire or serve as molecular-scale machines for nano- and bio-technology in the coming years. SISSA, Via Bonomea 263, room 128 ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar - Nanomotors: Symmetry, Chemotaxis, Sedimentation and Anisotropy
Speaker(s): Pierre de BUYL (KU Leuven, Belgium)
SPECIALIZED SEMINAR: Exceptional splitting of reductions of abelian surfaces with real multiplication
Speaker(s): Yunqing Tang, Princeton University
25 Jun 2018 - 6 Jul 2018
Advanced Training School on Sustainable Blue Growth in the Mediterranean and Black Sea countries | (smr H549)
Room: Lundqvist Lecture Hall (AGH)
Organizer(s): Istituto Nazionale di Oceanografia e di Geofisica Sperimentale - OGS, Dr. Mounir Ghribi, Director, Sustainable Blue Growth Initiative In-Charge of International Cooperation and Strategic Partnerships,
Europe/Rome For more information please visit: http://www.africanschoolofphysics.org/asp2018/ - Namibia ICTP [email protected]
25 Jun 2018 - 13 Jul 2018
@ - Namibia
The 5th Biennial African School of Fundamental Physics and Applications | (smr 3216)
Organizer(s): Mweneni Shahungu (National Commission on Research, Science and Technology, Namibia), Rukee Tjingaete (Namibian University of Science and Technology), Maxie van derWesthuizen (National Commission on Research, Science and Technology, Namibia), Bobby S. Acharya (ICTP and King's College London), Ketevi A. Assamagan (Brookhaven National Laboratory), Steve Muanza (CNRS-IN2P3 France), Christine Darve (European Spallation Source, Sweden), Anne Dabrowski (CERN), Ruediger Voss (CERN), Eli Kassai (University of Namibia), Michael Backes (University of Namibia), Riann Steenkamp (University of Namibia), Andrew Zulu (Namibian University of Science and Technology), Dharm Singh Jat (Namibian University of Science and Technology), Emmi N. Shivute (Ministry of Information and Communication Technology), Angelique Philander (National Commission on Research, Science and Technology, Namibia), Generosa Simon (National Commission on Research, Science and Technology, Namibia), Johannes Ndjamba (National Commission on Research, Science and Technology, Namibia), Albanus Sindano (National Commission on Research, Science and Technology, Namibia), ICTP Scientific Contact: Bobby Acharya
Europe/Rome The climate community is still faced with large uncertainties in estimating possible climate changes in the next decades and quantifying the relative role of anthropogenic contribution to climate change. Although most modern climate models are able to reproduce reasonably well global climatologies and patterns of interannual variations, they still struggle with pervasive biases and the representation of some of the climate phenomena involving the interaction and coupling between the atmosphere, the ocean and the cryosphere. The problem is compounded by the limited understanding of some of the physical mechanisms giving rise to both our present mean climate and its natural variability at different time scales. One possible way forward is the use of a hierarchy of models to tackle the most pressing questions in climate dynamics and modeling. Key among them, is whether the climate is stable, or whether internal feedbacks could lead to tipping points, abrupt changes, and transitions to fundamentally different equilibria. Changes in the oceanic overturning, ice-albedo effects, land-surface and vegetation coupling to the atmosphere, and radiative-convective properties of the atmosphere have all been suggested as possible causes of instability in the climate system. Advances in our understanding, quantification, and modelling of these processes are necessary both for the interpretation of the paleoclimate record and for the projection of possible future climate states. A variety of studies have found that multiple equilibria exist both in highly idealized and more comprehensive models of the climate system. Whether multiple equilibria do exist in state-of-the-art climate models is still a subject of controversy. A fundamental understanding of key processes within a hierarchical modeling framework will eventually translate into a better representation and simulation within state-of-the-art climate models, as it brings new insights for process-based evaluation of climate model reliability and fit for purpose. The use of hierarchies additionally promotes the use of standardized performance metrics and highlights instances when post-processing approaches (e.g. bias correction) or diverse model tuning practices should be explored. The school will be based on lectures on theoretical aspects of atmosphere, ocean and climate dynamics, with a focus on the present state of established knowledge and relevant mechanisms. The topic of the school, Multiple Equilibra in the Climate System, will be the subject of afternoon lectures, giving an overview of the most recent progress and hypotheses suggesting the existence of multiple equilibrium states, and consequences for past and future climates. Afternoons will also be devoted to practical sessions, involving the use of simplified climate models and analysis of relevant data sets. The school will be followed by the workshop WCRP Grand Challenge on Clouds, Circulation and Climate Sensitivity: 2nd Meeting on Monsoons and Tropical Rain Belts , SMR3252, 2 - 5 July 2018, go to link http://indico.ictp.it/event/8457/ Confirmed speakers: Simona Bordoni, CalTech, USA David Ferreira, Reading U., UK In-Sik Kang, SNU, Republic of Korea John Marshall, MIT, USA Franco Molteni, ECMWF, UK Brian Rose, U. Albany, USA Stephen Thomson, U. exeter, UK Adrian M. Tompkins, ICTP, Italy Geoff K. Vallis, U. 
Exeter, UK Shang-Ping Xie, SCRIPPS, USA ATTENTION: APPLICATION HERE IS FOR BOTH SCHOOL AND WORKSHOP, DEADLINE 1 MARCH 2018 SHOULD YOU WISH TO APPLY TO THE WORKSHOP ONLY, PLEASE VISIT THE RELEVANT PAGE http://indico.ictp.it/event/8457/ AND APPLY THERE ICTP ICTP [email protected]
ICTP Summer School on Theory, Mechanisms and Hierarchical Modelling of Climate Dynamics: Multiple Equilibria in the Climate System | (smr 3214)
Organizer(s): Fred Kucharski (ICTP), Anna Pirani (Universite' Paris-Saclay), Adrian Tompkins (ICTP), Michela Biasutti (Columbia University), Aiko Voigt (KIT), Riccardo Farneti (ICTP),
Cosponsor(s): US Climate Variability and Predictability Programmeusclivar2, European Geosciences Unionegu, The International Union of Geodesy and Geophysicsiugg4
Europe/Rome Abstract. Due to the electric-magnetic duality of supergravity, one is able to have a Lagrangian description of the theory in different duality frames. Although the gaugings of supergravity have been classified as long as we stick to the electric frame, it is not for long that the similar analysis in an arbitrary electric-magnetic duality frame has been done. After a review of the topic and BV deformation formalism, we will go on to show a no-go theorem for the gaugings of N=4 and maximal supergravity in four dimensions. ICTP ICTP [email protected]
BV-BRST deformation formalism and gaugings of supergravity
Speaker(s): Arash RANJBAR (ULB, Belgium)
Europe/Rome I will discuss our recent development of massively-parallel real-time time-dependent density functional theory (RT-TDDFT) method based on the planewave-pseudopotential formalism and its applications to modeling electronic stopping in condensed matters. RT-TDDFT provides a convenient framework for numerically studying non- pertubative electron dynamics coupled with lattice (i.e. ions) movements in large systems. Because of the massively parallel nature of modern high-performance computers, development of new numerical algorithms is often not free from considering its parallelizability over large numbers of processors. We have developed a highly-scalable implementation of RT-TDDFT in qb@ll code for studying large extended systems. We will discuss performance of our new implementation over millions of processor cores, reaching the peta-flops performance. I will then discuss its application to the problem of electronic stopping. Electronic stopping describes the transfer of the kinetic energy from a highly- energetic ion to electrons in condensed matter. The projectile ions bear a highly localized electric field that is quite heterogeneous at the atomistic scale, and massive electronic excitations are produced in the process. Electronic stopping has been long studied within linear response theory framework (e.g. Bethe theory). I will discuss how non-equilibrium simulations based on RT-TDDFT enable us to study this electronic excitation process, in particular for the importance case of liquid water under proton irradiation. In addition to determining the energy transfer rate (i.e. electronic stopping power), our work reveals several key features in the excitation dynamics at the mesoscopic and molecular levels in liquid water under proton irradiation. 
<!-- /* Font Definitions */ @font-face {font-family:Courier; panose-1:0 0 0 0 0 0 0 0 0 0; mso-font-charset:0; mso-generic-font-family:auto; mso-font-pitch:variable; mso-font-signature:3 0 0 0 3 0;} @font-face {font-family:"Cambria Math"; panose-1:2 4 5 3 5 4 6 3 2 4; mso-font-charset:0; mso-generic-font-family:roman; mso-font-pitch:variable; mso-font-signature:-536870145 1107305727 0 0 415 0;} @font-face {font-family:Times; panose-1:0 0 0 0 0 0 0 0 0 0; mso-font-charset:0; mso-generic-font-family:auto; mso-font-pitch:variable; mso-font-signature:3 0 0 0 7 0;} @font-face {font-family:ArialMT; panose-1:2 11 6 4 2 2 2 2 2 4; mso-font-alt:Arial; mso-font-charset:0; mso-generic-font-family:roman; mso-font-pitch:auto; mso-font-signature:0 0 0 0 0 0;} /* Style Definitions */ p.MsoNormal, li.MsoNormal, div.MsoNormal {mso-style-unhide:no; mso-style-qformat:yes; mso-style-parent:""; margin:0cm; margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:12.0pt; mso-bidi-font-size:10.0pt; font-family:Times; mso-fareast-font-family:Times; mso-bidi-font-family:"Times New Roman"; mso-ansi-language:EN-GB;} p {mso-style-priority:99; mso-margin-top-alt:auto; margin-right:0cm; mso-margin-bottom-alt:auto; margin-left:0cm; mso-pagination:widow-orphan; font-size:12.0pt; font-family:"Times New Roman",serif; mso-fareast-font-family:"Times New Roman";} .MsoChpDefault {mso-style-type:export-only; mso-default-props:yes; font-size:10.0pt; mso-ansi-font-size:10.0pt; mso-bidi-font-size:10.0pt; font-family:Courier; mso-ascii-font-family:Courier; mso-fareast-font-family:Times; mso-hansi-font-family:Courier;} @page WordSection1 {size:612.0pt 792.0pt; margin:72.0pt 72.0pt 72.0pt 72.0pt; mso-header-margin:36.0pt; mso-footer-margin:36.0pt; mso-paper-source:0;} div.WordSection1 {page:WordSection1;} --> ICTP ICTP [email protected]
Atomistic Simulation Theory Seminar CMSP - Electronic Stopping from Non-Equilibrium Real-Time TDDFT Simulation: Development and Applications
Speaker(s): Yosuke KANAI (Dept. of Chemistry, Univ. of North Carolina at Chapel Hill, U.S.A.)
Europe/Rome Many interesting physical phenomena are connected to strongly correlated systems, which, due to their complexity, cannot usually be studied analitically, making numerical approaches essential. The development of the latter and the study of the physical scenarios induced by strong correlations are therefore both of great importance. In this context, I will present the results of my numerical investigations of strongly correlated systems: specifically, i) the ground-state phase diagram of a bosonic cluster-forming model of interest for cold atom experiments, as well as ii) the ground-state properties of the fermionic t-J model, a candidate Hamiltonian to describe high-T_c superconductivity, in the presence of two mobile holes. Path Integral and Variational Monte Carlo have been chosen as numerical techniques for the two problems, respectively. The main results I will discuss are the demonstration of a ground-state supersolid-supersolid transition in the bosonic scenario, and of a d-wave hole bound state in the fermionic model. My investigation of the latter in the 2-hole case is foundational for the application of my approach of choice to other problems, of direct interest for high-T_c superconductivity, where the physical picture is still unclear (such as thecase of finite hole concentration). ICTP ICTP [email protected]
Condensed Matter Seminar - Strongly Correlated Systems of Bosons and Fermions: Many-Body Phenomena and Numerical Methods
Speaker(s): Adriano ANGELONE (Lab. de Physique Quantique, ISIS, IPCMS, Strasbourg, France)
Europe/Rome Macroscopic systems are often endowed with a large number of different scales and features. While controlling one single system often demands to properly describe all its complexity, different systems can express similar behaviours which are scale-invariant and depend on a minimal set of parameters. The last two features allow to reproduce these phenomena at the laboratory scale and describe them with a minimal analytical model, thus characterising the observed phenomena in a very general way. Following this approach I focus on the stability of different macroscopic systems, from the human scale (suitcases and bikes), to the geophysical (vortices and currents in the ocean) and astrophysical scales (accretion disks). I show that in all these systems the macroscopic behaviour relies on the balance or inbalance between few parameters. Also, depending on the relative importance of these parameters, these systems can show the appearing of longevous and well organised patterns. Finally I introduce a new project which focuses on the complex structure of the arboreal nests built by Nasute termites. I aim to characterise these objects using the same minimal approach than before while facing the additional biological aspects. I will try to answer three connected questions: what are the biological functions of the macroscopic structure, how it is built by many simply interacting agents, and at which extent these two elements are related to the form of the nest. ICTP ICTP [email protected]
QLS Seminar: Stability and organisation in macroscopic systems
Address: Via Beirut 2
Room: Central Area, old SISSA building
Speaker(s): Giulio Facchini, IRPHE Marseille, France
Europe/Rome The 2017 ICTP Prize Ceremony takes place on Friday 29 June 2018 at 18.30 hrs (NB: new time!) in the Budinich Lecture Hall, and takes place during the Summer School on Cosmology 2018 (smr 3213). ICTP has awarded its 2017 ICTP Prize to Emilio Kropff, a neuroscientist from Argentina affiliated with that country's National Scientific and Technical Research Council's (CONICET) Instituto de Investigeciones Bioquimicas de Buenos Aires (IIBBA), Leloir Institute, and an ICTP Associate. Each year, the ICTP Prize is given in honor of a scientist who has made outstanding contributions to the field in which the prize is given. The 2017 ICTP Prize honors Daniel J. Amit, a theoretical physicist who pioneered statistical mechanics approaches to neural networks and was one of the founding fathers of modern theoretical and computational neuroscience. The title of the talk is: "Space, time, speed and acceleration in the brain's GPS". The Ceremony will be livestreamed from the ICTP website. All are most welcome to attend. ICTP, Trieste, Italy ICTP [email protected]
2017 ICTP Prize Ceremony
Speaker(s): Dr. Emilio Kropff, National Scientific and Technical Research Council's (CONICET) Instituto de Investigeciones Bioquimicas de Buenos Aires (IIBBA), Argentina
Emilio Kropff was born in Bariloche, Argentina. He studied Physics in the University of Buenos Aires, where in 2003 he defended his thesis on the statistical mechanics of simple unsupervised learning models. In 2007 he got his PhD in Cognitive Neuroscience at SISSA (Trieste) under the supervision of Alessandro Treves, defending his thesis on analytical and computational models of human semantic memory. The next year he moved to the Kavli Institute in Trondheim, Norway, as a postdoc fellow under the supervision of Edvard and May-Britt Moser (Nobel Laureates in Physiology or Medicine, 2014). The experiments he performed in Norway led to the discovery of speed cells, a previously unknown neural type belonging to the GPS circuity. In 2012 he established in Buenos Aires, where he is a national council (CONICET) researcher at the Leloir Institute. He currently works on electrical and optical recordings of neural activity in behaving rodents, complemented with analytical and computational models, with the aim of understanding how mammals form memories and orient themselves in space. His work has been published in high impact journals such as Science and Nature. Recent honors include the FIMA-LELOIR prize (2017), the Grass Fellowship (2018) and the ICTP prize (2017).
Europe/Rome This Conference will bring together an international group of experts to review the current state of the art in the study of dark energy and dark matter and discuss how best to use giant telescopes to learn about their fundamental nature. It is the third of a series of three conferences: more information can be found at https://conferences.pa.ucla.edu/dark-universe/index.html Invited Speakers include: Mariangela Bernardi (UPenn) Rebecca A. Bernstein (Carnegie Observatories) Simon Birrer (UCLA) Stefano Borgani (Trieste Observatory) Marusa Bradac (UCDavis) Tirthankar Roy Choudhury (TIFR) Michele Cirasuolo (European Southern Observatory) Stefano Cristiani (Trieste Observatory) Arianna di Cintio (IAC - Instituto de Astrofísica de Canarias) Christophe J. Dumas (TMT International Observatory) Isobel Hook (Lancaster U.) Lucas Macri (Texas A&M U.) John McKean (ASTRON/Kapteyn Astronomical Institute) Stefano Profumo (UCSC) Piero Rosati (UNIFE) Robyn Sanderson (Caltech) Anze Slosar (Brookhaven) Tommaso Treu (UCLA) Simona Vegetti (MPA, Garching) Matteo Viel (SISSA) Mark Vogelsberger (MIT) John K. Webb (U. of New South Wales) NOTE: The Conference will be preceded by the Summer School on Cosmology 2018 (smr3213, 18 - 29 June). There is no registration fee for this activity. ICTP ICTP [email protected]
2 Jul 2018 - 6 Jul 2018
Conference on Shedding Light on the Dark Universe with Extremely Large Telescopes | (smr 3218)
Organizer(s): Mariangela Bernardi (UPenn), Stefano Borgani (Trieste Observatory), Paolo Creminelli (ICTP), Romeel Davè (Royal Observatory, Edinburgh), Anjan A. Sen (Jamia Millia Islamia, India), Ravi Sheth (UPenn), Tommaso Treu (UCLA), Simona Vegetti (MPA, Garching), Matteo Viel (SISSA),
Europe/Rome The Workshop follows the ICTP Summer School on Theory, Mechanisms and Hierarchical Modelling of Climate Dynamics: Multiple Equilibria in the Climate System. Reliable projections of tropical rainfall changes are key to any climate adaption efforts in a warming world. Yet, our global climate models are a subpar tool for the task: their spatial resolution is too coarse to reproduce the deep convection that produces most rainfall in the tropics, and current parametrizations are inadequate – as signified by persistent biases in the simulation of the annual and diurnal cycles of rainfall in large areas of the oceans and continents, as well as the response to forcing of the past. Nonetheless, tropical rainfall is organized in the large-scale structures of the monsoons and the ITCZ whose dynamics are shaped by large-scale energetic and momentum constraints that involve the global circulation of both the ocean and the atmosphere. This suggests that building a coherent understanding of tropical rainfall can benefit from an understanding of these large-scale influences and their coupling with small scale cloud and precipitation processes. Making this link across scales to improve our understanding and our ability to anticipate future tropical rainfall changes is a key question in climate science. The workshop, building on the knowledge and practical skills acquired during the school, aims to bring together expertise on large-scale atmospheric and oceanic dynamics, small scale cloud and precipitation processes, hierarchical climate modeling and observation. The aim is to both review recent progress on tropical rainfall dynamics and to identify areas where progress is most amenable in the future given the existing and emerging modelling tools and theoretical frameworks. For the ICTP Summer School on Theory, Mechanisms and Hierarchical Modelling of Climate Dynamics: Multiple Equilibria in the Climate System, 25 June - 5 July 2018, go to link: http://indico.ictp.it/event/8318/ Confirmed speakers: William Boos, U.Cal Berkeley, USA Christian Jakob, Monash U., Australia John Marshall, MIT, USA Mahyar Mohtadi , Marum, Bremen, Germany Sonia Seneviratne, ETH Zürich, Switzerland Hui Su, JPL, USA Andrew Turner, U. Reading, UK Tianjun Zhou, IAP China ATTENTION: THE APPLICATION HERE IS FOR THE 2nd WEEK WORKSHOP ONLY. DEADLINE 1 MARCH 2018 SHOULD YOU WISH TO ATTEND THE SCHOOL, PLEASE VISIT THE RELEVANT PAGE http://indico.ictp.it/event/8318/ AND APPLY THERE APPLICANTS WHO HAVE ALREADY APPLIED FOR THE SCHOOl ARE NOT REQUESTED TO APPLY AGAIN FOR THE WORKSHOP ICTP ICTP [email protected]
WCRP Grand Challenge on Clouds, Circulation and Climate Sensitivity: 2nd Meeting on Monsoons and Tropical Rain Belts | (smr 3252)
Organizer(s): Fred Kucharski (ICTP), Anna Pirani (Universite' Paris-Saclay), Adrian Tompkins (ICTP), Michela Biasutti (Columbia University), Aiko Voigt (KIT), Mike Byrne (Imperial College), Riccardo Farneti (ICTP),
Cosponsor(s): US Climate Variability and Predictability Programmeusclivar2, European Geosciences Unionegu, The International Union of Geodesy and Geophysicsiugg4, World Climate Research Programmewcrp
Europe/Rome The AdS/CFT correspondence conjectures an exact equivalence between string theories in anti-de-Sitter space and gauge theories. This correspondence and its generalizations allow one to address difficult problems of strongly coupled field theory dynamics - such as the computation of Wilson loops - using weakly coupled gravity. The other way around, it also allows one to tackle fundamental questions of gravity - such as the quantum entropy of black holes - using field theoretic methods. A very important theoretical development of the last decade has been the advent of exact results in supersymmetric field theories using the methods of localization. These methods often reduce path-integrals to finite-dimensional integrals, which in many situations can be computed exactly. Being exact, localization results are most fruitfully applied at strong coupling, and thus provide a powerful framework for producing and exploiting AdS/CFT results. In particular, the interplay of localization and the AdS/CFT correspondence provides a fertile ground to answer precise questions about the entropy of string theoretic black holes and the expectation values of Wilson loops. This School aims to provide the necessary knowledge to work at the interface of supersymmetric localization and holography, through a series of pedagogical lectures by individual speakers. The activity is intended for students in theoretical physics or mathematics and postdocs with knowledge of quantum field theory, general relativity and string theory. TOPICS: Supersymmetric localization 4D N=2 supergravity AdS black hole entropy Localization in supergravity Wilson loops Quantum entropy of black holes LECTURERS include: Francesco BENINI (SISSA) Joao GOMES (University of Amsterdam) Sameer MURTHY (King's College London) Ioannis N. PAPADIMITRIOU (KIAS) Diego TRANCANELLI (University of São Paulo) Stefan VANDOREN (Utrecht University) Alberto ZAFFARONI (Milano-Bicocca) The School will be followed by a focused Workshop on Supersymmetric Localization and Holography: Black Hole Entropy and Wilson Loops (smr3227) from 9 to 13 July (http://indico.ictp.it/event/8326), with the goal of bringing together experts in the areas of localization and holography, to provide cross fertilization and outline key directions for the future. There is no registration fee for this activity. ICTP ICTP [email protected]
School on Supersymmetric Localization, Holography and Related Topics | (smr 3256)
Organizer(s): Francesco BENINI (SISSA), Atish DABHOLKAR (ICTP & Sorbonne Université, CNRS), Sameer MURTHY (King's College London), Leopoldo PANDO ZAYAS (University of Michigan), Alberto ZAFFARONI (Milano-Bicocca),
Cosponsor(s): European Research Council (ERC)ERC, Italian Institute for Nuclear Physics (INFN)INFN, Scientific Independence of young Researchers (SIR)SIR2
Europe/Rome In this talk, we first discuss the equilibrium properties of a linear polymer chain in a confined space e.g. the cone-shaped channel (entropy gradient) to understand the transport of biopolymers from pore etc. In the second part, we shall discuss the effects of the net gradient field arising due to the competition between the velocity gradient and solvent quality gradient (e.g. chemical potential, pH of the solvent, concentration of ions etc) on the dynamics of polymers. Multi-factor gradients are essential components of many biological phenomena. Insight into the behavior of such systems is of fundamental importance in a wide spectrum of systems. ICTP ICTP [email protected]
QLS Seminar: Polymer under gradient fields
Address: Via Beirut, 2
Speaker(s): Sanjay Kumar, Department of Physics, Banaras Hindu University, Varanasi, India
Europe/Rome Given a maximal orthogonal resolution of the identity B in the Hilbert space of a quantum system we define a measure of the coherence generating power of a unital operation with respect to B. This measure is the average coherence generated by the operation acting on an input ensemble of B-diagonal states. We give its explicit analytical form in any dimension andprovide an operational protocol to directly detect it. We characterize the set of unitaries with maximal coherence generating power and study the properties of our measure when the unitary is drawn at random from the Haar distribution. I will conclude by establishing a connection between this formalism and the Hilbert-Schmidt geometry of maximal abelian algebras of operators and its application to eigenstate phase transitions in interacting quantum systems e.g., many-body localization. References: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.052307 https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.052306 https://journals.aps.org/pra/abstract/10.1103/PhysRevA.97.032304 https://aip.scitation.org/doi/10.1063/1.4997146 ICTP ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: Coherence Generating Power of Quantum Processes
Speaker(s): Paolo ZANARDI (USC, Los Angeles, CA, USA)
Europe/Rome The Fermi-Hubbard model is a cornerstone of modern condensed matter theory. It describes interacting electrons in solids, notably featuring a metal to Mott insulator transition. In its extended SU(N>2)-symmetric form, it has already attracted much interest in the context of multi-orbital materials such as transition-metal oxides. In addition, it has been predicted to exhibit novel quantum magnetic phases and spin liquids with topological order. In this talk, I will give a general introduction to the SU(N) Fermi-Hubbard model and, after discussing its general features, I will present its experimental realization with ultracold alkaline-earth atoms in optical lattices. To this aim, I will first describe in detail how SU(N)-symmetry emerges for fermionic atoms with alkaline-earth-like electronic structure. Owing to the existence of metastable excited states as well as to the strong decoupling between the nuclear and the electronic angular momenta, these atoms are particularly well suited for the investigation of SU(N) symmetric models with orbital degrees of freedom. I will then present some experimental results obtained using ultracold ytterbium atoms trapped in a three-dimensional optical lattice. In particular, I will report on the study of the equation of state of such system in various interaction regimes and for distinct values of N, directly showing the emergence of an SU(N)-symmetric incompressible Mott insulating phase. I will finally discuss perspectives towards probing novel SU(N) magnetic phases in optical lattices. ICTP ICTP [email protected]
Condensed Matter Seminar: Ultracold Atoms with SU(N) Symmetry
Speaker(s): Francesco SCAZZA (LENS, Sesto Fiorentino, Firenze)
Europe/Rome Abstract: Fractional integrals and the classical fractional maximal function are smoothing operators, in the sense that they map Lebesgue spaces into first order Sobolev spaces. We show that this phenomenon continues to hold for the fractional spherical maximal function when the dimension of the ambient space is greater than or equal to 5. A key element in the proof is a local smoothing estimate for the wave equation. This is joint work with Joao P. G. Ramos and Olli Saari. ICTP ICTP [email protected]
Regularity of spherical maximal functions
Speaker(s): David Beltran (BCAM)
Europe/Rome Abstract: In this talk we examine the regularity theory of the solutions to a few examples of (nonlinear) PDEs. Arguing through a genuinely geometrical method, we produce regularity results in Sobolev and Hölder spaces, including some borderline cases. Our techniques relate a problem of interest to another one - for which a richer theory is available - by means of a geometric structure, e.g., a path. Ideally, information is transported along such a path, giving access to finer properties of the original equation. Our examples include elliptic and parabolic fully nonlinear problems, the Isaacs equation, degenerate examples and a double divergence model. We close the talk with a discussion on open problems and further directions of work. ICTP ICTP [email protected]
Geometric regularity for (nonlinear) PDEs
Speaker(s): Edgard A. Pimentel (PUC- Rio)
Europe/Rome Surfactant spreading at air-water interfaces is driven by flow setup by surface tension gradients (Marangoni stress) established by the surfactants themselves. We experimentally probe the nature of steady surfactant transport on the interface when the resulting flow is strongest in a thin boundary layer near the interface. In particular, we present three experimental hydrodynamic signatures to distinguish between two limiting cases, viz. adsorption versus dissolution dominated transport, without invoking the surfactant's physico-chemical properties. In a region much larger than the surfactant source, but much smaller than the interfacial area, the steady-state fluid velocity assumes a self-similar form whose magnitude decays as a power-law with the distance from the source. We experimentally demonstrate that this power-law possesses an exponent -3/5 in adsorption and -1 in dissolution dominated flow. Explicit measurement of boundary layer and shear stress provide additional hydrodynamic signatures of surfactant transport mechanisms in the two limiting cases. We test this criterion against two known surfactants, Sodium Dodecyl Sulfate and Tergitol 15-S-9, and apply the results to camphoric acid, with unknown surface properties. ICTP ICTP [email protected]
QLS Seminar: Hydrodynamic signatures of stationary Marangoni-driven surfactant transport
Speaker(s): Mahesh M. Bandi, OIST, Okinawa, Japan
Condensed Matter Seminar: Interplay of Charge and Spin Degrees of Freedom in Pnictides and Dichalcogenides
Speaker(s): Dmitry EFREMOV (IFW, Dresden, Germany)
Europe/Rome Abstract: There are quite a number of subvarieties which can be defined to sit inside flag varieties. One of such is the family of Hessenberg varieties. In this talk, I will give combinatorial and geometric descriptions of the computation of the one of the class members of the family known as Springer varieties and discuss the current problem at hand. ICTP ICTP [email protected]
Geometric Computation of Betti numbers of Springer varieties
Speaker(s): Hammed Praise Adeyemo (University of Ibadan)
Europe/Rome Abstract. Bosonic ultra-light dark matter (ULDM) would form cored density distributions at the center of galaxies. These cores, seen in numerical simulations, admit analytic description as the lowest energy bound state solution ("soliton") of the Schroedinger-Poisson equations. Numerical simulations of ULDM galactic halos found an empirical scaling relation between the mass of the large-scale host halo and the mass of the central soliton. We show that this relation predicts that the peak circular velocity, measured for the host halo in the outskirts of the galaxy, should approximately repeat itself in the central region. Contrasting this prediction to the measured rotation curves of well-resolved near-by galaxies, we show that ULDM in the mass range m ~ (10-22 - 10-21) eV, which has been invoked as a possible solution to the small-scale puzzles of CDM, is in tension with the data. ICTP ICTP [email protected]
Galactic Rotation Curves vs. Ultra-Light Dark Matter
Speaker(s): Kfir BLUM (CERN/Wiezman Institute)
--- Please note change of date and venue!! ---
Europe/Rome ICTP-CIMPA School: AGRA III (Aritmetica, Grupos y Analisis III) from the 9 - 20 July 2018 in Cordoba, Argentina to held at the Academia Nacional de Ciencias and at FaMAF Please note: Arrival date should be 8 July 2018 and Departure date should be 21 July 2018 For Schedule and further information please refer to following link: http://www.famaf.unc.edu.ar/agra3/ http://www.anc-argentina.org.ar/web/actividades/bienvenida Deadline for requesting participation: 4 MARCH 2018 The area of the School is number theory, broadly understood - analytic, algebraic, combinatiorial, with links to groups and geometry. All of the course topics lie at thematic intersections. The course on Galois representations will focuse on their connections with modular forms and elliptic curves. The course on arithmetic groups will involve familiarizing students with topics in group theory and modular forms. Equidistribution in a diophantine context will be the main subject of a course. Another lecture series will focus on analysis, combinatorics and discrete geometry. The study of curves over finite fields, and the resulting codes, will allow for an accessible introduction to deep issues in algebraic geometry, with immediate applications. Finally, the course on primes, parity and analysis will combine classical tools and the use of entropy and independence. The plan is, then, to introduce advanced students to a variety of fields and tools and to notable recent developments. Scienfific Committee: Michael Harris Harald Helfgott Roberto Miatello Nuria Vila Oliva Fernando Rodriguez Villegas Local Committee: Maria Chara Emilio Lauret Ariel Pacetti Ricardo Podesta Diego Sulca Invited Speakers: Miram Abdon, Universidade Federal Fluminense, Brazil Mikhail Belolipetsky, IMPA - Instit. Nac. de Matematica Pura e Aplicada, Brazil José Burgos Gil, ICMAT, Spain Luis Dieulefait, Universitat de Barcelona, Spain Cicero Fernandes De Carvalho, Universidade Federal de Uberlandia, Brazil Michael Harris, Université Paris VII, France & Columbia University, U.S.A. Harald Helfgott, Georg-August Universität Göttingen & CNRS - Centre National de la Recherche Scientifique & Université Paris VI/VII, France Marc Hindry, Univesité Paris VII, France Benjamin Linowitz, Oberlin College, Ohio, U.S.A. Ricardo Menares, PUCV - Pontificia Universidad Católica de Valparaíso, Chile Roberto Miatello, Universidad Nacional de Córdoba, Argentina Ariel Pacetti, Universidad Nacional de Córdoba, Argentina Daniel Panario, Carleton University, Canada Marusia Rebolledo, Université Blaise Pascal Clermont-Ferrand 2, France David Roberts, University of Minnesota, Morris, U.S.A. Fernando Rodriguez Villegas, ICTP - the Abdus Salam International Centre for Theoretical Physics, Italy Adrián Ubis Martínez, Universidad Autonoma de Madrid, Spain Co-sponsors: Academia Nacional de Ciencias Alexander von Humboldt Centre International de Mathematiques Pures et Appliques (CIMPA) Facultad de Matematica, Astronomia, Fisica y Computacion (FAMAF) Foundation Compositio Mathematica Universidad Nacional de Cordoba (UNC) Cordoba - Argentina ICTP [email protected]
9 Jul 2018 - 20 Jul 2018
@ Cordoba - Argentina
ICTP-CIMPA School: AGRA III (Aritmética, Grupos y Analisis III) | (smr 3222)
Organizer(s): Roberto Jorge Miatello (Universidad Nacional de Cordoba), Harald Andres Helfgot (University of Goettingen), Michael Harris (Columbia University), Nuria Vila (Universitat de Barcelona), Ariel Pacetti (Universidad Nacional de Cordoba), ICTP Scientific Contact: Fernando Rodriguez Villegas
Cosponsor(s): Humboldtavh2, Universidad Nacional de Cordobaunca, Academia Nacional de Cienciasanc2, Facultad de Matematica, Astronomia, Fisica y Computacionfamaf, Centre International de Mathematiques Pures et Appliquescimpa2, Foundation Compositio Mathematicafcm
Europe/Rome Goals and brief description of the conference Defects in crystalline solids are ubiquitous. It is the second law of thermodynamics that gives rise to the appearance of a certain amount of disorder in crystalline materials at finite temperatures. Moreover, defects can be present in synthetic materials well above the equilibrium concentration due to the imperfections of material production processes or due to the exposure of the system to irradiation with energetic particles. Such lattice imperfections have a strong influence on the electronic, optical, thermal, and mechanical properties of the solids, normally deteriorating their characteristics. However, defects not always have detrimental effects on material properties, with the most prominent example being the doping of semiconductors by controllable introduction of impurities using ion implantation. In general, treatments of solids with beams of energetic ions and electrons have been shown to be a very powerful tool for the post-synthesis tailoring of material characteristics. The goal of the conference is to bring together active researchers in the field, as well as several experts in the related areas, to discuss "state of the art" in theory and experiment dealing with the physics of defects in solids. The effects of impurities and point/line defects on various properties of solids will be addressed, and the attendees will be able to learn not only the experimental facts, but also understand how the defects are treated within the framework of computational and analytical methods in theoretical physics. Particular attention is going to be paid to defects in nanomaterials, as the reduced dimensionality strongly affects their behavior. The Program will include about 20 oral presentations given by invited speakers, a poster session, and a limited number of short talks selected from contributed abstracts. Topics to be addressed Modern techniques (Raman spectroscopy, STM, XPS, TEM, etc.) used to assess concentration of defect in solids and identify their types First-principles modeling of native defects and impurities Ion and electron irradiation-induced defects Simulations of ion impacts onto solids Defects in superconductors Defects in low-dimensional materials (graphene, inorganic 2D materials, nanotubes etc.) Topological defects Defects for quantum computing Tutorial The conference will be preceded by a half-day tutorial where an introduction to the techniques used to characterize the defects will be given, along with the modern computational techniques used to get theoretical insights into defect behavior. List of invited speakers and lecturers at the tutorial: U. BANGERT, University of Limerick, Ireland P. BØGGILD, DTU Nanotech, Denmark D. EFREMOV, IFW, Dresden, Germany D. GOLBERG, Queensland University of Technology, Australia A. JORIO, Universidade Federal de Minas Gerais, Brazil K. KAASBJERG, Danish Technical University, Denmark H. KOMSA, Aalto University, Finland J. KOTAKOSKI, University of Vienna, Austria G. LEE, Seoul National University, South Korea V. MEUNIER, Rensselaer Polytechnic Institute, USA T. MICHELY, University of Köln, Germany M. NASTASI, University of Nebraska at Lincoln, USA J. NEUGEBAUER, Max-Planck-Institut für Eisenforschung, Germany L. PIZZAGALLI, CNRS, France M. SCARDAMAGLIA, University of Mons, Belgium M. SCHLEBERGER, Duisburg-Essen University, Germany G. SEIFERT, TU Dresden, Germany K. SUENAGA, AIST, Japan T. SUSI, University of Vienna, Austria A. 
VANTOMME, KU Leuven, Belgium Several additional invited speakers will be selected from the submitted abstracts. If the applicant wants his/her abstract to be included in the Conference book of abstracts, it should also be submitted as a Word file using a template posted at https://defectsinsolids.files.wordpress.com/2018/02/abstract_template2.docx Grants A limited number of grants are available to support the attendance of selected participants, with priority given to participants from developing countries. There is no registration fee. Deadlines 31 March 2018 for those who need visa 15 April 2018 otherwise --- Please visit also https://defectsinsolids.wordpress.com/ ICTP ICTP [email protected]
Conference on Physics of Defects in Solids: Quantum Mechanics Meets Topology | (smr 3221)
Organizer(s): Carla Bittencourt (University of Mons, Belgium), Chris Ewels (Institut des Matériaux, Nantes, France), Stefan Facsko (Helmholtz-Zentrum Dresden- Rossendorf, Germany), Arkady Krasheninnikov (Helmholtz-Zentrum Dresden-Rossendorf, Germany and Aalto University, Finland), Local Organiser: Mikhail Kiselev
Cosponsor(s): Aalto Universityaalto, Fund for Scientific Research-FNRSfnrs1, Helmholtz-Zentrum Dresden-Rossendorfhzdr, Scienta Omicronomicron, Zurich Instrumentszurich
Europe/Rome This is a three-week school and workshop on homological methods in algebra and geometry. The first two weeks will be a school for students from East Africa and beyond. International and African researchers will join for a workshop in the third week. The school begins with two introductory courses on some basic techniques that are widely used in the area. These are then built upon in the second week with slightly more advanced topics. The goal here is to give the participants a glimpse into some ideas used in algebraic geometry and homological algebra with the hope of inspiring them to pursue further research in a related topic. The school is intended for advanced graduate students (M.Sc. and Ph.D.) and young academic staff members from East Africa and beyond. Senior academic members are also welcome to participate. In the last week, the activity turns into a research focus-session involving international and African experts. The focus-session will study tensor categories and how they arise naturally in algebraic geometry with the aim of exploiting them using homological algebra. School Website: eaumpictp2018.weebly.com School Topics: Advanced linear algebra Galois theory Elementary algebraic geometry Introduction to homological algebra Workshop topic: Tensor categories of coherent sheaves Organizers: Tarig Abdelgadir (UNSW Sydney) Ulrich Krähmer (TU Dresden) Sylvester Rugeihyamu (Dar Es Salaam) David Ssevviiri (Makerere) Balázs Szendrői (Oxford) Fernando Rodriguez Villegas (ICTP) School speakers: Ravi Ramakrishna (Cornell) David Ssevviiri (Makerere) Chelsea Walton (Temple) Paul Wedrich (ANU Canberra) Michael Wemyss (Glasgow) Workshop speakers: Raf Bocklandt (Amsterdam) John Boiquaye (Accra) Andre Saint-Eudes Mialebama Bouesso (AIMS-South Africa) Alexandru Chirvasitu (Boulder) Joshua Greene (Boston College) Pinhas Grossman (UNSW Sydney) Yujiro Kawamata (Tokyo) Shinnosuke Okawa (Osaka) Sue Sierra (Edinburgh) Hermann Sore (Burkina Faso) Angela Tabiri (Glasgow) Ralph Twum (Accra) ICTP Scientific Contact: L. GOETTSCHE (ICTP) Application is open to all mathematicians and graduate students from developing countries. Applicants from EAUMP member states will be given priority. We encourage participants to secure their own funding for travel and subsistence from their home institution. Limited funds are available for participants from Sub- Saharan African Countries. There is no registration fee. Deadline: 1st of April 2018 Activity e-mail: [email protected] Dar-es-Salaam - United Republic of Tanzania ICTP [email protected]
@ Dar-es-Salaam - United Republic of Tanzania
EAUMP-ICTP School and Workshop on Homological Methods in Algebra and Geometry II | (smr 3219)
Organizer(s): Tarig Abdelgadir (UNSW Sydney), Ulrich Kraehmer (TU Dresden), Eunice Mureithi (University of Dar-es-Salaam), Mark Roberts (AIMS Tanzania), Balazs Szendroi (University of Oxford), ICTP Scientific Contact: Fernando Rodriguez Villegas, Lothar Göttsche
Cosponsor(s): EAUMPeaump, London Mathematical Societylns, CIMPAcimpa2, Foundation Compositio Mathematicafcm
Europe/Rome The AdS/CFT correspondence conjectures an exact equivalence between string theories in anti-de-Sitter space and gauge theories. This correspondence and its generalizations allow one to address difficult problems of strongly coupled field theory dynamics - such as the computation of Wilson loops - using weakly coupled gravity. The other way around, it also allows one to tackle fundamental questions of gravity - such as the quantum entropy of black holes - using field theoretic methods. A very important theoretical development of the last decade has been the advent of exact results in supersymmetric field theories using the methods of localization. These methods often reduce path-integrals to finite-dimensional integrals, which in many situations can be computed exactly. Being exact, localization results are most fruitfully applied at strong coupling, and thus provide a powerful framework for producing and exploiting AdS/CFT results. In particular, the interplay of localization and the AdS/CFT correspondence provides a fertile ground to answer precise questions about the entropy of string theoretic black holes and the expectation values of Wilson loops. This Workshop aims to bring together experts in the areas of supersymmetric localization and holography, to provide cross fertilization and outline key directions for the future. INVITED SPEAKERS include: Nikolay BOBEV (KU Leuven) Cyril N. CLOSSET (CERN) Diego H. CORREA (UNLP-CONICET) Justin R. DAVID (IISc Bangalore) Bernard de WIT (Utrecht University) Nadav DRUKKER (King's College London) Rajesh GUPTA (King's College London) Kiril HRISTOV (Bulgarian Academy of Sciences) Camillo IMBIMBO (University of Genova) Imtak JEON (Harish-Chandra Research Institute) Dietmar KLEMM (INFN Milano) Alberto LERDA (INFN Torino) James LIU (University of Michigan) Kumar S. NARAIN (ICTP) Ioannis N. PAPADIMITRIOU (KIAS) Nicolo' PIAZZALUNGA (SCGP Stony Brook) Soo-Jong REY (Seoul National University) Valentin REYS (Milano-Bicocca) Guillermo A. SILVA (UNLP-CONICET) Konstantinos SKENDERIS (University of Southampton) James SPARKS (University of Oxford) Chiara TOLDO (UC Santa Barbara) Cumrum VAFA (Harvard University) Brian WILLETT (UC Santa Barbara) Itamar YAAKOV (Kavli IPMU) Maxim ZABZINE (Uppsala University) The Workshop will be preceded by a School on Supersymmetric Localization, Holography and Related Topics (smr3256) from 2 to 7 July (http://indico.ictp.it/event/8560), with the goal of providing the necessary basic knowledge through a series of pedagogical lectures by individual speakers. The activity is intended for students in theoretical physics or mathematics and postdocs with knowledge of quantum field theory, general relativity and string theory. There is no registration fee for this activity. ICTP ICTP [email protected]
Workshop on Supersymmetric Localization and Holography: Black Hole Entropy and Wilson Loops | (smr 3227)
Europe/Rome Aim of the workshop The workshop on the emerging field of biological operations research – the application of operations research methodologies in systems and molecular cell biology – is designed to bring together participants and lecturers versed in either of the two disciplines that are eager to further uncover the deep connections between the two fields and find new avenues for research and collaboration while gaining new insight on biological functions from an operational and system perspective. The long-term goal of the workshop is to boost research and collaboration in this field, by acquainting researchers with the existing long-standing problems in system biology, molecular cell biology and bacterial physiology. We will explore the deep connections between these problems and those that arise in operations research, e.g. the question of regulation of mass production in an uncertain market, dynamic resource allocation, scheduling problems and the impact of different queuing disciplines of speed and efficiency of a complex production facility. Researchers will have a chance to exchange ideas and compare different methodologies and modes of thinking about operational parameters such as throughput, efficiency, resource allocation, reliability, redundancy, and cost in a novel biological context. Topics Scheduling and queuing disciplines – from circadian rhythms to pipelining of self-replication Allocation of resources to self-replication vs. maintenance Transcription and translation regulation, role of external and internal information Controlling WIP (work-in-progress) – Kanban, TOC, and CONWIP vs. product feedback inhibition Anticipatory systems and their reverse engineering Metabolic networks and queuing theory Bullwhip effect and the role of inventory control, forecasting and information Modern queuing theory Self-regulating and resource dependent branching processes ICTP ICTP [email protected]
Workshop on Operations Research of Biological Systems | (smr 3223)
Organizer(s): Suckjoon Jun (UC San Diego, USA), Rami Pugatch (Ben Gurion U. / ICTP), Local Organiser: Matteo Marsili
Europe/Rome Abstract. In principle, we expect that our cosmos has experienced a series of complicated phase transitions. Exploring the corresponding consequences and discriminating between various types of mentioned phase transitions are therefore of interest not only from theoretical approaches but also from observational points of view. In this talk, relying on an extensive simulation of cosmic strings networks, I examine the CMB random field induced by a typical topological defects which is so-called cosmic strings as a part of ISW phenomenon. I will use various robust topological measures accompanied by some multi-scale reductions to quantify our ability to detect the probable cosmic strings networks. To this end I also utilize extensive Machine learning algorithm to find best feature sequences which are more sensitive for searching such one-dimensional defect. Based on arXiv:1710.00173 / arXiv:1801.04140 ICTP ICTP [email protected]
Topological measures for the search of exotic features in the light of Universe
Speaker(s): Seyed Mohammad Sadegh Movahed, Department of Physics, Shahid Beheshti University, Tehran, IRAN School of Physics, Institute for Research in Fundamental Sciences, IPM, Tehran, IRAN http://facultymembers.sbu.ac.ir/movahed/
New Stringy Perspective on Cosmology
--- SPECIAL SEMINAR ---
Europe/Rome Erol Gelenbe is currently the "Dennis Gabor" professor in Electrical and Electronic Engineering at Imperial College. He received a PhD on "Stochastic Automata with Structural Restrictions" from the Polytechnic Institute of New York (NYU), and a Doctor of Science degree on "Modeles de Performances de Systemes Informatiques" from University Pierre et Marie Curie (Paris). A Fellow of ACM and IEEE and of several National Academies, his honours include the Commendatore al Merito of Italy and France's Legion d'Honneur. He was awarded Doctorates Honoris Causa by the University of Rome Tor-Vergata, the University of Liege (Belgium) and Bogazici Universty (Istanbul). Abstract: The biological sciences have long been a source of inspiration in the design of solutions for complex problems posed by operations research. Neuronal model-based optimization, prey-predator models and food webs, ant colony optimization, reinforcement learning, data classification with neural networks, learning-based techniques for tracking and control, are some widely used techniques in operations research that have been inspired by nature. This lecture will delve deeper into this interface. I will first discuss spiking random neural networks and show that the Random Neural Network (RNN) is a rigorous mathematical model with function approximation capability and a remarkable product form solution, whereby in equilibrium the joint probability distribution of an arbitrarily large set of fully connected spiking neurons have a joint probability distribution which is the product of the marginal distributions of individual neurons. This initial discovery provided a rigorous basis for fast gradient and deep learning, which has been exploited in numerous applications, including the design of a Cognitive Packet Network that is controlled via neuronal distributed intelligence to enhance Quality of Service and Security. The RNN has also been be used to provide more "truthful" information from web searches. The RNN has been generalized, giving rise to a branch of Queueing Theory called G-Networks which allow the rigorous prediction of the performance of distributed computer systems and data networks which incorporate dynamic control functions such as state dependent rerouting, load balancing and the elimination of overload, the reset of a system after failures, and the ability to modify a sub-system's state with knowledge from other sub-systems. In this talk, in addition to the RNN and G-Networks, I shall also discuss how such models can be applied to the analysis of Gene Regulatory Networks and the detection of anomalies from micro-array data. The Colloquium will be livestreamed from the ICTP website. Light refreshments will be served after the event. All are welcome. ICTP, Trieste, Italy ICTP [email protected]
ICTP Colloquium on "Neural Networks and Gene Regulatory Networks with Product Form Solutions"
Speaker(s): Prof. Erol Gelenbe, Imperial College, London, UK
Europe/Rome Vladan Vuletić is Lester Wolfe Professor of Physics at MIT. Born in Serbia (Yugoslavia), and educated in Germany, he earned the Physics Diploma and a PhD from the Ludwig-Maximilians-Universität München. He was postdoc and Assistant Professor in the Department of Physics at Stanford University before joining MIT in 2003. Vuletić's research in experimental atomic physics focuses on entanglement in many-body systems for uses in precision measurements, quantum simulation, and potentially quantum computing. Major achievements include spin squeezing for overcoming the standard quantum limit in atomic clocks, the development of laser cooling techniques for Bose-Einstein condensation, and the first observation of bound states of photons. Abstract: Recent years have seen a remarkable development in our ability to manipulate matter and light at a quantum level. Quantum simulators with individual trapped atoms are becoming a reality, and quantum computing is on the verge of becoming experimentally viable. Of particular interest are tunable strong interactions between atoms that can be used to experimentally implement and control entangled many-body states. Highly excited, metastable atomic Rydberg states can be used to implement controllable long-distance interactions between individual quanta. I will discuss two applications: By coherently coupling light to Rydberg excitations in a dense atomic medium, we have realized a highly nonlinear optical medium where the interactions between individual photons are so strong that two photons can even form a bound state. I will also discuss the use of Rydberg interactions to realize a many-atom quantum simulator with up to 51 individually trapped atoms, where we have observed a quantum phase transition towards a state with antiferromagnetic order, as well as long-lived many-body oscillations after a sudden quench. The event will be livestreamed from the ICTP website. All are invited to attend. Please note unusual venue and time! ICTP, Trieste, Italy ICTP [email protected]
ICTP Colloquium on "Manipulating many quanta one by one: molecules of light and 51 atomic qubits"
Address: Via Grignano 9 34141 Trieste
Speaker(s): Professor Vladan Vuletić, Massachusetts Institute of Technology (MIT), USA
Europe/Rome Abstract: see document below ICTP ICTP [email protected]
Ergodicity and Partial Hyperbolicity on 3D Manifolds
Speaker(s): Raul Ures (Southern University of Sceince and Technology)
Europe/Rome The School provides early stage researchers with interactive experiences of hands-on research involving table-top experiments with computer data acquisition and modeling. Participants will also take part in professional development of improved scientific communication in English. An intensive programme of laboratory experiments, mathematical modeling, and lectures give participants immersive experiences with complex systems in the physical and life sciences. Additionally, participants will present their own research in talks and posters with extensive faculty feedback to enhance the presentation quality. The faculty are eminent scientists who have conducted frontier table-top research published in leading international scientific journals. TOPICS: • Biological Physics • Modeling of Epidemics • Soft Matter Physics-Dynamic light scattering • Interfacial fracture of particle rafts • Turbulence - Flow analysis by imaging particles • Fluid Instabilities-Digital movies to determine dynamics • Machine learning in table-top experiments • Microfluidics • Granular Materials • Modeling in MATLAB - Molecular Dynamics • Chemical patterns • Cardiac dynamics • Computational Modeling ICTP ICTP [email protected]
Hands-On Research in Complex Systems School | (smr 3224)
Organizer(s): Michael F. Schatz (Georgia Institute of Technology), Mark Shattuck (The City College of New York, USA), Harry L. Swinney (University of Texas, Austin), Joseph Niemela (ICTP), Local Organiser: Maria Liz Crespo
Cosponsor(s): The University of Texas at Austinutexas
Europe/Rome Week 1 A circle of concepts and methods in dynamics. Basic concepts in dynamics will be introduced, with many examples, especially in the setting of circle maps. Topics include rotations of the circle, doubling map, Gauss map and continued fractions and an introduction to the basic ideas of symbolic codings and invariant measures. At the end of the week we will discuss some simple examples of structural stability and renormalization. Week 2 Ergodicity in smooth dynamics (10h, Jana Rodriguez-Hertz and Amie Wilkinson) The concept of ergodicity is a central hypothesis in statistical mechanics, one whose origins can be traced to Boltzmann's study of ideal gases in the 19th century. Loosely speaking, a dynamical system is ergodic if it does not contain any proper subsystem, where the notion of "proper" is defined using measures. A powerful theorem of Birkhoff from the 1930's states that ergodicity is equivalent to the property that "time averages = space averages:" that is, the average value of a function taken along an orbit is the same as the average value over the entire space. The property of ergodicity is the first stepping stone in a path through the study of statistical properties of dynamical systems, a field known as Ergodic Theory. We will develop the ergodic theory of smooth dynamical systems, starting with the fundamental, linear examples of rotations and doubling maps on the circle introduced in Week 1. We will develop some tools necessary to establish ergodicity of nonlinear smooth systems, such as those investigated by Boltzmann and Poincaré in the dawn of the subject of Dynamical Systems. Among these tools are distortion estimates, density points, invariant foliations and absolute continuity. Closer to the end of the course, we will focus on the ergodic theory of Anosov diffeomorphisms, an important family of "toy models" of chaotic dynamical systems. Renormalization in entropy zero systems (5h, Corinna Ulcigrai) Rotations of the circle are perhaps the most basic examples of low complexity (or "entropy zero") dynamical systems. A key idea to study systems with low complexity is renormalization. The Gauss map and continued fractions can be seen as a tool to renormalize rotations, i.e.study the behaviour of a rotation on finer and finer scales. We will see two more examples of renormalization in action. The first is the characterization of Sturmian sequences, which arise as symbolic coding of trajectories of rotations (and hint at more recent developments, such as the characterization of cutting sequences for billiards in the regular octagon). The second concerns interval exchange maps (IETs), which are generalizations of rotations. We will introduce the Rauzy-Veech algorithm as a tool to renormalize IETs. As applications, we will give some ideas of how it can be used (in some simplified settings) to study invariant measures and (unique) ergodicity and deviations of ergodic averages for IETs. ----------------------------------------------------------------------------------------- Tutorial and exercise sessions will be held regularly and constitute an essential part of the school. Tutors: Oliver BUTTERLEY (ICTP), Irene PASQUINELLI (Durham University, UK), Davide RAVOTTI, (University of Bristol, UK), Lucia SIMONELLI (ICTP), Kadim WAR (Ruhr-Universität, Bochum, Germany). Women in Mathematics: Activities directed to encourage and support women in mathematics, such as panel discussions and small groups mentoring and networking, will be organized during the event. 
Summer School in Dynamics (Introductory and Advanced) | (smr 3226)
Organizer(s): Jana Rodriguez-Hertz (Southern University of Science and Technology, Shenzhen, China), Corinna Ulcigrai (University of Bristol), Amie Wilkinson (University of Chicago), Local Organiser: Stefano Luzzatto
Europe/Rome In an ecological system, pathogens often need to share their host with other pathogens, and therefore compete for resources with different spreading strategies. Both cooperative and competitive interactions in bacterial infections have been observed. Here we first review recent theoretical studies that addressed these two mechanisms separately, and then study their combination and discuss the non-trivial dynamical effects that can be expected to arise. We thus study two strains competing with each other for host resources in the presence of a third pathogen cooperating with both of them. We first treat the dynamics in a homogeneously mixed population by means of mean-field theory and stability analysis. We study the impact of cooperation on the outcome of the two-pathogen competition, which can be quantified in terms of dominance of one competing pathogen or the co-circulation of both of them. We show that the presence of a third cooperating pathogen can alter the outcome of competition, as it may favor the more cooperative pathogen over the more infectious one. We then consider more complex contact structures among hosts and perform computer simulations to study the evolution of the diseases.
Disease ecology: How to compete in a multi-pathogen system?
Room: Central Area, 2nd floor,old SISSA building
Speaker(s): Fakhteh Ghanbarnejad - ITP, TU Berlin http://www.pks.mpg.de/~fakhteh
Europe/Rome Abstract. Starting from a generic near horizon expansion in any spacetime dimension greater than two we derive all near horizon symmetries and discover a wealth of novel results: 1. Any non-extremal horizon has an infinite set of near horizon symmetries and associated soft hair excitations. 2. The near horizon symmetries can be represented as generalization of the Bondi-Metzner-Sachs algebra. 3. For horizons that are either flat or non-rotating the near horizon symmetries can be represented as Heisenberg algebras, with one quarter of the inverse of Newton constant playing the role of Planck's constant. 4. Not only black holes, but also cosmological horizons are equipped with soft hair. We discuss implications of soft hair for horizon thermodynamics and entropy, and comment on open problems and further developments. ICTP ICTP [email protected]
Soft Hair on Generic Horizons - Implications for Black Hole Microstates
Speaker(s): M.M. SHEIKH JABBARI (IPM, Tehran, Iran)
--- Please note time change!! ---
Europe/Rome MYMC 2018 web page: http://www.mymc.it/2018/ REGULATIONS: http://www.mymc.it/2018/doc/MYMC2018REGULATIONS.pdf PARTICIPATING COUNTRIES: Albania, Algeria, Bosnia and Herzegovina, Croatia, Cyprus, Egypt, France, Greece, Italy, Lebanon, Montenegro, Morocco, Palestine, Slovenia, Spain, Tunisia, Turkey TRAINING EXERCISES: http://www.mymc.it/2018/doc/training_exercises.pdf Roma - Italy ICTP [email protected]
@ Roma - Italy
Mediterranean Youth Mathematical Championship 2018 | (smr 3254)
Organizer(s): ICTP Scientific Contact: Claudio Arezzo
Cosponsor(s): MIUR, INdAM, UMI, UNINT, Piano Nazionale Lauree Scientifiche, Sapienza Universita di Roma, Universita Roma Tre, Universita Tor Vergata
Europe/Rome Week 1 (see smr 3226): A circle of concepts and methods in dynamics. Basic concepts in dynamics will be introduced, with many examples, especially in the setting of circle maps. Topics include rotations of the circle, the doubling map, the Gauss map and continued fractions, and an introduction to the basic ideas of symbolic codings and invariant measures. At the end of the week we will discuss some simple examples of structural stability and renormalization.

Week 2: Ergodicity in smooth dynamics (10h, Jana Rodriguez-Hertz and Amie Wilkinson). The concept of ergodicity is a central hypothesis in statistical mechanics, one whose origins can be traced to Boltzmann's study of ideal gases in the 19th century. Loosely speaking, a dynamical system is ergodic if it does not contain any proper subsystem, where the notion of "proper" is defined using measures. A powerful theorem of Birkhoff from the 1930s states that ergodicity is equivalent to the property that "time averages = space averages": that is, the average value of a function taken along an orbit is the same as the average value over the entire space. The property of ergodicity is the first stepping stone in a path through the study of statistical properties of dynamical systems, a field known as Ergodic Theory. We will develop the ergodic theory of smooth dynamical systems, starting with the fundamental, linear examples of rotations and doubling maps on the circle introduced in Week 1. We will develop some tools necessary to establish ergodicity of nonlinear smooth systems, such as those investigated by Boltzmann and Poincaré at the dawn of the subject of Dynamical Systems. Among these tools are distortion estimates, density points, invariant foliations and absolute continuity. Closer to the end of the course, we will focus on the ergodic theory of Anosov diffeomorphisms, an important family of "toy models" of chaotic dynamical systems.

Renormalization in entropy zero systems (5h, Corinna Ulcigrai). Rotations of the circle are perhaps the most basic examples of low complexity (or "entropy zero") dynamical systems. A key idea to study systems with low complexity is renormalization. The Gauss map and continued fractions can be seen as a tool to renormalize rotations, i.e. to study the behaviour of a rotation on finer and finer scales. We will see two more examples of renormalization in action. The first is the characterization of Sturmian sequences, which arise as symbolic codings of trajectories of rotations (and hint at more recent developments, such as the characterization of cutting sequences for billiards in the regular octagon). The second concerns interval exchange maps (IETs), which are generalizations of rotations. We will introduce the Rauzy-Veech algorithm as a tool to renormalize IETs. As applications, we will give some ideas of how it can be used (in some simplified settings) to study invariant measures, (unique) ergodicity and deviations of ergodic averages for IETs. (A small numerical illustration of the continued-fraction side of this renormalization idea is sketched below, after this entry.)

Tutorial and exercise sessions will be held regularly and constitute an essential part of the school. Tutors: Oliver BUTTERLEY (ICTP), Irene PASQUINELLI (Durham University, UK), Davide RAVOTTI (University of Bristol, UK), Lucia SIMONELLI (ICTP), Kadim WAR (Ruhr-Universität, Bochum, Germany).
Women in Mathematics: Activities aimed at encouraging and supporting women in mathematics, such as panel discussions, small-group mentoring and networking, will be organized during the event.
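The sketch referenced above: the Gauss map G(x) = 1/x − ⌊1/x⌋ reads off the continued-fraction digits of a rotation number, which is the arithmetic backbone of renormalizing circle rotations. The short Python sketch below is purely illustrative (the golden-mean and π − 3 inputs, and the digit count, are assumed example values); floating-point error limits it to the first handful of digits.

```python
import math

def gauss_map(x):
    """Gauss map G(x) = fractional part of 1/x, defined for x in (0, 1)."""
    y = 1.0 / x
    return y - math.floor(y)

def continued_fraction_digits(x, n_digits):
    """First n_digits partial quotients of x in (0, 1), obtained by iterating the Gauss map.

    Each iteration corresponds to one renormalization step for the rotation by x:
    the digit records how many times the small scale fits into the large one
    before the roles of the two scales are exchanged.
    """
    digits = []
    for _ in range(n_digits):
        if x == 0.0:  # rational input: the expansion terminates
            break
        digits.append(math.floor(1.0 / x))
        x = gauss_map(x)
    return digits

if __name__ == "__main__":
    golden = (math.sqrt(5) - 1) / 2          # golden-mean rotation number
    print(continued_fraction_digits(golden, 10))      # expected: [1, 1, 1, ...]
    print(continued_fraction_digits(math.pi - 3, 5))  # expected: [7, 15, 1, ...]
```

The all-ones expansion of the golden mean is what makes rotations with Fibonacci-type rotation numbers the standard self-similar examples in this renormalization picture.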
Summer School in Dynamics (Advanced) | (smr 3253)
Europe/Rome The classical triangular antiferromagnetic Heisenberg model with Dzyaloshinskii-Moriya interactions in a magnetic field is studied. We focus in particular on the emergence of a composite spin crystal phase that was recently observed in [Phys. Rev. B 92, 214439 (2015)] for intermediate fields. This complex phase, which corresponds to a lattice of Z2 vortices, can be made up from three inter-penetrated skyrmion lattices, one for each sub-lattice of the original triangular one. We present our numerical results and the explicit construction of the low-energy effective action that reproduces the correct phenomenology. This effective action could serve as a starting point to study the coupling to charge carriers, lattice vibrations, structural disorder and transport phenomena. ICTP ICTP [email protected]
Condensed Matter and Statistical Physics Seminar: From Skyrmions to Z2 Vortices in Frustrated Chiral Magnets
Speaker(s): Daniel CABRA (Univ. Nacional de La Plata, Argentina)
Europe/Rome Abstract. I will discuss how to use neural networks to detect data departures from a given reference model, with no prior bias on the nature of the new physics responsible for the discrepancy. The algorithm that I will describe returns a global p-value that quantifies the tension between the data and the reference model. It also allows one to compare directly what the network has learned with the data, giving a fully transparent account of the nature of possible signals. The potential applications are broad, from LHC physics searches to cosmology and beyond.
Learning New Physics from a Machine
Speaker(s): Raffaele Tito D'AGNOLO (SLAC, Stanford, USA)
Europe/Rome The Call for Makers is CLOSED, but we would love to make an exception for "last minute" ideas of particular value and relevance, so go and submit your projects (see https://trieste.makerfaire.com/en/call-makers-2018/)! We invite makers, inventors, scientists, artisans, artists and other creative passionate and enthusiast people from the Triveneto Area, the neighbouring regions and countries (the near and the far ones) to participate to show and demonstrate their projects during the fifth edition of the Trieste Mini Maker Faire, that will be held on July 28 and 29 in Miramare (Trieste). We see this fifth edition as a great opportunity for everyone to share experiences and to grow bigger together, and we need the help of you all to make it a unique and memorable event! To learn more about the Trieste Mini Maker Faire you can have a look at the web site https://trieste.makerfaire.com/about/ The complete list of co-sponsors of the event is available here: https://trieste.makerfaire.com/sponsors/ VOLUNTEERS are needed and most welcome! to learn more, click here: https://trieste.makerfaire.com/diventa-un-volontario/ ICTP ICTP [email protected]
Trieste Mini Maker Faire | (smr 3206)
Organizer(s): Enrique Canessa (ICTP), Carlo Fonda (ICTP), Local Organiser: Enrique Canessa
Cosponsor(s): Comune di Trieste, Maker Media Inc., ICTP SciFabLab, Soroptimist International, ESOF
Europe/Rome The Coastal Ocean Environment Summer School in Ghana (https://coessing.org) is an international collaboration aimed at building capacity in oceanographic and environmental sciences in Ghana specifically, and Africa more generally. The Summer School was previously held in 2015, 2016 and 2017. We intend to hold the schools every year, and to alternate the schools between Regional Maritime University and University of Ghana, which are both located in Ghana's capital city of Accra. Marine issues of great importance to Ghana include fisheries, piracy, pollution, shipping and port management, and the recent advent of offshore oil drilling. Long-term goals of our collaborative effort include securing funding to continue the school on an annual basis, increasing the number of links with institutions in other African countries, and incorporating research partnerships as part of the summer school. The school will include lectures, hands-on labs, and a field trip, and will address the physics and biogeochemistry of the coastal ocean environment, as well as the tools that scientists use to describe and understand this environment. - Ghana ICTP [email protected]
30 Jul 2018 - 5 Aug 2018
@ - Ghana
The Coastal Ocean Environment Summer School in Ghana | (smr 3230)
Organizer(s): Prof. Kwasi Appeaning-Addo (University of Ghana), Prof. Brian K. Arbic (University of Michigan, USA), Dr. Edem Mahu (University of Ghana), ICTP Scientific Contact: Dr. Riccardo Farneti
Cosponsor(s): Coastal Ocean Environment Summer School Ghana, International Union of Geodesy and Geophysics, National Science Foundation, University of Michigan
Europe/Rome The School is a certificate course providing specialized education and training on development and implementation of knowledge management programmes in nuclear science and technology organizations. It is intended for young professionals in current or future leading roles in managing nuclear knowledge. Description: Jointly organized by ICTP and the IAEA since 2004, this Nuclear Knowledge Management (NKM) School focuses on methodologies and practices, and explores various dimensions of nuclear knowledge management. These include processes and tools, challenges and benefits, culture influence, relationship with human resource development, IT for knowledge preservation and sharing. Learning is supplemented with real life examples from NKM programmes in different types of nuclear organizations. The aim is to encourage 'forward thinking' and to enable participants to apply theory and insights in their daily work. Pre-training is done via e-learning for a common understanding of the basics, allowing a more efficient interaction with peers and lecturers during the in-class School. Lectures at the ICTP by recognized experts are followed by case studies, practical work and breakout sessions where participants will have the opportunity to discuss issues and solutions. NKM group projects are also developed over the week. Shortlisted candidates will undergo an on-line pre-training course and a test, the results of which will be one of the elements to define the final list of participants. Topics: • Nuclear knowledge and knowledge management fundamentals; • Developing policies and strategies in managing nuclear knowledge; • Managing nuclear information resources; • Human resources development, risk of knowledge loss and knowledge transfer; • Practical guidance and good practices on NKM. Methodologies: • Blended learning course; • Oriented to practical exercises and group projects; • Additional continuous education certificate provided by MEPhI University, Russian Federation, after successful completion. DEADLINE FOR APPLICATION: 22 December 2017 ---------------------------------------------------------------------- ADMISSION TO THIS ACTIVITY IS LIMITED TO PARTICIPANTS SELECTED AND OFFICIALLY INVITED. ICTP ICTP [email protected]
14th Joint ICTP-IAEA School on Nuclear Knowledge Management | (smr 3229)
Organizer(s): M. Chudakov (IAEA), M.E. Urso (IAEA), Local Organiser: Claudio Tuniz
Cosponsor(s): IAEA
Limited participation **DEADLINE: 22/12/2017**
INFN Summer School | (smr H542)
Organizer(s): Dr. Diego Tonelli,
Europe/Rome Ecological Modelling. The deadline to apply to the 2nd Advanced School on Multispecies modelling Approaches for ecosystem-based marine REsource management in the MEDiterranean Sea (AMARE-MED 2018) HAS BEEN EXTENDED TO 20 APRIL. Candidates must apply through the online form: http://echo.inogs.it/amare-med Secretary: AMARE-MED Secretariat ([email protected]) Teacher: André Punt (School of Aquatic and Fishery Sciences, University of Washington, Seattle, USA) Organizer(s): Simone Libralato, OGS, Italy; Angelo Bonanno, CNR-IAMC, Italy; Piera Carpi, CEFAS, UK; Francesco Colloca, CNR-IAMC, Italy; Tomaso Fortibuoni, OGS, Italy; Elisabetta B. Morello, Italy; Saša Raicevich, ISPRA, Italy; Giuseppe Scarcella, CNR-ISMAR, Italy; Cosimo Solidoro, OGS, Italy; Fabio Fiorentino, CNR-IAMC, Italy Trieste - Italy
AMARE-MED 2018 | (smr H539)
Organizer(s): Dr. Simone Libralato - OGS,
Sponsors: GFCM (General Fisheries Commission for the Mediterranean), Cefas (Centre for Environment Fisheries and Aquaculture Science) With the support of FAO - General Fisheries Commission for the Mediterranean (GFCM)
Europe/Rome Abstract: Consider a quantum chain in its ground state, and then take a subdomain of this system with its natural truncated Hamiltonian. Since the total Hamiltonian does not commute with the truncated Hamiltonian, the subsystem can be in one of its eigenenergies with different probabilities. Since the global energy eigenstates are locally close to diagonal in the local energy eigenbasis, we argue that the Shannon (Rényi) entropy of these probabilities follows an area law for gapped systems. When the system is at the critical point, the Shannon (Rényi) entropy follows a logarithmic behaviour with a universal coefficient. Our results show that the Shannon (Rényi) entropy of the subsystem energies closely mimics the behaviour of the entanglement entropy in quantum chains. We support the arguments by detailed numerical calculations performed on the transverse field XY-chain.
Joint ICTP/SISSA Statistical Physics Seminar: Area-Law and Universality in the Statistics of the Subsystem Energy
Speaker(s): Mohammad Ali RAJABPOUR (Instituto de Fisica, Univ. Federal Fluminense, Niteroi, Brazil)
Europe/Rome Abstract. The current tension between the H_{0} measurement by Planck-2015 for $\Lambda$CDM and the recent direct measurement by Riess et al. 2016 also results in another tension in the value of r_{d}, the sound horizon at the drag epoch, as measured by BAO observations and Planck-2015. We show that, regardless of the dark energy dynamics, to accommodate a higher value of H_{0} one needs a lower value of r_{d}, and so necessarily a modification of early-universe cosmology. A model-independent approach to constrain the background evolution of the universe with low-redshift data will also be discussed.
The price of shifting the Hubble Constant and evidence for dark energy evolution
Speaker(s): Anjan A. SEN (Centre for Theoretical Physics, New Delhi, India)
Europe/Rome In natural systems, time delays are inevitable due to the finite speed of signal transmission over a distance, and they are not negligible if they are similar to the time scales of the observed dynamics. For example, in the brain, where the delays are on the same scale as the signal operation, they affect the brain's performance and need to be considered. On the other hand, many complex networks, such as cortical networks, are hierarchical: their communities may be further divided into sub-communities. For these reasons, the analysis of time-delayed dynamics on hierarchical networks may help to understand the interplay between brain structure and function. Based on extensive simulations in artificial and cortical networks with homogeneous time delays, we found that, for a fixed coupling strength, changing the time delay reveals different regions of coherent, multistable and incoherent dynamics. We show that in a hierarchical network, in the transition from incoherent to coherent states, the different topological scales of the network are revealed in a non-transient behavior. In addition, we find that considering a bimodal distribution of the time delays extends the parameter regions corresponding to the multistable dynamics. Finally, we show that the results are connected to the global slow dynamics of the brain.
Revealing modular architecture of the cortical networks
Speaker(s): Mina ZAREI - IASBS, Zanjan, Iran
Europe/Rome For more information please visit: https://indico.cern.ch/event/680421/overview Xinjiang - People's Republic of China ICTP [email protected]
1 Aug 2018 - 10 Aug 2018
@ Xinjiang - People's Republic of China
The First Xinjiang International Summer School and Workshop on High Energy Physics | (smr 3217)
Organizer(s): Sayipjamal Dulat (CTP-XJU, China), C.-P. Yuan (MSU, USA), Tie-Jiun Hou (CTP-XJU, China), Qing-Hong Cao (PKU, China), ICTP Scientific Contact: Bobby Acharya
Europe/Rome Abstract: Topological classes are often invariant manifolds of renormalization. Unfortunately, renormalization is not a differentiable dynamical system. The stable manifold theorem for hyperbolic dynamics can not be applied. The talk will discuss a technique to construct smooth invariant manifolds in such a non-differentiable context. ICTP ICTP [email protected]
Smooth invariant manifolds of renormalization
Speaker(s): Marco Martens (SUNY at Stony Brook)
Europe/Rome Abstract: A central question in dynamics is whether the topology of a system determines its geometry, whether the system is rigid. Under mild topological conditions rigidity holds in many classical cases, including: Kleinian groups, circle diffeomorphisms, unimodal interval maps, critical circle maps, and circle maps with a break point. More recent developments show that under similar topological conditions, rigidity does not hold for slightly more general systems. We will discuss the case of circle maps with a flat interval. The class of maps with Fibonacci rotation numbers is a C1 manifold which is foliated by codimension-three rigidity classes. Finally, we summarize the known non-rigidity phenomena in a conjecture which describes how topological classes are organized into rigidity classes.
The rigidity conjecture
Speaker(s): Liviana Palmisano (Uppsala University)
Europe/Rome The summer school (6-17 August 2017) builds competence in data analysis and security for participants from all disciplines and/or backgrounds from Sciences to Humanities. Four applied workshops run in parallel from 20-24 August 2017. Summer school: Principles and practice of research data management, curation and security for Open Science using a range of search compute infrastructure, large-scale data handling, analysis, visualization and modeling technique. Workshop on Extreme sources of data: Introduction to ATLAS Open Data Platforms/Tools, tutorials and CERN LHC. Workshop on Bioinformatics: computational methods for the management and analysis of genomic and sequencing data. Workshop on IoT/Big Data Analytics: Big Data tools and technology; real time event processing; low latency query; analyzing social media and customer sentiment. Workshop on Climate Data Science: Cloud computing platform/tools for Climate Data Sciences including integration and visualization of on-line and local datasets. ICTP ICTP [email protected]
The CODATA-RDA Research Data Science Summer School | (smr 3231)
Organizer(s): Andrew Harrison (Department of Mathematical Sciences, University of Essex), Simon Hodson (CODATA), Hugh Shanahan (Department of Computer Science, Royal Holloway University of London, UK), Celia van Gelder (Dutch Techcentre for Life Sciences (DTL), Netherlands), M. Hassan (TWAS), Teresa K. Attwood (University of Manchester, UK), Rob Quick (Indiana University, U.S.A), Sarah Jones (University of Glasgow, UK), Nicola Mulder (University of Cape Town, South Africa), U. Singe (ICTP), M. Zennaro (ICTP), A. Tompkins (ICTP), Local Organiser: C. Onime
Cosponsor(s): ELIXIR, Global Organisation for Bioinformatics Learning, Education & Training, H3ABioNet, International Council for Science - Committee on Data for Science and Technology, The Research Data Alliance (RDA), Springer Nature, The World Academy of Sciences
Europe/Rome The Abdus Salam International Centre for Theoretical Physics (ICTP, Trieste, Italy) in collaboration with the Gordon and Betty Moore Foundation, the Institute for Complex Adaptive Matter (ICAM-I2CAM), the National High Magnetic Field Laboratory, and the Department of Physics - University of Florida is organizing the School and Workshop on "Strongly Correlated Electronic Systems - from Quantum Criticality to Topology". Topics Novel theories of quantum criticality in metals and Mott insulators Superconductivity and competing orders Novel approaches to spin liquids Correlated systems with strong spin-orbit coupling Coulomb interaction in topological systems Topological superconductors This School and Workshop will bring graduate and postdoctoral students in condensed matter physics together with experts in the field to discuss and the existing challenges and the latest theoretical and experimental developments in correlated electron systems and in topological materials . The research talks will be held mostly during the first week (Aug. 6-10). On Friday, August 10, we will hold a mini-workshop entitled "Fermions: heavy, topological, and critical". The tutorial lectures on various aspects of strongly correlated and topological electron systems will be given during the second week (Aug. 13-17). ICTP ICTP [email protected]
Advanced School and Workshop on Correlations in Electron Systems – from Quantum Criticality to Topology | (smr 3232)
Organizer(s): Andrey Chubukov (University of Minnesota), Piers Coleman (Rutgers University), Dmitrii L. Maslov (University of Florida), Naoto Nagaosa (RIKEN Tokyo), Andy Schofield (University of Birmingham), Hide Takagi (MPI Stuttgart), Local Organiser: Rosario Fazio
Cosponsor(s): Gordon and Betty Moore Foundation, Institute for Complex Adaptive Matter, University of Florida, The National High Magnetic Field Laboratory
Europe/Rome One-dimensional systems with short-range interactions cannot exhibit long-range order at nonzero temperature. However, there are some particular one-dimensional models, such as the Ising-Heisenberg spin models with a variety of lattice geometries, which exhibit unexpected behavior similar to a discontinuous or continuous temperature-driven phase transition. Although these pseudo-transitions are not true temperature-driven transitions, showing only abrupt changes or sharp peaks in thermodynamic quantities, they may be mistaken for true transitions when interpreting experimental data. Here we consider the spin-1/2 Ising-XYZ diamond chain in the regime where the model exhibits temperature-driven pseudo-transitions. We provide a detailed investigation of how this phenomenon occurs in several physical quantities, such as entropy, magnetization, specific heat, magnetic susceptibility, and correlation functions between distant spins, which illustrate the properties of quasi-phases separated by pseudo-transitions. Inevitably, all correlation functions show evidence of the pseudo-transition. It is worth mentioning that the correlation functions between distant spins have an extremely large correlation length at the pseudo-critical temperature.
ICTP Seminar Series in Condensed Matter and Statistical Physics: Further Evidence for Quasi-Phases and Pseudo-Transitions in One-Dimensional Models
Speaker(s): Onofre ROJAS SANTOS (Federal University of Lavras, Brazil)
Europe/Rome For registration and more information please visit: https://impa.br/en_US/eventos-do-impa/eventos-2018/tropical-geometry-and-moduli-spaces/ Deadline to apply for funding: 28 February 2018 Rio de Janeiro - Brazil ICTP [email protected]
@ Rio de Janeiro - Brazil
Workshop on Tropical Geometry and Moduli Spaces | (smr 3233)
Organizer(s): Oliver Lorscheid (IMPA - Rio de Janeiro), Margarida Melo (University of Coimbra), Johannes Nicaise (Imperial College - London), Sam Payne (Yale University), ICTP Scientific Contact: Lothar Goettsche
Europe/Rome The applied workshops are preceded by a related Summer school (6-17 August 2017) that builds competence in data analysis and security for participants from all disciplines and/or backgrounds from Sciences to Humanities. The four applied workshops run in parallel from 20-24 August 2017. Summer school: Principles and practice of research data management, curation and security for Open Science using a range of search compute infrastructure, large-scale data handling, analysis, visualization and modeling technique. Workshop on Extreme sources of data: Introduction to ATLAS Open Data Platforms/Tools, tutorials and CERN LHC. Workshop on Bioinformatics: computational methods for the management and analysis of genomic and sequencing data. Workshop on IoT/Big Data Analytics: Big Data tools and technology; real time event processing; low latency query; analyzing social media and customer sentiment. Workshop on Climate Data Science: Cloud computing platform/tools for Climate Data Sciences including integration and visualization of on-line and local datasets. ICTP ICTP [email protected]
The CODATA-RDA Research Data Science Advanced Workshops on Bio-informatics, Climate Data Sciences, Extreme sources of data and Internet of Things (IoT)/Big-Data Analytics | (smr 3257)
Organizer(s): Andrew Harrison (Department of Mathematical Sciences, University of Essex), Simon Hodson (CODATA), Hugh Shanahan (Department of Computer Science, Royal Holloway University of London, UK), Celia van Gelder (Dutch Techcentre for Life Sciences (DTL), Netherlands), M. Hassan (TWAS), Teresa. K. Attwood (University of Manchester, UK), Rob Quick (Indiana University, U.S.A), Sarah Jones (University of Glasgow, UK), Nicola Mulder (University of Cape Town, South Africa), U. Singe (ICTP), M. Zennaro (ICTP), A. Tompkins (ICTP), Local Organiser: C. Onime
Europe/Rome REGISTRATION & ADMINISTRATIVE FORMALITIES Participants receiving financial support and staying in the ICTP Guesthouses are automatically registered at the time of check-in at the reception desk. Therefore, on Monday morning, please go directly to the Finance Office at the Enrico Fermi Building from 8.30 to 9.00 am, ground floor to collect your expenses. Badges, Passports and Travel Receipts are needed. Additional Shuttle Bus service will be provided from the Adriatico Guest House and pick-up is available from the carpark area, main entrance of the Adriatico Guest House. Those people accommodated at the Galileo Guest House, please walk down the slope to the Enrico Fermi building The Finance office is on the groud floor, right hand side. Opening times: Monday, Tuesday and Friday from 8.30 - 12.00 and 13.30 to 14.30 PLEASE NOTE: To all Directors / Lecturers Please register with the Workshop Secretary in office no. 135, first floor at the Leonardo building. This is a comprehensive workshop reviewing state-of-the-art reactor design concepts, nuclear fuel cycle options including design and technological features of various innovative reactors. The unique course provides basic understanding of different innovative nuclear energy systems. Directors: Vladimir KRIVENTSEV & Chirayu BATRA (IAEA, Vienna, Austria) Local Organizer: Sandro SCANDOLO (ICTP, Trieste, Italy) The participants will receive a theoretical foundation on most important research and technology development areas of innovative nuclear energy systems and will get familiarized with the modern physical models and simulation codes for the design and safety analysis of these systems. The workshop aims to engage and stimulate young scientists, researchers, and engineers currently involved in nuclear reactors research, as well as students interested in the field. Active discussion, group activities, poster session and various blended learning approaches will be used to enhance sharing of new ideas and emphasize the need of continued R&D and innovation in all areas of nuclear reactor and fuel cycle science and technology development. CALL FOR CONTRIBUTED ABSTRACTS: In the application form, all applicants are requested to submit a brief abstract for a poster presentation. Only a limited number of contributed abstracts will be selected for poster session. The best poster will be awarded a certificate of appreciation. The abstract of the contribution should include the title, short description, and cover one of the following topics of the Worksop: Topics: Global scenario for nuclear energy; Innovative reactor concepts and fuel cycle options; Reactor physics of innovative nuclear energy systems; Thermal hydraulics of innovative nuclear energy systems; Status of advanced primary components and development and qualification of structural materials, coolants, and fuels; Passive safety systems and other safety technologies; Safety analysis including severe accident scenarios; Advanced reactor modelling and simulation; and Status of research and technology development in support of innovative reactor and fuel cycle technologies. Confirmed Lecturers: Chirayu BATRA, Vienna, Austria Adriaan BUIJS, Hamilton, Canada Galina FESENKO, Vienna, Austria Masakazu ICHIMIYA, Tokyo, Japan Vladimir KRIVENTSEV, Vienna, Austria Christian LATGE, Saint Paul lez Durance, France Konstantin MIKITYUK, Villigen, Switzerland Massimo SALVATORES, Aix-en-Provence, France ICTP ICTP [email protected]
Joint ICTP-IAEA Workshop on Physics and Technology of Innovative Nuclear Energy Systems | (smr 3225)
Room: 20-23 August: Budinich Lecture Hall (LB), 24 August: Euler Lecture Hall (LB)
Organizer(s): Vladimir Kriventsev (International Atomic Energy Agency), Chirayu Batra (International Atomic Energy Agency), Local Organiser: Sandro Scandolo
Cosponsor(s): International Atomic Energy Agency
**DEADLINE: 31/05/2018**
2018 AAAS-TWAS Science Diplomacy Course | (smr H541)
IUPAP C13 Meeting | (smr H555)
Organizer(s): Prof. Joe Niemela,
Europe/Rome The programme outline for the 2018 Spirit of Salam Awards & 2017/2018 PostGraduate Diploma Programme Awards Ceremony taking place on Friday 24 August from 14.30 hrs will be available in due course. For the feature article on the 2018 Spirit of Salam recipients, please see: https://www.ictp.it/about-ictp/media-centre/news/2018/1/spirit-salam-2018-winners.aspx The event will be livestreamed from the ICTP website. Light refreshments will be served after the event. ICTP ICTP [email protected]
2018 Spirit of Salam Awards & 2017/2018 PostGraduate Diploma Programme Awards Ceremony
27 Aug 2018 - 14 Sep 2018
College on Medical Physics: Applied Physics of Contemporary Medical Imaging – Expanding Utilization in Developing Countries | (smr 3185)
Organizer(s): Slavik Tabakov (King's College London, UK and IOMP), Franco Milano (University of Florence, Italy), Anna Benini (University Hospital of Copenhagen, Denmark), Mario De Denaro (Azienda Ospedaliero Universitaria Integrata, Trieste, Italy), Perry Sprawls (Emeritus Professor, Emory University, USA), Local Organiser: Luciano Bertocchi
Europe/Rome This School, aimed at graduate students and junior researchers, aims to teach a modern course in condensed matter and statistical physics. It combines basic concepts with recent structural and interdisciplinary developments. It features a combination of theory and computational courses, and seminars on experimental progress in the field. TOPICS: The program will cover a broad variety of topics within condensed matter physics, emphasizing connections with related fields such as quantum information, atomic, optical and high-energy physics: 1. Statistical Mechanics: from foundations to quantum information 2. Numerical methods: high-level programming and advanced numerical methods 3. Coherent dynamics: entanglement, decoherence, phase transitions, driven systems 4. Topological quantum matter: phases and diagnostics 5. Physical implementations: cold atoms, trapped ions, nanophysics, materials. SPEAKERS: F. Alet (CNRS, Toulouse, France) E. Andrei (Rutgers University, US) B. Beri (University of Cambridge, UK) I. Bloch (MPQ, Garching, Germany) P. Calabrese (SISSA, Trieste, Italy) J. Chalker (University of Oxford, UK) X. Chen (Caltech, US) J. Dalibard (Collège de France, Paris, France) M. Devoret (Yale University, US) D. Dhar (Indian Institute of Science Education and Research, Pune) M. Heyl (MPIPKS Dresden, Germany) D. Huse (Princeton, US)* V. Khemani (Harvard University, US) W. Krauth (Ecole Normale Superieure, Paris, France) B. Lake (Berlin Technical University, Germany) C. Laumann (Boston University, US) A. Lazarides (MPIPKS Dresden, Germany) A. MacKenzie (MPI-CPfS Dresden, Germany) E. Martinez (University of Copenhagen, Denmark) U. Schollwoeck (LMU Munich, Germany) M. Znidaric (University of Ljubljana, Slovenia) *to be confirmed HOW TO PARTICIPATE: Fill in the online application form using the link "Apply here" that can be found on the sidebar of this page. The deadline to submit applications has expired. ICTP ICTP [email protected]
Summer School on Collective Behaviour in Quantum Matter | (smr 3235)
Organizer(s): Claudio Castelnovo (University of Cambridge), Paul Fendley (University of Oxford), Roderich Moessner (MPIPKS Dresden), Marcello Dalmonte (ICTP), Local Organiser: Antonello Scardicchio
Cosponsor(s): ICAM, MPIPKS
Europe/Rome Abstract. For simplicity, experimental dark matter (DM) direct detection exclusion curves in the literature are almost always calculated by assuming a naive Standard Halo Model (SHM) expectation for the Milky Way (MW) DM halo. There have been a number of attempts that have tried to go beyond the SHM and assess the impact of astrophysical uncertainties on DM direct detection experiments. Most of these attempts have examined the impact of an uncertain local dark matter density, which results in a trivial re-scaling of the exclusion curves. Others have attempted to look at the effect of the DM velocities via changes in the local DM escape speed and the local DM velocity dispersion, or have tried to incorporate the local DM velocity distribution but have mainly been restricted to ansatzes or simulations. We claim that the right approach should be to use an observationally inferred determination, along with the related uncertainties, of the local DM phase-space in a self-consistent manner. In this talk, I will present a detailed look at this fundamentally important element of DM detection results. We start by constructing the MW DM rotation curves and end with re-estimates of the direct detection DM exclusion limits. On the way, we present the tightest constraint on the local DM density to date.
Milky Way Dark Matter and the Direct Detection Exclusion Limits
Speaker(s): Subhabrata MAJUMDAR (Tata Institute of Fundamental Research, Mumbai, India)
Europe/Rome Abstract: Groups of birational transformations are a classical object of study. In many cases these groups are huge and very difficult to describe. The purpose of the talk is to discuss recent results on FINITE subgroups of birational automorphism groups. I will present all the basic definitions and many examples. The talk is based on joint works with Constantin Shramov. ICTP ICTP [email protected]
Finite groups of birational automorphisms
Speaker(s): Yuri PROKHOROV, Steklov Institute and Higher School of Economics (Moscow)
Europe/Rome Abstract. A local and gauge invariant gauge field model including Nambu - Jona-Lasinio (NJL) and QCD Lagrangian terms in its action is introduced. Surprisingly, it becomes power-counting renormalizable. This occurs thanks to the presence of action terms which modify the quark propagators so that they decrease faster than the Dirac one at large momenta, in a Lee-Wick form, implying power-counting renormalizability. The appearance of finite quark masses already in the tree approximation in this scheme is determined by the fact that the new action terms explicitly break chiral invariance. In this initial work we present the renormalized Feynman diagram expansion of the model and derive the formula for the degree of divergence of the diagrams.
A proposal of a renormalizable Nambu - Jona-Lasinio model
Speaker(s): Alejandro CABO (Instituto de Cibernetica, Matematica y Fisica, Havana, Cuba)
Europe/Rome Abstract: We use the recent theorem of Chen-Cheng to prove the existence of a family of constant scalar curvature Kähler metrics on any Kähler manifold with semi-ample canonical bundle. A conjecture about the limiting behavior of these metrics will also be discussed. This is joint work with Wangjian Jian and Jian Song. ICTP ICTP [email protected]
CscK metrics on Kähler manifolds with semi-ample canonical bundles
Speaker(s): Yalong SHI, Nanjing University, P.R. China
Europe/Rome Abstract: Topological Quantum Field Theories (TQFTs) play important role both in physics and mathematics, in particular low-dimensional topology. In my talk I will review this notion and consider a family of TQFTs in various dimensions for which the input data is a finite group G and an element of a spin cobordism group of its classifying space BG. I will also describe some explicit examples of such TQFTs. ICTP ICTP [email protected]
Spin-TQFTs and cobordisms
Speaker(s): Pavel PUTROV, ICTP
Europe/Rome At low temperatures, the dynamics of glasses suffers a dramatic slowing down. The system becomes stuck in metastable configurations called traps, which are rarely abandoned, through a process called activation. The Trap Model (TM), which describes the motion between different traps, provides a simplified framework to understand activated dynamics. Even though signs of TM-like behavior were found in realistic systems, it is not clear (i) whether the dynamics of most models is the one predicted by the TM, (ii) to what extent the TM description applies to other glasses and (iii) which are the relevant features for this dynamics to be found. We show that the TM description does apply to other glassy models, such as the Random Energy Model [1], if one defines the traps dynamically, through the time series of the energy [2]. We then extend our analysis to systems with correlated energy levels, and see that the trap behavior holds as long as the correlations are weak [3]. Once the correlations are strong enough, as happens in a Hamiltonian system, it is unclear whether the dynamic behavior of the glass can be completely described through the simple picture provided by the TM.

Comparing Dynamics of Glasses with Deep Neural Networks: Time permitting, a second section on Deep Learning is presented. We study the dynamics and energy (or loss) landscape of feedforward Deep Neural Networks [4], with an emphasis on their Mean Square Displacement, and find that they exhibit aging on a finite time scale. Later, they start diffusing at the bottom of the landscape. Further, we argue that the slow dynamics is due to flat directions in the landscape rather than to barrier crossing, and we show evidence of an under- to over-parametrized phase transition.

References
[1] M. Baity-Jesi, G. Biroli & C. Cammarota, J. Stat. Mech. (2018) 013301.
[2] C. Cammarota & E. Marinari, Phys. Rev. E 92, 010301(R) (2015).
[3] M. Baity-Jesi, A. Achard-de Lustrac & G. Biroli, Phys. Rev. E 98, 012133 (2018).
[4] M. Baity-Jesi, L. Sagun, M. Geiger, S. Spigler, G. Ben-Arous, C. Cammarota, Y. LeCun, M. Wyart & G. Biroli, PMLR 80:324-333, 2018 (ICML 2018).
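The trap-model picture invoked in this abstract can be illustrated with a few lines of Python. The sketch below is a generic, assumed-parameter illustration of the standard Bouchaud-type trap model (exponentially distributed trap depths, activated escape times), not the specific construction used in references [1]-[4]: the temperature values, mean depth and step count are illustrative choices.

```python
import math
import random

def simulate_trap_model(n_steps, temperature, mean_depth=1.0, seed=0):
    """Bouchaud-style trap model with uncorrelated trap depths.

    At each step the walker draws a fresh trap depth E ~ Exp(mean = mean_depth)
    and waits an activated time tau = tau0 * exp(E / temperature) before hopping
    to the next trap.  Returns the cumulative times at which hops occur; their
    growth illustrates the heavy-tailed waiting times (and hence aging) when
    temperature < mean_depth, the glassy phase of the model.
    """
    rng = random.Random(seed)
    tau0 = 1.0
    times, t = [], 0.0
    for _ in range(n_steps):
        depth = rng.expovariate(1.0 / mean_depth)   # trap depth E
        t += tau0 * math.exp(depth / temperature)   # activated escape time
        times.append(t)
    return times

if __name__ == "__main__":
    for T in (2.0, 0.5):  # above and below the glass temperature T = mean_depth
        hops = simulate_trap_model(n_steps=10_000, temperature=T)
        print(f"T = {T}: total elapsed time after 10k hops = {hops[-1]:.3e}")
```

Below the glass temperature the total elapsed time is dominated by the deepest trap encountered so far, which is the mechanism behind the aging behaviour discussed in the abstract.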
Trap Models and their Connection to more Realistic Glasses
Room: Central Area, Second floor, old SISSA building
Speaker(s): Marco Baity-Jesi Columbia University, New York (NY), USA
Europe/Rome Prof. J. Dalibard heads an experimental research group at CNRS-ENS and Collège de France, Paris. Pioneering recent achievements include the first observation of a Kosterlitz-Thouless transition in 2D Bose gases and studies of vortex dynamics in rotating condensates. Early in his career, J. Dalibard was responsible for key theoretical ideas in laser cooling and trapping, including polarization gradient cooling and the Magneto-optical Trap (MOT), and thus provides in his present work the link between theoretical and experimental quantum optics. He has also pioneered theoretical methods to solve the dynamics of open quantum systems utilizing quantum trajectories. Among his honors we can find the Pascal Medal of the European Academy of Science (2009), the Max Born Medal of the American Optical Society (2012), the Davisson–Germer Prize of the American Physical Society (2012), the Senior BEC award (2017), and his fellowships in the European Academy of Science, the Optical Society of America and the American Philosophical Society. Abstract: The physics of many-body systems strongly depends on their dimensionality. For example, in a two-dimensional world, most standard phase transitions towards an ordered state of matter would not occur, because of the increased role of fluctuations. However non-conventional "topological" transitions can still take place, as understood initially by Kosterlitz and Thouless. During the last decade, a novel environment has been developed for the study of low-dimensional physics. It consists of cold atomic gases confined in tailor-made light traps, forming thus a thin layer of material particles. In this talk I will present some key aspects of these quantum 2D gases, such as their transition to a superfluid state and their (approximate) scale invariance. I will also discuss out-of-equilibrium features, like the nucleation of random currents when merging independent samples. The event will be livestreamed from the ICTP website. All are invited to attend. ICTP, Trieste, Italy ICTP [email protected]
ICTP Colloquium on "Topology in atomic Flatland"
Speaker(s): Prof. Jean Dalibard, CNRS-ENS and Collège de France, Paris
Special geometry for Calabi-Yau manifolds of Fermat type and Q-invariant Milnor rings
Speaker(s): Alexander BELAVIN (Russian Academy of Sciences, Moscow)
Europe/Rome Abstract. We present several results about solenoidal manifolds motivated by results of Dennis Sullivan in [1], with commentaries developed in [2], and by a joint project with Dennis Sullivan [3]. Solenoidal manifolds of dimension n are topological spaces which are locally homeomorphic to the product of a Cantor set with an open subset of R^n. Geometric 3-dimensional solenoidal manifolds are the analog of geometric 3-manifolds in the sense of Thurston. We will give some results related to 3-dimensional geometric solenoidal manifolds.
References
[1] D. Sullivan, Solenoidal manifolds, J. Singul. 9 (2014), 203–205.
[2] A. Verjovsky, Commentaries on the paper "Solenoidal manifolds" by Dennis Sullivan, J. Singul. 9 (2014), 245–251.
[3] D. Sullivan, A. Verjovsky, Compact 3-dimensional geometric solenoidal manifolds. In preparation.
Compact 3-dimensional geometric solenoidal manifolds
Room: Luigi Stasi Room
Speaker(s): Alberto Verjovsky (Instituto de Matemáticas, UNAM, Mexico)
Europe/Rome Conditions for applying in this School: Selected candidates will be expected to actively engage in the programme and present a 20' minute talk (with a time for discussion and questions included) describing own or own group's research summarizing important or recent results on wasteform, spent fuel or nuclear materials of relevance to immobilization studies. Participants are encouraged to set their work in the context of their own national waste management strategy and should first give a brief overview of the types of waste produced and managed in her/his country, and then a current status of waste conditioning and disposal practice (both by means of one or two summary slides). SCHOOL DESCRIPTION: Nuclear waste management is a core issue for sustainable development and long-term viability of nuclear energy as energy supply. The main goal of this school is the dissemination of knowledge on optimal methods of synthesis and study of crystalline and glass-crystalline wasteforms for the immobilization of actinides and other long-lived dangerous radionuclides. It aims on transferring experience of ceramic and glass-composite materials fabrication from leading experts to specialists interested in reliable immobilization of toxic nuclides. Directors: Michael I. Ojovan (IAEA), Boris E. Burakov (Radium Institute), Local Organiser: Antonello Scardicchio Description: The school will bring together researchers from the area of materials science with a focus on crystalline and vitreous materials for nuclear energy. The school will assist experts to better understand the wide range and full potential of material science applied to radioactive waste immobilisation and technology tools and methods devoted to immobilisation and properties of crystalline and glass-crystalline materials. Knowledge transfer will be facilitated between individuals from developed and developing countries, and can be used to develop further the internationally sponsored development of nuclear waste immobilisation using crystalline and glass-crystalline wasteforms. Participants should return from the school with a richer understanding of actinide immobilization technologies and the range of techniques to investigate actinide-containing materials. Topics: Fundamentals of actinide immobilization; Radiation damage effects in actinide-doped crystalline materials and glasses; Leach behaviour of actinide-doped ceramics and glasses; Advanced materials based on durable actinide host-phases; New types of Pu fuel and targets for actinide transmutation; Interaction of actinide wastes and geological environment; Modelling of actinide migration in geological environment; Actinide behaviour during severe nuclear accident (Chernobyl, Fukushima, etc.) and nuclear tests. ICTP ICTP [email protected]
10 Sep 2018 - 14 Sep 2018
Joint ICTP-IAEA International School on Nuclear Waste Actinide Immobilization | (smr 3237)
Organizer(s): Michael I. Ojovan (IAEA - Vienna, Austria), Boris E. Burakov (Radium Institute - S. Petersberg, Russian Federation), Local Organiser: Antonello Scardicchio
Cosponsor(s): International Atomic Energy Agency
Europe/Rome Abstract: We develop a new approach of the discriminant of a complete intersection curve in the 3-dimensional projective space. By relying on the resultant theory, we prove a new formula that allows us to define this discriminant without ambiguity and over any commutative ring, in particular in any characteristic. This formula also provides a new method for evaluating and computing this discriminant more efficiently, without the need to introduce new variables as with the well-known Cayley trick. Then, we derive new properties and we show that this new definition of the discriminant satisfies the expected geometric property and yields an effective smoothness criterion for complete intersection space curves. ICTP ICTP [email protected]
Discriminants of complete Intersection Varieties (joint work with Laurent Busé (INRIA, Nice -France))
Speaker(s): Ibrahim Nonkane (Université Ouaga II, IUFIC - Burkina Faso)
Europe/Rome Electrocatalytic conversion of biomass derived feedstocks offers a promising avenue for effective carbon recycling from renewable energy resources. To retain economic viability of this target technology, rational design of electrocatalysts with high activity and selectivity towards producing value-added chemicals and fuels is necessary. For improved conversion of biomass resources to fuels and fine chemicals, understanding and controlling the aqueous-phase catalytic hydrogenation of organic compounds on metals is crucial. Unlike gas-phase hydrogenation, the presence of water and the solid/liquid interface play critical roles in catalysis. Although there have been extensive studies in electrocatalysis, there exists a lack of mechanistic exploration and molecular-level understanding of electrocatalytic conversion of organic compounds specifically pertaining to biomass feedstocks. Moreover, these reactions occur at the solvated electrode-electrolyte interface where complex interactions between the electrode and solvent molecules have a critical influence on the reaction chemistry. In this talk, I will address the effect of the solvent and the charged metal electrode on the reaction pathways and their capacity to undergo reduction/hydrogenation. Results of molecular-scale structural/electronic properties near the electrochemical interface and the reaction energetics of target organic compounds obtained from density-functional-theory(DFT) based ab initio molecular dynamics (AIMD) simulations will be presented. The inferences drawn will be used to postulate design criteria for electrocatalytic conversion of organic compounds from an experimental and theoretical perspective. ICTP ICTP [email protected]
Condensed Matter and Statistical Physics: Electrocatalytic Conversion of Organic Compounds at Solid/Liquid Interface from Ab Initio Molecular Dynamics Simulation
Speaker(s): Mal-Soon LEE (Institute of Integrated Catalysis, PNNL, Richland, WA, U.S.A.)
Europe/Rome Abstract: We will explain some basic notions of hyperbolic geometry and its relations to complex geometry and other parts of mathematics. ICTP ICTP [email protected]
BASIC NOTIONS SEMINAR: The Poincaré disk and non-euclidean geometry
Europe/Rome Some photos taken at the workshop can be viewed here: https://www.flickr.com/photos/ictpimages/albums/72157698505871582 * * * DESCRIPTION: Technology plays a big role in sustainable development in all of its aspects: social, environmental, economic and scientific. The recent development of low cost technologies (electronic boards, sensors, 3D printing, etc.) can empower scientists and educators from developing countries in fostering scientific knowledge. In this workshop we have covered some of the low cost tools that can be used in universities to support courses in physics and engineering. We have presented concrete examples of projects that can be carried out during post-graduate courses. We have studied in depth one important case study, the LED Sun Photometer, which provides an ultra-low cost solution for education and science. TOPICS: • Sustainable Development Goals and the role of technology; • Open hardware and open software tools; • IoT4D: Internet of Things for Development; • 3D printing and the FabLab revolution; • The role of inexpensive technology in education (example: hand-made robot for kids, programmed in local language); • A case study, the LED Sun Photometer: from blueprinting to deployment in the field. * * * Secretariat: [email protected] ICTP ICTP [email protected]
Advanced Workshop on Technology for Sustainable Development: Low-Cost Tools to support Scientific Education | (smr 3238)
Organizer(s): Edoardo Milotti (University of Trieste Italy), Sandor Markon (Kobe Institute of Computing, Japan), Local Organiser: Marco Zennaro
Cosponsor(s): ICTP SciFabLab
Joint ICTP-IAEA School on Quality Assurance and Dose Management in Hybrid Imaging (SPECT/CT AND PET/CT) | (smr 3240)
Organizer(s): Jenia Vassileva (Radiation Protection of Patients Unit), Gian Luca POLI (Dosimetry and Medical Radiation Physics section, Division of Human Health, Department of Nuclear Applications, International Atomic Energy Agency), Local Organiser: Luciano Bertocchi
Cosponsor(s): International Atomic Energy Agency, European Federation of Organisations for Medical Physics, American Association of Physicists in Medicine, European Association of Nuclear Medicine
Europe/Rome Living systems differ from inanimate objects in their ability to unite basic laws of nature into chains and clusters leading to new stable and pervasive relations among physical variables and involving new parameters. They produce actions by modifying these parameters. Some of such "biological laws of nature" have been described including those underlying the control of voluntary movements. This approach allows exploring stability of action and perception, which looks next to miraculous given the continuously changing intrinsic states of the body and the changing environment. Stability of action and perception is viewed as the formation of a stable low-dimensional manifold, uncontrolled manifold for action and iso-perceptual manifold for perception, in high-dimensional spaces of elements. Over the past years, we have developed a method of uncontrolled manifold-based analysis of action stability in spaces of elemental performance variables and control variables for the effectors. These studies have shown, in particular, that humans are able to modify stability of steady states in preparation to action and that control of action stability is grossly impaired in certain neurological disorders. I will also review a couple of recent experimental studies testing some of the predictions of the scheme for stable perception. One of them explored force illusions induced by muscle vibration. The other study explored perception of elemental variables during a multi-element action. ICTP ICTP [email protected]
Towards physics of biological action and perception
Speaker(s): Mark L. Latash (The Pennsylvania State University, USA)
Europe/Rome Abstract. In 1998, Super-Kamiokande came up with the solution of long-standing atmospheric neutrino anomaly in terms of neutrino flavour oscillation providing an exclusive evidence for physics beyond the Standard Model. This year we are celebrating the 20th anniversary of discovery of atmospheric neutrino oscillation. In the first part of my talk, I will discuss the physics behind the atmospheric neutrino oscillation. Then, I will discuss the physics reach of currently running and upcoming atmospheric neutrino experiments. In the second part of my talk, I will present the status and prospects of the India-based Neutrino Observatory (INO) facility, a flagship mega-science project of our country to study atmospheric neutrinos. Towards the end of my talk, I will show some selected results from our recent studies on the physics capabilities of the INO experiment. ICTP ICTP [email protected]
Atmospheric Neutrino Oscillation: Celebrating 20 years of Super-K Discovery and Beyond
Speaker(s): Sanjib Kumar AGARWALLA (Institute of Physics, Bhubaneswar, India)
Europe/Rome We revisit the calculation of multi-interval modular Hamiltonians for free fermions using a Euclidean path integral approach. We show how the multi-interval modular flow is obtained by glueing together the single interval modular flows. Our methods are based on a derivation of the non-local field theory describing the reduced density matrix, and make manifest its non-local conformal symmetry and U(1) symmetry. We will show how the non-local conformal symmetry provides a simple calculation of the entanglement entropy. Time permitting, we will connect multi-interval modular flows to the framework of extended quantum field theory. SISSA, Via Bonomea 263, Room 128 ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: Glueing together Modular Flows with Free Fermions
Speaker(s): G. WONG (SISSA, Trieste)
Going beyond the horizon
Speaker(s): Erik P. VERLINDE (University of Amsterdam, The Netherlands)
Europe/Rome We study stability of the SYK4 model with a large but finite number of fermions N with respect to a perturbation, quadratic in fermionic operators. We develop analytic perturbation theory in the amplitude of the SYK2 perturbation and demonstrate stability of the SYK4 infra-red asymptotic behavior characterized by a Green function G(t) ∝ 1/t^{3/2}, with respect to weak perturbation. This result is supported by exact numerical diagonalization. Our results open the way to build a theory of non-Fermi-liquid states of strongly interacting fermions. A. V. Lunkin, K. S. Tikhonov, M. V. Feigel'man arxiv:1806.11211 ICTP ICTP [email protected]
ICTP Seminar Series in Condensed Matter and Statistical Physics 'SYK Model with Quadratic Perturbations: The Route to a Non-Fermi-Liquid'
Speaker(s): Mikhail V. FEIGELMAN (Landau Institute for Theoretical Physics, Russia)
Europe/Rome Abstract. The bootstrap has had remarkable success but it is not always straightforward to find novel fixed points. We consider what lessons may be learned by considering the epsilon expansion in scalar theories. ICTP ICTP [email protected]
Seeking Fixed Points in Scalar Theories
Speaker(s): Hugh OSBORN (DAMTP, Cambridge, UK)
--- Please note rescheduling and time change !! ---
Europe/Rome Abstract: We make systematic developments on Lawson-Osserman constructions relating to the Dirichlet problem (over unit disks) for minimal surfaces of high codimension in their 1977 Acta paper. In particular, we show the existence of boundary functions for which infinitely many analytic solutions and at least one nonsmooth Lipschitz solution exist simultaneously. This newly-discovered amusing phenomenon enriches the understanding of the Lawson-Osserman philosophy. ICTP ICTP [email protected]
Dirichlet boundary values on Euclidean balls with infinitely many solutions for the minimal surface system
Speaker(s): Yongsheng ZHANG (Tongji University, Shanghai)
PATH DEV Project - educational meetings for fishery professionals | (smr H567)
Organizer(s): Key Congressi Trieste,
Europe/Rome Recent experiments on magnets put inside optical and microwave cavities demonstrated a variety of interesting phenomena. After an extensive introduction, we will first concentrate on asymmetry of Stokes and anti-Stokes peaks in light transmission through a cavity and present a theory which explains this difference and also predicts only one peak in reflection [1]. After that, we will discuss cooling of magnons with light [2]. [1] S. Sharma, Ya. M. Blanter, and G. E. W. Bauer, Phys. Rev. B 96, 094412 (2017) [2] S. Sharma, Ya. M. Blanter, and G. E. W. Bauer, Phys. Rev. Lett. 121, 087205 (2018) ICTP ICTP [email protected]
ICTP Seminar Series in Condensed Matter and Statistical Physics: Light Scattering in Cavity Optomagnonics
Speaker(s): Yaroslav M. BLANTER (Kavli Institute of Nanoscience, Delft University of Technology, The Netherlands)
Europe/Rome Last Deadline to Apply: 3 June 2018 Ion beam techniques (IBT) have been extensively applied for material analysis and modification by using ions in the keV-GeV energy range. Advanced capabilities for single ion implantation and detection, enhanced by recent advances in ion beam material analysis and modification at the nano-scale can provide a prominent role for IBTs in the recently emerging field of quantum technologies (QT). New generation of devices are expected to be developed within the "Second Quantum Revolution", which is being supported through major strategic research initiatives worldwide e.g. in Australia, China, Europe, India, Japan, USA etc. This Advanced School will provide the latest technological developments to engineer new material properties with ion beams, with a specific focus on quantum technologies for PhD students and early career researchers (i.e. up to 7 years after PhD degree) actively involved in ion beam techniques and/or in the field of quantum technologies. Topics: Key topics in Quantum Technologies; Radiation Effects in Materials: Theory and Modelling; Novel keV-GeV Ion Beam Techniques for Quantum Technologies: Single Ion Implantation, High Resolution Ion Beam Lithography, Time-Resolved Analysis, etc.; Materials Engineering by Ion Beams for QT applications: Single Dopants in Semiconductors, Optically Active Defects in Wide-bandgap Semiconductors, etc.; "Project Development Lab" to develop skills on how to write and present a coherent scientific proposal in the field of "Ion Beams for QT". Call for Abstracts: In the application form, all applicants are invited to submit an abstract for a poster presentation within the scope of the School. It is highly desirable to get a letter of support if one is applying for a financial support. Keynote Lecturers: Andrew A. BETTIOL, National University of Singapore, Singapore Edward S BIELEJEC, Sandia National Laboratories, USA Rosario FAZIO (ICTP Trieste, Italy) Leonard C. FELDMAN, Rutgers University, USA Jacopo FORNERIS, INFN, Italy Jan MEIJER, Leipzig University, Germany Paolo OLIVERO, University of Turin, Italy Frank WILHELM-MAUCH, University of Saarbrucken, Germany ICTP ICTP [email protected]
Joint ICTP-IAEA Advanced School on Ion Beam Driven Materials Engineering: Accelerators for a New Technology Era | (smr 3236)
Organizer(s): Paolo Olivero (University of Turin, Italy), Aliz Simon (IAEA, Vienna, Austria), Local Organiser: Sandro Scandolo
Cosponsor(s): IAEA, Vienna
Inquiry-Based Science Education, an introduction ( CESAME) | (smr H559)
Address: Via Beirut,7 I - 34151 Trieste (Italy)
Room: SciFabLab (E.Fermi Building)
Organizer(s): Joe Niemela, Odile Macchi,
Cosponsor(s): ICTP SciFabLab, Associazione Nazionale degli Insegnanti di Scienze Naturali
Direction on how to reach the lab are here: http://scifablab.ictp.it/how-to-get-here/
Supports of the Hitchin fibration on the reduced locus
Speaker(s): Luca MIGLIORINI (University of Bologna, Italy)
Europe/Rome A special Maths colloquium entitled "On the functional equation of automorphic L-functions" will be given by Prof. Ngô Bảo Châu. Prof. Châu is the Francis and Rose Yuen Distinguished Service Professor at the Department of Mathematics of the University of Chicago, USA, and Scientific Director of VIASM, the Vietnam Institute for Advanced Study in Mathematics. Ngô Bảo Châu is a Vietnamese-French mathematician at the University of Chicago, best known for proving the fundamental lemma for automorphic forms proposed by Robert Langlands and Diana Shelstad. He is the first Vietnamese national to have received a Fields Medal, awarded in 2010 "for his proof of the fundamental lemma in the theory of automorphic forms through the introduction of new algebro-geometric methods". The other winners of the 2010 Fields Medal were: Cédric Villani, professor of mathematics at the ENS (École Normale Supérieure) in Lyon and director of the Institut Henri Poincaré (UPMC/CNRS), Elon Lindenstrauss, professor at the Hebrew University of Jerusalem, and Stanislav Smirnov of UNIGE (University of Geneva). The abstract of Prof. Châu's talk will follow shortly. The event will take place at 16.30 hrs, in the Euler Lecture Hall, ICTP. Light refreshments will be served afterwards. ICTP ICTP [email protected]
ICTP Maths Colloquium on "The functional equation of automorphic L-functions"
Speaker(s): Prof. Ngo Bao Chau, University of Chicago, USA; Scientific Director, VIASM, Vietnam Institute for Advanced Study in Mathematics
Europe/Rome In this seminar I will present results from time-dependent density functional theory (TDDFT) simulations in real time, where the coupling of the Kohn-Sham Hamiltonian with classical force-fields or with a Tight-Binding (TB) approach provide multiscale treatments to tackle large molecular or condensed phase systems. In particular, the integration of TDDFT with a TB Hamiltonian allows to model quantum transport across molecules immobilized between massive electrodes. On the other hand, hybrid QM-MM Ehrenfest dynamics are used to describe photoisomerization or photodissociation of organics in solution. ICTP ICTP [email protected]
ICTP Seminar Series in Condensed Matter and Statistical Physics: Transport and Photochemistry from Multiscale Quantum Dynamics Simulations
Speaker(s): Damian A. SCHERLIS PEREL (Univ. Buenos Aires, Quimica Inorganica Analitica y Quimica Fisica, Buenos Aires, Argentina)
Europe/Rome We report a new mechanism to generate dissipative steady state entanglement in coupled qubits driven by strong periodic fields. We demonstrate that steady entanglement can be induced and tuned by changing the amplitude of the driving field. A rich dynamic behavior with {\it creation, death and revival} of entanglement can be observed near multiphoton resonances. Coupled superconducting qubits are good candidates for the observation of these effects. Please see paper in arXiv:1806.09047 ICTP ICTP [email protected]
Condensed Matter and Statistical Physics Seminar: Amplitude Tuning of Steady State Entanglement in Strongly Driven Coupled Qubits
Speaker(s): Daniel DOMINGUEZ (CAB CNEA, San Carlos de Bariloche, Rio Negro, Argentina)
Europe/Rome The school is a certificate course providing a unique international education aimed at building future leadership to manage nuclear energy programmes. It is intended for promising young professionals from the nuclear industry and academia, including those from developing countries. Description: Jointly organized by ICTP and the IAEA since 2010, this Nuclear Energy Management (NEM) School focuses on broadening the young professional's understanding of current issues in the nuclear industry, generating awareness on recent developments in nuclear energy and sharing international perspectives on issues related to the peaceful use of nuclear technology. Over 30 experts from the IAEA and the nuclear industry deliver more than 70 presentations and case studies. Technical visits to nuclear facilities form an essential practical element of the curriculum and students are asked to work on special projects relevant to the development of nuclear energy programmes. Participants will be selected based on an online pre- training course and a test. Topics: • World Energy Balance, Geopolitics and Climate Issues; • Energy Planning, Energy Economics and Nuclear Power Economics and Finance; • Nuclear Power Technology and Life Cycle; • Nuclear Safety and Security; • Nuclear Law, International Conventions and Relevant Mechanisms; • Nuclear Non-Proliferation and Safeguards; • Human Resource Development and Knowledge Management; • NuclearLeadership,Management and Sociology; • Emergency Planning and Preparedness; • Radioactive Waste Management and Decommissioning; • Communicating Radiation Risks to the Public; • IAEA support for Nuclear Power ICTP ICTP [email protected]
8 Oct 2018 - 19 Oct 2018
Joint ICTP-IAEA School of Nuclear Energy Management | (smr 3241)
School Executive Director: M. Chudakov (IAEA)
School Co-ordinators: D. Drury (IAEA), A. Ganesan (IAEA)
Local Organizer: C. Tuniz (ICTP)
ICTP Seminar Series in Condensed Matter and Statistical Physics: Landau-Zener-Stueckelberg-Majorana Interference
Speaker(s): Sigmund KOHLER (Instituto de Ciencia de Materiales de Madrid CSIR, Spain)
To B or not to B: Primordial magnetic fields from Weyl anomaly
Speaker(s): Takeshi KOBAYASHI (ICTP)
Europe/Rome The workshop introduces young and established nuclear scientists to the evaluation of nuclear structure and decay data, by providing an overview of experimental and theoretical nuclear techniques and basic training in the evaluation procedures and formats involved in the production of the Evaluated Nuclear Structure Data File (ENSDF). Reliable evaluated nuclear structure and decay data are of vital importance for basic nuclear physics and astrophysics, as well as for nuclear applications such as power generation, material analysis, dosimetry and medical diagnostics. These important data requirements are catered by the international network of Nuclear Structure and Decay Data (NSDD) Evaluators, created in 1974 and coordinated by the IAEA. The main output of this network is the recommended ENSDF database and evaluations published in Nuclear Data Sheets. This workshop belongs to a series of ICTP workshops that started in 2003 and have been crucial for attracting young nuclear scientists to nuclear structure and decay data evaluation and for providing them with the basic tools to pursue this activity. TOPICS: modern nuclear experimental techniques & facilities; nuclear structure models; nuclear structure and decay data: compilations and evaluations; evaluation methodologies for nuclear structure and decay data; analysis and utility codes; ENSDF editors and Web tools; databases and online retrieval software. CALL FOR PAPERS A session will be held during the workshop for participants to present their work related to nuclear structure and decay data. Please upload a one-page abstract directly to the on-line application. (upload file attachments in pdf) GRANTS A limited number of grants are available to support the attendance of selected participants, with priority given to participants from developing countries. There is no registration fee. Applicants should hold at least a Master's Degree (M.Sc.) in Nuclear Physics, and possess a few years of professional experience related to nuclear structure and decay data. ICTP ICTP [email protected]
Joint ICTP-IAEA Workshop on Nuclear Structure and Decay Data: Theory, Experiment and Evaluation | (smr 3242)
Organizer(s): Paraskevi Dimitriou (Nuclear Data Section, IAEA), Elisabeth A. McCutchan (Brookhaven National Laboratory, USA), Local Organiser: Claudio Tuniz
Europe/Rome Director: Teketel YOHANNES, AASTU, Addis Ababa, Ethiopia ICTP Contacts: Ralph GEBAUER & Nicola SERIANI, ICTP, Trieste, Italy The Workshop addresses students and researchers working in the field of solar energy and energy storage. The event will focus on fundamental and applied research on functional materials, on devices and on their application. To cope with increasing global energy demand, solar energy is one of the best alternatives. The feasibility of solar energy technology in Africa, where the sun is available throughout the year, is not questionable. This huge energy can be used as photo-thermal and in photovoltaic forms. For efficient conversion of solar energy it is very important to have a clear understanding of the photo-physical and opto-electrical properties of materials. In parallel with the development of photovoltaic devices, due attention has to be given to the storage of electrical energy in easily portable chemical energy forms and/or in an electrical field. Since solar energy is collected only in the day time, it is also wise to develop alternative technologies such as storage for renewable energy forms. Please note that funds from the Joint Undertaking for an African Materials Institute (JUAMI) are also available for US students. TOPICS: Current photovoltaic systems, new technologies; Promise of nanostructures and technology in this field; Wind and geothermal energy: potential and installations in Ethiopia; Energy storage: fuel cells, batteries and supercapacitors; Importance of materials science, experimental and computational approach; Renewable energy potentials in the region. WORKSHOP SPEAKERS: Michael ARMAND, Minano, Spain Vladimir DYAKONOV, Wuerzburg, Germany Gebrekldan G. ESHETU, Aachen, Germany Sosina M. HAILE, Northwestern, U.S.A. Harold HOPPE, Jena, Germany Kenneth I. OZOEMENA, Johannesburg, South Africa Markus C. SCHARBER, Linz, Austria Veronika WITTMANN, Linz, Austria Addis Ababa - Ethiopia ICTP [email protected]
@ Addis Ababa - Ethiopia
Ethiopian Regional Workshop on Solar Energy and Energy Storage Technologies: Materials, System Design, and Applications | (smr 3149)
Organizer(s): Teketel Yohannes (AASTU, Addis Ababa, Ethiopia), ICTP Scientific Contact: Ralph Gebauer, Nicola Seriani
Cosponsor(s): Ministry of Water & Energy, Addis Ababa Science & Technology University, Ministry of Science & Technology, Joint Undertaking for an African Materials Institute
Europe/Rome I will discuss walking behavior in gauge theories and weakly first order phase transition in statistical models. Despite being phenomena appearing in very different physical systems, they both show a region of approximate scale invariance. They can be understood as a theory passing between two fixed points living at complex couplings, which we call complex CFTs. By using conformal perturbation theory, knowing the conformal data of the complex CFTs allows us to make predictions on the observables of the walking theory. As an example, I will discuss the two dimensional Q-state Potts model with Q>4. SISSA, Via Bonomea 265, room 004 ICTP [email protected]
Joint ICTP/SISSA Statistical Physics Seminar: Walking, Weakly First Order Phase Transitions and Complex CFTs
Speaker(s): B. ZAN (EPFL Lausanne, Switzerland)
Europe/Rome Slides of talks presented are being uploaded -- see link "Slides" at foot (last update: 17.06.2019) Some photos taken at the Francis Allotey tribute meeting in Kigali can be viewed here: https://www.flickr.com/photos/ictpimages/albums/72157702697977105 * * * The Workshop was successfully held on 17th and 19th October 2018 in Kigali, Rwanda as part of the festivities for the official opening on the 18th of October 2018 of the East African Institute of Fundamental Research (EAIFR), an ICTP partner Institute in Africa. The activity began with a tribute to Prof. Francis Allotey who passed away last year. Focussed on discussions about all the great science that is currently going on in the African continent, promoting networking amongst African scientists and setting out a roadmap for the future of science in general in Africa, the Workshop counted 41 attendees (organizers, speakers, and selected participants). Final List of Speakers Amna Abdalla Mohammed Khalid John Akindayo Adedoyin George Amolo Rondrotiana Barimalala Paul Buah-Bassuah Nithaya Chetty Mmantasae Moche Diale Kedro Diomande Moses Jojo Eghan Jean Paul Faye Garu Gebreyesus Assia Harbi Estelle Inack Stephane Kenmoe Rasha M. Khafagy Lucy Kiruri Timoleon Crepin Kofane Bernard M'Passi Mabiala Malik Maaza Liliana Mammino Sekazi Mtingwa Joe Niemela Lawrence Norris Solofoarisoa Rakotoniaina Bonfils Safari Mohammed Semlali Charles Tabod Mourad Telmini Malik Maaza Ahmadou Wague Amanda Weltman Hisham Widatallah Related links: ICTP news at: https://www.ictp.it/about-ictp/media-centre/news/2018/10/eaifr.aspx EAIFR webpage: https://eaifr.ictp.it/ Kigali - Rwanda ICTP [email protected]
@ Kigali - Rwanda
Reviving the African Physical Society - A Tribute to Professor Francis Allotey | (smr 3372)
Organizer(s): Garu Gabreyesus (Legon, Ghana), Omololu Akin-Ojo (EAIFR, Rwanda), Ahmadou Wague (UCAD), Sandro Scandolo (ICTP), ICTP Scientific Contact: Ali Hassanali
Europe/Rome KINDLY NOTE CHANGED TIME - from 14:00 to 15:30 PART I In this tutorial I will describe matrix product state (MPS) ansatz and its generalization to operators, and show how it can be used to calculate unitary or dissipative quantum evolution. Such ansatz is, in many situations, rather efficient and allows one to study large many-body systems. I will explain how to write states in the canonical MPS form and how to perform transformations, preserving optimal finite-rank description. The tutorial will include blackboard exercises as well as hands-on numerical examples and will give sufficient knowledge to produce your own code. ICTP ICTP [email protected]
Condensed Matter & Statistical Physics SPECIAL SERIES OF TUTORIALS: Evolving Quantum States with Matrix Product Ansatz - Part I
Speaker(s): Marko ZNIDARIC (Dept. of Physics, University of Ljubljana, Slovenia)
A Swampland Update
Europe/Rome We shall discuss one of the main problems in Computer Vision: how to reconstruct a three-dimensional shape starting from an apparent contour, namely from a suitable graph with cusps and transversal crossings, lying on a projection plane. We shall discuss some related topological problems and, if possible, the role of the program Appcontour. ICTP ICTP [email protected]
Apparent contours and reconstruction of solid shapes
Speaker(s): Giovanni Bellettini (ICTP)
Europe/Rome PART II In this tutorial I will describe the matrix product state (MPS) ansatz and its generalization to operators, and show how it can be used to calculate unitary or dissipative quantum evolution. Such an ansatz is, in many situations, rather efficient and allows one to study large many-body systems. I will explain how to write states in the canonical MPS form and how to perform transformations, preserving optimal finite-rank description. The tutorial will include blackboard exercises as well as hands-on numerical examples and will give sufficient knowledge to produce your own code. ICTP ICTP [email protected]
Condensed Matter & Statistical Physics SPECIAL SERIES OF TUTORIALS: Evolving Quantum States with Matrix Product Ansatz - Part II
Europe/Rome A Global Community Space Apps is an international hackathon that occurs over 48 hours in cities around the world. Because of citizens like you, we continue to grow each year. If you haven't already, join us to share ideas and engage with open data to address real-world problems, on Earth and in space. Work alone or with a team to solve challenges that could help change the world. The Trieste event is organized and hosted by the Abdus Salam International Centre for Theoretical Physics (ICTP) with the Patronage of the municipality Comune di Trieste and Universtiy of Trieste, and the sponsorship of the United States Consulate of Milan. The World Needs Your Ideas Part of the Open Government Partnership, Space Apps is an annual event that pulls citizens together regardless of their background or skill level. Don't let the name fool you... it's not just about apps! Tackle a challenge using robotics, data visualization, hardware, design and many other specialties! Inspire each other while you learn and create using stories, code, design and, most of all, YOUR ideas. Show us your problem-solving skills and share your talents with the world! Find out more at https://2018.spaceappschallenge.org/locations/trieste/ Con il patrocinio di: / With patronage of: Comune di Trieste Università degli Studi di Trieste ICTP ICTP [email protected]
Space-Apps Challenge | (smr 3341)
Organizer(s): Local Organisers: Enrique Canessa, Carlo Fonda
Cosponsor(s): USA Consulate Milan, ICTP SciFabLab, NASA Space Apps Challenge, proESOF, EuroScience, Comune di Trieste, Università degli Studi di Trieste
Joint ICTP-IAEA Advanced School on Quality Assurance and Dosimetry in Mammography | (smr 3248)
Organizer(s): Harry Delis (IAEA), Local Organiser: Renato Padovani
Europe/Rome Regional climate models (RCMs) have become important tools to study climate variability and change at regional scales. They can also be used to provide climate change information for impact and adaptation studies. Within the context of RCM research, the Coordinated Regional climate Downscaling EXperiment (CORDEX) provides the basis for generating multi-model RCM-based projections over regions Worldwide. The community of users of the ICTP regional model RegCM is one of the most active in contributing to the CORDEX program and in involving the scientific community from developing countries in climate change research. With the aim of enlarging and strengthening the RegCM user community, the ICTP organizes a series of training workshops in different regions of the World. This workshop represents the second of a series of RegCM training activities in Southeast Asia, and includes the following lectures: * Climate and climate change over the South-east Asia region * Regional climate modeling and the physical processes that regulate regional climate variability and change * Regional modeling for the South-east Asia region * Hands-on sessions on the structure and use of the RegCM model (latest release RegCM4). * Discussion of the contribution of the RegCM community to the CORDEX initiative * Discussion of future developments of the RegCM system Hanoi - Viet Nam ICTP [email protected]
@ Hanoi - Viet Nam
Second Training Workshop on Regional Climate Modeling for Southeast Asia | (smr 3166)
Address: VNU University of Science
Organizer(s): Phan Van Tan (Vietnam National University, Hanoi, Vietnam), ICTP Scientific Contact: Filippo Giorgi
Cosponsor(s): VNU Asia Research Center, VNU Hanoi University of Science, Vietnam Institute of Meteorology, Hydrology and Climate Change (IMHEN)
Europe/Rome The course programme is designed to cover a range of topics of direct relevance to the science of physical, chemical and radiological phenomena specific to progression of severe accidents in WCRs including an overview of the associated technologies designed to cope with such events. Director: Matthias KRAUSE, IAEA, Vienna, Austria Local Organizer: Fred KUCHARSKI, ICTP, Trieste, Italy The course will build a complete understanding of the science underpinning the complex phenomena associated with the progression of severe accidents in WCRs. Knowledge transfer will be facilitated between the international experts as lecturers, and young professonals and engineers, as participants through discussions and hands-on learning with the goal to gain a comprehensive understanding about science of the physical, chemical and radiological phenomena specific to severe accidents in WCRs, advancement in scientific methods, approaches and simulation tools, fundamentals on various interrelated scientific phenomena associated with in-vessel and ex-vessel phases of severe accident progression, and the role of technologies required to control and prevent progression of such accidents in WCRs, including mitigation of the resulting severe consequences. TOPICS: Introduction Physics of Water Cooled Power Reactors; Nuclear Safety of Water Cooled Power Reactors; Defence in Depth and Design Basis Accidents in Water Cooled Power Reactors; Progression of Fukushima Daiichi Accident and its Consquences. Phenomenology of Scientific Phenomena in Propagation of Severe Accidents Classification of Severe Accidents Phenomena into Three Levels of Scientific Knowledge: High, Medium and Low; Nuclear Fuel Degradation; Relocation of Melted Fuel; In-Vessel Melt Retention; Reactor Vessel Failure Mechanisms; Ex-Vessel Corium Cooling; Early-Phase Containment Failure; Late-Phase Containment Failure; Physics and Chemistry of Source Term; Fission Products Behaviour and Transport; Hydrogen Generation, Transport and Explosion; Numerical Simulations of Severe Accident Phenomena. ICTP ICTP [email protected]
Joint ICTP-IAEA 1st Course on Scientific Novelties in Phenomenology of Severe Accidents in Water-Cooled Reactors | (smr 3247)
Organizer(s): Matthias Krause (Nuclear Power Technology Development Section, Division of Nuclear Power, Department of Nuclear Energy, International Atomic Energy Agency), Local Organiser: Fred Kucharski
Europe/Rome Directors: N. CHETTY, University of Pretoria, South Africa R. MARTIN, Stanford University, U.S.A. S. NARASIMHAN, JNCASR Bangalore, India S. SCANDOLO, ICTP, Trieste, Italy N. SERIANI, ICTP, Trieste, Italy Local Organizer: T. YOHANNES, AASTU, Addis Ababa, Ethiopia ICTP contacts: S. SCANDOLO, ICTP, Trieste, Italy N. SERIANI, ICTP, Trieste, Italy The Schools provide an introdution to the theory of electronic structure and other atomistic simulation methods, with an emphasis on the computational methods for practical calculations. The African School series on Electronic Structure Methods and Applications is planned on a biennial basis from 2010 to 2020. Previous schools were held in Cape Town, South Africa (2010), Eldoret, Kenya (2012), Johannesburg, South Africa (2015) and Accra, Ghana (2016). The School will also cover basic and advanced topics and applications of these methods to the structural, mechanical and optical properties of materials. The School will include hands-on tutorial sessions based on public license codes (including, but not limited to, the Quantum Espresso package). During the second week, students will be asked to split up in teams and work on specific projects under the guidance of the lecturers and mentors. Please note that funds from the Joint Undertaking for an African Materials Institute (JUAMI) are also available for US students. Invited Speakers: O. AKIN-OJO, ICTP-EAIFR, Rwanda G. AMOLO, TU Kenya, Nairobi M. CASIDA, Universite' Grenoble Alpes, Saint-Martin-d'Heres, France M. GATTI, ETSF, LSI - CNRS, Palaiseau, France G. GEBREYESUS, University of Ghana E. LUIJTEN, Northwestern McCormick Sch. of Eng., Evanston, U.S.A. D. MAGERO, University of Eldoret, Kenya A. MARINI, ISM, Rome, Italy R. MAEZONO, JAIS, Ishikawa, Japan S. NARASIMHAN, JNCASR Bangalore, India Y. SHAIDU, SISSA, Trieste, Italy Addis Ababa - ICTP [email protected]
@ Addis Ababa - Ethiopia
5th African School on Electronic Structure Methods and Applications (ASESMA-2018) | (smr 3234)
Organizer(s): N. Chetty (University of Pretoria, South Africa), R. Martin (Stanford University, USA), S. Narasimhan (JNCASR, Bangalore, India), S. Scandolo (ICTP, Trieste, Italy), N. Seriani (ICTP, Trieste, Italy), T. Yohannes (AASTU, Addis Ababa, Ethiopia), ICTP Scientific Contact: S. Scandolo, N. Seriani
Cosponsor(s): Addis Ababa Science & Technology University, Ministry of Science & Technology, International Union of Pure & Applied Physics, US Liaison Committee for IUPAP, Swiss National Center for Computational Design and Discovery of Novel Materials (NCCR MARVEL), Quantum Espresso Foundation, Joint Undertaking for an African Materials Institute, National Academy of Sciences
Modular linear differential equations
Speaker(s): Kiyokazu NAGATOMO (Osaka University) | CommonCrawl |
Mathematics and Youth Magazine Problems - March 2004, Issue 321
Write the number $2003^{2004}$ as a sum of positive integers. What is the remainder of the division by $3$ of the sum of the cubes of these integers?
Simplify the expression $$\frac{(a-2)(a-1002)}{a(a-b)(a-c)}+\frac{(b-2)(b-1002)}{b(b-a)(b-c)}+\frac{(c-2)(c-1002)}{c(c-a)(c-b)}$$ where $a, b, c$ are distinct numbers such that $a b c \neq 0$.
Let $a, b, c, d$ be positive numbers. Prove that
a) $\displaystyle \frac{a}{b}+\frac{b}{c}+\frac{c}{a} \geq \frac{a+b+c}{\sqrt[3]{a b c}}$.
b) $\displaystyle \frac{a^{2}}{b^{2}}+\frac{b^{2}}{c^{2}}+\frac{c^{2}}{d^{2}}+\frac{d^{2}}{a^{2}} \geq \frac{a+b+c+d}{\sqrt[4]{a b c d}}$.
Find a necessary and sufficient condition on the number $m$ so that the following system of equations has a unique solution $$\begin{cases} x^{2} &=(2+m) y^{3}-3 y^{2}+m y \\ y^{2} &=(2+m) z^{3}-3 z^{2}+m z \\ z^{2} &=(2+m) x^{3}-3 x^{2}+m x\end{cases}$$
Let $A B C D$ be a trapezoid inscribed in a circle with radius $R=3cm$ such that $B C=2 cm$, $A D=4cm$. Let $M$ be the point on side $A B$ such that $M B=3 M A$. Let $N$ be the midpoint of $C D$. The line $M N$ cuts $A C$ at $P$. Calculate the area of the quadrilateral $A P N D$.
Let be given three positive integers $m$, $n, p$ such that $n+1$ is divisible by $m$. Find a formula to calculate the number of $p$-uples of positive integers $\left(x_{1}, x_{2}, \ldots, x_{p}\right)$ satisfying the conditions: the sum $x_{1}+x_{2}+\ldots+x_{p}$ is divisible by $m$ and $x_{1}, x_{2}, \ldots, x_{p}$ are not greater than $n$.
$a$, $b$ are arbitrary positive numbers such that the equation $x^{3}-a x^{2}+b x-a=0$ has three roots greater than $1$. Determine $a$, $b$ so that the expression $\dfrac{b^{n}-3^{n}}{a^{n}}$ attains its least value and find this value.
The incircle of triangle $A B C$ touches the sides $B C$, $C A$, $A B$ respectively at $D$, $E$, $F$. Prove that $$\frac{D E}{\sqrt{B C \cdot C A}}+\frac{E F}{\sqrt{C A \cdot A B}}+\frac{F D}{\sqrt{A B \cdot B C}} \leq \frac{3}{2}.$$
The Idea Carnival
A collection of brief, entertaining expositions
Sino-students | Mid-course Incorrection | Bystander | Vertical | Horizontal | Wildfire | Extinction | Literally | Market Prediction | Rich and Poor | Watts Towers | Calculation | Divide and Conquer | Semmelweis | MRSA | First Programmer | Clever Insects | Pando | Note | Footnotes
My articles normally expend many words on few topics. This one expends few words on many topics. I often find subjects interesting that don't need lengthy treatments, and it hasn't escaped my attention that the Internet is gradually abandoning words altogether. I don't plan to go to that extreme, but I've always respected efficient word use and I'm sure many readers will accept — perhaps even celebrate — brief expositions. Here we go ...
Sino-students
There are more people learning English in China than there are English speakers in the U.S.1
Mid-course Incorrection
On September 15, 1999, the NASA Deep Space Network radioed a mid-course correction burn to the Mars Climate Orbiter spacecraft, then approaching the Red Planet. A few days later, instead of skipping off the Martian atmosphere in an aerobraking maneuver as planned, the spacecraft plowed into the densest part of Mars' atmosphere and disintegrated. Later investigation revealed that the spacecraft required its burn instructions to be expressed in metric units (newton-seconds), but because of a mixup, English units (pound-seconds) were transmitted, which resulted in the spacecraft's destruction. The total cost for the mission was US$655.2 million.2
Bystander
A few minutes after President John Kennedy was shot in Dallas, Texas, a policeman ran into the building from which the shots were fired and accosted a man inside the building. Because he was able to prove he was an employee of the Texas School Book Depository, the officer let the man go. Thus did Lee Harvey Oswald walk away from the crime of the century.3
Vertical
The towers of the Golden Gate Bridge in San Francisco are 1,280.2 meters (4,200 feet) apart and 227.4 meters (746 feet) high. Here are some facts about the bridge:
The towers are perfectly vertical. If a weighted plumb line were to be placed alongside the centerline of the towers, it would align itself precisely with the towers from top to bottom.
The towers aren't vertically aligned with each other — the horizontal separation between the towers' centerlines is 4.6 centimeters (1.8 inches) greater at the top than at the bottom.
Both these statements are true. The reason both are true simultaneously is that the earth's surface is curved, consequently the two perfectly vertical towers' horizontal separation produces a small difference in their angles with respect to the center of the earth (click here for a diagram).4
Figure 1: Bridge tower diagram with greatly exaggerated proportions.
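For readers who want to check the arithmetic, here is a short Python sketch. It assumes a perfectly spherical Earth with a mean radius of about 6,371 km, which is an approximation, but good enough to reproduce the quoted figure:

```python
# Extra separation at the tops of two plumb-vertical towers on a curved Earth.
# Assumes a spherical Earth; the radius below is an approximate mean value.
EARTH_RADIUS_M = 6_371_000   # meters
separation_m = 1280.2        # distance between tower centerlines at their bases
height_m = 227.4             # tower height

# Each tower's centerline points away from Earth's center, so the small angle
# between the two centerlines is (separation / radius); at height h the
# separation grows by that angle times h.
extra_cm = separation_m * height_m / EARTH_RADIUS_M * 100
print(f"Extra separation at the top: {extra_cm:.1f} cm")   # prints about 4.6
```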
Horizontal
Here's another geometry fact: For a sailor sitting on a boat, six feet above the water, the horizon is three miles away5. In many places on earth's oceans, this means the bottom of the ocean is farther away than the horizon. If our imagined sailor were traveling over the Mariana Trench in the western Pacific ocean, the ocean's bottom would be more than twice as far away as the horizon. Click here for a diagram.
Figure 2: Horizon distance diagram.
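The three-mile figure comes from the usual small-angle approximation for the distance to the geometric horizon, d ≈ √(2Rh), where R is Earth's radius and h the observer's eye height. Here is a minimal sketch; it ignores atmospheric refraction and uses approximate values for Earth's radius and the Mariana Trench depth:

```python
import math

EARTH_RADIUS_M = 6_371_000   # approximate mean Earth radius

def horizon_distance_m(eye_height_m: float) -> float:
    """Distance to the geometric horizon, ignoring atmospheric refraction."""
    return math.sqrt(2.0 * EARTH_RADIUS_M * eye_height_m)

d = horizon_distance_m(6 * 0.3048)                 # six feet, in meters
print(f"{d / 1609.344:.1f} miles")                 # about 3.0 miles

trench_depth_m = 10_900                            # Mariana Trench, approximate
print(f"{trench_depth_m / d:.1f}x the horizon distance")   # a bit over 2x
```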
Wildfire
On August 5, 1949, a team of 15 smokejumpers parachuted into a fire in Montana's Mann Gulch. Then the wind grew stronger and the fire became what's called a "blow up", a fire able to spread faster than a man can run. The firefighters realized the fire was approaching faster than they could escape through the unburned brush. Thinking quickly, Wagner Dodge, the foreman and most experienced person, took out a match and started a downwind fire along a likely escape path. Dodge tried to encourage the others to follow him into the wake of his escape fire, but the other men, not understanding his plan and beginning to panic, broke and ran in all directions to escape the flames. Dodge's downwind fire quickly burned itself out, providing a safe escape path ... for Wagner Dodge, one of just three men who survived the infamous Mann Gulch Fire.6
Extinction
70,000 years ago a gigantic volcanic supereruption known as the Toba event7 took place in what is now Indonesia. The eruption was so massive that it triggered a volcanic winter, akin to a nuclear winter8, that nearly wiped out the human race. Modern genetic analysis9 shows that the eruption and its aftermath reduced total human numbers to between 3,000 and 10,000 individuals. If the lower number is correct, this means there are 2.3 million times more people alive now. It also means there are more people living in a typical city block than existed on the entire planet at that time.
Literally
We should remember that dictionaries don't tell us what words mean, they tell us what people think words mean. This is why the words "literally" and "figuratively" mean the same thing.10
Market Prediction
Let's say John Doe wants to make money investing in stocks. Through the mechanism of puts and calls11, John can make money whether the market rises or falls. But John needs a strategy, a way to decide whether the market will rise or fall in the future. Is such a strategy possible?
There are stories about very successful investors, people who seem to have a winning strategy to make money. But a theory called the Efficient Market Hypothesis (EMH)12 says that such a strategy isn't possible, that the market's workings are too unpredictable to allow a consistent winning strategy. (No one knows whether the EMH applies to real markets.)
Certainly some people make much better than average investment returns, but why? A scientist examining these outcomes would use the null hypothesis13 as her starting point — the assumption that an outcome results from chance, not design. Assuming the null hypothesis, we can compute the probability that the picks of successful investors can be explained by the random workings of nature or by flipping a coin in a back room. Here are some facts about the role of chance in investing:
The probability that an investor will correctly guess the market's direction (up or down) once is 1/2 or 50%.
The probability for two sequential correct guesses is 1/4 or 25%. This probability is expressed mathematically as $p = 2^{-2}$.
Now we can write a rule: the probability that $n$ sequential correct binary (up/down) guesses arose from chance is $p = 2^{-n}$.
Let's say we hear of an investor who correctly guessed the market's direction twenty times in an unbroken sequence and made a fortune. Isn't that proof of genius? Well, maybe, but chance must always be taken into account. In this case, the probability that this outcome arose from chance is $2^{-20}$ or 1/1,048,576.
Many people might think the above outcome, with its very small chance probability, proves genius, but it overlooks something — it's the outcome for just one person, and there are millions of investors trying to guess the market's direction.
What is the probability that, in a population of a million investors, one of them will make twenty sequential correct predictions by chance alone? It's 61%14, which means when we hear an investment success story, the most likely explanation is chance, not genius.
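The 61% figure is easy to verify. The sketch below assumes the null hypothesis (every call is a fair coin flip) and that the million investors guess independently; both are simplifying assumptions:

```python
# Probability that at least one of a million independent investors makes
# twenty correct up/down calls in a row purely by chance.
p_one = 0.5 ** 20                        # one investor, 20 correct calls
p_none = (1.0 - p_one) ** 1_000_000      # no investor in a million manages it
print(f"single investor: 1 in {1 / p_one:,.0f}")          # 1 in 1,048,576
print(f"at least one of a million: {1.0 - p_none:.0%}")    # about 61%
```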
Next question: In a population of a million investors, will that one lucky investor assume his success resulted from chance, or will he write a useless investment book titled "Secrets of the Winners"?
Rich and Poor
According to an old saying, the rich get richer and the poor get poorer. It may be a folk saying, but there's a force at work in society that, if unchecked, can make it inevitable that the rich get richer and the poor get poorer. That force is called "compound interest"15. In compound interest, a periodic interest amount is (for an investment) added to, or (for a loan) subtracted from, the balance in an account. This compounding effect causes the account's balance to greatly increase or decrease over time, in a way that many people find surprising.
Here's an example. At age 20, a young man inherits $20,000 and decides to put the money in an investment that earns 12% per annum. His plan is to maintain the investment until he's 60 and retires. He also wants to withdraw a small amount of money weekly, but he wants to choose an amount that won't erode the account's value. Here are the details so far:
Initial balance: $20,000.00
Annual interest return: 12%, compounded weekly
Weekly withdrawal: $46.15
As it turns out, the chosen weekly withdrawal amount prevents the account from either increasing or decreasing in value over time — after 40 years, the account still has a balance of $20,000.00. But consider these alternatives:
If the withdrawal amount is increased by only 39 cents (less than 1%), to $46.54, the balance after 40 years will decline to zero.
If the withdrawal amount is decreased by 38 cents to $45.77, the balance after 40 years will double to $40,000.00.
If the man decides to forgo weekly withdrawals altogether, the balance after 40 years will grow to over 2.4 million dollars.
Click here for a graph of these outcomes.
Figure 3: Effect of weekly withdrawal amounts on final balance.
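A week-by-week simulation reproduces these outcomes. This is only a sketch of the arithmetic behind the graph; the small differences from the figures quoted above come from rounding the withdrawals to whole cents:

```python
def final_balance(weekly_withdrawal, principal=20_000.0, annual_rate=0.12, years=40):
    """Balance after 'years' of weekly compounding and weekly withdrawals."""
    balance = principal
    weekly_rate = annual_rate / 52
    for _ in range(52 * years):
        balance += balance * weekly_rate   # credit this week's interest
        balance -= weekly_withdrawal       # then take the withdrawal
    return balance

for w in (46.15, 46.54, 45.77, 0.00):
    print(f"${w:5.2f}/week -> ${final_balance(w):>13,.2f} after 40 years")
```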
This section is only meant to describe a mathematical property of compound interest, not become a political discussion — I'll leave that to others. All this treatment shows is that compound interest produces instability and has the effect of turning a homogeneous society into a pyramid with increasing numbers of poor people at the bottom and a small handful of very rich people at the top. Is that a problem? Should something be done about it? Let the voters decide.
Watts Towers
Simon Rodia was an Italian immigrant construction worker who in his spare time built large sculptures out of found materials — construction rebar, steel pipes, bed frames, pottery, glass — anything he could find. From 1921 to 1954 Rodia added to his collection of sculptures, then, annoyed by his hostile neighbors, he moved away never to return.
The sculptures he left behind produced a mixture of responses, some admiring, some hostile. Eventually the City of Los Angeles deemed the structures hazardous — after all, they were built out of found scraps by a single person without any engineering training — so the city condemned them and ordered them razed.
Before serious demolition got underway, a preliminary test was performed to see what kind of equipment would be needed to tear down these makeshift structures. A heavy construction crane was brought up to the site and its steel grappling cable attached to one of the towers. The crane was powered up and the cable grew taut. Onlookers closely watched to see if the towers moved under the crane's force, but before any movement could be seen and according to contemporary accounts, the crane "experienced a mechanical failure", the test was called off, and the city abandoned its plan to demolish the towers.
Today the towers, now called the "Watts Towers of Simon Rodia State Historic Park", are maintained by the city and county of Los Angeles. The towers are listed in the National Register of Historic Places and were designated a National Historic Landmark in 1990.16
Simon, wherever you are, my hat's off to you.
Calculation
In 1873, English amateur mathematician William Shanks set out to compute the numerical value of $\pi$, using Machin's formula17:
$ \displaystyle \pi = 16 \, \tan^{-1}\left( \frac{1}{5} \right) - 4 \, \tan^{-1}\left(\frac{1}{239}\right)$
After computing 707 decimal places Shanks stopped, having decided he had produced enough resolution for any earthly purpose. His extraordinary result stood for many decades until the advent of mechanical calculation in the 1940s, at which point another mathematician discovered Shanks' result was in error past the 527th digit. On this basis, one could say Shanks' result was 74.5% right.
A recent (2011) computer calculation produced ten trillion ($10^{13}$) digits of $\pi$, but now that computer power and leisure time are both ubiquitous, any such record is fated to be short-lived.
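Machin's formula is still a pleasant way to reproduce a few dozen digits. The sketch below uses nothing but plain integer arithmetic (no external libraries); the extra guard digits absorb the truncation error of the two arctangent series:

```python
def arctan_inv(x, one):
    """arctan(1/x) scaled by 'one', summed with integer arithmetic (Gregory series)."""
    term = one // x
    total = term
    divisor, subtract = 3, True
    while term:
        term //= x * x
        if subtract:
            total -= term // divisor
        else:
            total += term // divisor
        subtract = not subtract
        divisor += 2
    return total

def machin_pi(digits):
    """pi = 16*arctan(1/5) - 4*arctan(1/239), truncated to 'digits' decimal places."""
    guard = 10                              # extra digits to absorb truncation error
    one = 10 ** (digits + guard)
    pi = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
    s = str(pi // 10 ** guard)
    return s[0] + "." + s[1:]

print(machin_pi(50))
# 3.14159265358979323846264338327950288419716939937510
```

The printed digits can be checked against any published table of $\pi$; the series for 1/5 converges at roughly 1.4 digits per term, which is why Machin-style formulas were the hand-calculator's method of choice for two centuries.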
Because $\pi$ is irrational (inexpressible as the ratio of two integers) its decimal sequence is infinite and never repeats, and because its digits appear to be uniformly distributed (the property that defines a normal number)18, one often hears the claim that if $\pi$'s digit sequence were to be expressed in base 26 (i.e. alphabetic letters), every work of literature that has ever been written, or will ever be written, would appear somewhere within the sequence. If $\pi$ is indeed normal the claim is true, but there's the daunting problem of locating those particular sequences.19
Divide and Conquer
In 1996, for its role as a test bed for the U.S. Navy's "Smart Ship" program, the 9,800-ton displacement cruiser U.S.S. Yorktown (DDG-48/CG-48) was equipped with 27 Pentium Pro-based computers running Windows NT 4.0 and communicating over a fiber-optic cable network. The purpose of Smart Ship was to rely on computer automation to reduce the size of the crew required to operate a capital ship.
On September 21, 1997, while Yorktown was on maneuvers off the coast of Cape Charles, Virginia, a crewmember entered a zero into a database field that required a nonzero value. The crewmember's entry caused a divide-by-zero database error, which caused a workstation failure, which caused a network failure, which caused the Yorktown's propulsion system to fail. The Yorktown had to be towed back to port.20
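The lesson is less about any particular operating system than about validating input at the boundary. Here is a tiny illustrative sketch; it is not the Navy's actual software, and the function and field names are invented:

```python
# An unvalidated zero in one field can propagate a ZeroDivisionError through
# every calculation downstream; rejecting it at entry keeps the failure local.
def average_speed_knots(distance_nm: float, elapsed_hours: float) -> float:
    if elapsed_hours <= 0:
        raise ValueError("elapsed_hours must be a positive, nonzero value")
    return distance_nm / elapsed_hours
```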
Semmelweis
In 1847, Viennese physician Ignaz Semmelweis discovered that the rate at which women died in childbirth was 10%-35% in doctors' wards, three times higher than the rate in midwives' wards. The only real difference between the environments was that the midwives washed their hands between clinical procedures, and the doctors did not. Semmelweis proposed that doctors wash their hands between procedures using a chlorinated lime solution, a suggestion known to be very effective, reducing death rates below 1%, but one that doctors rejected as insulting and beneath them.
In 1865 Semmelweis suffered a mental breakdown and was committed to an asylum, where he was severely beaten by the guards and died two weeks later at age 47.21
After his death Semmelweis' theories were vindicated by the work of Louis Pasteur22, Joseph Lister23 and others, who established a scientific basis for germ theory and designed antiseptic procedures and treatments ... including making doctors wash their hands.
MRSA
When antibiotics saw their first practical application in the 1940s, many believed this marked the end for a large class of diseases including tuberculosis, bubonic plague and others, as well as being an effective treatment for everyday but life-threatening infections. But as time passed, bacterial agents began to show resistance to antibiotics, which resulted in a cycle of introducing replacement drugs, then watching resistant strains develop that required newer drugs.
The reason antibiotics become ineffective over time is provided by the theory of evolution by natural selection. When a new antibiotic is first applied it wipes out nearly all the bacteria, except a very small minority that happen by chance to be resistant to the drug. Those few survivors eventually become the entire bacterial population, which requires application of a newer drug.
A particularly severe example of this gradual evolution of resistance, one that may prove to be a harbinger for the future of all antibiotic treatments, is Methicillin-resistant Staphylococcus aureus (MRSA)24, an infectious organism that first appeared in hospital wards in Western Europe and Australia in the early to mid-1960s. From its status as a laboratory curiosity in the 1960s, MRSA has managed to spread across the world and develop resistance to all antibiotics applied to it. It has evolved into a particularly dangerous iatrogenic (hospital-borne) infection that killed about 19,000 U.S. patients in 2007. The present situation is now so serious that the danger from MRSA in a hospital ward is often greater than the diseases for which people seek hospital admission.
But MRSA is only one example of the general decline in the effectiveness of antibiotics. Tuberculosis is another example of a disease with a long history, that at one time seemed to be fully controlled by antibiotics, but that now has forms that are essentially untreatable. Many other diseases that were thought to have been defeated by antibiotics are gradually reappearing in resistant forms. The problem of antibiotic resistance is now so serious and apparently insoluble that some are proclaiming the end of the antibiotic era.25
First Programmer
Augusta Ada King, Countess of Lovelace, daughter of Lord Byron, was born in 1815 and was from an early age encouraged by her mother to develop her mathematical and logical skills. She was then educated by a number of very talented men and women and became a skilled mathematician. As a young woman Ada befriended and began a working relationship with mathematician Charles Babbage, at the time working on a mechanical computer named the Analytical Engine. As the project unfolded, Ada's extensive notes on the Engine became a blueprint for imagined machines far beyond the technical possibilities of the time. Ada's notes include what is now recognized as the first computer algorithm, a method for computing a sequence of Bernoulli numbers which modern analysis shows would have worked, had the Analytical Engine actually been built.
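Just for flavor, here is a modern sketch in the spirit of that algorithm. It is not a transcription of Ada's Note G, and her sign and indexing conventions differ; it simply computes the same sequence exactly, using the standard recurrence and exact fractions:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """First n+1 Bernoulli numbers, from sum_{j=0..m} C(m+1, j) * B_j = 0 with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```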
Ada's extensive notes, on the Analytical Engine and on algorithms for it, were republished in 1953, at the beginning of the modern computer era. Ada's notes show that she had a deep understanding of a computer's possibilities, far beyond that of her contemporaries. In her notes, Ada says:
[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine...
Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.
In 1852 Ada died of uterine cancer at the age of 36. In 1980 the computer language Ada, created on behalf of the United States Department of Defense, was named after Ada Lovelace.26
Clever Insects
Entomologists have known for some time that certain species of periodical cicada (genus Magicicada) come out of hiding only rarely — every 13 or 17 years (13 years in the southern U.S., 17 years in the north). The cicadas suddenly appear, reproduce in prodigious swarms, then disappear. But what's special about 13 and 17? A new theory has it that evolution by natural selection chooses these intervals because they mathematically minimize the chance of interaction between cicadas and their predators.
How so? It turns out that 13 and 17 are prime numbers — numbers that can only be divided by themselves and 1. This means prime-numbered cicada reproduction cycles minimize the chance for interaction with predator reproduction cycles. Contrast this with a cicada cycle of 12 years — that would invite attacks by predators having reproduction cycles of 1, 2, 3, 4, 6 and 12 years. A cicada cycle of 16 years would invite attacks by predators with cycles of 1, 2, 4, 8 and 16 years. It turns out that, if the point is to evade predators who also reproduce periodically, a prime numbered cycle is optimal.
When this theory is modeled by computer27, it shows a clear advantage for prime-numbered reproduction cycles. This behavior shows the workings of natural selection, which (to oversimplify) blindly tries everything until something works. It also shows a role for mathematics in nature.
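To make the divisor argument concrete, here is a minimal Python sketch of the kind of model described above. It is an illustration only, not the simulation cited in the footnote; the 400-year horizon and the 2- to 12-year predator cycles are arbitrary assumptions.

# Count how often a brood emerging every `prey_cycle` years coincides with
# predators that emerge every 2-12 years, over a 400-year horizon.
def coincidences(prey_cycle, predator_cycles=range(2, 13), horizon=400):
    prey_years = set(range(prey_cycle, horizon, prey_cycle))
    total = 0
    for p in predator_cycles:
        predator_years = set(range(p, horizon, p))
        total += len(prey_years & predator_years)
    return total

for cycle in (12, 13, 14, 15, 16, 17, 18):
    print(cycle, coincidences(cycle))

Running the sketch, the prime cycles 13 and 17 overlap with the assumed predator cycles far less often than the composite cycles 12, 14, 15, 16 and 18, which is the intuition behind the theory.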
Pando
Also known as the Trembling Giant, Pando is a clonal colony of a single male quaking aspen (Populus tremuloides) joined by an underground root system. Genetic testing reveals that Pando is a single organism extending over 43 hectares (106 acres) and weighing approximately 6 million kilograms (13.2 million pounds), which makes it the heaviest known organism.
Pando's subsurface root system, which has survived any number of surface fires that killed the trees above, is estimated to be 80,000 years old, making it among the oldest living organisms. At any given time the colony has over 40,000 stems (trees) extending from its root system. An individual stem lives an average of about 130 years before being replaced by another.
Pando is located at the western edge of the Colorado Plateau in south-central Utah, at latitude 38.525° N, longitude 111.75° W.28
Figure 4: Pando in the fall.
Please visit again to see future additions to the Carnival.
1 A quarter of Chinese study English: official — "More than 300 million Chinese people, or nearly a quarter of the country's population, have studied English either as a major course or as an elective subject ..."
2 Mars Climate Orbiter (Wikipedia) — "The discrepancy between calculated and measured position, resulting in the discrepancy between desired and actual orbit insertion altitude, had been noticed earlier by at least two navigators, whose concerns were dismissed."
3 "Killing Kennedy", Bill O'Reilly & Martin Dugard, Henry Holt & Co. 2012, p. 265: "Dallas motorcycle officer Marrion L. Baker has raced into the building and up the stairs. He stops Oswald at gunpoint on the second floor but then lets him go when it becomes clear that Lee Harvey is a TBSD employee."
4 Golden Gate Bridge (Wikipedia)
5 Distance to the Horizon (Wikipedia)
6 Mann Gulch fire (Wikipedia) — "With the fire less than a hundred yards behind he took a match out and set fire to the grass just before them. In doing so he [Dodge] was attempting to create an escape fire to lie in so that the main fire would burn around him and his crew. In the back draft of the main fire the grass fire set burned straight up toward the ridge above. Turning to the three men by him [...] Dodge said 'Up this way', but the men misunderstood him."
7 Toba catastrophe theory (Wikipedia) — A scientific theory about a massive volcanic supereruption 70,000 years ago.
8 Nuclear Winter (Wikipedia) — A theory about events, natural or man-made, that can trigger long-term catastrophic cooling of Earth's climate.
9 Genetic Bottleneck Theory (Wikipedia) — A theory about catastrophes that can greatly reduce species numbers, with the Toba event as an example.
10 Literally (Merriam-Webster) — (1): in a literal sense or manner : actually <took the remark literally> <was literally insane>, (2): in effect : virtually <will literally turn the world upside down to combat cruelty or injustice — Norman Cousins>
11 Options: The Basics (Motley Fool) — How to make money in stocks whether the market is rising or falling.
12 Efficient-market hypothesis (Wikipedia) — a theory that, if true, would prevent a consistently winning investment strategy. (No one knows whether it's true.)
13 Null Hypothesis (Wikipedia) — a scientist's default precept, that a given experimental outcome resulted from chance, not design.
14 We can use the Binomial Theorem (Wikipedia) to compute that a million binary trials, each with a probability of $2^{-20}$, have a 61% chance of one or more successes.
15 Compound Interest (Wikipedia) — a property of certain bank accounts/loans that produce an exponential increase or decrease in the balance over time.
16 Watts Towers (Wikipedia) — a somewhat strange, but very strongly built, example of what is called "vernacular architecture" in Los Angeles.
17 Machin's formula (Wikipedia) — a formula for computing the numerical value of $\pi$.
18 Normal number (Wikipedia) — a number whose digit sequence is uniformly distributed, i.e. each digit 0-9 has probability $p = 10^{-1}$ of appearing, each pair of digits has probability $p = 10^{-2}$ of appearing, and so forth.
19 $\pi$ (Wikipedia) — The ratio of a circle's circumference to its diameter.
20 USS Yorktown (CG-48) (Wikipedia) — a U.S. Navy ship that had to be towed back to port as the result of an unhandled computer divide-by-zero error.
21 Ignaz Semmelweis (Wikipedia) — a misunderstood critic in the field of hospital antisepsis.
22 Louis Pasteur (Wikipedia) — a pioneer in the field of microbiology and germ theory, inventor of several early vaccines.
23 Joseph Lister (Wikipedia) — a pioneer in the field of antiseptic surgery who built on Pasteur's work.
24 Methicillin-resistant Staphylococcus aureus (Wikipedia) — a dangerous iatrogenic infection that has become a very serious public health problem.
25 Expert: 'The end of antibiotics, period' (UPI) — CDC official announces the end of the antibiotic joyride.
26 Ada Lovelace (Wikipedia) — daughter of Lord Byron, mathematician, now recognized as the first computer programmer.
27 Mathematical Locusts — insects that use a mathematical strategy to avoid predators.
28 Pando (Wikipedia) — A very large clonal colony of a single male quaking aspen.
The flanker task is designed to tax cognitive control by requiring subjects to respond based on the identity of a target stimulus (H or S) and not the more numerous and visually salient stimuli that flank the target (as in a display such as HHHSHHH). Servan-Schreiber, Carter, Bruno, and Cohen (1998) administered the flanker task to subjects on placebo and d-AMP. They found an overall speeding of responses but, more importantly, an increase in accuracy that was disproportionate for the incongruent conditions, that is, the conditions in which the target and flankers did not match and cognitive control was needed.
What if you could simply take a pill that would instantly make you more intelligent? One that would enhance your cognitive capabilities including attention, memory, focus, motivation and other higher executive functions? If you have ever seen the movie Limitless, you have an idea of what this would look like—albeit the exaggerated Hollywood version. The movie may be fictional but the reality may not be too far behind.
Despite decades of study, a full picture has yet to emerge of the cognitive effects of the classic psychostimulants and modafinil. Part of the problem is that getting rats, or indeed students, to do puzzles in laboratories may not be a reliable guide to the drugs' effects in the wider world. Drugs have complicated effects on individuals living complicated lives. Determining that methylphenidate enhances cognition in rats by acting on their prefrontal cortex doesn't tell you the potential impact that its effects on mood or motivation may have on human cognition.
…Phenethylamine is intrinsically a stimulant, although it doesn't last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent.
Adderall is an amphetamine, used as a drug to help focus and concentration in people with ADHD and to promote wakefulness in people with narcolepsy. Adderall increases levels of dopamine and norepinephrine in the brain, along with a few other chemicals and neurotransmitters. It's used off-label as a study drug because, as mentioned, it is believed to increase focus and concentration, improve cognition and help users stay awake. Note that side effects are possible.
A fundamental aspect of human evolution has been the drive to augment our capabilities. The neocortex is the neural seat of abstract and higher-order cognitive processes. As it grew, so did our ability to create. The invention of tools and weapons, writing, the steam engine, and the computer have exponentially increased our capacity to influence and understand the world around us. These advances are being driven by improved higher-order cognitive processing.1 Fascinatingly, the practice of modulating our biology through naturally occurring flora predated all of the above discoveries. Indeed, Sumerian clay slabs as old as 5000 BC detail medicinal recipes which include over 250 plants2. The enhancement of human cognition through natural compounds followed, as people discovered plants containing caffeine, theanine, and other cognition-enhancing, or nootropic, agents.
However, normally when you hear the term nootropic kicked around, people really mean a "cognitive enhancer" — something that does benefit thinking in some way (improved memory, faster speed-of-processing, increased concentration, or a combination of these, etc.), but might not meet the more rigorous definition above. "Smart drugs" is another largely-interchangeable term.
And as before, around 9 AM I began to feel the peculiar feeling that I was mentally able and apathetic (in a sort of aboulia way); so I decided to try what helped last time, a short nap. But this time, though I took a full hour, I slept not a wink and my Zeo recorded only 2 transient episodes of light sleep! A back-handed sort of proof of alertness, I suppose. I didn't bother trying again. The rest of the day was mediocre, and I wound up spending much of it on chores and whatnot out of my control. Mentally, I felt better past 3 PM.
As professionals and aging baby boomers alike become more interested in enhancing their own brain power (either to achieve more in a workday or to stave off cognitive decline), a huge market has sprung up for nonprescription nootropic supplements. These products don't convince Sahakian: "As a clinician scientist, I am interested in evidence-based cognitive enhancement," she says. "Many companies produce supplements, but few, if any, have double-blind, placebo-controlled studies to show that these supplements are cognitive enhancers." Plus, supplements aren't regulated by the U.S. Food and Drug Administration (FDA), so consumers don't have that assurance as to exactly what they are getting.
The hormone testosterone (Examine.com; FDA adverse events) needs no introduction. This is one of the scariest substances I have considered using: it affects so many bodily systems in so many ways that it seems almost impossible to come up with a net summary, either positive or negative. With testosterone, the problem is not the usual nootropics problem that there is a lack of human research; the problem is that the summary constitutes a textbook - or two. That said, the 2011 review The role of testosterone in social interaction (excerpts) gives me the impression that testosterone does indeed play into risk-taking, motivation, and social status-seeking; some useful links and a representative anecdote:
"Smart Drugs" are chemical substances that enhance cognition and memory or facilitate learning. However, within this general umbrella of "things you can eat that make you smarter," there are many variations as far as methods of action within the body, perceptible (and measurable) effects, potential for use and abuse, and the spillover impact on the body's non-cognitive processes.
Starting from the studies in my meta-analysis, we can try to estimate an upper bound on how big any effect would be, if it actually existed. One of the most promising null results, Southon et al 1994, turns out to be not very informative: if we punch in the number of kids, we find that they needed a large effect size (d=0.81) before they could see anything:
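The calculation itself is not reproduced in this excerpt, so the following is only a stand-in sketch of how such a minimum-detectable-effect estimate can be made, using the normal approximation for a two-sample comparison; the per-group sample size of 25 is a made-up placeholder, not the actual n of Southon et al 1994.

# Minimum effect size detectable at 80% power and alpha = 0.05 for a
# two-group comparison with n subjects per group (normal approximation).
import math
from scipy.stats import norm

n = 25                                   # hypothetical per-group sample size
alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
detectable_d = z * math.sqrt(2.0 / n)
print(round(detectable_d, 2))            # ~0.79: only a large effect is visible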
Noopept is a nootropic that belongs to the ampakine family. It is known for promoting learning, boosting mood, and improving logical thinking. It has been popular as a study drug for a long time but has recently become a popular supplement for improving vision. Users report seeing colors more brightly and feeling as if their vision is more vivid after taking noopept.
Somewhat ironically given the stereotypes, while I was in college I dabbled very little in nootropics, sticking to melatonin and tea. Since then I have come to find nootropics useful, and intellectually interesting: they shed light on issues in philosophy of biology & evolution, argue against naive psychological dualism and for materialism, offer cases in point on the history of technology & civilization or recent psychology theories about addiction & willpower, challenge our understanding of the validity of statistics and psychology - where they don't offer nifty little problems in statistics and economics themselves, and are excellent fodder for the young Quantified Self movement4; modafinil itself demonstrates the little-known fact that sleep has no accepted evolutionary explanation. (The hard drugs also have more ramifications than one might expect: how can one understand the history of Southeast Asia and the Vietnamese War without reference to heroin, or more contemporaneously, how can one understand the lasting appeal of the Taliban in Afghanistan and the unpopularity & corruption of the central government without reference to the Taliban's frequent anti-drug campaigns or the drug-funded warlords of the Northern Alliance?)
Spaced repetition at midnight: 3.68. (Graphing preceding and following days: ▅▄▆▆▁▅▆▃▆▄█ ▄ ▂▄▄▅) DNB starting 12:55 AM: 30/34/41. Transcribed Sawaragi 2005, then took a walk. DNB starting 6:45 AM: 45/44/33. Decided to take a nap and then take half the armodafinil on awakening, before breakfast. I wound up oversleeping until noon (4:28); since it was so late, I took only half the armodafinil sublingually. I spent the afternoon learning how to do value of information calculations, and then carefully working through 8 or 9 examples for my various pages, which I published on Lesswrong. That was a useful little project. DNB starting 12:09 AM: 30/38/48. (To graph the preceding day and this night: ▇▂█▆▅▃▃▇▇▇▁▂▄ ▅▅▁▁▃▆) Nights: 9:13; 7:24; 9:13; 8:20; 8:31.
Vinh Ngo, a San Francisco family practice doctor who specializes in hormone therapy, has become familiar with piracetam and other nootropics through a changing patient base. His office is located in the heart of the city's tech boom and he is increasingly sought out by young, male tech workers who tell him they are interested in cognitive enhancement.
A provisional conclusion about the effects of stimulants on learning is that they do help with the consolidation of declarative learning, with effect sizes varying widely from small to large depending on the task and individual study. Indeed, as a practical matter, stimulants may be more helpful than many of the laboratory tasks indicate, given the apparent dependence of enhancement on length of delay before testing. Although, as a matter of convenience, experimenters tend to test memory for learned material soon after the learning, this method has not generally demonstrated stimulant-enhanced learning. However, when longer periods intervene between learning and test, a more robust enhancement effect can be seen. Note that the persistence of the enhancement effect well past the time of drug action implies that state-dependent learning is not responsible. In general, long-term effects on learning are of greater practical value to people. Even students cramming for exams need to retain information for more than an hour or two. We therefore conclude that stimulant medication does enhance learning in ways that may be useful in the real world.
Enhanced learning was also observed in two studies that involved multiple repeated encoding opportunities. Camp-Bruno and Herting (1994) found MPH enhanced summed recall in the Buschke Selective Reminding Test (Buschke, 1973; Buschke & Fuld, 1974) when 1-hr and 2-hr delays were combined, although individually only the 2-hr delay approached significance. In contrast, de Wit, Enggasser, and Richards (2002) found no effect of d-AMP on the Hopkins Verbal Learning Test (Brandt, 1991) after a 25-min delay. Willett (1962) tested rote learning of nonsense syllables with repeated presentations, and his results indicate that d-AMP decreased the number of trials needed to reach criterion.
As Sulbutiamine crosses the blood-brain barrier very easily, it has a positive effect on the cholinergic and the glutamatergic receptors that are responsible for essential activities impacting memory, concentration, and mood. The compound is also fat-soluble, which means it circulates rapidly and widely throughout the body and the brain. For these reasons, it has been suggested as potentially helpful for patients with schizophrenia and Parkinson's disease.
If stimulants truly enhance cognition but do so to only a small degree, this raises the question of whether small effects are of practical use in the real world. Under some circumstances, the answer would undoubtedly be yes. Success in academic and occupational competitions often hinges on the difference between being at the top or merely near the top. A scholarship or a promotion that can go to only one person will not benefit the runner-up at all. Hence, even a small edge in the competition can be important.
If you want to make sure that whatever you're taking is safe, search for nootropics that have been backed by clinical trials and that have been around long enough for any potential warning signs about that specific nootropic to begin surfacing. There are supplements and nootropics that have been tested in a clinical setting, so there are options out there.
It is at the top of the supplement snake oil list thanks to tons of correlations; for a review, see Luchtman & Song 2013, but some specifics include Teenage Boys Who Eat Fish At Least Once A Week Achieve Higher Intelligence Scores, anti-inflammatory properties (see Fish Oil: What the Prescriber Needs to Know on arthritis), and others - Fish oil can head off first psychotic episodes (study; Seth Roberts commentary), Fish Oil May Fight Breast Cancer, Fatty Fish May Cut Prostate Cancer Risk & Walnuts slow prostate cancer, Benefits of omega-3 fatty acids tally up, Serum Phospholipid Docosahexaenoic Acid Is Associated with Cognitive Functioning during Middle Adulthood, and endless anecdotes.
We included studies of the effects of these drugs on cognitive processes including learning, memory, and a variety of executive functions, including working memory and cognitive control. These studies are listed in Table 2, along with each study's sample size, gender, age and tasks administered. Given our focus on cognition enhancement, we excluded studies whose measures were confined to perceptual or motor abilities. Studies of attention are included when the term attention refers to an executive function but not when it refers to the kind of perceptual process taxed by, for example, visual search or dichotic listening or when it refers to a simple vigilance task. Vigilance may affect cognitive performance, especially under conditions of fatigue or boredom, but a more vigilant person is not generally thought of as a smarter person, and therefore, vigilance is outside of the focus of the present review. The search and selection process is summarized in Figure 2.
Aniracetam is known as one of the smart pills with the widest array of uses, from benefits for dementia patients and memory boosts in adults with healthy brains to the promotion of recovery from brain damage. It also improves the quality of sleep, which in turn supports focus during the day. Because it supports the production of dopamine and serotonin, it elevates mood and helps fight depression and anxiety.
"In 183 pages, Cavin Balaster's new book, How to Feed A Brain provides an outline and plan for how to maximize one's brain performance. The "Citation Notes" provide all the scientific and academic documentation for further understanding. The "Additional Resources and Tips" listing takes you to Cavin's website for more detail than could be covered in 183 pages. Cavin came to this knowledge through the need to recover from a severe traumatic brain injury and he did not keep his lessons learned to himself. This book is enlightening for anyone with a brain. We all want to function optimally, even to take exams, stay dynamic, and make positive contributions to our communities. Bravo Cavin for sharing your lessons learned!"
Supplements, medications, and coffee certainly might play a role in keeping our brains running smoothly at work or when we're trying to remember where we left our keys. But the long-term effects of basic lifestyle practices can't be ignored. "For good brain health across the life span, you should keep your brain active," Sahakian says. "There is good evidence for 'use it or lose it.'" She suggests brain-training apps to improve memory, as well as physical exercise. "You should ensure you have a healthy diet and not overeat. It is also important to have good-quality sleep. Finally, having a good work-life balance is important for well-being."
Fish oil (Examine.com, buyer's guide) provides benefits relating to general mood (eg. inflammation & anxiety; see later on anxiety) and anti-schizophrenia; it is one of the better supplements one can take. (The known risks are a higher rate of prostate cancer and internal bleeding, but are outweighed by the cardiac benefits - assuming those benefits exist, anyway, which may not be true.) The benefits of omega acids are well-researched.
Accordingly, we searched the literature for studies in which MPH or d-AMP was administered orally to nonelderly adults in a placebo-controlled design. Some of the studies compared the effects of multiple drugs, in which case we report only the results of stimulant–placebo comparisons; some of the studies compared the effects of stimulants on a patient group and on normal control subjects, in which case we report only the results for control subjects. The studies varied in many other ways, including the types of tasks used, the specific drug used, the way in which dosage was determined (fixed dose or weight-dependent dose), sample size, and subject characteristics (e.g., age, college sample or not, gender). Our approach to the classic splitting versus lumping dilemma has been to take a moderate lumping approach. We group studies according to the general type of cognitive process studied and, within that grouping, the type of task. The drug and dose are reported, as well as sample characteristics, but in the absence of pronounced effects of these factors, we do not attempt to make generalizations about them.
The fish oil can be considered a free sunk cost: I would take it in the absence of an experiment. The empty pill capsules could be used for something else, so we'll put the 500 at $5. Filling 500 capsules with fish and olive oil will be messy and take an hour. Taking them regularly can be added to my habitual morning routine for vitamin D and the lithium experiment, so that is close to free, but we'll call it an hour over the 250 days. Recording mood/productivity is also a free sunk cost as it's necessary for the other experiments; but recording dual n-back scores is more expensive: each round is ~2 minutes and one wants >=5, so each block will cost >10 minutes, so 18 tests will be >180 minutes or >3 hours. So >5 hours. Total: 5 + (>5 × 7.25) = >41.
This calculation - reaping only $\frac{7}{9}$ of the naive expectation - gives one pause. How serious is the sleep rebound? In another article, I point to a mouse study suggesting that sleep deficits can take 28 days to repay. What if the gain from modafinil is entirely wiped out by repayment and all it did was defer sleep? Would that render modafinil a waste of money? Perhaps. Thinking on it, I believe deferring sleep is of some value, but I cannot decide whether it is a net profit.
However, when I didn't stack it with Choline, I would get what users call "racetam headaches." Choline, as Patel explains, is not a true nootropic, but it's still a pro-cognitive compound that many take with other nootropics in a stack. It's an essential nutrient that humans need for functions like memory and muscle control, but we can't produce it, and many Americans don't get enough of it. The headaches I got weren't terribly painful, but they were uncomfortable enough that I stopped taking Piracetam on its own. Even without the headache, though, I didn't really like the level of focus Piracetam gave me. I didn't feel present when I used it, even when I tried to mix in caffeine and L-theanine. And while it seemed like I could focus and do my work faster, I was making more small mistakes in my writing, like skipping words. Essentially, it felt like my brain was moving faster than I could.
11:30 AM. By 2:30 PM, my hunger is quite strong and I don't feel especially focused - it's difficult to get through the tab-explosion of the morning, although one particularly stupid poster on the DNB ML makes me feel irritated like I might on Adderall. I initially figure the probability at perhaps 60% for Adderall, but when I wake up at 2 AM and am completely unable to get back to sleep, eventually racking up a Zeo score of 73 (compared to the usual 100s), there's no doubt in my mind (95%) that the pill was Adderall. And it was the last Adderall pill indeed.
NGF may sound intriguing, but the price is a dealbreaker: at suggested doses of 1-100μg (NGF dosing in humans for benefits is, shall we say, not an exact science), and a cost from sketchy suppliers of $1210 per 100μg, $470 per 500μg, $750 per 1000μg, $1000 per 1000μg, $1030 per 1000μg, or $235 per 20μg. (Levi-Montalcini was presumably able to divert some of her lab's production.) A year's supply then would be comically expensive: at the lowest doses of 1-10μg using the cheapest sellers (for something one is dumping into one's eyes?), it could cost anywhere up to $10,000.
Most of the most solid fish oil results seem to meliorate the effects of age; in my 20s, I'm not sure they are worth the cost. But I would probably resume fish oil in my 30s or 40s when aging really becomes a concern. So the experiment at most will result in discontinuing for a decade. At $X a year, that's a net present value of sum $ map (\n -> 70 / (1 + 0.05)^n) [1..10] = $540.5.
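For readers who do not parse Haskell, the one-liner above is simply a ten-year discounted sum; written out (the 70 and the 5% rate are taken directly from the expression, not assumed independently):

$\sum_{n=1}^{10} \frac{70}{(1+0.05)^n} \approx 540.5$

That is, roughly $540 of present value over the decade.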
However, they fell short in several categories. The key issue with their product is that it does not contain DHA Omega 3 and the other essential vitamins and nutrients needed to support the absorption of Huperzine A and Phosphatidylserine. Without DHA Omega 3, it lacks an essential piece for maximum effectiveness. This means that you would need to take a separate pill of DHA Omega 3 and several other essential vitamins to ensure you are able to reach optimal memory support. They are also still far less effective than our #1 pick's complete array of the 3 essential brain-supporting ingredients and over 30 supporting nutrients, making their product less effective.
Barbara Sahakian, a neuroscientist at Cambridge University, doesn't dismiss the possibility of nootropics to enhance cognitive function in healthy people. She would like to see society think about what might be considered acceptable use and where it draws the line – for example, young people whose brains are still developing. But she also points out a big problem: long-term safety studies in healthy people have never been done. Most efficacy studies have only been short-term. "Proving safety and efficacy is needed," she says.
A week later: Golden Sumatran, 3 spoonfuls, a more yellowish powder. (I combined it with some tea dregs to hopefully cut the flavor a bit.) Had a paper to review that night. No (subjectively noticeable) effect on energy or productivity. I tried 4 spoonfuls at noon the next day; nothing except a little mental tension, for lack of a better word. I think that was just the harbinger of what my runny nose that day and the day before was, a head cold that laid me low during the evening.
Most people I talk to about modafinil seem to use it for daytime usage; for me that has not ever worked out well, but I had nothing in particular to show against it. So, as I was capping the last of my piracetam-caffeine mix and clearing off my desk, I put the 4 remaining Modalerts pills into capsules with the last of my creatine powder and then mixed them with 4 of the theanine-creatine pills. Like the previous Adderall trial, I will pick one pill blindly each day and guess at the end which it was. If it was active (modafinil-creatine), take a break the next day; if placebo (theanine-creatine), replace the placebo and try again the next day. We'll see if I notice anything on DNB or possibly gwern.net edits.
Regardless of your goal, there is a supplement that can help you along the way. Below, we've put together the definitive smart drugs list for peak mental performance. There are three major groups of smart pills and cognitive enhancers. We will cover each one in detail in our list of smart drugs. They are natural and herbal nootropics, prescription ADHD medications, and racetams and synthetic nootropics.
Weyandt et al. (2009): large public university undergraduates (N = 390); prevalence 7.5% (past 30 days); highest-rated reasons were to perform better on schoolwork, perform better on tests, and focus better in class; 21.2% had occasionally been offered the drugs by other students, 9.8% occasionally or frequently had purchased them from other students, and 1.4% had sold them to other students.
Nootropics are a broad classification of cognition-enhancing compounds that produce minimal side effects and are suitable for long-term use. These compounds include those occurring in nature or already produced by the human body (such as neurotransmitters), and their synthetic analogs. We already regularly consume some of these chemicals: B vitamins, caffeine, and L-theanine, in our daily diets.
Vinpocetine walks a line between herbal and pharmaceutical product. It's a synthetic derivative of a chemical from the periwinkle plant, and due to its synthetic nature we feel it's more appropriate as a 'smart drug'. Plus, it's illegal in the UK. Vinpocetine is purported to improve cognitive function by improving blood flow to the brain, which is why it's used in some 'study drugs' or 'smart pills'.
When you drink tea, you're getting some caffeine (less than the amount in coffee), plus an amino acid called L-theanine that has been shown in studies to increase activity in the brain's alpha frequency band, which can lead to relaxation without drowsiness. These calming-but-stimulating effects might contribute to tea's status as the most popular beverage aside from water. People have been drinking it for more than 4,000 years, after all, but modern brain hackers try to distill and enhance the benefits by taking just L-theanine as a nootropic supplement. Unfortunately, that means they're missing out on the other health effects that tea offers. It's packed with flavonoids, which are associated with longevity, reduced inflammation, weight loss, cardiovascular health, and cancer prevention.
The evidence? Ritalin is FDA-approved to treat ADHD. It has also been shown to help patients with traumatic brain injury concentrate for longer periods, but does not improve memory in those patients, according to a 2016 meta-analysis of several trials. A study published in 2012 found that low doses of methylphenidate improved cognitive performance, including working memory, in healthy adult volunteers, but high doses impaired cognitive performance and a person's ability to focus. (Since the brains of teens have been found to be more sensitive to the drug's effect, it's possible that methylphenidate in lower doses could have adverse effects on working memory and cognitive functions.)
We reviewed recent studies concerning prescription stimulant use specifically among students in the United States and Canada, using the method illustrated in Figure 1. Although less informative about the general population, these studies included questions about students' specific reasons for using the drugs, as well as frequency of use and means of obtaining them. These studies typically found rates of use greater than those reported by the nationwide NSDUH or the MTF surveys. This probably reflects a true difference in rates of usage among the different populations. In support of that conclusion, the NSDUH data for college age Americans showed that college students were considerably more likely than nonstudents of the same age to use prescription stimulants nonmedically (odds ratio: 2.76; Herman-Stahl, Krebs, Kroutil, & Heller, 2007).
Those who have taken them swear they do work – though not in the way you might think. Back in 2015, a review of the evidence found that their impact on intelligence is "modest". But most people don't take them to improve their mental abilities. Instead, they take them to improve their mental energy and motivation to work. (Both drugs also come with serious risks and side effects – more on those later).
I can test fish oil for mood, since the other claimed benefits like anti-schizophrenia are too hard to test. The medical student trial (Kiecolt-Glaser et al 2011) did not see changes until visit 3, after 3 weeks of supplementation. (Visit 1, 3 weeks, visit 2, supplementation started for 3 weeks, visit 3, supplementation continued 3 weeks, visit 4 etc.) There were no tests in between the test starting week 1 and starting week 3, so I can't pin it down any further. This suggests randomizing in 2 or 3 week blocks. (For an explanation of blocking, see the footnote in the Zeo page.)
Schroeder, Mann-Koepke, Gualtieri, Eckerman, and Breese (1987) assessed the performance of subjects on placebo and MPH in a game that allowed subjects to switch between two different sectors seeking targets to shoot. They did not observe an effect of the drug on overall level of performance, but they did find fewer switches between sectors among subjects who took MPH, and perhaps because of this, these subjects did not develop a preference for the more fruitful sector.
At this point I began to get bored with it and the lack of apparent effects, so I began a pilot trial: I'd use the LED set for 10 minutes every few days before 2PM, record, and in a few months look for a correlation with my daily self-ratings of mood/productivity (for 2.5 years I've asked myself at the end of each day whether I did more, the usual, or less work done that day than average, so 2=below-average, 3=average, 4=above-average; it's ad hoc, but in some factor analyses I've been playing with, it seems to load on a lot of other variables I've measured, so I think it's meaningful).
Finally, all of the questions raised here in relation to MPH and d-AMP can also be asked about newer drugs and even about nonpharmacological methods of cognitive enhancement. An example of a newer drug with cognitive-enhancing potential is modafinil. Originally marketed as a therapy for narcolepsy, it is widely used off label for other purposes (Vastag, 2004), and a limited literature on its cognitive effects suggests some promise as a cognitive enhancer for normal healthy people (see Minzenberg & Carter, 2008, for a review).
Sure, those with a mental illness may very well need a little more monitoring to make sure they take their medications, but will those suffering from a condition with hallmark symptoms of paranoia and anxiety be helped by consuming a technology that quite literally puts a tracking device inside their body? For patients hearing voices telling them that they're being watched, a monitoring device may be a hard pill to swallow.
In my last post, I talked about the idea that there is a resource that is necessary for self-control…I want to talk a little bit about the candidate for this resource, glucose. Could willpower fail because the brain is low on sugar? Let's look at the numbers. A well-known statistic is that the brain, while only 2% of body weight, consumes 20% of the body's energy. That sounds like the brain consumes a lot of calories, but if we assume a 2,400 calorie/day diet - only to make the division really easy - that's 100 calories per hour on average, 20 of which, then, are being used by the brain. Every three minutes, then, the brain - which includes memory systems, the visual system, working memory, then emotion systems, and so on - consumes one (1) calorie. One. Yes, the brain is a greedy organ, but it's important to keep its greediness in perspective… Suppose, for instance, that a brain in a person exerting their willpower - resisting eating brownies or what have you - used twice as many calories as a person not exerting willpower. That person would need an extra one third of a calorie per minute to make up the difference compared to someone not exerting willpower. Does exerting self control burn more calories?
The magnesium was neither randomized nor blinded and included mostly as a covariate to avoid confounding (the Noopept coefficient & t-value increase somewhat without the Magtein variable), so an OR of 1.9 is likely too high; in any case, this experiment was too small to reliably detect any effect (~26% power, see bootstrap power simulation in the magnesium section) so we can't say too much.
Phenylpiracetam (Phenotropil) is one of the best smart drugs in the racetam family. It has the highest potency and bioavailability among racetam nootropics. This substance is almost the same as Piracetam, except that it contains an added phenyl group. The addition to its chemical structure improves blood-brain barrier permeability. This modification allows Phenylpiracetam to work faster than other racetams. Its cognitive enhancing effects can last longer as well.
One curious thing that leaps out looking at the graphs is that the estimated underlying standard deviations differ: the nicotine days have a strikingly large standard deviation, indicating greater variability in scores - both higher and lower, since the means weren't very different. The probability that the difference in standard deviations is below 0 is just 6.6%, so the difference almost reaches our usual frequentist levels of confidence, which we can verify by testing:
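The test itself is not reproduced in this excerpt; as a rough stand-in, a test of equal variances along the following lines could be run. The two arrays below are synthetic placeholders, not the real nicotine and placebo score series, and the Levene/Brown-Forsythe test is named here only as one reasonable choice, not necessarily the one originally used.

import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
nicotine = rng.normal(loc=3.0, scale=1.3, size=60)   # assumed: wider spread
placebo = rng.normal(loc=3.0, scale=1.0, size=60)
stat, p = levene(nicotine, placebo)                  # defaults to the Brown-Forsythe variant
print(f"W = {stat:.2f}, p = {p:.3f}")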
Ngo has experimented with piracetam himself ("The first time I tried it, I thought, 'Wow, this is pretty strong for a supplement.' I had a little bit of reflux, heartburn, but in general it was a cognitive enhancer. . . . I found it helpful") and the supplement DMAE ("You have an idea, it helps you finish the thought. It's for when people have difficulty finishing that last connection in the brain").
Imagine a pill you can take to speed up your thought processes, boost your memory, and make you more productive. If it sounds like the ultimate life hack, you're not alone. There are pills that promise that out there, but whether they work is complicated. Here are the most popular cognitive enhancers available, and what science actually says about them.
All clear? Try one (not dozens) of nootropics for a few weeks and keep track of how you feel, Kerl suggests. It's also important to begin with as low a dose as possible; when Cyr didn't ease into his nootropic regimen, his digestion took the blow, he admits. If you don't notice improvements, consider nixing the product altogether and focusing on what is known to boost cognitive function – eating a healthy diet, getting enough sleep regularly and exercising. "Some of those lifestyle modifications," Kerl says, "may improve memory over a supplement."
Only two of the eight experiments reviewed in this section found that stimulants enhanced performance, on a nonverbal fluency task in one case and in Raven's Progressive Matrices in the other. The small number of studies of any given type makes it difficult to draw general conclusions about the underlying executive function systems that might be influenced.
The smart pill industry has popularized many herbal nootropics. Most of them first appeared in Ayurveda and traditional Chinese medicine. Ayurveda is a branch of natural medicine originating from India. It focuses on using herbs as remedies for improving quality of life and healing ailments. Evidence suggests our ancestors were on to something with this natural approach.
Popular smart drugs on the market include methylphenidate (commonly known as Ritalin) and amphetamine (Adderall), stimulants normally used to treat attention deficit hyperactivity disorder or ADHD. In recent years, another drug called modafinil has emerged as the new favourite amongst college students. Primarily used to treat excessive sleepiness associated with the sleep disorder narcolepsy, modafinil increases alertness and energy.
Related to the famous -racetams but reportedly better (and much less bulky), Noopept is one of the many obscure Russian nootropics. (Further reading: Google Scholar, Examine.com, Reddit, Longecity, Bluelight.ru.) Its advantages seem to be that it's far more compact than piracetam and doesn't taste awful so it's easier to store and consume; doesn't have the cloud hanging over it that piracetam does due to the FDA letters, so it's easy to purchase through normal channels; is cheap on a per-dose basis; and it has fans claiming it is better than piracetam.
There is an ancient precedent to humans using natural compounds to elevate cognitive performance. Incan warriors in the 15th century would ingest coca leaves (the basis for cocaine) before battle. Ethiopian hunters in the 10th century developed coffee bean paste to improve hunting stamina. Modern athletes ubiquitously consume protein powders and hormones to enhance their training, recovery, and performance. The most widely consumed psychoactive compound today is caffeine. Millions of people use coffee and tea to be more alert and focused.
After 7 days, I ordered a kg of choline bitartrate from Bulk Powders. Choline is standard among piracetam-users because it is pretty universally supported by anecdotes about piracetam headaches, has support in rat/mice experiments27, and also some human-related research. So I figured I couldn't fairly test piracetam without some regular choline - the eggs might not be enough, might be the wrong kind, etc. It has a quite distinctly fishy smell, but the actual taste is more citrus-y, and it seems to neutralize the piracetam taste in tea (which makes things much easier for me).
A poster or two on Longecity claimed that iodine supplementation had changed their eye color, suggesting a connection to the yellow-reddish element bromine - bromides being displaced by their chemical cousin, iodine. I was skeptical this was a real effect since I don't know why visible amounts of either iodine or bromine would be in the eye, and the photographs produced were less than convincing. But it's an easy thing to test, so why not?
Table 4 lists the results of 27 tasks from 23 articles on the effects of d-AMP or MPH on working memory. The oldest and most commonly used type of working memory task in this literature is the Sternberg short-term memory scanning paradigm (Sternberg, 1966), in which subjects hold a set of items (typically letters or numbers) in working memory and are then presented with probe items, to which they must respond "yes" (in the set) or "no" (not in the set). The size of the set, and hence the working memory demand, is sometimes varied, and the set itself may be varied from trial to trial to maximize working memory demands or may remain fixed over a block of trials. Taken together, the studies that have used a version of this task to test the effects of MPH and d-AMP on working memory have found mixed and somewhat ambiguous results. No pattern is apparent concerning the specific version of the task or the specific drug. Four studies found no effect (Callaway, 1983; Kennedy, Odenheimer, Baltzley, Dunlap, & Wood, 1990; Mintzer & Griffiths, 2007; Tipper et al., 2005), three found faster responses with the drugs (Fitzpatrick, Klorman, Brumaghim, & Keefover, 1988; Ward et al., 1997; D. E. Wilson et al., 1971), and one found higher accuracy in some testing sessions at some dosages, but no main effect of drug (Makris et al., 2007). The meaningfulness of the increased speed of responding is uncertain, given that it could reflect speeding of general response processes rather than working memory–related processes. Aspects of the results of two studies suggest that the effects are likely due to processes other than working memory: D. E. Wilson et al. (1971) reported comparable speeding in a simple task without working memory demands, and Tipper et al. (2005) reported comparable speeding across set sizes.
Much better than I had expected. One of the best superhero movies so far, better than Thor or Watchmen (and especially better than the Iron Man movies). I especially appreciated how it didn't launch right into the usual hackneyed creation-of-the-hero plot-line but made Captain America cool his heels performing & selling war bonds for 10 or 20 minutes. The ending left me a little nonplussed, although I sort of knew it was envisioned as a franchise and I would have to admit that showing Captain America wandering in Times Square is a much better ending than something as cliche as a close-up of his suddenly-opened eyes and then a fade out. (The movie continued the lamentable trend in superhero movies of having a strong female love interest… who only gets the hots for the hero after they get muscles or powers. It was particularly bad in CA because she knows him and his heart of gold beforehand! What is the point of a feminist character who is immediately forced to do that?)
In addition, while the laboratory research reviewed here is of interest concerning the effects of stimulant drugs on specific cognitive processes, it does not tell us about the effects on cognition in the real world. How do these drugs affect academic performance when used by students? How do they affect the total knowledge and understanding that students take with them from a course? How do they affect various aspects of occupational performance? Similar questions have been addressed in relation to students and workers with ADHD (Barbaresi, Katusic, Colligan, Weaver, & Jacobsen, 2007; Halmøy, Fasmer, Gillberg, & Haavik, 2009; see also Advokat, 2010) but have yet to be addressed in the context of cognitive enhancement of normal individuals.
The power calculation indicates a 20% chance of getting useful information. My quasi-experiment has <70% chance of being right, and I preserve a general skepticism about any experiment, even one as well done as the medical student one seems to be, and give that one a <80% chance of being right; so let's call it 70% the effect exists, or 30% it doesn't exist (which is the case in which I save money by dropping fish oil for 10 years).
A record of nootropics I have tried, with thoughts about which ones worked and did not work for me. These anecdotes should be considered only as anecdotes, and one's efforts with nootropics a hobby to put only limited amounts of time into due to the inherent limits of drugs as a force-multiplier compared to other things like programming1; for an ironic counterpoint, I suggest the reader listen to a video of Jonathan Coulton's I Feel Fantastic while reading.
I tried taking whole pills at 1 and 3 AM. I felt kind of bushed at 9 AM after all the reading, and the 50-minute nap didn't help much - I was asleep only around 10 minutes and spent most of it thinking or meditating. Just as well the 3D driver is still broken; I doubt the scores would be reasonable. Began to perk up again past 10 AM, then felt more bushed at 1 PM, and so on throughout the day; kind of gave up and began watching & finishing anime (Amagami and Voices of a Distant Star) for the rest of the day with occasional reading breaks (eg. to start James C. Scott's Seeing Like A State, which is as described so far). As expected from the low quality of the day, the recovery sleep was bigger than before: a full 10 hours rather than 9:40; the next day, I slept a normal 8:50, and the following day ~8:20 (woken up early); 10:20 (slept in); 8:44; 8:18 (▁▇▁▁). It will be interesting to see whether my excess sleep remains in the hour range for 'good' modafinil nights and two hours for 'bad' modafinil nights.
EURASIP Journal on Advances in Signal Processing
Research | Open | Published: 15 August 2017
Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks
Ling Zhang1,
Yunlong Cai1,
Chunguang Li1 &
Rodrigo C. de Lamare2
EURASIP Journal on Advances in Signal Processing, volume 2017, Article number: 57 (2017)
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses in terms of mean and mean square performance for the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. Moreover, the simulation results demonstrate a good match with the proposed analytical expressions.
Distributed estimation is commonly utilized for distributed data processing over sensor networks, and it exhibits increased robustness, flexibility, and system efficiency compared to centralized processing. Owing to these merits, distributed estimation has received growing attention and has been widely used in applications ranging from environmental monitoring [1], medical data collection for healthcare [2], animal tracking in agriculture [1], monitoring physical phenomena [3], and localizing moving mobile terminals [4, 5] to national security. In particular, distributed estimation relies on the cooperation among geographically spread sensor nodes to process locally collected data. Depending on the cooperation strategy employed, distributed estimation algorithms can be classified into the incremental type and the diffusion type. Note that we consider the diffusion cooperation strategy in this paper, since the incremental strategy requires the definition of a path through the network and may not be suitable for large networks or dynamic configurations [6, 7]. Many distributed estimation algorithms with the diffusion strategy have been put forward recently, such as diffusion least-mean squares (LMS) [8, 9], diffusion sparse LMS [10–12], variable step size diffusion LMS (VSS-DLMS) [13, 14], diffusion recursive least squares (RLS) [6, 7], distributed sparse RLS [15], distributed sparse total least squares (TLS) [16], diffusion information theoretic learning (ITL) [17], and the diffusion-based algorithm for distributed censor regression [18]. Among these distributed estimation algorithms, the RLS-based algorithms achieve superior performance to the LMS-based ones by inheriting the advantages of fast convergence and low steady-state misadjustment from the RLS technique. Thus, the distributed estimation algorithms based on the diffusion strategy and the RLS adaptive technique are investigated in this paper.
However, the existing RLS-based distributed estimation algorithms employ a fixed forgetting factor, which has some drawbacks. With a fixed forgetting factor, the algorithm fails to keep up with real-time variations in the environment, such as variations in the sensor network topology. Moreover, it is desirable to adjust the forgetting factor automatically according to the estimation errors rather than to choose an appropriate value through simulations. There have been several studies on variable forgetting factor (VFF) methods. Specifically, the classic gradient-based VFF (GVFF) mechanism was proposed in [19], and most of the existing VFF mechanisms are extensions of this method [20–24]. Nevertheless, the GVFF mechanism requires a large amount of computation. In order to reduce the computational complexity, improved low-complexity VFF mechanisms have been reported in [25, 26]. To the best of our knowledge, the existing VFF mechanisms have mostly been employed in a centralized context and have not yet been considered in the field of distributed estimation.
In this work, the previously reported VFF mechanisms [25, 26] are applied to the diffusion RLS algorithm for distributed signal processing applications, simplifying the inverse relation between the forgetting factor and the adaptation component to provide lower computational complexity. The resulting algorithms are referred to as the low-complexity time-averaged VFF diffusion RLS (LTVFF-DRLS) algorithm and the low-complexity correlated time-averaged VFF diffusion RLS (LCTVFF-DRLS) algorithm, respectively. Compared with the GVFF mechanism, the proposed LTVFF and LCTVFF mechanisms reduce the computational complexity significantly [25, 26]. We then carry out an analysis of the proposed algorithms in terms of the mean and mean square error performance. Finally, we provide simulation results to verify the effectiveness of the proposed algorithms when applied to distributed parameter estimation and distributed spectrum estimation.
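For orientation, the sketch below shows how an error-driven forgetting factor plugs into a standard single-node (non-diffusion) RLS recursion. It is an illustrative simplification only: the rule mapping the a posteriori error to the forgetting factor, and the bounds lam_min and lam_max, are generic placeholders, not the LTVFF or LCTVFF recursions proposed in this paper.

import numpy as np

def rls_with_variable_lambda(u, d, order=4, lam_min=0.90, lam_max=0.999, delta=1e2):
    """Single-node RLS with a time-varying forgetting factor (illustration only)."""
    w = np.zeros(order)                      # weight estimate
    P = delta * np.eye(order)                # inverse correlation matrix
    lam = lam_max
    for i in range(order - 1, len(d)):
        x = u[i - order + 1:i + 1][::-1]     # regressor, most recent sample first
        e_prior = d[i] - x @ w               # a priori error
        k = P @ x / (lam + x @ P @ x)        # gain vector
        w = w + k * e_prior
        P = (P - np.outer(k, x @ P)) / lam
        e_post = d[i] - x @ w                # a posteriori error
        # placeholder error-driven rule: small errors push lam toward lam_max
        lam = float(np.clip(lam_max - abs(e_post), lam_min, lam_max))
    return w

# toy usage: identify a 4-tap system from noisy data
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(u, w_true)[:len(u)] + 0.01 * rng.standard_normal(len(u))
print(rls_with_variable_lambda(u, d))        # approaches w_true

In a diffusion implementation, each node would run such a recursion on its local data and then combine its estimate with those of its neighbors; the point here is only that the forgetting factor becomes a per-iteration variable driven by the error rather than a fixed design constant.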
Our main contributions are summarized as follows:
We propose the low-complexity VFF-DRLS algorithms for distributed estimation in sensor networks. To the best of our knowledge, VFF mechanisms have not previously been incorporated into distributed estimation algorithms.
We study the mean and mean-square performance of the proposed algorithms in a general case and provide a transient analysis for a specialized case. Specifically, for the general case, in terms of the mean performance, we show that the mean of the weight error vector approaches zero as the number of iterations goes to infinity, which implies asymptotic convergence of the proposed algorithms; from the perspective of mean-square performance, we derive mathematical expressions for the steady-state mean-square deviation (MSD) and excess mean-square error (EMSE). In the specialized case, we carry out a transient analysis by focusing on the learning curve and prove that the proposed algorithms are convergent and that the convergence rate is related to the varying forgetting factors.
We perform simulations to evaluate the performance of the proposed algorithms when applied to distributed parameter estimation and distributed spectrum estimation tasks. The simulation results indicate that the proposed algorithms exhibit remarkable improvements in convergence and steady-state performance compared with the DRLS algorithm with a fixed forgetting factor. Besides, the effectiveness of our analytical expressions for calculating the steady-state MSD and EMSE is verified by the simulation results. In addition, we provide detailed simulation results regarding the choice of the parameters in the proposed algorithms to guide parameter selection in practice.
This paper is organized as follows. Section 2 provides the system model for distributed estimation over sensor networks and briefly describes the DRLS algorithm with a fixed forgetting factor. In Section 3, two low-complexity VFF mechanisms are presented, followed by analyses of the variable forgetting factor in terms of its steady-state statistical properties; the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms are then introduced, and the computational complexity of the VFF mechanisms as well as of the proposed algorithms is analyzed in the last part of the section. In Section 4, detailed mean and mean-square performance analyses of the proposed algorithms are carried out, analytical expressions to compute the MSD and EMSE are derived, and a transient analysis for a specialized case is provided in the last part of the section. In Section 5, simulation results are presented for distributed parameter estimation and distributed spectrum estimation. Section 6 draws the conclusions.
Notation: Boldface letters are used for vectors or matrices, while normal font for scalar quantities. Matrices are denoted by capital letters and small letters are used for vectors. We use the operator row {·} to denote a row vector, col {·} to denote a column vector, and diag {·} to denote a diagonal matrix. The operator E[·] stands for the expectation of some quantity, and Tr {·} represents the trace of a matrix. We use (·)T and (·)−1 to denote the transpose and inverse operator, respectively, and (·)∗ for complex conjugate-transposition. We also use the symbol I n to represent an identity matrix of size n and $\mathbf {\mathbb {I}}$ to denote a vector of appropriate size with all elements equal to one.
System model and diffusion-based DRLS algorithm
In this section, we first illustrate the system model for the distributed estimation over sensor networks. Following this, we review the conventional DRLS algorithm with the fixed forgetting factor briefly.
System model
Let us consider a sensor network consisting of N sensor nodes which are spatially distributed over a geographical area. The set of nodes connected to node k, including node k itself, is called the neighborhood of node k and is denoted by $\mathcal {N}_{k}$. The number of nodes linked to node k is the degree of node k, denoted by n k . The system model for distributed estimation over sensor networks is presented in Fig. 1.
At each time instant i, each node k has access to complex-valued time realizations {d k,i ,u k,i }, k=1,2,…,N, i=1,2,…, with d k,i a scalar measurement and u k,i an M×1 input vector. The relation between the measurement d k,i and the input vector u k,i can be characterized as
$$ d_{k,i}=\mathbf{u}_{k,i}^{*}\mathbf{w}^{o}+v_{k,i} $$
where w o is the unknown optimal weight vector of size M×1, and v k,i is zero-mean additive white Gaussian noise with variance ${\sigma }_{v,k}^{2}$. In particular, the noise variances are assumed to be known in advance. We also assume that the noise samples v k,i , k=1,2,…,N, i=1,2,…, are independent of each other as well as of the input vectors u k,i . We aim to estimate the unknown optimal weight vector w o in a distributed manner. That is, each sensor node k obtains a local estimate w k,i of size M×1 that approaches the optimal weight vector w o as closely as possible. To this end, each node k not only uses its local measurement d k,i and input vector u k,i but also cooperates with its closest neighbors when updating its local estimate w k,i . Specifically, through this cooperation, each node k has access to its neighbors' data {d l,i ,u l,i } and estimates w l,i at each time instant i, where $l\in \mathcal {N}_{k}$, and fuses all of the available information to update its own local estimate.
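To make the measurement model concrete, the following minimal sketch (an illustration only, assuming NumPy, real-valued data for simplicity, and arbitrarily chosen dimensions) generates synthetic realizations {d k,i ,u k,i } according to the relation above:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 10, 5, 1000                  # number of nodes, filter length, time instants (illustrative)
w_o = rng.standard_normal(M)           # unknown optimal weight vector w^o
sigma_v2 = rng.uniform(0.1, 0.2, N)    # per-node noise variances sigma_{v,k}^2

u = rng.standard_normal((N, T, M))                              # u[k, i] is the input vector of node k at time i
v = rng.standard_normal((N, T)) * np.sqrt(sigma_v2)[:, None]    # zero-mean Gaussian noise samples
d = u @ w_o + v                                                 # d_{k,i} = u_{k,i}^* w^o + v_{k,i}
```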
Let us first introduce some vectors and matrices. At each time instant i, by collecting all nodes' measurements into vector y i , noise samples into vector v i (both of length N), and input vectors into the matrix H i of size N×M, we obtain
$$ \begin{aligned} \mathbf{y}_{i}&=\text{col}\{d_{1,i} \ldots d_{N,i}\}\\ \mathbf{H}_{i}&=\text{col}\{\mathbf{u}_{1,i}^{*} \ldots \mathbf{u}_{N,i}^{*}\}\\ \mathbf{v}_{i}&=\text{col}\{v_{1,i} \ldots v_{N,i}\}. \end{aligned} $$
Following this, we define the covariance matrix of the noise vector v i as
$$ \mathbf{R}_{v}=E[\mathbf{v}_{i}\mathbf{v}_{i}^{*}]= \text{diag}\left\{ {\sigma}_{v,1}^{2},{\sigma}_{v,2}^{2},\ldots,{\sigma}_{v,N}^{2} \right\}. $$
Next, we stack y i , v i and H i from time instant 0 to time instant i into matrices respectively, which are given by
$$ \begin{aligned} \mathbf{\mathcal{Y}}_{i}&=\text{col}\{\mathbf{y}_{i} \ldots \mathbf{y}_{0}\}\\ \mathbf{\mathcal{H}}_{i}&=\text{col}\{\mathbf{H}_{i} \ldots \mathbf{H}_{0}\}\\ \mathbf{\mathcal{V}}_{i}&=\text{col}\{\mathbf{v}_{i} \ldots \mathbf{v}_{0}\}. \end{aligned} $$
Besides, we define $\mathbf {\mathcal {R}}_{v,i}=E[\mathbf {\mathcal {V}}_{i}\mathbf {\mathcal {V}}_{i}^{*}]$.
Brief review of diffusion-based DRLS algorithm
In this part, we give a brief introduction to the diffusion-based DRLS algorithm [6, 7].
For the diffusion-based DRLS algorithm, the local optimization problem to estimate the optimal weight vector w o at each node k can be formulated as follows:
$$ \mathbf{\psi}_{k,i}=\mathop{\arg}\mathop{\min}_{\mathbf{w}} \left\{\|\mathbf{w}\|_{\mathbf{\Pi}_{i}}^{2}+\|\mathbf{\mathcal{Y}}_{i}- \mathbf{\mathcal{H}}_{i}\mathbf{w}\|_{\mathbf{\mathcal{W}}_{k,i}}^{2}\right\} $$
Note that the notation $\|\mathbf {a}\|_{\boldsymbol {\Sigma }}^{2}=\mathbf {a}^{*}\boldsymbol {\Sigma }\mathbf {a}$ represents the weighted vector norm for any positive definite Hermitian matrix Σ. Besides, the matrix Π i is given by Π i =λ i+1 Π, where 0≪λ<1 is the forgetting factor and Π=δ −1 I M with δ>0. Furthermore, the matrix $\boldsymbol {\mathcal {W}}_{k,i}$ can be expressed as $\boldsymbol {\mathcal {W}}_{k,i}=\boldsymbol {\mathcal {R}}_{v,i}^{-1}\boldsymbol {\Lambda }_{i}\text {diag}\{\mathbf {C}_{k},\mathbf {C}_{k},\ldots,\mathbf {C}_{k}\}$, where Λ i =diag{I N ,λ I N ,…,λ i I N } and C k is a diagonal matrix whose main diagonal consists of the kth column of the matrix C. Particularly, the matrix C is the adaptation matrix for the diffusion-based DRLS algorithm and satisfies $\mathbf {\mathbb {I}}^{T}\mathbf {C}=\mathbf {\mathbb {I}}$ and $\mathbf {C}\mathbf {\mathbb {I}}=\mathbf {\mathbb {I}}$ [6]. In other words, C is a doubly stochastic matrix, that is, both a left stochastic matrix and a right stochastic matrix.
The optimization problem (5) can be rewritten as follows [6]:
$$ {{\begin{aligned} \boldsymbol{\psi}_{k,i}=\mathop{\arg}\mathop{\min}_{\mathbf{w}}\left\{\lambda^{i+1}\|\mathbf{w}\|_{\boldsymbol{\Pi}}^{2}+\sum\limits_{j=0}^{i}\lambda^{i-j}\sum\limits_{l=1}^{N}\frac{C_{l,k}|d_{l,j}-\mathbf{u}_{l,j}^{*}\mathbf{w}|^{2}}{{\sigma}_{v,l}^{2}}\right\} \end{aligned}}} $$
where C l,k represents the (l,k)th element of the matrix C. The closed-form solution to (6) is given by [6, 7]
$$ \boldsymbol{\psi}_{k,i}=\mathbf{P}_{k,i}\boldsymbol{\mathcal{H}}_{i}^{*}\boldsymbol{\mathcal{W}}_{k,i}\boldsymbol{\mathcal{Y}}_{i} $$
where P k,i can be expressed as
$$ \mathbf{P}_{k,i}=\left[\lambda^{i+1}\boldsymbol{\Pi}+\boldsymbol{\mathcal{H}}_{i}^{*}\boldsymbol{\mathcal{W}}_{k,i}\boldsymbol{\mathcal{H}}_{i}\right]^{-1}. $$
However, the closed-form solution in (7) requires the computation of matrix inversions, which is computationally expensive. Instead, the diffusion-based DRLS algorithm provides a recursive approach to solve (6), which can be implemented by the following two steps.
Step 1: Let us take the updates at time instant i for example. Note that we denote the iteration number at time instant i as the superscript (·)l with l=0 representing the initial value. At the very start, we initialize the intermediate local estimate ψ k,i and the inverse matrix P k,i for each node k by utilizing the updated results from time instant i−1, that is
$$ \begin{aligned} \boldsymbol{\psi}_{k,i}^{0}&=\mathbf{w}_{k,i-1}\\ \mathbf{P}_{k,i}^{0}&=\lambda^{-1}\mathbf{P}_{k,i-1} \end{aligned} $$
Then, for each node k, its data is updated incrementally among its neighbors, which is given by
$$\begin{array}{@{}rcl@{}} \boldsymbol{\psi}_{k,i}^{l}&{\longleftarrow}&\boldsymbol{\psi}_{k,i}^{l-1}+ \frac{C_{l,k}\mathbf{P}_{k,i}^{l-1}\mathbf{u}_{l,i}\left[d_{l,i}-\mathbf{u}_{l,i}^{*}\boldsymbol{\psi}_{k,i}^{l-1}\right]} {\sigma_{v,l}^{2}+C_{l,k}\mathbf{u}_{l,i}^{*}\mathbf{P}_{k,i}^{l-1}\mathbf{u}_{l,i}} \end{array} $$
$$\begin{array}{@{}rcl@{}} \mathbf{P}_{k,i}^{l}&{\longleftarrow}&\mathbf{P}_{k,i}^{l-1}- \frac{C_{l,k}\mathbf{P}_{k,i}^{l-1}\mathbf{u}_{l,i}\mathbf{u}_{l,i}^{*}\mathbf{P}_{k,i}^{l-1}}{\sigma_{v,l}^{2}+C_{l,k}\mathbf{u}_{l,i}^{*}\mathbf{P}_{k,i}^{l-1}\mathbf{u}_{l,i}} \end{array} $$
where the left arrow denotes the operation of assignment. Finally, each node k obtains its ultimate intermediate local estimate ψ k,i which can be expressed as
$$\begin{array}{@{}rcl@{}} \boldsymbol{\psi}_{k,i}&\longleftarrow&\boldsymbol{\psi}_{k,i}^{|\mathcal{N}_{k}|} \end{array} $$
Step 2: Each node k combines its own final intermediate local estimate ψ k,i obtained in Step 1 with those of its neighbors, i.e., ψ l,i , $l\in \mathcal {N}_{k}$, by performing the following diffusion to obtain the local estimate w k,i :
$$ \boldsymbol{w}_{k,i}=\sum\limits_{l=1}^{N}A_{l,k}\boldsymbol{\psi}_{l,i} $$
where A l,k denotes the (l,k)th element of the matrix A. Particularly, the matrix A is the combination matrix for the diffusion-based DRLS algorithm and is chosen such that $\mathbf {\mathbb {I}}^{T}\mathbf {A}=\mathbf {\mathbb {I}}$ [6].
Note that the steps (9)–(13) constitute the diffusion-based DRLS algorithm [6, 7].
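For concreteness, the following sketch shows one time instant of the recursion in (9)–(13) in NumPy. It is a simplified, real-valued illustration (the complex-valued case replaces transposes by conjugate transposes); the data arrays, the matrices C and A, and the neighbor lists are assumed to be given.

```python
import numpy as np

def drls_step(w_prev, P_prev, d_i, u_i, sigma_v2, C, A, neighbors, lam):
    """One time instant of the diffusion-based DRLS recursion (9)-(13), real-valued sketch.

    w_prev: (N, M) estimates w_{k,i-1};   P_prev: (N, M, M) matrices P_{k,i-1};
    d_i: (N,) measurements;               u_i: (N, M) input vectors at time i;
    C, A: N x N adaptation/combination matrices;   neighbors[k]: list of nodes in N_k;
    lam: fixed forgetting factor (replaced by lambda_k(i) in the VFF variants)."""
    N, M = w_prev.shape
    psi = np.empty_like(w_prev)
    P_new = np.empty_like(P_prev)
    # Step 1: incremental update over the neighborhood of each node, Eqs. (9)-(12)
    for k in range(N):
        p = w_prev[k].copy()          # psi_{k,i}^0 = w_{k,i-1}
        P = P_prev[k] / lam           # P_{k,i}^0   = lam^{-1} P_{k,i-1}
        for l in neighbors[k]:
            c = C[l, k]
            Pu = P @ u_i[l]
            denom = sigma_v2[l] + c * u_i[l] @ Pu
            p = p + c * Pu * (d_i[l] - u_i[l] @ p) / denom
            P = P - c * np.outer(Pu, Pu) / denom
        psi[k], P_new[k] = p, P
    # Step 2: diffusion combination, Eq. (13):  w_{k,i} = sum_l A_{l,k} psi_{l,i}
    w_new = A.T @ psi
    return w_new, P_new, psi
```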
Low-complexity variable forgetting factor mechanisms
In this section, we introduce the LTVFF mechanism and the LCTVFF mechanism that are employed by our proposed algorithms. Particularly, the analyses for the variable forgetting factor in terms of the steady-state properties of the first-order statistics are presented, and the LTVFF-DRLS algorithm that employs the LTVFF mechanism as well as the LCTVFF-DRLS algorithm that applies the LCTVFF mechanism are proposed. In the last part of this section, we analyze the computational complexity for these two VFF mechanisms as well as the proposed algorithms.
LTVFF mechanism
Motivated by the VSS mechanism [13, 14] for the diffusion LMS algorithm, the low-complexity VFF mechanisms are designed such that smaller forgetting factors are employed when the estimation errors are large in order to obtain a faster convergence speed, whereas the forgetting factor increases when the estimation errors become small so as to yield better steady-state performance. Based on the above idea, an effective rule to adapt the forgetting factor can be formulated as
$$ \lambda_{k}(i)=[1-\zeta_{k}(i)]_{\lambda_{-}}^{\lambda_{+}} $$
where the quantity ζ k (i), referred to as the adaptation component, is related to the estimation errors and varies inversely with the forgetting factor. The operator $[\cdot ]_{\lambda _{-}}^{\lambda _{+}}$ denotes truncation of the forgetting factor to the limits of the range [λ −,λ +].
For the LTVFF mechanism, the adaptation component is given by
$$ \zeta_{k}(i)=\alpha\zeta_{k}(i-1)+\beta|e_{k}(i)|^{2} $$
with parameters 0<α<1 and β>0. Besides, α is chosen close to 1 and β is set to a small value. The quantity e k (i) denotes the a priori estimation error [19] of each node for the DRLS algorithm, which can be expressed as
$$ e_{k}(i)=d_{k,i}-\mathbf{u}_{k,i}^{*}\mathbf{w}_{k,i-1}. $$
That is to say, in the LTVFF mechanism, the adaptation component is updated based on the instantaneous estimation error.
The LTVFF mechanism is given by (14) and (15). The value of the forgetting factor λ k (i) is controlled by the parameters α and β. Particularly, the effects of α and β on the performance of our proposed algorithms are investigated in Section 5. As can be seen from (14) and (15), large estimation errors will cause an increase in the adaptation component ζ k (i), which yields a smaller forgetting factor and provides a faster tracking speed. Conversely, small estimation errors will lead to the decrease of the adaptation component ζ k (i), and thus, the forgetting factor λ k (i) will be increased to yield smaller steady-state misadjustment.
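A minimal per-node implementation of the LTVFF recursion (14)–(15) could look as follows; the truncation bounds lam_minus and lam_plus are illustrative assumptions, while the default values of alpha and beta correspond to a pair studied later in Section 5.1.1.

```python
import numpy as np

def ltvff_update(zeta_prev, e_k, alpha=0.89, beta=0.002, lam_minus=0.90, lam_plus=0.9999):
    """Update the forgetting factor of one node from its a priori error e_k(i)."""
    zeta = alpha * zeta_prev + beta * abs(e_k) ** 2       # adaptation component, Eq. (15)
    lam = np.clip(1.0 - zeta, lam_minus, lam_plus)        # truncation to [lam_-, lam_+], Eq. (14)
    return lam, zeta
```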
Next, we study the steady-state statistical properties of the adaptation component ζ k (i) and the forgetting factor λ k (i). Based on (15), it is reasonable to assume that ζ k (i) and ζ k (i−1) are approximately equivalent when i→∞. By taking expectations on both sides of (15) and letting i go to infinity, we can obtain E[ζ k (∞)]
$$ E[\zeta_{k}(\infty)]=\frac{\beta}{1-\alpha}E\left[\left|e_{k}(\infty)\right|^{2}\right]. $$
Then, we compute the quantity of E[|e k (∞)|2]. Let us define the weight error vector for node k as
$$ \mathbf{\widetilde{w}}_{k,i}=\mathbf{w}_{k,i}-\mathbf{w}^{o}. $$
According to (16) and (18), we can rewrite E[|e k (i)|2] as
$$ \begin{aligned} E[|e_{k}(i)|^{2}]&=E\left[|d_{k,i}-\mathbf{u}_{k,i}^{*}(\mathbf{\widetilde{w}}_{k,i-1}+\mathbf{w}^{o})|^{2}\right]\\ &=E\left[|v_{k,i}-\mathbf{u}_{k,i}^{*}\mathbf{\widetilde{w}}_{k,i-1}|^{2}\right]\\ &={\sigma}_{v,k}^{2}+E\left[|\mathbf{u}_{k,i}^{*}\mathbf{\widetilde{w}}_{k,i-1}|^{2}\right] \end{aligned} $$
where the term $E\left [\left |\mathbf {u}_{k,i}^{*}\mathbf {\widetilde {w}}_{k,i-1}\right |^{2}\right ]$ denotes the excess error. Since it is sufficiently small compared with the noise variance when i→∞, it can be neglected. As a consequence, the following approximation holds
$$ E\left[\left|e_{k}(\infty)\right|^{2}\right]\approx\varepsilon_{\text{min}} $$
where ε min denotes the minimum mean-square error and can be expressed as
$$ \varepsilon_{\text{min}}=E\left[\left|d_{k,i}-\mathbf{u}_{k,i}^{*}\mathbf{w}^{o}\right|^{2}\right]={\sigma}_{v,k}^{2}. $$
Subsequently, by substituting (20) into (17), we can approximately write
$$ E[\zeta_{k}(\infty)]\approx\frac{\beta}{1-\alpha}\varepsilon_{\text{min}}. $$
According to (14), we can deduce
$$ E[\lambda_{k}(\infty)]=1-E[\zeta_{k}(\infty)]. $$
By substituting (22) into (23), we can obtain the first-order statistics of the forgetting factor for the LTVFF mechanism:
$$ E[\lambda_{k}(\infty)]=1-\frac{\beta}{1-\alpha}\varepsilon_{\text{min}}. $$
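As a quick numerical illustration (using α=0.89 and β=0.002, one of the parameter pairs examined in Section 5.1.1, and a representative noise variance of 0.1 so that ε min =0.1), expression (24) gives

$$ E[\lambda_{k}(\infty)]\approx 1-\frac{0.002}{1-0.89}\times 0.1 \approx 0.998, $$

i.e., the forgetting factor settles close to unity in the steady state, which is the regime that yields low misadjustment.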
By applying the LTVFF mechanism to the diffusion-based DRLS algorithm, we propose the LTVFF-DRLS algorithm, which is exhibited in the left column of Table 1.
Table 1 LTVFF-DRLS and LCTVFF-DRLS algorithms
LCTVFF mechanism
For the LCTVFF mechanism, the forgetting factor is still calculated through (14), while the adaptation component ζ k (i) is adjusted according to an alternative rule: the time-averaged estimate of the correlation of two consecutive estimation errors is employed in the updating equation of the adaptation component ζ k (i). Therefore, the rule to update the adaptation component can be described as
$$ \zeta_{k}(i)=\alpha\zeta_{k}(i-1)+\beta|\rho_{k}(i)|^{2} $$
where 0<α<1 and β>0. Particularly, α is set close to 1 and β is chosen to be slightly larger than 0. The quantity ρ k (i) denotes the time-averaged estimation of the correlation of two consecutive estimation errors, which is defined by
$$ \rho_{k}(i)=\gamma\rho_{k}(i-1)+(1-\gamma)|e_{k}(i-1)||e_{k}(i)| $$
where 0<γ<1 and γ is slightly smaller than 1. Note that the LCTVFF mechanism is given by (14), (25), and (26).
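Analogously to the sketch given for the LTVFF mechanism, a per-node implementation of the LCTVFF recursion (14), (25), and (26) could be written as below; again the truncation bounds are assumptions, and the default alpha, beta, and gamma correspond to one of the parameter sets examined in Section 5.1.1.

```python
import numpy as np

def lctvff_update(zeta_prev, rho_prev, e_k, e_k_prev,
                  alpha=0.90, beta=0.005, gamma=0.95,
                  lam_minus=0.90, lam_plus=0.9999):
    """Update the forgetting factor of one node from two consecutive a priori errors."""
    rho = gamma * rho_prev + (1.0 - gamma) * abs(e_k_prev) * abs(e_k)   # Eq. (26)
    zeta = alpha * zeta_prev + beta * abs(rho) ** 2                     # Eq. (25)
    lam = np.clip(1.0 - zeta, lam_minus, lam_plus)                      # Eq. (14)
    return lam, zeta, rho
```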
Next, we consider the steady-state statistical properties of ρ k (i), ζ k (i), and λ k (i) for the LCTVFF mechanism. As we will see in the simulation results, the proposed algorithm converges to the steady state after a number of iterations, and thus ρ k (i−1) and ρ k (i), as well as |e k (i−1)| and |e k (i)|, can be assumed to be approximately equivalent when i is large enough. Consequently, we can obtain E[|e k (i−1)||e k (i)|]≈E[|e k (i)|2] and ρ k (i−1)≈ρ k (i) when i→∞. Then, by taking expectations on both sides of (26) and letting i go to infinity, we can obtain the first-order statistical properties of ρ k (i):
$$ E[\rho_{k}(\infty)]{\approx}\varepsilon_{\text{min}}. $$
To study the second-order statistical properties of ρ k (i), we consider the square of (26), which is given by
$$ \begin{aligned} \rho_{k}^{2}(i)&=\gamma^{2}\rho_{k}^{2}(i-1)+(1-\gamma)^{2}|e_{k}(i-1)|^{2}|e_{k}(i)|^{2}\\ &\quad+2\gamma(1-\gamma)\rho_{k}(i-1)|e_{k}(i-1)||e_{k}(i)|. \end{aligned} $$
Recall that |e k (i−1)| and |e k (i)| can be considered equivalent when i→∞, and thus, we can rewrite (28) as
$$ \begin{aligned} \rho_{k}^{2}(i)&{\approx}\gamma^{2}\rho_{k}^{2}(i-1)+(1-\gamma)^{2}|e_{k}(i)|^{4} \\ &\quad+2\gamma(1-\gamma)\rho_{k}(i-1)|e_{k}(i)|^{2}. \end{aligned} $$
Since (1−γ)2|e k (i)|4 is sufficiently small when compared with other terms in (29), it can be neglected. Therefore, we can obtain
$$ \rho_{k}^{2}(i){\approx}\gamma^{2}\rho_{k}^{2}(i-1)+2\gamma(1-\gamma)\rho_{k}(i-1)|e_{k}(i)|^{2}. $$
According to (16) and (26), the quantities of ρ k (i−1) and |e k (i)|2 can be considered uncorrelated at steady state, that is to say, E[ρ k (i−1)|e k (i)|2]≈E[ρ k (i−1)]E[|e k (i)|2]. Note that the detailed derivation is presented in Appendix A: Proof of the uncorrelation of ρ k (i−1) and |e k (i)|2 in the steady state. Then, by taking expectations on both sides of (30), we can obtain the following result:
$$ E\left[\rho_{k}^{2}(\infty)\right]=\frac{2\gamma}{1+\gamma}E\left[\rho_{k}(\infty)\right] E\left[|e_{k}(\infty)|^{2}\right]. $$
Substituting (20) and (27) into (31) results in
$$ E\left[\rho_{k}^{2}(\infty)\right]\approx\frac{2\gamma}{1+\gamma}\varepsilon^{2}_{\text{min}}. $$
To calculate the first-order statistics of the adaptation component ζ k (i), we take expectations on both sides of (25) and let i go to infinity; as a result, we obtain
$$ E[\zeta_{k}(\infty)]=\frac{\beta}{1-\alpha}E[\rho_{k}^{2}(\infty)]. $$
Substituting (32) into (33) leads to
$$ E[\zeta_{k}(\infty)]=\frac{2\gamma\beta}{(1+\gamma)(1-\alpha)}\varepsilon^{2}_{\text{min}}. $$
Consequently, we have the first-order steady-state statistics of the forgetting factor for the LCTVFF mechanism as follows:
$$\begin{array}{@{}rcl@{}} E[\lambda_{k}(\infty)]=1-\frac{2\gamma\beta}{(1+\gamma)(1-\alpha)}\varepsilon^{2}_{\text{min}}. \end{array} $$
By employing the LCTVFF mechanism to the diffusion-based DRLS algorithm, we propose the LCTVFF-DRLS algorithm, which is presented in the right column of Table 1.
Computational complexity analysis
In this part, we study the computational complexity of the proposed LTVFF and LCTVFF mechanisms in comparison with the GVFF mechanism. Generally, we evaluate the number of arithmetic operations in terms of complex additions and multiplications for each node at each iteration. The results are shown in Tables 2 and 3. From Table 3, the additional computational complexity of the proposed LTVFF and LCTVFF mechanisms amounts to a fixed, small number of operations for each node at each iteration. In contrast, for the GVFF mechanism, the additional computational complexity increases with the size of the sensor network for each node at each iteration. The result in Table 3 clearly reveals that the proposed LTVFF and LCTVFF mechanisms greatly reduce the computational cost compared to the GVFF mechanism.
Table 2 Computational complexity of the DRLS algorithm
Table 3 Additional computational complexity of the analyzed VFF mechanisms
Performance analysis
In this section, we carry out analyses of the mean and mean-square error performance of the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms. In particular, we derive mathematical expressions to describe the steady-state behavior in terms of the MSD and EMSE. In addition, we perform a transient analysis for a specialized case in the last part of this section. To proceed with the analysis, we first introduce several assumptions, which have been widely adopted in the analysis of RLS-type algorithms and have been verified by simulations [7, 27].
Assumption 1
To facilitate analytical studies, we assume that all the input vectors u k,i ,∀k,i are independent of each other and the correlation matrix of the input vector u k,i is invariant over time, which is defined as
$$ E[\mathbf{u}_{k,i}\mathbf{u}_{k,i}^{*}]=\mathbf{R}_{u_{k}}. $$
Assumption 2
For the proposed LTVFF and LCTVFF mechanisms, when i becomes large, we assume that there exists a positive number N i such that, for i>N i , the forgetting factor λ k (i) varies slowly around its mean value, that is
$$ E\{\lambda_{k}(N_{i})\}{\simeq}E\{\lambda_{k}(N_{i}+1)\}{\simeq}\ldots{\simeq}E\{\lambda_{k}(i)\}{\simeq}E\{\lambda_{k}(\infty)\}. $$
For the RLS-type algorithms with the fixed forgetting factor, we have the ergodicity assumption for P k,i [6, 7, 27], that is, the time average of a sequence of random variables can be replaced by its expected value so as to make the analysis for the performance of these algorithms tractable. Similarly, for the RLS-type algorithms with variable forgetting factors, we still have the ergodicity assumption:
Assumption 3
We assume that there exists a number N i >0 such that, for i>N i , we can replace $\mathbf {P}_{k,i}^{-1}$ by its expected value $E\left [\mathbf {P}_{k,i}^{-1}\right ]$, which can be represented as
$$ {\lim}_{i\to\infty}\mathbf{P}_{k,i}^{-1}\approx{\lim}_{i\to\infty}E\left[\mathbf{P}_{k,i}^{-1}\right] $$
where $\lim \limits _{i\to \infty }E\left [\mathbf {P}_{k,i}^{-1}\right ]$ can be calculated through
$$ {\lim}_{i\to\infty}E\left[\mathbf{P}_{k,i}^{-1}\right]=\frac{1}{1-E[\lambda_{k}(\infty)]}\sum_{l=1}^{N}\frac{C_{l,k}}{ \sigma_{v,l}^{2}}\mathbf{R}_{u_{l}}. $$
The derivation is presented in Appendix B. Since $\lim \limits _{i\to \infty }E\left [\mathbf {P}_{k,i}^{-1}\right ]$ is independent of i, we can denote it by $\mathbf {P}_{k}^{-1}$. Moreover, based on the ergodicity assumption, it is also common in the analysis of the performance of the RLS-type algorithms to replace the random matrix P k,i by P k when i is large enough.
Mean performance
In light of (1) and (13), the following relation holds [7] after the incremental update of ψ l,i is complete:
$$ \mathbf{P}^{-1}_{l,i}\boldsymbol{\psi}_{l,i}=\lambda_{l}(i)\mathbf{P}^{-1}_{l,i-1}\mathbf{w}_{l,i-1}+\sum\limits_{m=1}^{N}\frac{C_{m,l}}{ \sigma^{2}_{v,m} }\mathbf{u}_{m,i}d_{m,i}. $$
By substituting (1) and (18) into (40), we obtain the following equation:
$$ \mathbf{P}^{-1}_{l,i}(\boldsymbol{\psi}_{l,i}-\mathbf{w}^{o})=\lambda_{l}(i)\mathbf{P}^{-1}_{l,i-1}\widetilde{\mathbf{w}}_{l,i-1} +\sum_{m=1}^{N}\frac{C_{m,l}}{\sigma_{v,m}^{2}}\mathbf{u}_{m,i}v_{m,i}. $$
Next, let us define the intermediate weight error vector $\widetilde {\boldsymbol {\psi }}_{k,i}$ for node k as
$$ \widetilde{\boldsymbol{\psi}}_{k,i}=\boldsymbol{\psi}_{k,i}-\mathbf{w}^{o}. $$
Substituting (42) into (41) yields
$$ \widetilde{\boldsymbol{\psi}}_{l,i}=\lambda_{l}(i)\mathbf{P}_{l,i}\mathbf{P}^{-1}_{l,i-1}\widetilde{\mathbf{w}}_{l,i-1} +\mathbf{P}_{l,i}\sum\limits_{m=1}^{N}\frac{C_{m,l}}{\sigma_{v,m}^{2}}\mathbf{u}_{m,i}v_{m,i}. $$
Then, we construct $\widetilde {\mathbf {w}}_{k,i}$ from $\widetilde {\boldsymbol {\psi }}_{l,i}$ based on (13) and obtain
$$ \widetilde{\mathbf{w}}_{k,i}=\!\sum\limits_{l=1}^{N}A_{l,k}\!\left[\!\lambda_{l}(i)\mathbf{P}_{l,i}\mathbf{P}^{-1}_{l,i-1}\widetilde{\mathbf{w}}_{l,i-1}+\mathbf{P}_{l,i}\!\sum\limits_{m=1}^{N}\!\frac{C_{m,l}}{\sigma_{v,m}^{2}}\mathbf{u}_{m,i}v_{m,i}\!\right]\!. $$
Note that P k,i can be replaced by P k when i is large enough (cf. Assumption 3), and thus, it is reasonable to assume that P k,i converges as i→∞. Therefore, we can approximately have
$$ \mathbf{P}_{k,i}{\approx}E[\mathbf{P}_{k,i}]. $$
Besides, in view of Assumption 3 and the Eq. (39), we can obtain
$$ {\begin{aligned} \mathbf{P}_{k,i}=\left(\mathbf{P}^{-1}_{k,i}\right)^{-1}{\approx}\left\{E\left[\mathbf{P}_{k,i}^{-1}\right]\right\}^{-1}{\approx}\left(1-E[\lambda_{k}(\infty)]\right)\left(\sum\limits_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{R}_{u_{l}}\right)^{-1}. \end{aligned}} $$
By combining (45) and (46), we have the following approximation:
$$ \mathbf{P}_{k,i}\mathbf{P}^{-1}_{k,i-1}{\approx}E^{-1}\left[\mathbf{P}_{k,i}^{-1}\right]E\left[\mathbf{P}_{k,i}^{-1}\right]=\mathbf{I}_{M}. $$
Then, substituting (47) into (44) yields the following result when i is sufficiently large:
$$ \widetilde{\mathbf{w}}_{k,i}=\sum\limits_{l=1}^{N}A_{l,k}\left[\lambda_{l}(i)\widetilde{\mathbf{w}}_{l,i-1}+\mathbf{P}_{l,i}\sum\limits_{m=1}^{N}\frac{C_{m,l}}{\sigma_{v,m}^{2}}\mathbf{u}_{m,i}v_{m,i}\right]. $$
Following this, two global matrices $\widetilde {\mathbf {W}}_{i}$ and $\boldsymbol {\mathcal {P}}$ are built in the following form in order to collect the weight error vectors $\widetilde {\mathbf {w}}_{k,i},k=1,\cdots,N$ and matrices P k ,k=1,⋯,N, respectively:
$$\begin{array}{@{}rcl@{}} &&\widetilde{\mathbf{W}}_{i}=\text{row}\{\widetilde{\mathbf{w}}_{1,i},\widetilde{\mathbf{w}}_{2,i},\ldots,\widetilde{\mathbf{w}}_{N,i}\}\\ &&\boldsymbol{\mathcal{P}}=\text{row}\{\mathbf{P}_{1},\mathbf{P}_{2},\ldots,\mathbf{P}_{N}\}. \end{array} $$
In addition, we introduce a global diagonal matrix Λ i to collect the forgetting factors of all nodes at time instant i, which is given by
$$ \boldsymbol{\Lambda}_{i}=\text{diag}\{\lambda_{1}(i),\lambda_{2}(i),\ldots,\lambda_{N}(i)\}. $$
Using the vectors in (2), the term $\sum \limits _{m=1}^{N}\frac {C_{m,l}}{\sigma _{vm}^{2}}\mathbf {u}_{m,i}v_{m,i}$ in (44) can be rewritten as $\mathbf {H}_{i}^{*}\mathbf {C}_{l}\mathbf {R}_{v}^{-1}\mathbf {v}_{i}$. By collecting the vectors $\mathbf {H}_{i}^{*}\mathbf {C}_{l}\mathbf {R}_{v}^{-1}\mathbf {v}_{i}$, l=1,2,…,N, into a block diagonal matrix G i , we obtain
$$ \mathbf{G}_{i}=\text{diag}\left\{\mathbf{H}_{i}^{*}\mathbf{C}_{1}\mathbf{R}_{v}^{-1}\mathbf{v}_{i},\mathbf{H}_{i}^{*} \mathbf{C}_{2}\mathbf{R}_{v}^{-1}\mathbf{v}_{i},\ldots,\mathbf{H}_{i}^{*}\mathbf{C}_{N}\mathbf{R}_{v}^{-1}\mathbf{v}_{i}\right\}. $$
To separate the noise vectors, we can rewrite (51) as
$${} \mathbf{G}_{i}=\text{diag}\left\{\mathbf{H}_{i}^{*}\mathbf{C}_{1}\mathbf{R}_{v}^{-1},\mathbf{H}_{i}^{*}\mathbf{C}_{2}\mathbf{R}_{v}^{-1},\ldots,\mathbf{H}_{i}^{*}\mathbf{C}_{N}\mathbf{R}_{v}^{-1}\right\}(\mathbf{I}_{N}\otimes\mathbf{v}_{i}). $$
where ⊗ denotes the Kronecker product of two matrices [28]. Subsequently, we express (48) in a more compact way, which leads to the following updating equation for the global matrix $\widetilde {\mathbf {W}}_{i}$:
$$ \widetilde{\mathbf{W}}_{i}=\widetilde{\mathbf{W}}_{i-1}\boldsymbol{\Lambda}_{i}\mathbf{A}+\boldsymbol{\mathcal{P}}\mathbf{G}_{i}\mathbf{A}. $$
In order to simplify the notation Λ i A, we denote it as F(i), and thus, we can rewrite (53) as
$$ \widetilde{\mathbf{W}}_{i}=\widetilde{\mathbf{W}}_{i-1}\mathbf{F}(i)+\boldsymbol{\mathcal{P}}\mathbf{G}_{i}\mathbf{A}. $$
In order to facilitate the analysis, we assume that $\widetilde {\mathbf {W}}_{i-1}$ and F(i) can be considered uncorrelated, that is, $E[\widetilde {\mathbf {W}}_{i-1}\mathbf {F}(i)]\approx E[\widetilde {\mathbf {W}}_{i-1}]E[\mathbf {F}(i)]$. As we will see in the simulation results, this assumption works well, since the theoretical analysis matches the numerical results closely. By taking expectations on both sides of (54), we obtain the following result:
$$ E[\widetilde{\mathbf{W}}_{i}]=E[\widetilde{\mathbf{W}}_{i-1}]E[\mathbf{F}(i)]+\boldsymbol{\mathcal{P}}E[\mathbf{G}_{i}]\mathbf{A}. $$
Recalling (52), since the noise samples v i have zero mean, E[G i ] equals zero; therefore, we can obtain
$$ E[\widetilde{\mathbf{W}}_{i}]=E[\widetilde{\mathbf{W}}_{i-1}]E[\mathbf{F}(i)]. $$
Following this, we assume that there exists a number N i >0 and iterate (56) backward from time instant i to N i ; as a result, we obtain
$$ E[\widetilde{\mathbf{W}}_{i}]=E[\widetilde{\mathbf{W}}_{N_{i}}]\prod_{j=N_{i}+1}^{i}E[\mathbf{F}(j)]. $$
Recalling that F(i)=Λ i A, with Λ i a diagonal matrix, we have the following relation for each element in F(i):
$$ \mathbf{F}_{m,n}(i)=\lambda_{m}(i)\mathbf{A}_{m,n}, \forall m,n\in \{1, 2, \cdots, N\} $$
where the subscript m,n represents the (m,n)-th element in the matrix. Given that the elements of A are all between 0 and 1 and each element in the diagonal matrix Λ i does not exceed the upper bound λ +, which is smaller than unity, we have
$$ {{\begin{aligned} \mathbf{F}_{m,n}(i)=\lambda_{m}(i)\mathbf{A}_{m,n}<\lambda_{+}\mathbf{A}_{m,n}<1, \forall m,n\in \{1, 2, \cdots, N\} \end{aligned}}} $$
Each element in the product $\prod \limits _{j=N_{i}+1}^{i}E[\mathbf {F}(j)]$ can be viewed as a polynomial in the entries F 1,1(j),F 1,2(j),⋯, of order i−N i . When i→∞, each element of this product approaches zero, since these entries are all smaller than unity. Assuming that all the elements of $E[\widetilde {\mathbf {W}}_{N_{i}}]$ are bounded in absolute value by some finite constant, all the elements of $E[\widetilde {\mathbf {W}}_{i}]$ therefore converge to zero when i→∞. As a result, we can conclude that the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms converge asymptotically in the mean when i→∞.
Mean-square error and deviation performances
In this part, we perform analyses for the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms based on mean square performance and derive expressions for the steady-state MSD and EMSE, which are defined as
$$ \begin{aligned} {MSD}_{k}^{ss}&={\lim}_{i\to\infty}E\left[\|\widetilde{\mathbf{w}}_{k,i}\|^{2}\right]\\ {EMSE}_{k}^{ss}&={\lim}_{i\to\infty}E\left[|\mathbf{u}_{k,i}^{*}\widetilde{\mathbf{w}}_{k,i-1}|^{2}\right]. \end{aligned} $$
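In the simulations of Section 5, these two quantities are approximated by Monte Carlo averages over independent runs. A possible estimator is sketched below (an illustration only; w_hat and u_last are hypothetical arrays holding, for each run, the final estimates and the inputs of the last iteration, and near the steady state the one-iteration lag between the error and the input in the EMSE definition is neglected).

```python
import numpy as np

def empirical_msd_emse(w_hat, u_last, w_o):
    """Monte Carlo estimates of the per-node steady-state MSD and EMSE.

    w_hat: (R, N, M) estimates w_{k,i} over R runs;  u_last: (R, N, M) inputs u_{k,i};
    w_o: (M,) true weight vector.  Returns two length-N arrays."""
    err = w_hat - w_o                                              # weight error vectors
    msd = np.mean(np.sum(np.abs(err) ** 2, axis=-1), axis=0)       # E ||w_tilde_k||^2
    emse = np.mean(np.abs(np.sum(np.conj(u_last) * err, axis=-1)) ** 2, axis=0)
    return msd, emse
```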
We start with (54) and then operate recursively from time instant N i , which yields
$$ \widetilde{\mathbf{W}}_{i}=\widetilde{\mathbf{W}}_{N_{i}}\prod\limits_{j=N_{i}+1}^{i}\mathbf{F}(j)+\boldsymbol{\mathcal{P}}\sum\limits_{t=N_{i}+1}^{i}\mathbf{G}_{t}\mathbf{A}\prod\limits_{j=t+1}^{i}\mathbf{F}(j). $$
Then, the kth column of $\widetilde {\mathbf {W}}_{i}$ is given by
$$ \widetilde{\mathbf{w}}_{k,i}=\widetilde{\mathbf{W}}_{N_{i}}\prod\limits_{j=N_{i}+1}^{i}\mathbf{F}(j)\mathbf{e}_{k} +\boldsymbol{\mathcal{P}}\sum\limits_{t=N_{i}+1}^{i}\mathbf{G}_{t}\mathbf{A}\prod_{j=t+1}^{i}\mathbf{F}(j)\mathbf{e}_{k} $$
where e k is a column vector of length N with unity as the kth element and zeros elsewhere. Next, we evaluate the squared Euclidean norm of the weight error vector $\widetilde {\mathbf {w}}_{k,i}$, that is, $\|\widetilde {\mathbf {w}}_{k,i}\|^{2}$, or equivalently, $Tr\{\widetilde {\mathbf {w}}_{k,i}\widetilde {\mathbf {w}}_{k,i}^{*}\}$.
Since the elements of F(i) are all bounded by zero and one, $\prod \limits _{j=N_{i}+1}^{i}\mathbf {F}(j)$ vanishes when i→∞, which leads to the expectation of the first term becoming zero. Moreover, seeing that the cross terms incorporate the zero-mean vectors v i , their expectations also become zero. As a result, we have the following expression:
$$ E\left[\|\widetilde{\mathbf{w}}_{k,i}\|^{2}\right]=E\left[\left\|\boldsymbol{\mathcal{P}}\sum_{t=N_{i}+1}^{i}\mathbf{G}_{t}\mathbf{A}\prod_{j=t+1}^{i}\mathbf{F}(j)\mathbf{e}_{k}\right\|^{2}\right] $$
which can be rewritten as
$${} \begin{aligned} E\left[\|\widetilde{\mathbf{w}}_{k,i}\|^{2}\right]&=E\left[Tr\left\{\boldsymbol{\mathcal{P}}\sum\limits_{t=N_{i}+1}^{i}\mathbf{G}_{t}\mathbf{A}\prod\limits_{j=t+1}^{i}\mathbf{F}(j)\mathbf{e}_{k}\mathbf{e}_{k}^{*}\sum\limits_{l=N_{i}+1}^{i}\right.\right.\\ &\quad\left.\left.\times\left(\prod\limits_{j=l+1}^{i}\mathbf{F}(j)\right)^{*}\mathbf{A}^{*}\mathbf{G}_ {l}^{*}\boldsymbol{\mathcal{P}}^{*}\right\}\right]. \end{aligned} $$
For simplicity, we have the following notation:
$$ \mathbf{J}^{t,l}(i)=\mathbf{A}\prod_{j=t+1}^{i}\mathbf{F}(j)\mathbf{e}_{k}\mathbf{e}_{k}^{*}\left(\prod\limits_{j=l+1}^{i}\mathbf{F}(j)\right)^{*}\mathbf{A}^{*} $$
where J t,l(i) is a matrix of size N×N. By combining (52), (64), and (65), let us first compute $(\mathbf {I}_{N}{\otimes }\mathbf {v}_{t})\mathbf {J}^{t,l}(i)(\mathbf {I}_{N}{\otimes }\mathbf {v}_{l}^{*})$. According to the properties of the Kronecker product, we have the following equality:
$$ (\mathbf{A}\otimes\mathbf{B})(\mathbf{C}\otimes\mathbf{D})=\mathbf{AC}\otimes\mathbf{BD}. $$
Therefore, $(\mathbf {I}_{N}{\otimes }v_{t})\mathbf {J}^{t,l}(i)\left (\mathbf {I}_{N}{\otimes }v_{l}^{*}\right)$ can be expressed as
$${} \begin{aligned} (\mathbf{I}_{N}{\otimes}\mathbf{v}_{t})\mathbf{J}^{t,l}(i)\left(\mathbf{I}_{N}{\otimes}\mathbf{v}_{l}^{*}\right)&=\left(\mathbf{I}_{N}{\otimes}\mathbf{v}_{t}\right)\left(\mathbf{J}^{t,l}(i){\otimes}1\right)\left(\mathbf{I}_{N}{\otimes}\mathbf{v}_{l}^{*}\right)\\ &=\mathbf{J}^{t,l}(i){\otimes}\left(\mathbf{v}_{t}\mathbf{v}_{l}^{*}\right). \end{aligned} $$
Note that, in light of (65), the matrix J t(i) and the covariance matrix of noise R v can be considered uncorrelated. Then, by taking expectations on both sides of (67), we have the following results:
$$ \begin{aligned} E\left[(\mathbf{I}_{N}{\otimes}\mathbf{v}_{t})\mathbf{J}^{t,l}(i)\left(\mathbf{I}_{N}{\otimes}\mathbf{v}_{l}^{*}\right)\right] &=E\left[\mathbf{J}^{t,l}(i)\right]{\otimes}E\left[(\mathbf{v}_{t}\mathbf{v}_{l}^{*})\right]\\ &=\left\{\begin{array}{ll} E[\mathbf{J}^{t}(i)]{\otimes}\mathbf{R}_{v}&t=l,\\ 0 &t\neq l, \end{array}\right. \end{aligned} $$
where we drop the index and denote J t,t(i) as J t(i). By substituting (68) into (64), we can obtain
$$ E\left[\|\widetilde{\mathbf{w}}_{k,i}\|^{2}\right]=E\left[Tr\left\{\boldsymbol{\mathcal{P}}\sum_{t=N_{i}+1}^{i}\mathbf{G}_{t}\mathbf{J}^{t}(i)\mathbf{G}_{t}^{*}\mathbf{\mathcal{P}}^{*}\right\}\right]. $$
Note that P k , k=1,2,…,N is Hermitian; therefore, we have the following expression:
$$ E\left[\|\widetilde{\mathbf{w}}_{k,i}\|^{2}\right]=E\left[Tr\left\{\boldsymbol{\mathcal{P}}\sum_{t=N_{i}+1}^{i}\mathbf{G}_{t} \mathbf{J}^{t}(i)\mathbf{G}_{t}^{*}\boldsymbol{\mathcal{P}}^{T}\right\}\right] $$
where $\mathbf {G}_{t}\mathbf {J}^{t}(i)\mathbf {G}_{t}^{*}$ can be represented as a block matrix K t(i), which can be decomposed into N×N blocks of size M×M each. The (m,l)th block is given by
$$ \mathbf{K}_{m,l}^{t}(i)=\mathbf{H}_{t}^{*}\mathbf{C}_{m}\mathbf{R}_{v}^{-1}\mathbf{J}_{m,l}^{t}(i)\mathbf{v}_{t}\mathbf{v}^{*}_{t}C_{l}\mathbf{R}_{v}^{-1}\mathbf{H}_{t}. $$
By taking expectations on both sides of (71), we obtain the following equality:
$$ E[\mathbf{K}_{m,l}^{t}(i)]=E\left[\mathbf{J}_{m,l}^{t}(i)\right]\sum_{n=1}^{N}\frac{C_{n,m}C_{n,l}}{\sigma_{v,n}^{2}}\mathbf{R}_{u_{n}}. $$
Substituting (65) and (72) into (70) yields the following result:
$$ {{\begin{aligned} E[\|\widetilde{\mathbf{w}}_{k,i}\|^{2}]&=Tr\left\{\sum_{t=N_{i}+1}^{i}\sum_{l=1}^{N}\sum_{m=1}^{N}\mathbf{P}_{m}E\left[\mathbf{K}_{m,l}^{t}(i)\right]\mathbf{P}_{l}\right\}\\ &=\sum_{t=N_{i}+1}^{i}\sum_{l=1}^{N}\sum_{m=1}^{N}\sum_{n=1}^{N}Tr\{\mathbf{P}_{m}\mathbf{R}_{u_{n}}\mathbf{P}_{l}\} \frac{C_{n,m}C_{n,l}}{\sigma_{v,n}^{2}}\\ &\quad\times\left\{\mathbf{A}\prod_{j=t+1}^{i}E[\mathbf{F}(j)]\right\}_{m,k}\left\{\mathbf{A}\prod_{j=t+1}^{i}E[\mathbf{F}(j)]\right\}_{l,k}. \end{aligned}}} $$
In view of Assumption 2, we can verify that there exists a number N i >0, when i>N i , for which F(i) satisfies
$$ E[\mathbf{F}(N_{i})]{\simeq}E[\mathbf{F}(N_{i}+1)]{\simeq}\ldots{\simeq}E[\mathbf{F}(i)]{\simeq}E[\mathbf{F}(\infty)]. $$
Therefore, we replace E[F(i)] with E[F(∞)] when i>N i and then reformulate (73) as
$$ {{\begin{aligned} E[\|\widetilde{\mathbf{w}}_{k,i}\|^{2}]&\approx\sum_{t=N_{i}+1}^{i}\sum_{l=1}^{N}\sum_{m=1}^{N}\sum_{n=1}^{N}Tr\{\mathbf{P}_{m}\mathbf{R}_{u_{n}}\mathbf{P}_{l}\}\\ &\quad\times\frac{C_{n,m}C_{n,l}}{\sigma_{v,n}^{2}}\left\{\mathbf{A}E^{i-t}[\mathbf{F}(\infty)]\right\}_{m,k}\left\{\mathbf{A}E^{i-t}[\mathbf{F}(\infty)]\right\}_{l,k}. \end{aligned}}} $$
Subsequently, we replace i−t with t in (75) and then let i go to infinity. As a result, we can obtain the expression of the steady-state MSD for node k:
$${} \begin{aligned} {MSD}_{k}^{ss}&={\lim}_{i\to\infty}E[\|\widetilde{\mathbf{w}}_{k,i}\|^{2}]\\ &={\lim}_{i\to\infty}\sum_{t=0}^{i}\sum_{l=1}^{N}\sum_{m=1}^{N}\sum_{n=1}^{N}Tr\{\mathbf{P}_{m}\mathbf{R}_{u_{n}}\mathbf{P}_{l}\}\\ &\quad\times\frac{C_{n,m}C_{n,l}}{\sigma_{v,n}^{2}}\left\{\mathbf{A}E^{t}[\mathbf{F}(\infty)]\right\}_{m,k}\left\{\mathbf{A}E^{t}[\mathbf{F}(\infty)]\right\}_{l,k}. \end{aligned} $$
Next, we calculate the steady-state EMSE for node k. According to (60), the EMSE for node k can be expressed as follows
$$ \begin{aligned} E\left[|\boldsymbol{u}_{k,i}^{*}\widetilde{\mathbf{w}}_{k,i-1}|^{2}\right]&=E\left[Tr\left\{\widetilde{\mathbf{w}}_{k,i-1}^{*}\mathbf{u}_{k,i}\mathbf{u}_{k,i}^{*}\widetilde{\mathbf{w}}_{k,i-1}\right\}\right]\\ &=E[Tr\{\mathbf{u}_{k,i}\mathbf{u}_{k,i}^{*}\widetilde{\mathbf{w}}_{k,i-1}\widetilde{\mathbf{w}}_{k,i-1}^{*}\}]\\ &=Tr\left\{\mathbf{R}_{u_{k}}E\left[\widetilde{\mathbf{w}}_{k,i-1}\widetilde{\mathbf{w}}_{k,i-1}^{*}\right]\right\}. \end{aligned} $$
Note that u k,i is independent of $\widetilde {\mathbf {w}}_{k,i-1}$. By substituting (76) into (77), we can obtain the expression of the steady-state EMSE for node k:
$${} \begin{aligned} {EMSE}_{k}^{ss}&={\lim}_{i\to\infty}\sum_{t=0}^{i}\sum_{l=1}^{N}\sum_{m=1}^{N}\sum_{n=1}^{N}Tr\left\{\mathbf{R}_{u_{k}}\mathbf{P}_{m}\mathbf{R}_{u_{n}}\mathbf{P}_{l}\right\}\\ &\quad\times\frac{C_{n,m}C_{n,l}}{\sigma_{v,n}^{2}}\left\{\mathbf{A}E^{t}[\mathbf{F}(\infty)]\right\}_{m,k}\left\{\mathbf{A}E^{t}[\mathbf{F}(\infty)]\right\}_{l,k}. \end{aligned} $$
Expressions (76) and (78) describe the steady-state behavior of the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms. Comparing (76) and (78) with the analytical results in [7], it is clear that the fixed matrix λ²A appearing in the expressions for the conventional DRLS algorithm has been replaced by the matrix F(i), which is weighted by the matrix Λ i . Since Λ i varies from one iteration to the next, F(i) varies for each iteration as well, which improves the tracking performance of the resulting algorithms. Furthermore, since all the elements of F(i) are bounded between zero and unity, the steady-state MSD and EMSE given by (76) and (78) both take small finite values. Thus, we can verify that the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms both converge in the mean-square sense.
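The series in (76) and (78) can be evaluated numerically by truncating the sum over t, since the products of the bounded matrices decay geometrically. The sketch below (an illustration only, for real-valued data, with the steady-state forgetting factors assumed to have been precomputed from (24) or (35)) evaluates the MSD expression (76); the EMSE (78) follows by replacing the trace term accordingly.

```python
import numpy as np

def steady_state_msd(C, A, Ru, sigma_v2, lam_inf, T=2000):
    """Numerically evaluate the steady-state MSD expression (76) for every node.

    C, A: N x N adaptation/combination matrices;  Ru: (N, M, M) covariances R_{u_n};
    sigma_v2: (N,) noise variances;  lam_inf: (N,) steady-state forgetting factors;
    T: truncation length of the sum over t."""
    N = len(sigma_v2)
    # P_k ~= (1 - E[lambda_k(inf)]) * (sum_l C_{l,k}/sigma_{v,l}^2 R_{u_l})^{-1}, cf. (39)
    P = np.stack([(1.0 - lam_inf[k])
                  * np.linalg.inv(sum(C[l, k] / sigma_v2[l] * Ru[l] for l in range(N)))
                  for k in range(N)])
    Fbar = np.diag(lam_inf) @ A                        # E[F(inf)] = diag{E[lambda_k(inf)]} A
    tr = np.einsum('mab,nbc,lca->nml', P, Ru, P)       # tr[n, m, l] = Tr{P_m R_{u_n} P_l}
    S = np.einsum('nml,nm,nl,n->ml', tr, C, C, 1.0 / sigma_v2)
    msd = np.zeros(N)
    B = A.copy()                                       # B = A E^t[F(inf)], starting at t = 0
    for _ in range(T):
        msd += np.einsum('mk,ml,lk->k', B, S, B)       # add the t-th term for every node k
        B = B @ Fbar
    return msd
```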
Transient analysis under spatial invariance assumption
In this subsection, we consider a specialized case in which the noise variances and input vector covariance matrices are the same for all sensor nodes, and we provide a transient analysis for this case. Particularly, we assume spatial invariance:
$$\begin{array}{*{20}l} \sigma_{v_{1}}^{2}&=\sigma_{v_{2}}^{2}=\cdots=\sigma_{v_{N}}^{2}=\sigma_{v}^{2} \end{array} $$
$$\begin{array}{*{20}l} \mathbf{R}_{u_{1}}&=\mathbf{R}_{u_{2}}=\cdots=\mathbf{R}_{u_{N}}=\mathbf{R}_{u}. \end{array} $$
In addition, to facilitate analysis, we assume that all elements of the adaptation matrix C are equal to $\frac {1}{N}$.
We carry out the transient analysis by focusing on the learning curve, which is obtained by depicting the squared a priori estimation error, i.e., $E\left [|\mathbf {u}_{k,i}^{*}(\mathbf {w}_{k,i}-\mathbf {w}^{o})|^{2}\right ]$ [29, 30], as a function of the iteration number i. We first rewrite this squared a priori estimation error in a more compact form:
$$ \begin{aligned} &E\left[|\mathbf{u}_{k,i}^{*}(\mathbf{w}_{k,i}-\mathbf{w}^{o})|^{2}\right]\\ =&E\left[|\mathbf{u}_{k,i}^{*}\tilde{\mathbf{w}}_{k,i}|^{2}\right]\\ =&E\left[\tilde{\mathbf{w}}_{k,i}^{*}\mathbf{u}_{k,i}\mathbf{u}_{k,i}^{*}\tilde{\mathbf{w}}_{k,i}\right]\\ =&E\left[\tilde{\mathbf{w}}_{k,i}^{*}\mathbf{R}_{u}\tilde{\mathbf{w}}_{k,i}\right]\\ =&E\left[\|\tilde{\mathbf{w}}_{k,i}\|^{2}_{\mathbf{R}_{u}}\right] \end{aligned} $$
where we use the representation $\|\mathbf {t}\|_{\mathbf {A}}^{2}=\mathbf {t}^{*}\mathbf {A}\mathbf {t}$ in the last equality.
Then, we use the spatial invariance assumption to simplify (39) and (48). Particularly, by taking advantage of the assumption that the input vector covariance matrix is the same over all sensor nodes, we can derive the following expression from (39), when i is large enough:
$$ {{\begin{aligned} \mathbf{P}_{k,i}\approx E\left[\mathbf{P}_{k,i}^{-1}\right]^{-1}\approx (1-E[\lambda_{k}(i)])\sigma_{v}^{2}\mathbf{R}_{u}^{-1}\approx (1-\lambda_{k}(i))\sigma_{v}^{2}\mathbf{R}_{u}^{-1}. \end{aligned}}} $$
By substituting (82) into (48), we can arrive at
$${} \begin{aligned} \tilde{\mathbf{w}}_{k,i}&=\sum\limits_{l=1}^{N}A_{l,k}\left[\lambda_{l}(i)\tilde{\mathbf{w}}_{l,i-1}+(1-\lambda_{l}(i))\mathbf{R}_{u}^{-1}\sum\limits_{m=1}^{N}\mathbf{u}_{m,i}v_{m,i}\right]\\ &=\sum\limits_{l=1}^{N}A_{l,k}\lambda_{l}(i)\tilde{\mathbf{w}}_{l,i-1}+\sum\limits_{l=1}^{N}A_{l,k}(1-\lambda_{l}(i))\mathbf{R}_{u}^{-1}\mathbf{H}_{i}^{*}\mathbf{v}_{i}\\ &=\sum\limits_{l=1}^{N}A_{l,k}\lambda_{l}(i)\tilde{\mathbf{w}}_{l,i-1}+\sum\limits_{l=1}^{N}A_{l,k}(1-\lambda_{l}(i))\mathbf{s}_{i}\\ &=\sum\limits_{l=1}^{N}A_{l,k}\lambda_{l}(i)\tilde{\mathbf{w}}_{l,i-1}+\left(1-\sum\limits_{l=1}^{N}A_{l,k}\lambda_{l}(i)\right)\mathbf{s}_{i} \end{aligned} $$
where we use the column vector s i to denote the quantity $\mathbf {R}_{u}^{-1}\mathbf {H}_{i}^{*}\mathbf {v}_{i}$ in the third equality, and we use the property of the combination matrix, i.e., $\sum _{l=1}^{N}A_{l,k}=1, \forall k\in \{1, 2, \cdots, N\}$, to arrive at the fourth equality. Let us define
$$ \begin{aligned} \tilde{\boldsymbol{\mathcal{W}}}_{i}&=\text{col}\{\tilde{\mathbf{w}}_{1,i}, \tilde{\mathbf{w}}_{2,i},\cdots, \tilde{\mathbf{w}}_{N,i}\}\\ \boldsymbol{\lambda}_{i}&=\text{col}\{\lambda_{1}(i), \lambda_{2}(i), \cdots, \lambda_{N}(i)\}. \end{aligned} $$
Note that Λ i =diag{λ i }. Then, we can write the recursive equation of type (83) for all sensor nodes in a more compact form as follows:
$$ \begin{aligned} \tilde{\boldsymbol{\mathcal{W}}}_{i}&=\mathbf{A}^{T}\boldsymbol{\Lambda}_{i}\tilde{\boldsymbol{\mathcal{W}}}_{i-1}+\left(\mathbf{\mathbb{I}}-\mathbf{A}^{T}\boldsymbol{\lambda}_{i}\right)\otimes\mathbf{s}_{i}\\ &=\mathbf{A}^{T}\boldsymbol{\Lambda}_{i}\tilde{\boldsymbol{\mathcal{W}}}_{i-1}+\mathbf{f}(i)\otimes\mathbf{s}_{i} \end{aligned} $$
where $\mathbf {f}(i)=\mathbf {\mathbb {I}}-\mathbf {A}^{T}\boldsymbol {\lambda }_{i}$ in the second equality. Then, using the last equality in (85), we have the following global squared a priori estimation error for all sensor nodes:
$$ {{\begin{aligned} E[\|\tilde{\boldsymbol{\mathcal{W}}}_{i}\|^{2}_{\mathbf{R}_{u}}]&=E[\tilde{\boldsymbol{\mathcal{W}}}_{i}^{*}\mathbf{R}_{u}\tilde{\boldsymbol{\mathcal{W}}}_{i}]\\ &=E\left[\tilde{\boldsymbol{\mathcal{W}}}_{i-1}^{*}\boldsymbol{\Lambda}_{i}\mathbf{A}\mathbf{R}_{u}\mathbf{A}^{T}\boldsymbol{\Lambda}_{i}\tilde{\boldsymbol{\mathcal{W}}}_{i-1}+\left(\mathbf{f}(i)^{T}\otimes\mathbf{s}_{i}^{*}\right)\mathbf{R}_{u}\left(\mathbf{f}(i)\otimes\mathbf{s}_{i}\right)\right]\\ &=E\left[\|\tilde{\boldsymbol{\mathcal{W}}}_{i-1}\|^{2}_{\boldsymbol{\Sigma}}\right]+E\left[\left(\mathbf{f}(i)^{T}\otimes\mathbf{s}_{i}^{*}\right)(\mathbf{R}_{u}\otimes1)(\mathbf{f}(i)\otimes\mathbf{s}_{i})\right]\\ &=E\left[\|\tilde{\boldsymbol{\mathcal{W}}}_{i-1}\|^{2}_{\boldsymbol{\Sigma}}\right]+E\left[\left(\left(\mathbf{f}(i)^{T}\mathbf{R}_{u}\right)\otimes\mathbf{s}_{i}^{*}\right)(\mathbf{f}(i)\otimes\mathbf{s}_{i})\right]\\ &=E\left[\|\tilde{\boldsymbol{\mathcal{W}}}_{i-1}\|^{2}_{\boldsymbol{\Sigma}}\right]+E\left[\left(\mathbf{f}(i)^{T}\mathbf{R}_{u}\mathbf{f}(i)\right)\otimes(\mathbf{s}_{i}^{*}\mathbf{s}_{i})\right]\\ &=E\left[\|\tilde{\boldsymbol{\mathcal{W}}}_{i-1}\|^{2}_{\boldsymbol{\Sigma}}\right]+E\left[\mathbf{f}(i)^{T}\mathbf{R}_{u}\mathbf{f}(i)\right]E[\mathbf{s}_{i}^{*}\mathbf{s}_{i}] \end{aligned}}} $$
where Σ=Λ i A R u A T Λ i , and we use the property of the Kronecker product, i.e., (66), in the fourth and fifth equalities, and the fact that both quantities of f(i)T R u f(i) and $\mathbf {s}_{i}^{*}\mathbf {s}_{i}$ are scalar and they are independent to arrive at the last equality. Particularly, $E[\mathbf {s}_{i}^{*}\mathbf {s}_{i}]$ can be rewritten as
$$ \begin{aligned} &E[\mathbf{s}_{i}^{*}\mathbf{s}_{i}]\\ =&E\left[\text{Tr}\left(\mathbf{v}_{i}^{*}\mathbf{H}_{i}\left(\mathbf{R}_{u}^{-1}\right)^{*}\mathbf{R}_{u}^{-1}\mathbf{H}_{i}^{*}\mathbf{v}_{i}\right)\right]\\ =&E\left[\text{Tr}\left(\mathbf{H}_{i}\left(\mathbf{R}_{u}^{-1}\right)^{*}\mathbf{R}_{u}^{-1}\mathbf{H}_{i}^{*}\mathbf{v}_{i}\mathbf{v}_{i}^{*}\right)\right]\\ =&\sigma_{v}^{2}E\left[\text{Tr}\left(\left(\mathbf{R}_{u}^{-1}\right)^{*}\mathbf{R}_{u}^{-1}\mathbf{H}_{i}^{*}\mathbf{H}_{i}\right)\right]\\ =&N\sigma_{v}^{2}\text{Tr}\left(\left(\mathbf{R}_{u}^{-1}\right)^{*}\mathbf{R}_{u}^{-1}\mathbf{R}_{u}\right)\\ =&N\sigma_{v}^{2}\text{Tr}\left(\mathbf{R}_{u}^{-1}\right) \end{aligned} $$
where we use the spatial invariance assumption, i.e., $\mathbf {v}_{i}\mathbf {v}_{i}^{*}=\text {diag}\{\sigma _{v}^{2}, \sigma _{v}^{2}, \cdots, \sigma _{v}^{2}\}=\sigma _{v}^{2}\mathbf {I}_{N}$ and $\mathbf {H}_{i}^{*}\mathbf {H}_{i}=\sum _{m=1}^{N} \mathbf {u}_{m,i}\mathbf {u}_{m,i}^{*}=\sum _{m=1}^{N}\mathbf {R}_{u_{m}}=N\mathbf {R}_{u}$, to arrive at the third and fourth equalities, respectively, and the symmetry of the input vector covariance matrix in the last equality. By plugging (87) back into (86), we have
$$ \begin{aligned} &E\left[\left\|\tilde{\boldsymbol{\mathcal{W}}}_{i}\right\|^{2}_{\mathbf{R}_{u}}\right]\\ =&E\left[\left\|\tilde{\boldsymbol{\mathcal{W}}}_{i-1}\right\|^{2}_{\boldsymbol{\Sigma}}\right]+N\sigma_{v}^{2}E\left[\text{Tr}\left(\mathbf{f}(i)^{T}\mathbf{R}_{u}\mathbf{f}(i)\mathbf{R}_{u}^{-1}\right)\right]\\ =&E\left[\left\|\tilde{\boldsymbol{\mathcal{W}}}_{i-1}\right\|^{2}_{E[\boldsymbol{\Sigma}]}\right]+N\sigma_{v}^{2}E\left[\text{Tr}\left(\mathbf{f}(i)^{T}\mathbf{R}_{u}\mathbf{f}(i)\mathbf{R}_{u}^{-1}\right)\right]. \end{aligned} $$
For convenience, we use the notation $\|\mathbf {t}\|^{2}_{\text {vec}\{\mathbf {A}\}}$to denote the weighted norm $\|\mathbf {t}\|^{2}_{\mathbf {A}}$, where the symbol vec{A} represents the vectorization of a matrix. Particularly, by using the equality vec{A B C}=(C T⊗A)vec{B}, we can vectorize the matrix Σ=Λ i A R u A T Λ i as follows
$$ \begin{aligned} &\text{vec}\{\boldsymbol{\Sigma}\}\\ =&\text{vec}\left\{\boldsymbol{\Lambda}_{i}\mathbf{A}\mathbf{R}_{u}\mathbf{A}^{T}\boldsymbol{\Lambda}_{i}\right\}\\ =&\left(\left(\mathbf{A}^{T}\boldsymbol{\Lambda}_{i}\right)\otimes(\boldsymbol{\Lambda}_{i}\mathbf{A})\right)\text{vec}\{\mathbf{R}_{u}\}\\ =&\left((\boldsymbol{\Lambda}_{i}\mathbf{A})\otimes(\boldsymbol{\Lambda}_{i}\mathbf{A})\right)\text{vec}\{\mathbf{R}_{u}\}\\ =&(\mathbf{F}(i)\otimes\mathbf{F}(i))\text{vec}\{\mathbf{R}_{u}\}\\ =&\boldsymbol{\mathcal{F}}_{i}\boldsymbol{\gamma} \end{aligned} $$
where F(i)=Λ i A, $\boldsymbol {\mathcal {F}}_{i}=\mathbf {F}(i)\otimes \mathbf {F}(i)$ and γ=vec{R u }. Ultimately, we have
$$ \begin{aligned} &E\left[\left\|\tilde{\boldsymbol{\mathcal{W}}}_{i}\right\|^{2}_{\boldsymbol{\gamma}}\right]\\ =&E\left[\left\|\tilde{\boldsymbol{\mathcal{W}}}_{i-1}\right\|^{2}_{E[\boldsymbol{\mathcal{F}}_{i}]\boldsymbol{\gamma}}\right]+N\sigma_{v}^{2}E\left[\text{Tr}\left(\mathbf{f}(i)^{T}\mathbf{R}_{u}\mathbf{f}(i)\mathbf{R}_{u}^{-1}\right)\right]. \end{aligned} $$
This recursive equation is stable and convergent if $E[\boldsymbol {\mathcal {F}}_{i}]$ is stable [31].
Particularly, the quantity $\boldsymbol {\mathcal {F}}_{i}$ has a spectral radius smaller than unity and thus is stable. This can be proved as follows. If we replace each element in Λ i by its upper bound λ +, then $\boldsymbol {\mathcal {F}}_{i}$ is replaced by $\lambda _{+}^{2}\mathbf {A}\otimes \mathbf {A}$. Note that A satisfies $\mathbf {\mathbb {I}}^{T}\mathbf {A}=\mathbf {\mathbb {I}}$, from which we can readily verify that each column of A⊗A sums to unity. Hence, the quantity $\lambda _{+}^{2}\mathbf {A}\otimes \mathbf {A}$ has spectral radius $\lambda _{+}^{2}$, which is smaller than one. Given that each element in Λ i does not exceed λ +, the spectral radius of $\boldsymbol {\mathcal {F}}_{i}$ is at most $\lambda _{+}^{2}$ and is therefore smaller than unity. Consequently, for this specialized case, it can be verified theoretically that the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms are convergent in terms of the learning curve, and the convergence rate is related to the varying forgetting factors.
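This stability argument is easy to check numerically; the short sketch below (assuming NumPy and a given combination matrix A satisfying the column-sum condition) computes the spectral radius of the bounding matrix.

```python
import numpy as np

def bounding_spectral_radius(A, lam_plus):
    """Spectral radius of lam_plus^2 * (A kron A), which upper-bounds that of F_i."""
    radius = np.max(np.abs(np.linalg.eigvals(lam_plus ** 2 * np.kron(A, A))))
    return radius          # approximately lam_plus**2, hence < 1 for lam_plus < 1
```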
Also note that, since the convergence behavior of the adaptive algorithms does not depend on the outside environment but rather on the network topology and the design of the algorithms, the analytical results obtained in this specialized case also carry over to the general case.
Simulation results
In this section, we present the simulation results for the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms when applied in two applications, that is, distributed parameter estimation and distributed spectrum estimation over sensor networks.
Distributed parameter estimation
In this part, we evaluate the performance of the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms when applied to distributed parameter estimation in comparison with the DRLS algorithm with the fixed forgetting factor and the GVFF-DRLS algorithm. In addition, we also verify the effectiveness of the proposed analytical expressions in (76) and (78) based on simulations.
We assume that there are 10 nodes in the sensor network and the length of the unknown weight vector is M=5. The input vectors u k,i , k=1,2,…,N are assumed to be Gaussian with zero means and variances $\left \{\sigma _{u,k}^{2}\right \}$ chosen randomly between 1 and 2 for each node. The Gaussian noise samples v k,i , k=1,2,…,N have variances $\left \{\sigma _{v,k}^{2}\right \}$ that are chosen randomly between 0.1 and 0.2 for each node. We generate the measurements {d k,i } according to (1). Simulation results are averaged over 100 experiments. The adaptation matrix C is governed by the Metropolis rule, while the choice of the diffusion matrix A follows the relative-degree rule [8]. The network topology used for the simulations is shown in Fig. 2.
Network topology for the simulation results in Section 5.1
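As a reference, the sketch below builds the two matrices from a 0/1 adjacency matrix (with self-loops) using the standard Metropolis and relative-degree definitions from the diffusion-adaptation literature; it is assumed here that these coincide with the rules used in [8].

```python
import numpy as np

def metropolis_and_relative_degree(adj):
    """Adaptation matrix C (Metropolis rule) and combination matrix A (relative-degree rule).

    adj: symmetric N x N 0/1 adjacency matrix with ones on the diagonal (self-loops)."""
    N = adj.shape[0]
    deg = adj.sum(axis=1)                            # node degrees n_k, self-loop included
    C = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            if l != k and adj[l, k]:
                C[l, k] = 1.0 / max(deg[l], deg[k])
        C[k, k] = 1.0 - C[:, k].sum()                # make every column (and row) sum to one
    A = np.zeros((N, N))
    for k in range(N):
        nbrs = np.flatnonzero(adj[:, k])
        A[nbrs, k] = deg[nbrs] / deg[nbrs].sum()     # weight of neighbor l proportional to n_l
    return C, A
```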
Effects of α, β, and γ
In this subsection, we study the effects of the parameters α, β, and γ on the performance of the proposed LTVFF and LCTVFF mechanisms. For the LTVFF mechanism, we investigate the steady-state MSD values versus α for β=0.0015,0.002,0.0025,0.005. The simulation results are shown in Fig. 3. For the LCTVFF mechanism, we first depict the steady-state MSD values versus α for β=0.0025,0.005,0.0075,0.01 in Fig. 4. Then, the effects of γ are illustrated in Fig. 5 by investigating the steady-state MSD values against γ for different pairs of α and β.
Steady-state MSD versus α for different values of β for the LTVFF mechanism
Steady-state MSD versus α for different values of β for the LCTVFF mechanism when γ=0.95
Steady-state MSD versus γ for different values of α and β for the LCTVFF mechanism
As can be seen from Figs. 3 and 4, for both the LTVFF and LCTVFF mechanisms the optimal choice of α and β is not unique. Specifically, different pairs of α and β can yield the same steady-state MSD value. For example, for the LTVFF mechanism, the pairs α=0.91,β=0.0015, α=0.89,β=0.002, and α=0.87,β=0.0025 provide almost the same steady-state MSD performance. For the LCTVFF mechanism, when γ=0.95, the pairs α=0.93,β=0.0025, α=0.90,β=0.005, α=0.85,β=0.0075, and α=0.80,β=0.01 yield almost the same steady-state MSD value. In addition, it can also be observed that as α and β decrease, the steady-state performance degrades. Furthermore, the result in Fig. 5 reveals that the steady-state MSD performance of the LCTVFF mechanism does not change much as γ varies for different pairs of α and β.
However, when choosing appropriate values for α, β, and γ, considering only the effects on the steady-state behavior is not enough, because the convergence speed is closely connected to the steady-state MSD values. That is to say, when the algorithm attains a faster convergence speed, the steady-state error floor rises; if the convergence speed is made slower, the steady-state performance improves. Figures 6 and 7 illustrate this trade-off between convergence speed and steady-state performance by depicting learning curves for different values of α and β for the LTVFF-DRLS and LCTVFF-DRLS algorithms, respectively. Therefore, we need to keep a good balance between the steady-state behavior and the convergence speed in order to ensure good performance. In practical applications, the optimized values of α, β, and γ should be obtained through experiments and then stored for future use.
Learning curves against different values of α and β for LTVFF-DRLS algorithm
Learning curves against different values of α and β for LCTVFF-DRLS algorithm when γ=0.95
MSD and EMSE performance
Figures 8 and 9 show the MSD curves against the number of iterations for the LTVFF-DRLS and LCTVFF-DRLS algorithms with different initial values of the forgetting factor, in comparison with the conventional DRLS algorithm and the GVFF-DRLS algorithm, respectively. The parameters of the considered algorithms are listed in Table 4. From the results, the LTVFF-DRLS algorithm converges to almost the same error floor in the two scenarios where the variable forgetting factor is initialized to be small or large. This is also true for the LCTVFF-DRLS algorithm, which has a lower error floor and faster convergence speed than the LTVFF-DRLS algorithm. However, as shown in Fig. 8, for the conventional DRLS algorithm both the convergence speed and the steady-state error floor change noticeably as the fixed forgetting factor increases. Specifically, when the fixed forgetting factor is small, the conventional DRLS algorithm converges faster but has a higher error floor than the LTVFF-DRLS algorithm; as the fixed forgetting factor increases, it converges to a lower error floor (though still not as low as that of the LTVFF-DRLS algorithm) but has a slower convergence speed. Besides, from Fig. 9, the MSD performance of the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms is less sensitive to the initial value of the forgetting factor than that of the GVFF-DRLS algorithm. Therefore, by employing the LTVFF and LCTVFF mechanisms, the proposed algorithms can track the optimal performance regardless of the initial value of the forgetting factor and greatly reduce the difficulty of choosing an appropriate forgetting factor.
MSD performance against iterations for the proposed algorithms with different initial values for the forgetting factor compared with the DRLS algorithm with the fixed forgetting factor
MSD performance against iterations for the proposed algorithms with different initial values for the forgetting factor compared with the GVFF-DRLS algorithm
Table 4 Optimized parameters for different algorithms considered in Figs. 8 and 9
In Figs. 10, 11, 12, and 13, we evaluate the MSD and EMSE behavior of the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms in comparison with the conventional DRLS algorithm with a fixed forgetting factor and the GVFF-DRLS algorithm. Specifically, the MSD and EMSE curves against the number of iterations are depicted in Figs. 10 and 11, respectively, while the steady-state MSD and EMSE values for each node are shown in Figs. 12 and 13, respectively. As can be seen from these results, both the LTVFF-DRLS and LCTVFF-DRLS algorithms converge after a number of iterations and achieve lower steady-state MSD and EMSE values than the DRLS algorithm with a fixed forgetting factor and the GVFF-DRLS algorithm. Besides, we also depict the analytical results calculated through expressions (76) and (78) in Figs. 10, 11, 12, and 13. From these results, it is clear that the analytical expressions corroborate the simulated results very well. The parameters of the considered algorithms are shown in Table 5; they were tuned through experiments following the investigation in Section 5.1.1.
MSD curve against number of iterations for the proposed and existing algorithms
EMSE curve against number of iterations for the proposed and existing algorithms
Steady-state MSD value versus node for the proposed and existing algorithms
Steady-state EMSE value versus node for the proposed and existing algorithms
Table 5 Optimized parameters for different algorithms considered in Figs. 10, 11, 12, and 13
In Fig. 14, we test the performance of the considered algorithms in a non-stationary environment. Specifically, in order to simulate the non-stationary environment, we consider a scenario where the topology of the sensor network varies over time: the total number of sensor nodes is set to 40 at the start, then half of the nodes are switched off after 100 iterations and another 10 nodes after 800 iterations. The MSD curves against the number of iterations for the proposed algorithms, in comparison with the conventional DRLS algorithm with a fixed forgetting factor and the GVFF-DRLS algorithm, are depicted in Fig. 14. As can be observed, switching off some sensor nodes degrades the performance of all the algorithms. However, the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms still outperform the two existing algorithms in MSD performance. Besides, they exhibit better tracking properties, showing smaller and smoother variations in the MSD curves at the times when sensor nodes are switched off.
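For readers who wish to reproduce this kind of non-stationary experiment, the sketch below shows one way to encode the node schedule described above and to rebuild combination weights for the surviving sub-network; the Metropolis rule is an assumed choice here, since the paper's combination matrix is defined elsewhere.

```python
import numpy as np

def active_node_count(i):
    """Node schedule of the non-stationary experiment: 40 nodes at the start,
    half switched off after iteration 100, another 10 after iteration 800."""
    if i < 100:
        return 40
    if i < 800:
        return 20
    return 10

def metropolis_weights(adj):
    """Combination weights for the currently active sub-network.

    adj is the 0/1 adjacency matrix (no self-loops) of the active nodes; the
    Metropolis rule used here is an assumption, not the paper's C matrix.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    C = np.zeros((n, n))
    for k in range(n):
        for l in range(n):
            if adj[k, l] and k != l:
                C[k, l] = 1.0 / (1.0 + max(deg[k], deg[l]))
        C[k, k] = 1.0 - C[k].sum()
    return C
```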
MSD performance against number of iterations for the proposed and existing algorithms in a nonstationary environment
Next, we elaborate on the numerical stability of the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms. By tuning the parameters α, β, γ and the truncation bounds of λ k (i) to different values, the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms exhibit different convergence speeds and steady-state performance, but their MSD and EMSE curves always decrease to the steady state. Indeed, after a number of experiments, we have not encountered a case where they diverge. Hence, the proposed LTVFF and LCTVFF mechanisms do not worsen the numerical stability of the DRLS algorithm. Besides, the simulation results in Fig. 14 show that, after switching off some nodes in the network, the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms still achieve superior performance to the conventional DRLS algorithm, and they exhibit smoother MSD curves at the times when nodes are switched off, especially the LCTVFF-DRLS algorithm. This further verifies that the proposed algorithms improve rather than impair the numerical stability of the DRLS algorithm by tracking the variations better.
Distributed spectrum estimation
In this part, we extend the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms to the application of distributed spectrum estimation, for which we focus on estimating the parameter w 0 that is relevant to the unknown spectrum of a transmitted signal s. First of all, we characterize the system model of distributed spectrum estimation.
We denote the power spectral density (PSD) of the unknown spectrum of the transmitted signal s by Φ s (f), which can be well approximated by the following basis expansion model [32] with N b sufficiently large:
$$ \Phi_{s}(f)=\sum\limits_{m=1}^{N_{b}}b_{m}(f){w}_{0m}=\mathbf{b}_{0}^{T}(f)\mathbf{w}_{0} $$
where $\phantom {\dot {i}\!}\mathbf {b}_{0}(f)=\text {col}\{b_{1}(f),b_{2}(f),\ldots,b_{N_{b}}(f)\}$ is the vector of basis functions [33, 34], $\phantom {\dot {i}\!}\mathbf {w}_{0}=\text {col}\{{w}_{01},{w}_{02},\ldots,{w}_{0N_{b}}\}$ is the expansion parameter to be estimated and represents the power that transmits the signal s over each basis, and N b is the number of basis functions.
We assume H k (f,i) to be the channel transfer function between the source emitting the signal s and the receiver node k at time instant i. Based on (91), the PSD of the signal received by node k can be represented as
$$ \begin{aligned} \Phi_{r}(f)&=|H_{k}(f,i)|^{2}\Phi_{s}(f)+\sigma_{r,k}^{2}\\ &=\sum\limits_{m=1}^{N_{b}}|H_{k}(f,i)|^{2}b_{m}(f){w}_{0m}+\sigma_{r,k}^{2}\\ &=\mathbf{b}^{T}_{k,i}(f)\mathbf{w}_{0}+\sigma_{r,k}^{2} \end{aligned} $$
where $\mathbf {b}_{k,i}(f)=\left [|H_{k}(f,i)|^{2}b_{m}(f)\right ]_{m=1}^{N_{b}}\in \mathbb {R}^{N_{b}}$ and $\sigma _{r,k}^{2}$ denotes the receiver noise power at node k.
At each time instant i, by observing the received PSD described in (92) over N c frequency samples f j =f min :(f max −f min )/N c :f max , for j=1,2,…,N c , each node k takes measurements according to the following model:
$$ d_{k,i}^{j}=\mathbf{b}^{T}_{k,i}(f_{j})\mathbf{w}_{0}+\sigma_{r,k}^{2}+v_{k,i}^{j} $$
where $v_{k,i}^{j}$ denotes the sampling noise at frequency f j with zero mean and variance $\sigma _{n,j}^{2}$. The receiver noise power $\sigma _{r,k}^{2}$ can be estimated with high accuracy preliminarily and then subtracted from (93) [35, 36]. Therefore, we can obtain
$$ d_{k,i}^{j}=\mathbf{b}^{T}_{k,i}(f_{j})\mathbf{w}_{0}+v_{k,i}^{j}. $$
By collecting the measurements over N c frequencies into a column vector d k,i , we obtain the following system model of distributed spectrum estimation:
$$ \mathbf{d}_{k,i}=\mathbf{B}_{k,i}\mathbf{w}_{0}+\mathbf{v}_{k,i}. $$
where $\mathbf {d}_{k,i}=\left [d_{k,i}^{f_{j}}\right ]_{j=1}^{N_{c}}\in \mathbb {R}^{N_{c}}$, $\mathbf {B}_{k,i}=\left [\mathbf {b}^{T}_{k,i}(f_{j})\right ]_{j=1}^{N_{c}}\in \mathbb {R}^{N_{c}{\times }N_{b}}$, with N c >N b , and $\mathbf {v}_{k,i}=\left [v_{k,i}^{j}\right ]_{j=1}^{N_{c}}\in \mathbb {R}^{N_{c}}$.
Next, we carry out simulations to show the performance of the proposed algorithms when applied to distributed spectrum estimation. We consider a sensor network composed of N=20 nodes in order to estimate the unknown expansion parameter w 0. We use N b =50 non-overlapping rectangular basis functions with amplitude equal to one to approximate the PSD of the unknown spectrum. The nodes can scan N c =100 frequencies over the frequency axis, which is normalized between 0 and 1. In particular, we assume that only 8 entries of w 0 are non-zero, which implies that the unknown spectrum is transmitted over 8 basis functions. Thus, the sparsity ratio equals 8/50. We set the power transmitted over each basis function to 0.7 and the variance of the sampling noise to 0.004.
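A minimal sketch of this data model (Eqs. (91)-(95)) and simulation setup is given below; the flat random channel gain used for |H k (f,i)|² is an assumption introduced here for illustration, since the paper does not prescribe a specific channel model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Nb, Nc = 20, 50, 100                      # nodes, basis functions, scanned frequencies
freqs = np.linspace(0.0, 1.0, Nc)            # normalized frequency axis
edges = np.linspace(0.0, 1.0, Nb + 1)        # non-overlapping rectangular bases

# b_0(f): unit-amplitude rectangular basis functions evaluated on the grid (Nc x Nb)
B0 = np.array([(freqs >= edges[m]) & (freqs < edges[m + 1])
               for m in range(Nb)], dtype=float).T

w0 = np.zeros(Nb)                            # sparse expansion parameter w_0
w0[rng.choice(Nb, size=8, replace=False)] = 0.7
sigma_v2 = 0.004                             # sampling-noise variance

def measurements(i):
    """One snapshot d_{k,i} = B_{k,i} w_0 + v_{k,i} for every node k."""
    d = np.empty((N, Nc))
    for k in range(N):
        Hk2 = np.abs(rng.normal(1.0, 0.1)) ** 2   # assumed |H_k(f,i)|^2, flat over f
        d[k] = (Hk2 * B0) @ w0 + rng.normal(0.0, np.sqrt(sigma_v2), Nc)
    return d
```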
In Fig. 15, we compare the performance of the different algorithms for distributed spectrum estimation in terms of MSD. As depicted, the proposed LTVFF-DRLS and LCTVFF-DRLS algorithms still outperform the conventional DRLS algorithm in steady-state performance. By tuning its parameters, the GVFF-DRLS algorithm can achieve performance similar to the proposed algorithms in convergence speed and steady-state MSD values, but at a much higher computational cost. We have listed in Table 6 the simulation time of running each algorithm for 600 iterations and 1 Monte Carlo experiment. As can be observed, the simulation time of the GVFF-DRLS algorithm is almost 3 times that of the other algorithms. In Fig. 16, we take node 1 as an example to investigate the performance of the different algorithms in estimating the true PSD. From the results, although the algorithms obtain similar estimates of the true PSD, the proposed LCTVFF-DRLS algorithm clearly leads to smaller side lobes in the PSD curve than the other three.
MSD performance for different algorithms applied in distributed spectrum estimation
PSD performance for different algorithms applied in distributed spectrum estimation
Table 6 Simulation time of running different algorithms in Figs. 15 and 16
In this paper, we have proposed two low-complexity VFF-DRLS algorithms for distributed estimation, namely the LTVFF-DRLS and LCTVFF-DRLS algorithms. For the LTVFF-DRLS algorithm, the forgetting factor is adjusted by the time-averaged cost function, while for the LCTVFF-DRLS algorithm, the forgetting factor is adjusted by the time average of the correlation of two successive estimation errors. We have also investigated the computational complexity of the low-complexity VFF mechanisms as well as of the proposed VFF-DRLS algorithms. In addition, we have carried out the convergence and steady-state analysis of the proposed algorithms and derived analytical expressions for the steady-state MSD and EMSE. The simulation results have shown the superiority of the proposed algorithms over the conventional DRLS and GVFF-DRLS algorithms in applications of distributed parameter estimation and distributed spectrum estimation, and have verified the effectiveness of the proposed analytical expressions for the steady-state MSD and EMSE.
A: Proof of the uncorrelation of ρ k (i−1) and |e k (i)|2 in the steady state
By multiplying both sides of (26) by |e k (i)|2 and taking expectations, we have the following equation:
$$ {{\begin{aligned} E\left[\rho_{k}(i-1)|e_{k}(i)|^{2}\right]&={\gamma}E\left[\rho_{k}(i-2)|e_{k}(i)|^{2}\right]\!\\ &\quad+\!(1\,-\,\gamma)E\left[|e_{k}(i-2)||e_{k}(i-1)||e_{k}(i)|^{2}\right]. \end{aligned}}} $$
Recall that the values of e k (i−1) and e k (i) and the values of ρ k (i−1) and ρ k (i) can be considered approximately equivalent when i→∞; therefore, we have the following results:
$$ {{\begin{aligned} E\left[\rho_{k}(i-1)|e_{k}(i)|^{2}\right] \approx&{\gamma}E\left[\rho_{k}(i-2)|e_{k}(i)|^{2}\right]\\ &+(1-\gamma)E\left[|e_{k}(i-1)|^{2}\right]E\left[|e_{k}(i)|^{2}\right]\\ \approx&{\gamma}E\left[\rho_{k}(i-1)|e_{k}(i)|^{2}\right]\\ &+(1-\gamma)\varepsilon_{\text{min}}^{2}. \end{aligned}}} $$
By recalling (27), we can obtain
$$ E\left[\rho_{k}(i-1)|e_{k}(i)|^{2}\right]\approx\varepsilon_{\text{min}}^{2}\approx E\left[\rho_{k}(i-1)\right]E\left[|e_{k}(i)|^{2}\right] $$
That is, we can conclude that ρ k (i−1) and |e k (i)|2 are uncorrelated in the steady state.
B: Proof of (39)
According to (8), we can obtain the following equation:
$$ \mathbf{P}^{-1}_{k,i}=\prod\limits_{j=0}^{i}\lambda_{k}(j)\boldsymbol{\Pi}+\boldsymbol{\mathcal{H}}_{i}^{*}\boldsymbol{\mathcal{W}}_{k,i}\boldsymbol{\mathcal{H}}_{i} $$
where the matrices $\boldsymbol {\mathcal {H}}_{i}$ and $\boldsymbol {\mathcal {W}}_{k,i}$ can be expressed as follows
$$ \begin{aligned} \boldsymbol{\mathcal{H}}_{i}&= \left[\begin{array}{c} \mathbf{H}_{i}\\ \boldsymbol{\mathcal{H}}_{i-1} \end{array}\right]\\ \boldsymbol{\mathcal{W}}_{k,i}&= \left[\begin{array}{cc} \mathbf{R}^{-1}_{v}\mathbf{C}_{k}&{}\\ {}&\lambda_{k}(i)\boldsymbol{\mathcal{W}}_{k,i-1} \end{array}\right]. \end{aligned} $$
Therefore, (99) can be reformulated as
$$\begin{aligned} \mathbf{P}^{-1}_{k,i}&=\lambda_{k}(i)\left(\prod\limits_{j=0}^{i-1}\lambda_{k}(j)\boldsymbol{\Pi}+\boldsymbol{\mathcal{H}}_{i-1}^{*}\boldsymbol{\mathcal{W}}_{k,i-1}\boldsymbol{\mathcal{H}}_{i-1}\right)\\ & \quad +\mathbf{H}^{*}_{i}\mathbf{R}^{-1}_{v}\mathbf{C}_{k}\mathbf{H}_{i}. \end{aligned} $$
Substituting (2) into (101) yields the following recursion:
$$ \mathbf{P}^{-1}_{k,i}=\lambda_{k}(i)\mathbf{P}^{-1}_{k,i-1}+\sum_{m=1}^{N}\frac{C_{m,k}}{\sigma_{v,m}^{2}}\mathbf{u}_{m,i}\mathbf{u}_{m,i}^{*}. $$
By employing the iterative Eq. (102), we can write
$$ \begin{aligned} \mathbf{P}_{k,i}^{-1}&=\sum\limits_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{u}_{l,i}\mathbf{u}_{l,i}^{*}+{\lambda}_{k}(i)\sum_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{u}_{l,i-1}\mathbf{u}_{l,i-1}^{*}\\ &\quad+{\lambda}_{k}(i){\lambda}_{k}(i-1)\sum_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{u}_{l,i-2}\mathbf{u}_{l,i-2}^{*}+\ldots\\ &\quad+\prod\limits_{j=i}^{1}{\lambda}_{k}(j)\sum\limits_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{u}_{l,0}\mathbf{u}_{l,0}^{*}+\prod_{j=i}^{0}{\lambda}_{k}(j)\boldsymbol{\Pi}. \end{aligned} $$
Recalling Assumption 1, we know that the correlation matrix of the input vector is invariant over time, as a result, the correlation matrix $\mathbf {R}_{u_{l,i}}$ can be represented as $\mathbf {R}_{u_{l}}$. Therefore, by taking expectations on both sides of (103), we obtain the following result
$$ {{\begin{aligned} E\left[\mathbf{P}_{k,i}^{-1}\right]&=\sum_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{R}_{u_{l}}+E[\lambda_{k}(i)]\sum\limits_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{R}_{u_{l}}+E\left[\lambda_{k}(i)\right.\\ &\quad\left.\times\lambda_{k}(i-1)\right]\sum\limits_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{R}_{u_{l}}\,+\,\ldots\,+\,E\left[\!\prod\limits_{j=i}^{1}\lambda_{k}(j)\right]\!\sum_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{R}_{u_{l}}\\ &\quad+E\left[\prod\limits_{j=i}^{0}\lambda_{k}(j)\right]\boldsymbol{\Pi}. \end{aligned}}} $$
In view of Assumption 2, (104) can be approximately rewritten as
$$ {{\begin{aligned} E\left[\mathbf{P}_{k,i}^{-1}\right]&\approx\left(1+E[\lambda_{k}(i)]+\ldots+E[\lambda_{k}(i)]^{i-N_{i}+1}\right)\sum\limits_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{R}_{u_{l}}\\ &\quad+E[\lambda_{k}(i)\lambda_{k}(i-1)\ldots\lambda_{k}(N_{i})]E\left[{\vphantom{\prod\limits_{j=N_{i}-1}^{1}}}\lambda_{k}(N_{i}-1)\right.\\ &\left.\quad+\lambda_{k}(N_{i}-1)\lambda_{k}(N_{i}-2)+\ldots+\prod\limits_{j=N_{i}-1}^{1}\lambda_{k}(j)\right]\sum\limits_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{R}_{u_{l}}\\ &\quad+E\left[\prod\limits_{j=i}^{N_{i}}\lambda_{k}(j)\right]E\left[\prod\limits_{j=N_{i}-1}^{0}\lambda_{k}(j)\right]\boldsymbol{\Pi}\\ &\approx\left(1+E[\lambda_{k}(i)]+\ldots+E[\lambda_{k}(i)]^{i-N_{i}+1}\right)\sum\limits_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{R}_{u_{l}}\\ &\quad+E[\lambda_{k}(i)]^{i-N_{i}+1}(\xi+\chi) \end{aligned}}} $$
where ξ and χ can be expressed as follows, respectively:
$$ \begin{aligned} \xi&=E\left[{\vphantom{\prod_{j=N_{i}-1}^{1}}}\lambda_{k}(N_{i}-1)+\lambda_{k}(N_{i}-1)\lambda_{k}(N_{i}-2)\right.\\ &\quad\left.+\ldots+\prod_{j=N_{i}-1}^{1}\lambda_{k}(j)\right]\sum_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{R}_{u_{l}} \end{aligned} $$
$$ \chi=E\left[\prod_{j=N_{i}-1}^{0}\lambda_{k}(j)\right]\boldsymbol{\Pi}. $$
Since N i is a finite positive number, ξ and χ are two deterministic values. In addition, note that λ k (i) does not exceed its upper bound λ +, which is smaller than but close to unity. Therefore, we have 0<E[λ k (i)]<λ +<1, and $E[\lambda _{k} (i)]^{i-N_{i}+1}<\lambda _{+}^{i-N_{i}+1}$. When i is large enough, $\lambda _{+}^{i-N_{i}+1}$ approaches zero, and hence $\phantom {\dot {i}\!}E[\lambda _{k} (i)]^{i-N_{i}+1}$ also approaches zero. As a result, the last term in (105) vanishes. Then, we obtain the following result:
$$ {\lim}_{i\to\infty}E\left[\mathbf{P}_{k,i}^{-1}\right]=\frac{1}{1-E[\lambda_{k}(\infty)]}\sum_{l=1}^{N}\frac{C_{l,k}}{\sigma_{v,l}^{2}}\mathbf{R}_{u_{l}} $$
where the value of λ k (∞) is given in (24) for the LTVFF mechanism and in (35) for the LCTVFF mechanism, respectively. Hence, we obtain (39). Note that, by setting appropriate truncation bounds for λ k (i), the steady-state value of the forgetting factor is not influenced by the truncation. Hence, the result (39) always holds true despite the truncation applied to the VFF mechanisms. Indeed, the truncation mechanism only plays a role during convergence; once the algorithms reach the steady state, the value of the forgetting factor is no longer affected by the truncation mechanism.
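As a quick numerical sanity check of (39), one can iterate the recursion (102) with synthetic data and compare the result with the predicted steady-state limit. The sketch below does so under assumptions made only for illustration: uniform combination weights, white unit-variance inputs, and a forgetting factor fluctuating around its steady-state value.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 10                          # filter length, number of nodes
lam_inf, jitter = 0.98, 1e-3          # assumed steady-state forgetting factor and spread
C_k = np.full(N, 1.0 / N)             # assumed uniform combination weights for node k
sigma_v2 = np.full(N, 1e-2)           # noise variances sigma_{v,l}^2
Ru = np.eye(M)                        # white inputs, so R_{u_l} = I

P_inv = np.eye(M)                     # initialization Pi
for i in range(20000):
    lam = np.clip(lam_inf + jitter * rng.standard_normal(), 0.9, 0.9999)
    update = np.zeros((M, M))
    for l in range(N):
        u = rng.standard_normal((M, 1))               # regressor u_{l,i}
        update += (C_k[l] / sigma_v2[l]) * (u @ u.T)
    P_inv = lam * P_inv + update                      # recursion (102)

theory = sum(C_k[l] / sigma_v2[l] for l in range(N)) * Ru / (1.0 - lam_inf)  # RHS of (39)
print(np.diag(P_inv) / np.diag(theory))               # ratios should be close to 1
```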
P Corke, T Wark, R Jurdak, W Hu, P Valencia, D Moore, Environmental wireless sensor networks. Proc. IEEE.98(11), 1903–1917 (2010).
JG Ko, C Lu, MB Srivastava, JA Stankovic, A Terzis, M Welsh, Wireless sensor networks for healthcare. Proc. IEEE.98(11), 1947–1960 (2010).
R Abdolee, B Champagne, AH Sayed, in Proc. IEEE Statistical Signal Processing Workshop. Diffusion LMS for Source and Process Estimation in Sensor Networks (IEEE, Ann Arbor, 2012).
R Abdolee, B Champagne, AH Sayed, in Proc. IEEE ICASSP. Diffusion LMS Localization and Tracking Algorithm for Wireless Cellular Networks (IEEE, Vancouver, 2013).
R Abdolee, B Champagne, AH Sayed, Diffusion adaptation over multi-agent networks with wireless link impairments. IEEE Trans. Mob. Comput. 15(6), 1362–1376 (2016).
FS Cattiveli, CG Lopes, AH Sayed, in Proc. IEEE Workshop Signal Process. Advances Wireless Commun. (SPAWC). A Diffusion RLS Scheme for Distributed Estimation over Adaptive Networks (IEEE, Helsinki, 2007), pp. 1–5.
FS Cattiveli, CG Lopes, AH Sayed, Diffusion recursive least-squares for distributed estimation over adaptive networks. IEEE Trans. Signal Process.56(5), 1865–1877 (2008).
FS Cattiveli, AH Sayed, Diffusion LMS strategies for distributed estimation. IEEE Trans. Signal Process.58(3), 1035–1048 (2010).
CG Lopes, AH Sayed, Diffusion least-mean squares over distributed networks: formulation and performance analysis. IEEE Trans. Signal Process.56(7), 3122–3136 (2008).
Y Liu, C Li, Z Zhang, Diffusion sparse least-mean squares over networks. IEEE Trans. Signal Process.60(8), 4480–4485 (2012).
S Xu, RC de Lamare, HV Poor, in Proc. IEEE ICASSP. Adaptive link selection strategies for distributed estimation in diffusion wireless networks (IEEE, Vancouver, 2013).
S Xu, RC de Lamare, HV Poor, Distributed compressed estimation based on compressive sensing. IEEE Signal Process. Lett.22(9), 1311–1315 (2015).
MOB Saeed, A Zerguine, SA Zummo, A variable step-size strategy for distributed estimation over adaptive networks. EURASIP J. Adv Signal Process. 2013(1), 1–14 (2013).
H Lee, S Kim, J Lee, W Song, A variable step-size diffusion LMS algorithm for distributed estimation. IEEE Trans. Signal Process.63(7), 1808–1820 (2015).
Z Liu, Y Liu, C Li, Distributed sparse recursive least-squares over networks. IEEE Trans. Signal Process.62(6), 1386–1395 (2014).
S Huang, C Li, Distributed sparse total least-squares over networks. IEEE Trans. Signal Process.63(11), 2986–2998 (2015).
C Li, P Shen, Y Liu, Z Zhang, Diffusion information theoretic learning for distributed estimation over network. IEEE Trans. Signal Process.61(16), 4011–4024 (2013).
Z Liu, C Li, Y Liu, Distributed censored regression over networks. IEEE Trans. Signal Process.63(20), 5437–5449 (2015).
S Haykin, Adaptive Filter Theory, 4th edn (Prentice-Hall, Englewood Cliffs, 2000).
S Leung, CF So, Gradient-based variable forgetting factor RLS algorithm in time-varying environments. IEEE Trans. Signal Process. 53(8), 3141–3150 (2005).
CF So, SH Leung, Variable forgetting factor RLS algorithm based on dynamic equation of gradient of mean square error. Electron. Lett.37(3), 202–203 (2011).
S Song, J Lim, S Baek, K Sung, Gauss Newton variable forgetting factor recursive least squares for time varying parameter tracking. Electron. Lett.36(11), 988–990 (2000).
S Song, J Lim, SJ Baek, K Sung, Variable forgetting factor linear least squares algorithm for frequency selective fading channel estimation. IEEE Trans. Veh. Technol. 51(3), 613–616 (2002).
F Albu, in Proc. of ICARCV 2012. Improved Variable Forgetting Factor Recursive Least Square Algorithm (IEEE, Guangzhou, 2012).
Y Cai, RC de Lamare, M Zhao, J Zhong, Low-complexity variable forgetting factor mechanisms for blind adaptive constrained constant modulus algorithms. IEEE Trans. Signal Process.60(8), 3988–4002 (2012).
L Qiu, Y Cai, M Zhao, Low-complexity variable forgetting factor mechanisms for adaptive linearly constrained minimum variance beamforming algorithms. IET Signal Process. 9(2), 154–165 (2015).
R Arablouei, K Dogancay, S Werner, Y Huang, Adaptive distributed estimation based on recursive least-squares and partial diffusion. IET Signal Process. 62(14), 1198–1208 (2014).
DS Tracy, RP Singh, A new matrix product and its applications in partitioned matrix differentiation. Statistica Neerlandica. 51(3), 639–652 (2003).
H Shin, AH Sayed, in Proc. IEEE ICASSP. Transient Behavior of Affine Projection Algorithms (IEEE, Hong Kong, 2003).
JH Husoy, MSE Abadi, in IEEE MELECON 2004. A Common Framework for Transient Analysis of Adaptive Filters (IEEE, Dubrovnik, 2004).
AH Sayed, Adaptive filters (Wiley, 2011).
JA Bazerque, GB Giannakis, Distributed spectrum sensing for cognitive radio networks by exploiting sparsity. IEEE Trans. Signal Process.58(3), 1847–1862 (2010).
S Chen, DL Donoho, MA Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci Comput. 20:, 33–61 (1998).
Y Zakharov, T Tozer, J Adlard, Polynomial splines-approximation of Clarke's model. IEEE Trans. Signal Process.52(5), 1198–128 (2004).
PD Lorenzo, S Barbarossa, A Sayed, Distributed spectrum estimation for small cell networks based on sparse diffusion adaptation. IEEE Signal Process. Lett.20(123), 1261–1265 (2013).
ID Schizas, G Mateos, GB Giannakis, Distributed LMS for consensus-based in-network adaptive processing. IEEE Trans. Signal Process.57(6), 2365–2382 (2009).
This work was supported in part by the National Natural Science Foundation of China under Grant 61471319, the Scientific Research Project of Zhejiang Provincial Education Department under Grant Y201122655, and the Fundamental Research Funds for the Central Universities.
College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, People's Republic of China
Ling Zhang, Yunlong Cai & Chunguang Li
CETUC-PUC-Rio, Rio de Janeiro, Brazil
Rodrigo C. de Lamare
YC and RCdL proposed the original idea. LZ carried out the experiment. In addition, LZ and YC wrote the paper. CL and RCdL supervised and reviewed the manuscript. All authors read and approved the final manuscript.
Correspondence to Yunlong Cai.
Ethics declarations
The authors declare that they have no competing interests
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Diffusion recursive least-squares
Variable forgetting factor
Uspekhi Mat. Nauk, 2003, Volume 58, Issue 1(349), Pages 113–164 (Mi umn594)
This article is cited in 6 scientific papers (total in 6 papers)
Generalized continued fractions and ergodic theory
L. D. Pustyl'nikov
M. V. Keldysh Institute for Applied Mathematics, Russian Academy of Sciences
Abstract: In this paper a new theory of generalized continued fractions is constructed and applied to numbers, multidimensional vectors belonging to a real space, and infinite-dimensional vectors with integral coordinates. The theory is based on a concept generalizing the procedure for constructing the classical continued fractions and substantially using ergodic theory. One of the versions of the theory is related to differential equations. In the finite-dimensional case the constructions thus introduced are used to solve problems posed by Weyl in analysis and number theory concerning estimates of trigonometric sums and of the remainder in the distribution law for the fractional parts of the values of a polynomial, and also the problem of characterizing algebraic and transcendental numbers with the use of generalized continued fractions. Infinite-dimensional generalized continued fractions are applied to estimate sums of Legendre symbols and to obtain new results in the classical problem of the distribution of quadratic residues and non-residues modulo a prime. In the course of constructing these continued fractions, an investigation is carried out of the ergodic properties of a class of infinite-dimensional dynamical systems which are also of independent interest.
DOI: https://doi.org/10.4213/rm594
Russian Mathematical Surveys, 2003, 58:1, 109–159
UDC: 511.335+511.336+517.987.5
MSC: Primary 11J70, 28D05; Secondary 11A55, 11K50, 30B70, 11L15, 11J54, 37A05
Citation: L. D. Pustyl'nikov, "Generalized continued fractions and ergodic theory", Uspekhi Mat. Nauk, 58:1(349) (2003), 113–164; Russian Math. Surveys, 58:1 (2003), 109–159
This publication is cited in the following articles:
Georgiev G.H., Glazunov N.M., Krakovsky V.Y., Kumkov S.I., Noel A.G., Pustyl'nikov L.D., Wicks M.C., Himed B., "Selected problems", Computational Noncommutative Algebra and Applications, Nato Science Series, Series II: Mathematics, Physics and Chemistry, 136, 2004, 413–424
V. N. Berestovskii, Yu. G. Nikonorov, "Continued Fractions, the Group $\mathrm{GL}(2,\mathbb Z)$, and Pisot Numbers", Siberian Adv. Math., 17:4 (2007), 268–290
Schratzberger B., "On the singularization of the two-dimensional Jacobi-Perron algorithm", Experiment. Math., 16:4 (2007), 441–454
L. D. Pustylnikov, T. V. Lokot, "Diskretnye povoroty i obobschënnye tsepnye drobi", Preprinty IPM im. M. V. Keldysha, 2009, 044, 7 pp.
A. D. Bryuno, "Universalnoe obobschenie algoritma tsepnoi drobi", Chebyshevskii sb., 16:2 (2015), 35–65
V. G. Zhuravlev, "Simplex-module algorithm for expansion of algebraic numbers in multidimensional continued fractions", J. Math. Sci. (N. Y.), 225:6 (2017), 924–949
Lessons from April 6, 2009 L'Aquila earthquake to enhance microzoning studies in near-field urban areas
Giovanna Vessia1,
Mario Luigi Rainone1,
Angelo De Santis2 &
Giuliano D'Elia1
This study focuses on two weak points of the present procedure for carrying out microzoning studies in near-field areas: (1) the Ground Motion Prediction Equations (GMPEs) commonly used in the reference seismic hazard (RSH) assessment; (2) the ambient noise measurements used to define the natural frequency of the near-surface soils and the bedrock depth. The limitations of these approaches are discussed throughout the paper based on the worldwide and Italian experience gained after the 2009 L'Aquila earthquake and then confirmed by the more recent 2012 Emilia Romagna earthquake and the 2016–17 Central Italy seismic sequence. The critical issues faced are (A) the high variability of peak ground acceleration (PGA) values within the first 20–30 km from the source, which is not robustly interpolated by the GMPEs; (B) at the Level 1 microzoning activity, the soil seismic response under strong motion shaking is characterized by the horizontal to vertical spectral ratios (HVSR) of microtremors according to Nakamura's method, and this technique is commonly applied without being fully compliant with the rules fixed by European scientists in 2004, after a 3-year project named Site EffectS assessment using AMbient Excitations (SESAME). Hereinafter, some "best practices" from recent Italian and international experiences of seismic hazard estimation and microzonation studies are reported in order to put forward two proposals: (a) to formulate site-specific GMPEs in near-field areas in terms of PGA and (b) to record microtremor measurements following the SESAME advice accurately, in order to obtain robust and repeatable HVSR values and to limit their use to those geological contexts that are actually horizontally layered.
On April 6, 2009, at 1:32 a.m. (local time) an Mw 6.3 earthquake with shallow hypocentral depth (8.3 km) hit the city of L'Aquila and several municipalities within the Aterno Valley. This earthquake can be considered one of the most mournful seismic events in Italy since 1980, although its magnitude was only moderately high: 308 fatalities and 60,000 people displaced (data source http://www.protezionecivile.it) and estimated damages of 1894 M€ (data source http://www.ngdc.noaa.gov). These numbers show how dangerous an unexpected seismic event can be in urbanized territories where no preventive actions have been taken to reduce seismic risk. Hence, while some post-earthquake actions were being implemented, such as updating the reference Italian hazard map (by the Decree OPCM n. 3519 on 28 April 2006) and drawing microzoning maps to be used in the reconstruction stage, two major earthquakes struck the Emilia Romagna Region (in the northern part of Italy), causing 27 deaths and widespread damage. The first, with Mw 6.1, occurred on 20 May at 04:03 local time (02:03 UTC) and was located about 36 km north of the city of Bologna. Then, a second major earthquake (Mw 5.9) occurred on 29 May 2012, in the same area, causing widespread damage, particularly to buildings already weakened by the 20 May earthquake. Later on, the 2016–17 Central Italy Earthquake Sequence occurred, consisting of several moderately-high magnitude earthquakes between Mw 5.5 and Mw 6.5, from Aug 24, 2016, to Jan 18, 2017, each centered in a different but close location and with its own sequence of aftershocks, spanning several months. The seismic sequence killed about 300 people and injured another 396. Worldwide, several other strong earthquakes (e.g. the 1994 Northridge earthquake, the 2004 Parkfield earthquake, the 2010 Canterbury and 2011 and 2017 Christchurch earthquakes, the 2018 Sulawesi earthquake) produced devastating effects in the same time span. All these events show the need to carry out efficient microzoning studies to plan vulnerability reductions of urban structures and to promote the resilience of human communities in seismic territories. After the 2009 L'Aquila earthquake, microzoning studies were introduced in Italy by law and the Guidelines for Seismic Microzonation (ICMS 2008) were issued to accomplish these studies according to the most updated international scientific findings. These guidelines provide local administrators with an efficient tool for seismic microzoning studies to predict the subsoil behavior under seismic shaking. Unfortunately, the ICMS does not give special recommendations for urbanized near-field areas (NFAs). The microzoning activity concerning urbanized territories, as suggested by ICMS (2008), is made up of four steps:
Estimating the reference seismic hazard to provide the input peak horizontal ground acceleration (PGA) at each point on the national territory and the normalized response spectrum at each site.
Dynamic characterization of soil deposits overlaying the seismic bedrock at each urban center in order to draw the microzoning maps (MM) at three main knowledge levels.
The Level 1 MM consists of geo-lithological maps of the surficial deposits that show typical successions, and of the amplified frequency map drawn through measurements of microtremors elaborated by the horizontal to vertical spectral ratio (HVSR) Nakamura technique. In Nakamura's method (1989), the ratio of the horizontal to vertical noise components is calculated to derive the natural frequency of the surficial soft deposits and their thickness.
The Level 2/3 MM consists of drawing maps after performing the numerical analyses of (a) seismic local amplification factors in terms of acceleration FA and velocity FV; (b) liquefaction potential LP and (c) permanent displacements due to seismically induced slope instability.
After 10 years of applying the ICMS and the related methods, it is now time to start analyzing some of the weak points that have emerged. Starting from the large amount of data acquired worldwide on recent strong motion earthquakes, the experience developed in seismic hazard assessment, and the site-specific seismic response characterization carried out by the writing authors after the 2009 L'Aquila earthquake, the aforementioned weak points (related to some aspects of steps 1 and 3) are hereinafter discussed and some proposals are made to improve the efficiency of microzoning studies, especially in NFAs.
In this paper, after a brief background section on the procedures to accomplish the reference seismic hazard assessment (background section), the methods to calculate the Ground Motion Prediction Equations (GMPEs) and the HVSR (Nakamura's method) are briefly recalled in section 2. Then in section 3, the results from observations of recorded peak ground acceleration (PGA) and pseudo-spectral acceleration (PSA) values within the NFAs from the L'Aquila earthquake and other worldwide strong earthquakes have been discussed. In addition, some applications of Nakamura's procedure to characterize the natural frequency of the sites throughout the Aterno Valley have been discussed. Finally, in the conclusion section, some relevant points drawn from the discussed microzoning experiences have been highlighted to improve the efficiency of the microzonation studies in urban centers especially located in NFAs.
Background on seismic hazard assessment
Several theoretical and experimental studies performed worldwide in the last 50 years (see Kramer 1996 and the references therein) highlighted that the seismic shaking intensity depends on the magnitude of the earthquake generated at the source, on the travel paths of the seismic waves from the source to the buried or outcropping bedrock (the so-called reference seismic hazard, RSH), and on the additional phenomena of local amplification or de-amplification that take place where soil deposits overlie the rocky bedrock, named local seismic response, LSR (Paolucci 2002; Vessia and Venisti 2011; Vessia et al. 2011; Vessia and Russo 2013; Vessia et al. 2013, 2017; Boncio et al. 2018, among others). The RSH maps drawn worldwide on national territories do not take into account the results of LSR studies.
The pioneering work by Signanini et al. (1983) after the 1979 Friuli earthquake confirmed the observations on the ground: local seismic effects can enlarge the reference hazard at a site by 2–3 times in terms of MCS Intensity, but also in PGA values, owing to the local morphological and stratigraphic settings. Such LSR is particularly evident in near-field areas, from here on named NFAs. The NFAs have been defined, among others, by Boore (2014a) as the Fault Damage Zones. These areas cannot be uniquely identified, as they depend on the source rupture mechanisms, the surficial soil deposits and the multiple calculation methods used for measuring the distance between the seismic stations and the source. Especially in these areas, within about the first 30 km from the source, spot-like amplifications are the common amplification pattern captured through the Maximum Intensity Felt maps. These maps estimate the differentiated damage suffered by buildings and urban structures by means of a macroseismic intensity scale (e.g. the Mercalli-Cancani-Sieberg MCS scale, the European Macroseismic scale EMS, the Modified Mercalli Intensity MMI scale). One of the Maximum Intensity Felt maps of the Italian territory was drawn by Boschi et al. (1995). They took into account the seismic events that occurred from 1 to 1992 AD with a minimum felt intensity of VI MCS. This latter value is the one commonly used to highlight those areas where seismic events caused relevant damage to dwellings and infrastructures, ranging from severe damage to collapse. The Boschi et al. (1995) map is reported in Fig. 1: it shows IX–X MCS in the L'Aquila district based on historical earthquakes, in agreement with the seismic intensity map drawn by Galli and Camassi (2009) after the mainshock of the 2009 L'Aquila earthquake. This map is also in very good agreement with other recent earthquakes such as the 2012 Emilia Romagna and 2016–17 Central Italy earthquakes (Fig. 1). Moreover, Boschi et al. (1995), Midorikawa (2002) and, more recently, Paolini et al. (2012) proposed a direct use of the Maximum Felt Intensity maps to highlight those areas where the reference seismic hazard is largely increased by the locally amplified responses of soil deposits, that is, the NFAs.
Italian Maximum Intensity felt map (After Boschi et al. 1995, modified) with the areas of two recent Italian earthquake sequences considered in the present work
The most widely used method to perform the reference seismic hazard assessment was conceived in the late 1960s: Cornell's method (1968), implemented into a numerical code by McGuire (1978). Cornell (1968) introduced the Probabilistic Seismic Hazard Assessment (PSHA) method to carry out the reference seismic hazard at a site considering the contribution of the seismic source and the travel path of the seismic waves, while taking into account the uncertainties related to these estimations. This method consists of four steps (Kramer 1996), as illustrated in Fig. 2:
STEP 1. To identify the seismogenic sources, as single faults and faulting regions, in terms of the magnitudes generated over different time spans. The probabilistic approach to such a characterization requires knowing the rate of earthquakes of different magnitudes at the site and the spatial distribution of the fault segments or source volumes that can be activated.
STEP 2_1. To calculate the seismic rate in a region, the Gutenberg-Richter law is used, where the a and b coefficients are drawn by interpolating numerous data from a database of seismic events (instrumental and non-instrumental) available for a limited number of source areas and affected by lack-of-completeness distortions. The earthquake occurrence probability is estimated by means of a Poisson distribution over time, which is independent of the time elapsed since the last strong seismic event.
STEP 2_2. The spatial distribution of seismic events along a fault zone is difficult to establish, and the earthquake sources within a seismogenic area are therefore commonly assumed to be uniformly distributed.
STEP 3. To define the ground motion prediction equations (GMPEs) that enable predicting, for different magnitude ranges, the decrease with distance from the seismic source of the strong motion parameter assumed to be representative of the earthquake at a site.
STEP 4. To calculate the probability of exceedance of a target shaking value of the considered ground motion parameter, i.e. PGA, in a time span at a chosen site, due to the contribution of different seismogenic sources (a minimal numerical sketch of steps 2_1 to 4 is given below, after the Fig. 2 caption).
Cornell's probabilistic seismic hazard assessment (PSHA) explained in four steps. The blue house represents the site under study
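To make the preceding steps concrete, the following sketch assembles a Cornell-type hazard estimate for a single site; all coefficients (the Gutenberg-Richter values, the placeholder GMPE and its scatter, and the distance distribution) are illustrative assumptions and not values proposed in this paper.

```python
import numpy as np
from scipy.stats import norm

# Minimal Cornell-type hazard sketch (steps 2_1 to 4); numbers are illustrative only.
a_val, b_val = 4.0, 1.0                       # assumed Gutenberg-Richter coefficients
mag_bins = np.arange(4.5, 7.0, 0.1)
# annual rate of events falling in each magnitude bin (truncated G-R relation)
rates = 10 ** (a_val - b_val * mag_bins) - 10 ** (a_val - b_val * (mag_bins + 0.1))

dists = np.linspace(5.0, 50.0, 10)            # source-to-site distances (km)
p_dist = np.full(dists.size, 1.0 / dists.size)   # uniform source distribution (step 2_2)

def gmpe_median_pga(m, r):
    """Placeholder GMPE of the form ln(Y) = f(M, R); not a published relation."""
    return np.exp(-3.5 + 0.8 * m - 1.1 * np.log(r + 10.0))   # PGA in g

sigma_ln, target_pga = 0.6, 0.2               # assumed aleatory scatter; target PGA (g)

lam_exc = 0.0                                 # annual rate of exceedance (step 4)
for m, rate in zip(mag_bins, rates):
    for r, pr in zip(dists, p_dist):
        p_exc = norm.sf(np.log(target_pga),
                        loc=np.log(gmpe_median_pga(m, r)), scale=sigma_ln)
        lam_exc += rate * pr * p_exc

t_exp = 50.0                                  # exposure time (years)
print("P(PGA > 0.2 g in 50 yr) =", 1.0 - np.exp(-lam_exc * t_exp))   # Poisson model
```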
The previous 4 steps attempt to take into account several sources of uncertainties, such as the limited knowledge about the fault activity, the qualitative and documental estimations of the past earthquake effects at the sites, the lack of completeness of the seismic catalogs (meaning that the database of the seismic events is populated by several data related to both low and high magnitude ones) and the dependency among the recorded strong seismic events. In addition, the distortions in Gutenberg-Richter law, defined for different regions worldwide and the uncertainties related to the GMPEs can generate underestimations of the seismic shaking parameters (i.e. PGA) at specific sites where local seismic effects are relevant (Paolucci 2002; Vessia and Venisti 2011; Vessia and Russo 2013; Vessia et al. 2013; Yagoub 2015; Miyajima et al. 2019; Lanzo et al. 2019; among others). Logic trees are commonly used to take into account different formulations of GMPEs and several Gutenberg-Richter rates of magnitude occurrence (Kramer 1996).
Molina et al. (2001) pointed out that the strength of PSHA lies in the systematic parameterization of seismicity and in the way in which epistemic uncertainties are also carried through the computations into the final results.
Recently, alternative approaches to PSHA calculation have been suggested, such as through extensions of the zonation method (Frankel 1995; Frankel et al. 1996, 2000; Perkins 2000) where multiple source zones, parameter smoothing and quantification of geology and active faults have been successfully applied. The Frankel et al. (1996) method applied a Gaussian function to smooth a-values (within Gutenberg-Richter law) from each zone, thereby being a forerunner for the later zonation-free approaches of Woo (1996). This latter approach tries to amalgamate statistical consistency with the empirical knowledge of the earthquake catalogue (with its fractal character) into the computation of seismic hazard. Furthermore, Jackson and Kagan (1999) developed a non-parametric method with a continuous rate-density function (computed from earthquake catalogues) used in earthquake forecasting. Nonetheless, all these methods need a function to propagate the strong motion parameter values from the source to the site under study. To this end, the GMPEs are built by interpolating large databases of seismic records (related to specific geographical and tectonic environments worldwide), taking into account the contributions of the earthquake magnitude M and the distance to the seismic source R, according to the following form (Kramer 1996):
$$ \mathit{\ln}\;(Y)=f\left(M,R,{S}_i\right) $$
where Y is the ground motion parameter, commonly the peak ground horizontal acceleration PGA or the spectral acceleration SA at fixed period; Si is related to the source and site: they are the refinement terms due to the enlargement of the seismic databases and the possibility of drawing specific regional GMPEs.
The uncertainty of GMPEs in near field areas
Several examples of GMPEs are provided in the literature (Kramer 1996, among others), while a recent thorough review of several possible formulations of GMPEs used in the USA can be found at the Pacific Earthquake Engineering Research center PEER website http://peer.berkeley.edu/publications/peer_reports_complete.html. In Italy, the GMPEs are built based on the PGAs drawn from the Italian waveform database of strong motion events (Faccioli 2012; Bindi et al. 2011, 2014; Cauzzi et al. 2014). Commonly, the PGA values represent the strong motion parameter used in microzoning studies, but the related GMPEs are highly uncertain especially in the first tens of kilometers, as shown in Fig. 3a (Faccioli 2012) and Fig. 4a (Boore 2013). According to Boore (2013, 2014a), fault zone records show significant variability in amplitude and polarization of PGA and SA, especially at low periods (as shown in Fig. 4a), and magnitude saturation beyond Mw 6, although the causes of this variability are not easy to unravel. The main drawback of the GMPEs is their weak predictive capability at short distances from the seismic source, due to two main issues affecting NFAs worldwide:
only a few seismic stations installed;
highly scattered measures of strong motion parameters, especially in terms of accelerations (i.e. PGA, SA, etc.) (Fig. 2, step 3), that do not show any decreasing trend with distance.
a Ground Motion Prediction Equation (GMPE) of PGA versus the minimum source-to-site distance (After Faccioli 2012, modified): a band of uncertainty (grey) of GMPEs proposed by Faccioli et al. (2010). b 2008 Boore and Atkinson ground motion prediction equation (BA08 GMPE) of PGA based on data collected in the United States for magnitude 7.3 Mw, strike-slip fault type and VS30 equal to 255 m/s: solid line is the mean equation; dashed lines represent the confidence interval at one standard deviation (After Boore 2013, modified)
a Measures of PSA during the Parkfield earthquake 2004 (6 Mw) are reported near the active fault at the measure seismic stations (After Boore 2014b, modified); b Onna sector of Aterno River Valley: the records are for an aftershock of 3.2 Ml
The PGA spatial uncertainties have been observed after several recent strong earthquakes, such as the 2009 L'Aquila earthquake (Lanzo et al. 2010; Bergamaschi et al. 2011; Di Giulio et al. 2011), the 2011 Christchurch and 2010 Darfield earthquakes in New Zealand (Bradley and Cubrinovski 2011) (Fig. 5), the 2012 Emilia Romagna earthquake in Italy, the 1994 Northridge earthquake in the USA (Boore 2004) and the 2013 Fivizzano earthquake (Fig. 5). In the case of the 2009 L'Aquila earthquake (Fig. 4b), the areal distribution of PGAs around the source seems to be highly random, although the most dramatic increases occur where thick soft sediments are found over rigid bedrock or where bedrock basin shapes can be recognized. The latter trap the seismic waves and cause longer-duration accelerograms with increased amplitudes at short and moderate periods (lower than 2 s) (Rainone et al. 2013).
Horizontal and vertical PGA values recorded within the first 20 km epicenter distance during 1) 22 February 2011 Christchurch earthquake (6.3 Mw) (square) and 2) 21 June 2013 Fivizzano earthquake (5.1 Mw) (triangle)
The GMPEs based on PGAs tend to saturate for large earthquakes as the distance from the fault rupture to the observation point decreases. Boore (2014b) showed that the PGA parameter is a poor measure of the ground-motion intensity due to its non-unique correspondence to the frequency and acceleration content of the shaking waves (Fig. 4), especially at high frequencies. Furthermore, Bradley and Cubrinovski (2011) and Boore (2004) noted that the influence of local surface geology and geometrical conditions on the amplitude and shape of the response is much more relevant than forward directivity and the source-site path for spectral accelerations in near-field areas and for periods shorter than 3 s.
Thus, the GMPEs of PGAs within the NFAs are highly uncertain and difficult to predict even when fitted on single seismic events, as shown in Fig. 5 (Bradley and Cubrinovski 2011; Faccioli 2012; Boore 2014b). Faccioli (2012) evidenced a coefficient of variation of 100% about the mean trend of the PGA GMPE versus source-to-site distance (Fig. 3a). This GMPE was built based on the ITACA 2010 database (Luzi et al. 2008, http://itaca.mi.ingv.it/ItacaNet_30/#/home) that collects Italian strong motion waveforms. It is worth noticing that within the first tens of kilometers from the source, these data indicate that the GMPEs are not accurate in predicting the PGA values. To avoid the pitfalls of GMPEs based on peak parameters, integral ground motion parameters have been proposed in the literature (Kempton and Stewart 2006; Abrahamson and Silva 2008; Campbell and Bozorgnia 2012), such as the Arias intensity (AI) and the cumulative absolute velocity (CAV). In addition, Hollenback et al. (2015) and Stewart et al. (2015) formulated new generation GMPEs based on median ground-motion models as part of the Next Generation Attenuation for Central and Eastern North America project. They provided a set of adjustments to median GMPEs that are necessary to incorporate the source depth effects and rupture distances in the range from 0 to 1500 km. Moreover, the preceding authors suggest a distinct expression for the GMPE at short distances from the source (within 10 km), that is:
$$ \ln \mathrm{GMPE}={c}_1+{c}_2\ln{\left({R}_{RUP}+h\right)}^{1/2} $$
where RRUP is the rupture distance, that is the closest distance to the earthquake rupture plane (km); c1 and c2 are the regression coefficients and h is a "fictitious depth" used for ground-motion saturation at close distances.
Ambient noise measures elaborated by means of the Nakamura horizontal to vertical ratio HVSR
In 1989 Nakamura proposed using ambient noise measurements to derive a seismic property of a site, namely the frequency range of amplification, through the spectral ratio of the horizontal H and vertical V ambient vibration (microtremor) components of the recorded signals. If the site does not amplify, the ratio H/V is equal to 1. The Nakamura method has the advantage of solving the troublesome issue of finding a reference site. In fact, it considers the vertical component as the one that is not modified by the site where horizontal subsoil layers are set, and SH seismic waves represent the ambient noise signal content at a quiet site (far from urban or industrial areas). The latter is the reference signal, whereas the horizontal component is the only one that can be affected by the amplifying properties of the soils. As a matter of fact, Nakamura assumed that:
locally randomly distributed sources of microtremors generate non-directional signals almost entirely made up of horizontally polarized shear or Rayleigh waves;
the microtremors are confined to the surficial layers because the subsoil is made up of soft layered sediments overlying a rigid seismic bedrock.
A relevant implication of the Nakamura method is that the peaks of the ratio H/V are related to the presence of high acoustic impedance contrast at the depth h that can be derived by the following expression:
$$ h=\frac{V_S}{4\bullet {f}_0} $$
where VS is the mean value of the measured shear wave velocity profile and fo is the amplified frequency measured by means of the noise measurement.
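Eq. (3) can be applied directly once a mean shear wave velocity of the cover is available; a minimal helper is sketched below (the numerical example is purely illustrative).

```python
def bedrock_depth(vs_mean, f0):
    """Eq. (3): depth h (m) of the impedance contrast from the mean shear wave
    velocity of the soft cover (m/s) and the measured amplified frequency (Hz)."""
    return vs_mean / (4.0 * f0)

# e.g. bedrock_depth(400.0, 2.0) -> 50.0 m of soft cover above the seismic bedrock
```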
The fundamental rules to perform a correct ambient noise recording were provided by the European research project named SESAME (Bard and the WG 2004), which analyzed the possible drawbacks of the simple model introduced by the Nakamura method and issued guidelines that offer important recommendations regarding the places where the method can be successfully used in urbanized areas. The given recommendations are based on a rather strict set of criteria, essentially composed of (1) experimental conditions and (2) criteria for gaining reliable results (Table 1).
Table 1 A summary of recommendations from SESAME guidelines (Bard and the WG 2004)
As can be seen from Table 1, the recommendations focus on the weather conditions that influence the quality of the noise measurements, and they highlight the need to record at a distance from structures, trees and slopes, because all these items affect the records. Unfortunately, it is not possible to quantify the minimum distance from a structure at which its influence is negligible, as this distance depends on too many external factors (structure type, wind strength, soil type, etc.). Furthermore, regarding the measurement spacing, the SESAME guidelines suggest never using a single measurement point to derive the f0 value, but making at least three measurements. This latter advice is often disregarded.
An interesting alternative way to apply the HVSR method is to investigate the "heavy tails" of its statistical distribution that is like that of a critical system (Signanini and De Santis 2012). This is likely indicative of the strong non-linear properties of rocks forming the uppermost crust resulting in a power-law trend.
However, the Nakamura technique has been introduced in Level 1 microzoning studies in Italy (ICMS 2008) to draw the natural frequency map of urban sites, but limitations on suitable sites have not been prescribed. Although the Nakamura method seems simple, low cost and quick to perform, the suitable sites where it can be applied are few, especially in urban centers. This type of indirect investigation method is not applicable in complex geological contexts (e.g. buried inclined fold settings) and is not easily handled because of the difficulties in reproducing the same measurements under variable site conditions and noise sources, and with acquisitions performed by different operators even at the same site.
Rainone et al. (2018) undertook a thorough study on the effectiveness of HVSR in predicting amplification frequencies at two Italian urban areas characterized by different subsoil settings and noise distributions. Results from this study show that HVSR works well only where horizontally layered sediments overlie a rigid bedrock: these conditions are the most relevant and the most influential on the predictivity of the actually measured f0 values.
A new proposal for GMPEs in near field areas
Recently, a study to formulate ad hoc GMPEs for PGAs within the NFAs of the Central Italy Apennine sector has been performed. Only horizontal PGA values measured at seismic stations set on soil categories A and A* (Vs ≥ 800 m/s), generated by seismic events with Mw ranging between 5.0 and 6.5 and a normal fault mechanism, have been extracted from the ITACA waveform database (Luzi et al. 2008). The selected events cover the period from 1997 to 2017 and consider 25 seismic events from three strong seismic sequences generated by normal faults: the 1997 Umbria-Marche, the 2009 L'Aquila and the 2016–2017 Central Italy sequences. The studied source area is a quadrant whose edges' coordinates are (43.5°, 12.3°) and (42.2°, 13.6°) in decimal degrees. The PGA measures within the first 35 km from the seismic source have been taken into account, with the hypocentral distance used to define the source-to-site distance. Two GMPEs within the first 35 km have been drawn for two ranges of moment magnitude: 5 ≤ Mw1 < 5.5 and 5.5 ≤ Mw2 ≤ 6.5. These ranges represent the injurious magnitudes of the Italian moderately-high magnitude earthquakes (Fig. 6). As can be noted from Fig. 6, the PGA values do not appear to be highly different in the two magnitude ranges and they do not show a clear trend with the hypocentral distance. Thus, these two datasets have been kept distinct and a box and whisker plot has been used to calculate their medians, quartiles, and interquartile distances.
Datasets of PGA values recorded at NFAs in the Central Italy Apennine Sector from 1997 to 2017 divided into two ranges of Mw: a 5 ≤ Mw1 < 5.5; b 5.5 ≤ Mw2 ≤ 6.5
Figure 7a, b shows the two datasets with a different number of bins of hypocentral distance: this is due to the circumstance that for higher magnitudes (Fig. 7b) the seismic stations within the first 10 km are so few that they cannot constitute a distinct bin. Thus, through Fig. 7a, b the outliers are evidenced and eliminated. Then the 95th percentiles of the PGAs within each bin of the two datasets have been calculated. The mean value of these percentiles has been taken as the representative constant value over the first 30 km of hypocentral distance: 0.27 g for 5 ≤ Mw1 < 5.5 and 0.37 g for 5.5 ≤ Mw2 ≤ 6.5. It is worth noting that this proposal concerns the PGA values on rigid ground to be used in microzonation studies at sites located in the Central Italy Apennine sector within the first 30 km of hypocentral distance and in the two magnitude ranges of moderately high earthquakes. The disaggregation pairs at each site within NFAs can be determined according to the INGV study issued at the website esse1.mi.ingv.it. The reference PGA can then be selected using the abovementioned method and the two values found in this study. Further studies must be accomplished to characterize the PGA of different seismic regions within the Italian territory. The same proposed approach, or several other proposals, can be conceived and applied worldwide within the NFAs, taking into account that the surficial soil response there is not dependent on the distance from the source but is much more dependent on the non-linearity of the soil response combined with complex geological conditions that cannot be easily modelled.
Box and whisker plots of the two datasets of PGA values recorded at NFAs in the Central Italy Apennine Sector from 1997 to 2017 divided into two ranges of Mw: a 5 ≤ Mw1 < 5.5; b 5.5 ≤ Mw2 ≤ 6.5. The bins of hypocentral distance are: a) 1(0–9.95 km), 2(10–19.95 km), 3(20–29.95 km); b) 1(10–19.95 km), 2(20–29.95 km). The void circles are the outliers identified by the box and whisker method
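A minimal sketch of the binning, IQR-based outlier removal and percentile averaging described above is given below; the function names and the example bin edges are introduced here for illustration, and the actual ITACA records are not reproduced.

```python
import numpy as np

def nfa_reference_pga(pga, hypo_dist, bin_edges):
    """Constant near-field PGA (g) for one magnitude range.

    pga and hypo_dist are the records selected for that range; each distance
    bin is cleaned with the 1.5*IQR whisker rule, its 95th percentile is taken,
    and the per-bin percentiles are averaged.
    """
    p95 = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        vals = pga[(hypo_dist >= lo) & (hypo_dist < hi)]
        if vals.size == 0:
            continue
        q1, q3 = np.percentile(vals, [25, 75])
        iqr = q3 - q1
        kept = vals[(vals >= q1 - 1.5 * iqr) & (vals <= q3 + 1.5 * iqr)]
        p95.append(np.percentile(kept, 95))
    return float(np.mean(p95))

# e.g. nfa_reference_pga(pga_mw2, dist_mw2, bin_edges=[10, 20, 30])  # ~0.37 g per the text
```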
HVSR measurements addressed in the Aterno Valley
The seismic characterization of surface geology by means of microtremors was introduced by ICMS (2008) and was then applied in the aftermath of the 2009 L'Aquila earthquake. Many research groups started to record ambient vibrations and process them through Nakamura's method, largely ignoring the surface geology of each testing point. At the Villa Sant'Angelo and Tussillo sites (falling into Macroarea 6 of the Aterno Valley, named the L'Aquila crater), we performed several single-station microtremor acquisitions with two devices: the Tromino and the DAQLink III. The following acquisition parameters have been used: (1) time windows longer than 30′; (2) sampling frequency higher than 125 Hz; (3) sampling time lower than 8 ms. Furthermore, the Fast Fourier Transform (FFT) has been used to calculate the ratio H/V. Finally, the spectral smoothing has been performed by means of the Konno-Ohmachi smoothing window. The HVSR values have been calculated for each 20 s sub-window; then the mean and the standard deviation of all ratios have been calculated and plotted. Further details on the technical aspects of the acquisitions by both devices can be found in Vessia et al. (2016).
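The processing chain just described (20 s sub-windows, FFT, Konno-Ohmachi smoothing, mean and standard deviation of the window ratios) can be sketched as follows; the Hann taper and the geometric-mean combination of the two horizontal components are assumptions made here, since the paper does not specify them.

```python
import numpy as np

def konno_ohmachi(spec, freqs, b=40.0):
    """Konno-Ohmachi smoothing of an amplitude spectrum (b = 40 is the usual bandwidth)."""
    sm = np.empty_like(spec)
    for j, fc in enumerate(freqs):
        if fc <= 0.0:
            sm[j] = spec[j]
            continue
        x = b * np.log10(np.where(freqs > 0.0, freqs / fc, 1.0))
        xs = np.where(np.abs(x) < 1e-6, 1.0, x)
        w = np.where(np.abs(x) < 1e-6, 1.0, (np.sin(xs) / xs) ** 4)
        w[freqs <= 0.0] = 0.0
        sm[j] = np.sum(w * spec) / np.sum(w)
    return sm

def hvsr(ns, ew, v, fs, win_s=20.0):
    """H/V spectral ratio: non-overlapping 20 s sub-windows, FFT, smoothing,
    then mean and standard deviation of the per-window ratios."""
    n = int(win_s * fs)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    ratios = []
    for s in range(0, len(v) - n + 1, n):
        spectra = []
        for tr in (ns, ew, v):
            seg = tr[s:s + n] * np.hanning(n)            # assumed taper
            spectra.append(konno_ohmachi(np.abs(np.fft.rfft(seg)), freqs))
        h = np.sqrt(spectra[0] * spectra[1])             # assumed horizontal combiner
        ratios.append(h / np.maximum(spectra[2], 1e-12))
    ratios = np.asarray(ratios)
    return freqs, ratios.mean(axis=0), ratios.std(axis=0)
```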
Figure 8a shows the HVSR measurements acquired at two neighboring points in Tussillo center, where geological characters were similar, by two research groups: T1 (the writing authors) and M5 (the Italian Department of Civil Protection DPC). These acquisitions have been done by the Tromino equipment. As can be noted, the two plots are different: T1 evidences peaks at 2.5 Hz and 8 Hz; on the contrary, the M5 shows the main peak at 2 Hz and minor peaks at 10-20 Hz, 40 Hz and 55 Hz. In the presence of these peaks, the operator would select the most representative one: of course, this selection is highly subjective although the SESAME rules suggest to take into account the highest peaks, such as 2 Hz in both cases (T1 and M5) and disregard the peaks higher than 20 Hz.
a H/V measurements at two neighboring points at the Tussillo center: T1 (this study) and M5 (DPC). b H/V measurements performed by the Tromino at T5 (this study) and at S4 (DPC) under similar ground conditions
On the contrary, Fig. 8b shows the HVSRs measured at two nearby points on a different ground type compared with the previous points: T5 (the writing authors) and S4 (DPC group).
In this latter case, the two plots show an evident peak at 2 Hz, although the peak amplitude at T5 is double that at S4. This difference could be due to the presence of disregarded Love waves, which have no vertical component and therefore contribute only to the amplification of the horizontal components.
Figure 9a, b compares HVSRs measured NE of T5, in the historical center of Villa Sant'Angelo. In this case, the signals were recorded by the two devices we used: the Tromino and the DAQLink. As can be seen, they show similar peaks, although no unique peak value can be drawn from either HVSR. Here, the operator's choices can affect the results in terms of the natural frequency of the site. However, the SESAME rule of three acquisitions at three different points to assess the HVSR could be useful to reach a robust assessment of f0.
Noise measurements at Villa Sant'Angelo center by a the Tromino device; b the DAQLink device
Another weak point in the calculation of the amplified frequency f0 is the systematic difference between the amplified frequencies derived from noise measurements, which induce very small deformations in soil deposits, and the f0 drawn from the weak-motion tails of strong-motion signals generated by strong seismic events at the site, which cause medium to large deformation levels in the ground. In our experience, the f0 drawn from noise measurements is rarely confirmed by the amplified frequencies from actual records. As a field example, the amplified frequencies f0 were measured through the HVSR function from the noise tracks acquired at the Tussillo site, at point T1, and from the weak-motion tail of a seismic event recorded at the same site on July 7, 2009, at 10:15 local time.
Figure 10 shows the HVSR functions. It is easy to notice that the f0 calculated from the noise (Fig. 10a) is at 8 Hz, whereas the one related to the weak motion (Fig. 10b) is at 2 Hz: the peak frequencies do not match, and the weak tail after the strong-motion excitation shows a lower amplification frequency than the peak related to the noise measurements, due to the non-linear response of the soil. These results have been confirmed by other comparisons carried out at several other places within the Aterno Valley (Vessia et al. 2016).
HVSR function measured at the T1 site, at Tussillo site (see Fig. 8) from: a the noise measurements; b the weak motion tail of a seismic event recorded on 7 July 2009, at 10:15 local time
Finally, from the abovementioned experiences, three issues can be pointed out: (1) Nakamura's method often provides more than one peak, corresponding to different natural frequencies; (2) the peaks are heavily affected by many external factors, especially in urban areas, that are not easy to remove by filtering the measurements; (3) the peaks in HVSR functions do not commonly correspond to the amplified frequencies of both weak and strong motions.
Thus, the use of noise measurements in microzoning activities to derive the bedrock depth should be discouraged, especially when the geological conditions of the site, such as the shear wave velocity profile of the soil deposits down to the bedrock depth, are not known. In addition, the amplified frequency of the site should be determined through more than one measurement, according to the SESAME rules, in order to check the possible differences induced by the time of day and the weather conditions at the site. However, the amplified frequency measured at a very low deformation level is modified at the medium and large deformations induced during strong, and even weak, motion seismic events. Thus the f0 calculated from noise measurements can only be used to determine the bedrock depth through Eq. (3), and the shear wave velocity profile, as well as the buried geological conditions, is needed to guarantee the applicability of Nakamura's method.
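For reference, the relation invoked through Eq. (3) is assumed here to be the standard quarter-wavelength formula linking the fundamental frequency to the depth of the bedrock:

$$ H=\frac{\overline{V}_S}{4\,f_0} $$

where \( \overline{V}_S \) is the average shear wave velocity of the soil column and H its thickness; this makes explicit why a measured f0 alone, without a shear wave velocity profile, cannot constrain the bedrock depth.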
In this paper, two weak points in microzoning studies have been discussed starting from the authors' experience in microzonation in Italy, that is: (a) the lack of predictivity of GMPEs for PGA measurements in NFAs and (b) the ability of noise measurements to capture the amplified frequency at a site under complex geological conditions. Throughout the paper, some past experiences of microzoning activity by the present authors are discussed and two proposals have been put forward. On the one hand, concerning the GMPEs of PGAs according to the reference seismic hazard assessment performed in Italy, the need for specific GMPE values in NFAs has been highlighted by several scientists. Here, the proposal of using the 95th percentile of the scattered values recorded within the first 30 km of hypocentral distance has been provided for the Central Italy Apennine Sector. These values have been drawn from the ITACA database, limited to seismic events ranging from 5 to 6.5 Mw that occurred from 1997 to 2017. On the other hand, after the large experience gained from the noise measurements recorded in the Aterno Valley after the 2009 L'Aquila earthquake, it can be concluded that noise measurements are not inherently repeatable; thus at least two or three measurements, according to the SESAME guidelines (which represent the European standard for performing ambient vibration measurements), must be required to calculate f0 by means of Nakamura's method. Although noise measurements can show relevant differences in amplified frequencies according to the operator or the weather conditions, following the SESAME rules guarantees technical standards for an otherwise unregulated geophysical technique that relies on a simple buried geological model of horizontally layered soil deposits. This model, when not applicable, can make the HVSR function from noise measurements totally misleading. Thus, even at level 1 microzoning studies, the use of direct and indirect measurements is needed in order to confirm the layered planar setting of the subsurface geo-lithological model and to measure shear wave velocity profiles, enabling a robust prediction of the bedrock depth by means of the estimated amplified frequency f0.
Finally, when the noise measurements are compared with the weak-motion tails of actual seismic events, they show different amplified ranges of frequencies. Accordingly, it must be kept in mind that soil behavior is strain-dependent: the natural frequencies at small strain levels (microtremors) differ from those at medium strain levels (weak motions) and at high strain levels (strong motions). Hence, depending on the purpose of the natural frequency measurement, different strain levels should be investigated to obtain an adequate characterization of the site response under seismic excitation.
AI:
Arias Intensity
CAV:
Cumulative Absolute Velocity
FA:
Amplification factor in terms of accelerations
FV:
Amplification factor in terms of velocities
GMPE:
Ground Motion Prediction Equation
HVSR:
Horizontal to Vertical Spectral Ratio
MCS:
Mercalli-Cancani-Sieberg Intensity Scale
MM:
Microzoning map
NFA:
Near Field Area
PGA:
Peak Ground Acceleration
PSA:
Peak Spectral Acceleration
PSHA:
Probabilistic Seismic Hazard Assessment
RSH:
Reference Seismic Hazard
Abrahamson NA, Silva WJ (2008) Summary of the Abrahamson & Silva NGA ground motion relations. Earthquake Spectra 24(1):67–97
Bard P-Y and the WG (2004) SESAME European research project WP12 – Deliverable D23.12. Guidelines for the implementation of the H/V spectral ratio technique on ambient vibrations measurements, processing and interpretation. http://sesame-fp5.obs.ujf-grenoble.fr/index.htm
Bergamaschi F, Cultrera G, Luzi L, Azzara RM, Ameri G, Augliera P, Bordoni P, Cara F, Cogliano R, D'alema E, Di Giacomo D, Di Giulio G, Fodarella A, Franceschina G, Galadini F, Gallipoli MR, Gori S, Harabaglia P, Ladina C, Lovati S, Marzorati S, Massa M, Milana G, Mucciarelli M, Pacor F, Parolai S, Picozzi M, Pilz M, Pucillo S, Puglia R, Riccio G, Sobiesiak M (2011) Evaluation of site effects in the Aterno river valley (Central Italy) from aftershocks of the 2009 L'Aquila earthquake. Bull Earthq Eng 9:697–715
Bindi D, Massa M, Luzi L, Ameri G, Pacor F, Puglia R, Augliera P (2014) Pan-European ground-motion prediction equations for the average horizontal component of PGA, PGV and 5%-damped PSA at spectral periods up to 3.0 s using the RESORCE dataset. Bull Earthq Eng 12:391–430. https://doi.org/10.1007/s10518-013-9525-5
Bindi D, Pacor F, Luzi L, Puglia R, Massa M, Ameri G, Paolucci R (2011) Ground-motion prediction equations derived from the Italian strong motion database. Bull Earthq Eng 9(6):1899–1920. https://doi.org/10.1007/s10518-011-9313-z
Boncio P, Amoroso S, Vessia G, Francescone M, Nardone M, Monaco P, Famiani D, Di Naccio D, Mercuri A, Manuel MR, Galadini F, Milana G (2018) Evaluation of liquefaction potential in an intermountain quaternary lacustrine basin (Fucino basin, Central Italy): implications for seismic microzonation mapping. Bull Earthq Eng 16(1):91–111. https://doi.org/10.1007/s10518-017-0201-z
Boore DM (2004) Can site response be predicted? J Earthq Eng 8(1):1–41
Boore DM (2013) What Do Ground-Motion Prediction Equations Tell Us About Motions Near Faults?. 40th Workshop of the International School of Geophysics on properties and processes of crustal fault zones, Ettore Majorana Foundation and Centre for Scientific Culture, Erice, Sicily, Italy, May 18–24 (invited talk)
Boore DM (2014a) What do data used to develop ground-motion prediction equations tell us about motions near faults? Pure Appl Geophys 171:3023–3043
Boore DM (2014b) The 2014 William B. Joyner lecture: ground-motion prediction equations: past, present, and future. New Mexico State University, Las Cruces, New Mexico, April 17
Boschi E, Favali F, Frugoni F, Scalera G, Smriglio G (1995) Massima Intensità Macrosismica risentita in Italia (Map, scale 1:1.500.000)
Bradley BA, Cubrinovski M (2011) Near-source strong ground motions observed in the 22 February 2011 Christchurch earthquake. Bull New Zealand Soc for Earthq Eng 44(4):181–194
Campbell KW, Bozorgnia Y (2012) A comparison of ground motion prediction equations for arias intensity and cumulative absolute velocity developed using a consistent database and functional form. Earthquake Spectra 28(3):931–941. https://doi.org/10.1193/1.4000067
Cauzzi C, Faccioli E, Vanini M, Bianchini A (2014) Updated predictive equations for broadband (0.01–10 s) horizontal response spectra and peak ground motions, based on a global dataset of digital acceleration records. Bull Earthq Eng 13:1587–1612. https://doi.org/10.1007/s10518-014-9685-y
Cornell CA (1968) Engineering seismic risk analysis. BSSA 58(5):1583–1606
Di Giulio G, Marzorati S, Bergamaschi F, Bordoni P, Cara F, D'Alema E, Ladina C, Massa M, and il Gruppo dell'esperimento L'Aquila (2011) Local variability of the ground shaking during the 2009 L'Aquila earthquake (April 6, 2009 mw 6.3): the case study of Onna and Monticchio villages. Bull Earthq Eng 9:783–807
Faccioli E (2012) Relazioni empiriche per l'attenuazione del moto sismico del suolo. Teaching notes
Faccioli E, Bianchini A, Villani E (2010) New ground motion prediction equations for T > 1 s and their influence on seismic hazard assessment. Proceedings of the University of Tokyo Symposium on long-period ground motion and urban disaster mitigation, March 17-18
Frankel A (1995) Mapping seismic hazard in the central and eastern United States. Seism Res Lett 66:8–21
Frankel A, Mueller C, Barnhard T, Leyendecker E, Wesson R, Harmsen S, Klein F, Perkins D, Dickman N, Hanson S, Hopper M (2000) USGS national seismic hazard maps. Earthq Spectr 16:1–19
Frankel A, Mueller C, Barnhard T, Perkins D, Leyendecker E, Dickman N, Hanson S, Hopper M (1996) National seismic hazard maps. Open- file-report 96-532, U.S.G.S., Denver, p 110
Galli P, Camassi R (eds) (2009) Rapporto sugli effetti del terremoto aquilano del 6 aprile 2009, Rapporto congiunto DPC-INGV, p 12 http://www.mi.ingv.it/eq/090406/quest.html
Hollenback J, Goulet CA, Boore DM (2015) Adjustment for source depth, chapter 3 in NGA-east: adjustments to median ground-motion models for central and eastern North America, PEER report 2015/08, Pacific Earthquake Engineering Research Center
Indirizzi e criteri per la microzonazione sismica (ICMS). Dipartimento di Protezione Civile (DPC) e Conferenza delle Regioni e delle Province Autonome (2008). www.protezionecivile.gov.it/jcms/it/view_pub.wp?contentId=PUB1137.
Jackson D, Kagan Y (1999) Testable earthquake forecasts for 1999. Seism Res Lett 70:393–403
Kempton JJ, Stewart JP (2006) Prediction equations for significant duration of earthquake ground motions considering site and near-source effects. Earthquake Spectra 22(4):985–1013. https://doi.org/10.1193/1.2358175
Kramer SL (1996) Geotechnical earthquake engineering. Prentice-Hall, Upper Saddle River
Lanzo G, Di Capua G, Kayen RE, Kieffer DS, Button E, Biscontin G, Scasserra G, Tommasi P, Pagliaroli A, Silvestri F, d'Onofrio A, Violante C, Simonelli AL, Puglia R, Mylonakis G, Athanasopoulos G, Vlahakis V, Stewart JP (2010) Seismological and geotechnical aspects of the mw=6.3 L'Aquila earthquake in Central Italy on 6 April 2009. Int J Geoeng Case Histories 1(4):206–339
Lanzo G, Tommasi P, Ausilio A, Aversa S, Bozzoni F, Cairo R, D'Onofrio A, Durante MG, Foti S, Giallini S, Mucciacciaro M, Pagliaroli A, Sica S, Silvestri F, Vessia G, Zimmaro P (2019) Reconnaissance of geotechnical aspects of the 2016 Central Italy earthquakes. Bull Earthq Eng 17:5495–5532. https://doi.org/10.1007/s10518-018-0350-8
Luzi L, Hailemikael S, Bindi D, Pacor F, Mele F, Sabetta F (2008) ITACA (Italian ACcelerometric archive): a web portal for the dissemination of Italian strong-motion data. Seismol Res Lett 79(5):716–722. https://doi.org/10.1785/gssrl.79.5.716
Mc Guire RK (1978) FRISK: computer program for seismic risk analysis using faults as earthquake sources. Open file report no 78-1007, U.S.G.S., Denver
Midorikawa S (2002) Importance of damage data from destructive earthquakes for seismic microzoning. Damage distribution during the 1923 Kanto, Japan, earthquake. Ann Geophys-Italy 45(6):769–778
Miyajima M, Setiawan H, Yoshida M et al (2019) Geotechnical damage in the 2018 Sulawesi earthquake, Indonesia. Geoenviron Disasters 6:6. https://doi.org/10.1186/s40677-019-0121-0
Molina S, Lindholm CD, Bungum H (2001) Probabilistic seismic hazard analysis: zoning free versus zoning methodology. B Geofis Teor Appl 42(1–2):19–39
Nakamura Y (1989) A method for dynamic characteristics estimation of subsurface using microtremor on the ground surface. QR Railway Tech Res Inst 30(1):25–33
Paolini S, Martini G, Carpani B, Forni M, Bongiovanni G, Clemente P, Rinaldis D, Verrubbi V (2012) The may 2012 seismic sequence in Pianura Padana Emiliana: hazard, historical seismicity and preliminary analysis of accelerometric records. Special issue on focus - Energia, Ambiente, Innovazione: the Pianura Padana Emiliana Earthquake 4–5(II):6–22
Paolucci R (2002) Amplification of earthquake ground motion by steep topographic irregularities. Earthquake Eng Struc 31(10):1831–1853
Perkins D (2000) Fuzzy sources, maximum likelihood and the new methodology. In: Lapajne JK (ed) Seismicity modelling in seismic hazard mapping. Geophysical Survey of Slovenia, Ljubljana, pp 67–75
Rainone ML, D'Elia G, Vessia G, De Santis A (2018) The HVSR interpretation technique of ambient noise to seismic characterization of soils in heterogeneous geological contexts. Book of abstract of 36th general assembly of the European seismological commission, ESC2018-S29-639: 428-429, 2-7 September 2018, Valletta (Malta) ISBN: 978-88-98161-12-6
Rainone ML, Vessia G, Signanini P, Greco P, Di Benedetto S (2013) Evaluating site effects in near field conditions for microzonation purposes: The case study of L'Aquila earthquake 2009. (special issue on L'Aquila earthquake 2009). Ital Geotechnical J 47(3):48–68
Signanini P, Cucchi F, Frinzi U, Scotti A (1983) Esempio di microzonizzazione nell'area di Ragogna (Udine). Rendiconti della Soc Geol Italiana 4:645–653
Signanini P, De Santis A (2012) Power-law frequency distribution of H/V spectral ratio of seismic signals: evidence for a critical crust. Earth Planets Space 64:49–54
Stewart JP, Boore DM, Seyhan E, Atkinson GM, M.EERI Atkinson GM (2016) NGA-West2 Equations for Predicting Vertical-Component PGA, PGV, and 5%-Damped PSA from Shallow Crustal Earthquakes show less. Earthq Spectr 32(2):1005–1031. https://doi.org/10.1193/072114EQS116M
Vessia G, Parise M, Tromba G (2013) A strategy to address the task of seismic micro-zoning in landslide-prone areas. Adv Geosci 1:1–27. https://doi.org/10.5194/adgeo-35-23-2013
Vessia G, Pisano L, Tromba G, Parise M (2017) Seismically induced slope instability maps validated at an urban scale by site numerical simulations. Bull Eng Geol Envir 76(2):457–476
Vessia G, Rainone ML, Signanini P (2016) Working strategies for addressing microzoning studies in urban areas: lessons from the 2009 L'Aquila earthquake. In: D'Amico S (ed) Earthquakes and their impacts on society. Springer International Publishing, Switzerland, pp 233–290
Vessia G, Russo S, Lo Presti D (2011) A new proposal for the evaluation of the amplification coefficient due to valley effects in the simplified local seismic response analyses. Ital Geotechnical J 4:51–77
Vessia G, Russo S (2013) Relevant features of the valley seismic response: the case study of Tuscan northern Apennine sector. Bull Earthq Eng 11(5):1633–1660
Vessia G, Venisti N (2011) Liquefaction damage potential for seismic hazard evaluation in urbanized areas. Soil Dyn Earthq Eng 31:1094–1105
Woo G (1996) Kernel estimation methods for seismic hazard area source modeling. Bull Seism Soc Am 86:1–10
Yagoub MM (2015) Spatio-temporal and hazard mapping of earthquake in UAE (1984–2012): remote sensing and GIS application. Geoenviron Disasters 2:13. https://doi.org/10.1186/s40677-015-0020-y
The authors are grateful to Prof. Patrizio Signanini who gave precious suggestions during the paper preparation and inspired several discussions on the effects of inefficient microzonation studies on people's daily life quality in near field seismic areas.
Department of Engineering and Geology, University "G.d'Annunzio" of Chieti-Pescara, Via dei Vestini 31, 66013, Chieti, Scalo (CH), Italy
Giovanna Vessia, Mario Luigi Rainone & Giuliano D'Elia
Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata, 605, 00143, Rome, Italy
Angelo De Santis
Giovanna Vessia
Mario Luigi Rainone
Giuliano D'Elia
In this study, GV was responsible for the state-of-the-art review (Introduction and Methods sections) and for the Results related to paragraph 3.1; MLR and ADS were responsible for the geophysical campaign and for the Results section discussing the results; GD acquired some ambient noise records and was responsible for the figures. All the authors contributed together to the writing of the manuscript. The author(s) read and approved the final manuscript.
Correspondence to Giovanna Vessia.
Vessia, G., Rainone, M., De Santis, A. et al. Lessons from April 6, 2009 L'Aquila earthquake to enhance microzoning studies in near-field urban areas. Geoenviron Disasters 7, 11 (2020). https://doi.org/10.1186/s40677-020-00147-x
Reference seismic hazard map
Seismic microzoning study
HVSR Nakamura's method
GMPEs
[Submitted on 4 Aug 2015]
Title:PSR J1906+0722: An Elusive Gamma-ray Pulsar
Authors:C. J. Clark, H. J. Pletsch, J. Wu, L. Guillemot, M. Ackermann, B. Allen, A. de Angelis, C. Aulbert, L. Baldini, J. Ballet, G. Barbiellini, D. Bastieri, R. Bellazzini, E. Bissaldi, O. Bock, R. Bonino, E. Bottacini, T. J. Brandt, J. Bregeon, P. Bruel, S. Buson, G. A. Caliandro, R. A. Cameron, M. Caragiulo, P. A. Caraveo, C. Cecchi, D. J. Champion, E. Charles, A. Chekhtman, J. Chiang, G. Chiaro, S. Ciprini, R. Claus, J. Cohen-Tanugi, A. Cuéllar, S. Cutini, F. D'Ammando, R. Desiante, P. S. Drell, H. B. Eggenstein, C. Favuzzi, H. Fehrmann, E. C. Ferrara, W. B. Focke, A. Franckowiak, P. Fusco, F. Gargano, D. Gasparrini, N. Giglietto, F. Giordano, T. Glanzman, G. Godfrey, I. A. Grenier, J. E. Grove, S. Guiriec, A. K. Harding, E. Hays, J. W. Hewitt, A. B. Hill, D. Horan, X. Hou, T. Jogler, A. S. Johnson, G. Jóhannesson, M. Kramer, F. Krauss, M. Kuss, H. Laffon, S. Larsson, L. Latronico, J. Li, L. Li, F. Longo, F. Loparco, M. N. Lovellette, P. Lubrano, B. Machenschalk, A. Manfreda, M. Marelli, M. Mayer, M. N. Mazziotta, P. F. Michelson, T. Mizuno, M. E. Monzani, A. Morselli, I. V. Moskalenko, S. Murgia, E. Nuss, T. Ohsugi, M. Orienti, E. Orlando, F. de Palma, D. Paneque, M. Pesce-Rollins, F. Piron, G. Pivato, S. Rainò, R. Rando, M. Razzano, A. Reimer
, P. M. Saz Parkinson, M. Schaal, A. Schulz, C. Sgrò, E. J. Siskind, F. Spada, G. Spandre, P. Spinelli, D. J. Suson, H. Takahashi, J. B. Thayer, L. Tibaldo, P. Torne, D. F. Torres, G. Tosti, E. Troja, G. Vianello, K. S. Wood, M. Wood, M. Yassine
et al. (20 additional authors not shown)
Abstract: We report the discovery of PSR J1906+0722, a gamma-ray pulsar detected as part of a blind survey of unidentified Fermi Large Area Telescope (LAT) sources being carried out on the volunteer distributed computing system, Einstein@Home. This newly discovered pulsar previously appeared as the most significant remaining unidentified gamma-ray source without a known association in the second Fermi-LAT source catalog (2FGL) and was among the top ten most significant unassociated sources in the recent third catalog (3FGL). PSR J1906+0722 is a young, energetic, isolated pulsar, with a spin frequency of $8.9$ Hz, a characteristic age of $49$ kyr, and spin-down power $1.0 \times 10^{36}$ erg s$^{-1}$. In 2009 August it suffered one of the largest glitches detected from a gamma-ray pulsar ($\Delta f / f \approx 4.5\times10^{-6}$). Remaining undetected in dedicated radio follow-up observations, the pulsar is likely radio-quiet. An off-pulse analysis of the gamma-ray flux from the location of PSR J1906+0722 revealed the presence of an additional nearby source, which may be emission from the interaction between a neighboring supernova remnant and a molecular cloud. We discuss possible effects which may have hindered the detection of PSR J1906+0722 in previous searches and describe the methods by which these effects were mitigated in this survey. We also demonstrate the use of advanced timing methods for estimating the positional, spin and glitch parameters of difficult-to-time pulsars such as this.
Comments: 7 pages, 4 figures, accepted for publication in Astrophysical Journal Letters
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
Journal reference: ApJ 809 L2 (2015)
DOI: 10.1088/2041-8205/809/1/L2
Cite as: arXiv:1508.00779 [astro-ph.HE]
(or arXiv:1508.00779v1 [astro-ph.HE] for this version)
From: Colin J. Clark
[v1] Tue, 4 Aug 2015 14:07:58 UTC (4,010 KB)
astro-ph.HE
When a p-n junction diode is forward biased
asked Nov 5, 2018 by pady_1
The manifestation of band structure in solids is due to
A piece of copper and another of germanium are cooled from room temperature to 77 K, the resistance of
For a transistor amplifier in common emitter configuration having load impedance of $1 \; k \Omega\; (h_{fe} = 50$ and $h_{oe} = 25)$ the current gain is
When an npn transistor is used as an amplifier
An $\alpha$-particle of energy $5 \;MeV$ is scattered through $180^{\circ}$ by a fixed uranium nucleus. The distance of the closest approach is of the order of
The binding energy per nucleon of deuteron $(_1^2H)$ and helium nucleus $(_2^4 He)$ is $1.1\; MeV$ and $7\; MeV$ respectively. If two deuteron nuclei react to form a single helium nucleus, then the energy released is
A nucleus disintegrates into two nuclear parts which have their velocities in the ratio $2:1$. The ratio of their nuclear sizes will be
A charged oil drop is suspended in a uniform field of $3 \times 10^4 V/m$ so that it neither falls nor rises. The charge on the drop will be (take the mass of the charge = $9.9 \times 10^{-15} kg$ and $g = 10\;m/s^2$)
The work function of a substance is 4.0 eV. The longest wavelength of light that can cause photoelectron emission from this substance is approximately
According to Einstein's photoelectric equation, the plot of the kinetic energy of the photoelectrons emitted from a metal versus the frequency of the incident radiation gives a straight line whose slope
A metal conductor of length 1 m rotates vertically about one of its ends at angular velocity 5 radians per second. If the horizontal component of earth's magnetic field is $0.3 \times 10^{-4}T$, then the e.m.f. developed between the two ends of the conductor is
In a LCR circuit capacitance is changed from C to 2C. For the resonant frequency to remain unchanged, the inductance should be changed from L to
In a uniform magnetic field of induction B, a wire in the form of a semicircle of radius r rotates about the diameter of the circle with angular frequency $\omega$. The axis of rotation is perpendicular to the field. If the total resistance of the circuit is R, the mean power generated per period of rotation is
A coil having n turns and resistance $4 R \Omega$. This combination is moved in time t seconds from a magnetic field $W_1$ weber to $W_2$ weber. The induced current in the circuit is
In an LCR series a.c. circuit, the voltage across each of the components, L, C and R is 50 V. The voltage across the LC combination will be
The materials suitable for making electromagnets should have
The length of a magnet is large compared to its width and breadth. The time period of its oscillation in a vibration magnetometer is 2 s. The magnet is cut along its length into three equal parts and the three parts are then placed on each other with their like poles together. The time period of this combination will be
Two long conductors, separated by a distance $d$ carry current $I_1$ and $I_2$ in the same direction. They exert a force $F$ on each other. Now the current in one of them increased to two times and its direction reversed. The distance is also increased to $3d$. The new value of the force between them is
The magnetic field due to a current carrying circular loop of radius 3 cm at a point on the axis at a distance of 4 cm from the centre is 54 $\mu$T. What will be its value at the centre of the loop ?
A long wire carries a steady current. It is bent into a circle of one turn and the magnetic field at the centre of the coil is B. It is then bent into a circular loop of n turns. The magnetic field at the centre of the coil will be
A current I ampere flows along an infinitely long straight thin walled tube, then the magnetic induction at any point inside the tube is
The electrochemical equivalent of a metal is $3.3 \times 10^{-7}$ kg per coulomb. The mass of the metal liberated at the cathode when a 3 A current is passed for 2 seconds will be
The thermo emf of a thermocouple varies with the temperature $\theta$ of the hot junction as $E= a \theta + b \theta^2$ in volts where the ratio $a/b$ is $700^{\circ}C$. If the cold junction is kept at $0^{\circ}C$, then the neutral temperature is
Time taken by an 836 W heater to heat one litre of water from $10^{\circ}C$ to $40^{\circ}C$ is
The thermistors are usually made of
In a metre bridge experiment null point is obtained at 20 cm from one end of the wire when resistance X is balanced against another resistance Y. If X < Y, then where will be the new position of the null point from the same end, if one decides to balance a resistance of 4X against Y?
An electric current is passed through a circuit containing two wires of the same material, connected in parallel. If the length and radii of the wires are in the ratio of 4/3 and 2/3, then the ratio of the currents passing through the wire will be
The resistance of the series combination of two resistances is S. When they are joined in parallel through total resistance is P. If $S=nP$, then the minimum possible value of n is
The total current supplied to the circuit by the battery is
Alternating current can not be measured by D.C. ammeter because
Four charges equal to -Q are placed at the four corners of a square and a charge q is at its centre. If the system is in equilibrium the value of q is
A charged particle q is shot towards another charged particle Q which is fixed, with a speed v it approaches Q upto a closest distance r and then returns. If q were given a speed 2v, the closest distances of approach would be
Two spherical conductors B and C having equal radii and carrying equal charges repel each other with a force F when kept apart at some distance. A third spherical conductor having the same radius as B but uncharged is brought in contact with B, then brought in contact with C and finally removed away from both. The new force of repulsion between B and C is
An electromagnetic wave of frequency $v =3.0 \;MHz$ passes from vacuum into a dielectric medium with permittivity $\varepsilon = 4.0$. Then
The maximum number of possible interference maxima for slit-separation equal to twice the wavelength in Young's double-slit experiment is
The angle of incidence at which reflected light totally polarized for reflection from air to glass (refractive index n), is
A plano-convex lens of refractive index 1.5 and radius of curvature 30 cm is silvered at the curved surface. Now this lens has been used to form the image of an object. At what distance from this lens should an object be placed in order to have a real image of the size of the object?
A light ray is incident perpendicular to one face of a $90^{\circ}$ prism and is totally internally reflected at the glass-air interface. If the angle of reflection is $45^{\circ}$, we conclude that the refractive index $n$
The temperatures of the two outer surfaces of a composite slab, consisting of two materials having coefficients of thermal conductivity $K$ and $2K$ and thickness $x$ and $4x$, respectively, are $T_2$ and $T_1 (T_2> T_1)$. The rate of heat transfer through the slab, in a steady state, is $(\frac{A(T_2 - T_1) K}{x})f$, with $f$ equal to
A radiation of energy E falls normally on a perfectly reflecting surface. The momentum transferred to the surface is
Two thermally insulated vessels 1 and 2 are filled with air at temperatures $(T_1, \; T_2)$, volumes $(V_1, \; V_2)$ and pressures $(P_1, \; P_2)$ respectively. If the valve joining the two vessels is opened, the temperature inside the vessels at equilibrium will be
Which of the following statements is correct for any thermodynamic system ?
If the temperature of the Sun were to increase from $T$ to $2T$ and its radius from $R$ to $2R$, then the ratio of the radiant energy received on earth to what it was previously will be
One mole of an ideal monoatomic gas $(\gamma = 5/3)$ is mixed with one mole of a diatomic gas $(\gamma = 7/5)$. What is $\gamma$ for the mixture? $\gamma$ denotes the ratio of the specific heat at constant pressure to that at constant volume.
In forced oscillation of a particle, the amplitude is maximum for a frequency $\omega_1$ of the force, while the energy is maximum for a frequency $\omega_2$ of the force; then
A particle of mass $m$ is attached to a spring (of spring constant $k$) and has a natural angular frequency $\omega_0$. An external force $F(t)$ proportional to $\cos \omega t ( \omega \neq \omega_0)$ is applied to the oscillator. The time displacement of the oscillator will be proportional to
The displacement $y$ of a particle in a medium can be expressed as $y = 10^{-6} \sin (110 t + 20 x + \pi/4)\;m$ where $t$ is in seconds and $x$ in meter. The speed of the wave is
The total energy of a particle executing simple harmonic motion is
A particle at the end of a spring executes simple harmonic motion with a period $t_1$, while the corresponding period for another spring is $t_2$. If the period of oscillation with the two springs in series is $t$, then
Puzzling Stack Exchange is a question and answer site for those who create, solve, and study puzzles.
Professor Halfbrain and the fantasy knight
Professor Halfbrain owns a 99×99 board for fantasy chess, whose rows are numbered consecutively from 1 to 99 and whose columns are also numbered consecutively from 1 to 99. A fantasy knight can jump from a square in the 𝑘-th column to any square in the 𝑘-th row (and can jump to no other square); note that if the knight can jump from square 𝑥 to square 𝑦, then this does not mean that it can also jump from square 𝑦 to square 𝑥.
The professor claims that there exists a closed fantasy knight tour on the chessboard that makes the knight visit every square exactly once, and in the end takes it back to its starting square.
Question: Is Halfbrain's claim indeed true, or has the professor once again made one of his mathematical blunders?
mathematics combinatorics graph-theory
Gamow
Yes, there is a solution with a very simple strategy:
Start in (1,1).
Always go to the rightmost square that's unvisited.
I'll try to illustrate it. I checked it by hand on a 9x9 board and a very nice pattern emerges that makes it clear it works on any X by X board.
Visit order (the step at which each square is reached), shown here for rows 1, 6 and 9 of the 9x9 board:
Row 1: 1 79 74 67 58 47 34 19 2
Row 6: 57 55 53 51 49 48 37 24 9
Row 9: 18 16 14 12 10 8 6 4 3
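This strategy can also be checked mechanically; the following Python sketch (squares encoded as (row, column) pairs; the board size is a parameter, so the 99x99 case can be tested directly) verifies that every square is visited once and that the tour closes:

```python
def greedy_tour(n):
    """Start at (1, 1); always jump to the rightmost (largest-column) unvisited square of the required row."""
    visited = {(1, 1)}
    tour = [(1, 1)]
    col = 1                      # the column of the current square fixes the row of the next square
    while True:
        for z in range(n, 0, -1):
            if (col, z) not in visited:
                visited.add((col, z))
                tour.append((col, z))
                col = z
                break
        else:
            return tour          # no unvisited square left in the required row

tour = greedy_tour(9)
print(len(tour) == 81)           # every square visited exactly once
print(tour[-1][1] == 1)          # last square lies in column 1, so the knight can jump back to (1, 1)
```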
Ivo Beckers
Let $(x,y)$ be the square in row $x$, column $y$, so that a fantasy knight can move from $(x,y)$ to $(y,z)$. A closed tour is described by a cyclic sequence $$x_0,x_1,x_2,\ldots,x_{99^2-1},x_{99^2}=x_0,$$ where the knight moves from $(x_0,x_1)$ to $(x_1,x_2)$, then to $(x_2,x_3)$, and so on up to $(x_{99^2-1},x_0)$, then finally back to $(x_0,x_1)$. Each square is visited exactly once, so this is an example of a de Bruijn sequence (specifically a $99$-ary de Bruijn sequence of order $2$). De Bruijn sequences are known to exist (the Wikipedia article describes a construction), so Halfbrain's claim is true.
Some other puzzles on this site have answers involving de Bruijn sequences (I found this, this, this, and this).
Julian Rosen
Unique Licence Plates
One beer too many
Shortest Number Containing the Numbers 1-100?
Arranging all the Dominos
The Erasmus rook tour
Professor Halfbrain's chessboard dissection theorem
Professor Halfbrain and the next square
Professor Halfbrain and the 52 cards
Professor Halfbrain and the 99x99 chessboard (Part 1)
Professor Halfbrain and the powers of 2016
Professor Halfbrain and the tennis club
Professor Halfbrain and the wonderful rectangles
Professor Halfbrain and numbers with many zeros
Empirical CDF vs CDF
I'm learning about the Empirical Cumulative Distribution Function. But I still don't understand
Why is it called 'Empirical'?
Is there any difference between Empirical CDF and CDF?
distributions terminology cdf ecdf
Gammaries
Check here stats.stackexchange.com/questions/222120/… – Tim♦ Oct 13 '16 at 6:11
There is a simple, straightforward, elegant explanation in terms of tickets-in-a-box models: the CDF describes what is in the original box. The ECDF is what you get when you put your sample (which is a set of tickets drawn from the original box: so-called "empirical" data) into an empty box. – whuber♦ Oct 13 '16 at 18:57
One thing to be aware of is that your empirical distribution is usually bounded by the way it's constructed, while the CDF may not be. For instance, if you build an empirical CDF from observations of a Poisson variable, the obtained ECDF is going to be bounded by the highest observed frequency, while the true CDF is unbounded. – Aksakal Oct 17 '16 at 17:37
Let $X$ be a random variable.
The cumulative distribution function $F(x)$ gives $P(X \leq x)$.
An empirical cumulative distribution function $G(x)$ gives $P(X \leq x)$ based on the observations in your sample.
The distinction is which probability measure is used. For the empirical CDF, you use the probability measure defined by the frequency counts in an empirical sample.
Simple example (coin flip):
Let $X$ be a random variable denoting the result of a single coin flip where $X=1$ denotes heads and $X=0$ denotes tails.
The CDF for a fair coin is given by: $$ F(x) = \left\{ \begin{array}{ll} 0 & \text{for } x < 0\\ \frac{1}{2} & \text{for } 0 \leq x < 1 \\1 & \text{for } 1 \leq x \end{array} \right. $$
If you flipped 2 heads and 1 tail, the empirical CDF would be: $$ G(x) = \left\{ \begin{array}{ll} 0 & \text{for } x < 0\\ \frac{2}{3} & \text{for } 0 \leq x < 1 \\1 & \text{for } 1 \leq x \end{array} \right. $$
The empirical CDF would reflect that in your sample, $2/3$ of your flips were heads.
Another example ($F$ is CDF for normal distribution):
Let $X$ be a normally distributed random variable with mean $0$ and standard deviation $1$.
The CDF is given by:
$$F(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\, dt$$
Let's say you had 3 IID draws and obtained the values $x_1 < x_2 < x_3$. The empirical CDF would be: $$ G(y) = \left\{ \begin{array}{ll} 0 & \text{for } y < x_1\\ \frac{1}{3} & \text{for } x_1 \leq y < x_2 \\\frac{2}{3} & \text{for } x_2 \leq y < x_3 \\1 & \text{for } x_3 \leq y \end{array} \right. $$
With enough IID draws (and certain regularity conditions are satisfied), the empirical CDF would converge on the underlying CDF of the population.
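That convergence is easy to see numerically (a small Python sketch using NumPy/SciPy; the sample size and seed are arbitrary):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
x = np.sort(rng.standard_normal(100))      # 100 IID draws from N(0, 1)

ecdf = np.arange(1, x.size + 1) / x.size   # G at the ordered sample points: i/n
cdf = norm.cdf(x)                          # theoretical CDF at the same points

print(np.max(np.abs(ecdf - cdf)))          # maximum discrepancy; it shrinks as the sample size grows
```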
Matthew Gunn
Yes, they're different. An empirical cdf is a proper cdf, but empirical cdfs will always be discrete even when not drawn from a discrete distribution, while the cdf of a distribution can be other things besides discrete.
If you treat a sample as if it were a population of values, each one equally probable (i.e. place probability 1/n on each observation) then the cdf of that distribution would be the ECDF of the data.
Why is it called 'Empirical'?
It's an estimate of the population cdf based on the sample; specifically, if you take the proportion of the sample at each distinct data value and treat it as a probability in the population, you get the ECDF.
Empirical has a meaning something like "by observation rather than theory", and that's exactly what it means in this case ... using the observations to determine the distribution function.
The empirical CDF is built from an actual data set (in the plot below, I used 100 samples from a standard normal distribution). The CDF is a theoretical construct - it is what you would see if you could take infinitely many samples.
The empirical CDF usually approximates the CDF quite well, especially for large samples (in fact, there are theorems about how quickly it converges to the CDF as the sample size increases).
Chris Taylor
Empirical is something you build from data and observations. For instance, suppose you want to know about the distribution of the height of people in a country. You start by measuring people and come up with a histogram that can be approximated by a distribution. Then you calculate the empirical CDF.
If you are using a statistical distribution (a deterministic formula that gives the exact same output with the same parameters) you can calculate its CDF also.
You can say "The height of the people in this country is distributed similar to normal distribution with the mean 1.75 m and the standard deviation 0.1 m. Then you can use CDF of ~$N(\mu=1.75\ \text{m},\sigma=0.1\ \text{m})$ instead of the constructed CDF of the empirical distribution.
Nick Cox
berkorbay
Is there a confidence measurement employed that expresses the likelihood that the CDF and the empirical CDF describe the same population in the limit of all the experimental sampling in the world? This would seem to have application to electoral polling, for instance. (though maybe not, since the output is not strictly describable as a function...) – BenPen Oct 13 '16 at 15:00
According to Dictionary.com, the definitions of "empirical" include:
derived from or guided by experience or experiment.
Hence, the Empirical CDF is the CDF you obtain from your data. This contrasts with the theoretical CDF (often just called "CDF"), which is obtained from a statistical or probabilistic model such as the Normal distribution.
Waldir Leoncio
Can anyone clarify the concept of a "sum of random variables"
Why is the empirical cumulative distribution of 1:1000 a straight line?
Use Empirical CDF vs Distribution CDF?
Calculate expectation from empirical cdf
What inferential method produces the empirical CDF?
Discrete analog of CDF: "cumulative mass function"?
Goodness of fit (cdf: empirical vs theoretical)?
Fitting parametric CDF to ecdf
When to use empirical cdf?
Empirical PDF from Empirical CDF
In trouble with CDF graph
Refining projected multidecadal hydroclimate uncertainty in East-Central Europe using CMIP5 and single-model large ensemble simulations
Dániel Topál ORCID: orcid.org/0000-0001-9348-44941,2,
István Gábor Hatvani ORCID: orcid.org/0000-0002-9262-73151 &
Zoltán Kern ORCID: orcid.org/0000-0003-4900-25871
Theoretical and Applied Climatology volume 142, pages 1147–1167 (2020)Cite this article
Future hydroclimate projections of global climate models for East-Central Europe diverge to a great extent, thus constraining adaptation strategies. To reach a more comprehensive understanding of this regional spread in model projections, we make use of the CMIP5 multi-model ensemble and six single-model initial condition large ensemble (SMILE) simulations to separate the effects of model structural differences and internal variability, respectively, on future hydroclimate projection uncertainty. To account for model uncertainty, we rank 32 CMIP5 models based on their predictive skill in reproducing multidecadal past hydroclimate variability. Specifically, we compare historical model simulations to long instrumental and reanalysis surface temperature and precipitation records. The top 3–ranked models—that best reproduce regional past multidecadal temperature and precipitation variability—show reduced spread in their projected future precipitation variability, indicating less dry summer and wetter winter conditions, in part at odds with previous expectations for Central Europe. Furthermore, not only do the regionally best-performing CMIP5 models belong to the previously identified group of models with more realistic land-atmosphere interactions, but their future summer precipitation projections also emerge from the range of six SMILEs' future simulations. This suggests an important role for land-atmosphere coupling in regulating hydroclimate uncertainty on top of internal variability in the upcoming decades. Our results help refine the relative contribution of structural differences between models in affecting future hydroclimate uncertainty in the presence of irreducible internal variability in East-Central Europe.
Human-induced changes in the climate system are already detectable on daily-to-decadal timescales (Santer et al. 2018; Sippel et al. 2020). Changing weather patterns (Rosenzweig et al. 2008), expanding dryland regions (Huang et al. 2016) and the dramatic reduction of Arctic sea ice (Screen and Simmonds 2010; Dai et al. 2019), are just a few from the vast set of climatic changes attributed to rising greenhouse gas concentrations. Besides global impacts, alterations in regional scale climate patterns are also observable. Specifically, East-Central Europe is believed to become more susceptible to the incidence of climate extremities in response to increased radiative forcing (Seneviratne et al. 2006; Bartholy and Pongrácz 2007; Beniston et al. 2007; Hirschi et al. 2010). Nevertheless, internal variability—an inherent feature of chaotic climate dynamics (Zeng et al. 1993)—is also known to have substantially contributed to, or masked the effects of anthropogenically forced climate change (Ding et al. 2014, 2019; Swart et al. 2015; Baxter et al. 2019; Topál et al. 2020), although conclusions on how internal variability might behave under future warming scenarios are still controversial (Deser et al. 2012, 2020; Dai et al. 2015; Haszpra and Herein 2019; Haszpra et al. 2020a,b).
Global climate models (GCMs) are elaborate tools for simulating past, present, and future climatic and environmental changes on various timescales; however, any projection is riddled with three commonly mentioned uncertainties: scenario uncertainty, model uncertainty, and internal variability (Stainforth et al. 2007; Knutti 2008; Hawkins and Sutton 2009). Thus, coordinated modeling experiments have been launched to address these uncertainties. The Coupled Model Intercomparison Project Phase 5 (CMIP5, Taylor et al. 2012b) collected a number of GCMs with differing model physics, manifested chiefly in the different parametrization schemes applied in them. This dissimilarity tends to lead to structural differences between the models and is thought to account for most of the uncertainty regarding their performance (Knutti 2008; Knutti and Sedláček 2013; Harrison et al. 2015), while the choice of the external forcing scenario plays a subtle role (Reichler and Kim 2008). One practice to acknowledge GCM limitations is to use a multi-model ensemble (Suh et al. 2012; L'Heureux et al. 2017) and consider each GCM with equal weight. However, ranking has also been proposed in order to abandon "model democracy" and weight or give preference to certain models in a multi-model ensemble based on performance (Knutti 2010; Merrifield et al. 2019). The latter has proved effective in constraining model uncertainty (Knutti et al. 2017) and is of particular importance when studying climatic variables (e.g., precipitation) whose future projections show large spread between different models (Garfinkel et al. 2020).
Action taken in this direction introduced the application of diverse model ranking methodologies, ranging from studies using correlation, root-mean-square error, and variance ratio (Boer and Lambert 2001; Gleckler et al. 2008) to the application of prediction indices (Murphy et al. 2004) or to those taking a Bayesian approach (Min and Hense 2006). In addition, the concern of interdependency of CMIP models (Sanderson et al. 2015) has been re-evaluated recently (Olson et al. 2019). Regarding the target area of model evaluation Garfinkel et al. (2020) studied the sources of CMIP5 intermodel spread in precipitation changes globally, however, ample analyses are targeted at more regional areas, e.g., the North-Atlantic (Perez et al. 2014), parts of Europe (Coppola et al. 2010; Pieczka et al. 2017), Africa (Brands et al. 2013; Dyer et al. 2019; Yapo et al. 2020), South-America (Lovino et al. 2018), or Asia (Ahmed et al. 2019).
Policy and management actions taken in response to the environmental hazards linked to climatic change, especially the consideration of the societal and agricultural impacts of extreme climate events, are limited by the uncertainties around GCM performance and internal variability too (Knutti and Sedláček 2013). There is a growing body of evidence that the regional accuracy of GCM simulations is critical for regional climate model (RCM) projections. An RCM nested in a GCM that lacks the skillful representation of the observed large-scale climatic modes and circulation (i.e., boundary conditions for the RCM) cannot be expected to generate realistic results (Gautam and Mascaro 2018; Verfaillie et al. 2019). Although a model's agreement with current or more distant past observations does not always guarantee the credibility of its future projections, the fact that it is based on physical principles supports the idea of using past observations as constraints in the hope of selecting models with more reliable future projections (Reichler and Kim 2008; Sheffield et al. 2013; Sillmann et al. 2013; Wang et al. 2014; Barnes and Polvani 2015). Consequently, boosting the robustness of regional GCM projections via the rigorous evaluation of their uncertainties is crucial, especially for those that drive RCMs in the MED-CORDEX project (Ruti et al. 2016).
Several studies indicate that structural differences between models, namely in the land-atmosphere feedback strength, can indeed be a source of uncertainty in future precipitation projections (e.g., Schwingshackl et al. 2018). However, the exact physical mechanisms, such as how changes in soil moisture affect precipitation or temperature extremes, remain uncertain (Boberg and Christensen 2012; Taylor et al. 2012a; Berg et al. 2016). Recently, Vogel et al. (2018)—based on CMIP5 models with more realistic land-atmosphere couplings—concluded a future reduction in summer drying and warm extremes in Central Europe, in line with a study by Selten et al. (2020). Nevertheless, how internal variability may influence the selection of the best performing models and thus the uncertainty in future precipitation projections remains unaddressed.
Internal variability can be considered the parallel existence of numerous climate states at a given time (Lorenz 1963). It is an inherent feature of the climate system driven by chaotic dynamics (Bódai and Tél 2012; Drótos et al. 2015, 2016, 2017; Herein et al. 2016, 2017). In single-model initial condition large ensemble (SMILE) simulations, unlike the CMIP5 multi-model ensemble, the same model is run several times with perturbations in the initial condition. The single runs—differing in their initial conditions exclusively—constitute the members of the ensemble whose spread is related to internal variability with the ensemble mean reflecting the forced component.
In this paper, we aim to complement the inconclusive literature on East-Central European future precipitation projections and assess CMIP5 historical model performance based on long instrumental records from the HISTALP database (Auer et al. 2007) and the National Oceanic and Atmospheric Administration (NOAA) twentieth century reanalysis (Compo et al. 2011; Slivinski et al. 2019) for 1861–2005 and compare our constrained model ensemble's future precipitation projections to six SMILEs (Deser et al. 2020). Within the CMIP5 multi-model ensemble, we cannot estimate the relative role for internal variability and model structural differences in influencing the spread of future precipitation projections since, for example, land-atmosphere feedbacks appear with different precision in the 32 CMIP5 models (Cheruy et al. 2014). However, with the inclusion of SMILEs, new opportunities open: we can explore the range of future precipitation projections solely due to internal variability (per model) and thus place CMIP5 model structural differences in the context of internal variability when assessing future hydroclimate uncertainty.
We identify three CMIP5 models with outstanding performance in simulating both regional past hydroclimate variability and land-atmosphere feedbacks (Seneviratne et al. 2013; Vogel et al. 2018), which unanimously indicate significantly less dry future summer conditions relative to the spread of CMIP5 and six SMILE simulations. This emphasizes the role for land-atmosphere coupling in regulating future summer hydroclimate uncertainty and a possible limitation affecting the state-of-the-art SMILE simulations that requires future work to disentangle. Our paper provides new insights into how those models that show better skills in reproducing observed climate variability can help refine future hydroclimate uncertainty in the presence of internal variability and advocates new efforts dedicated to improving model performance in simulating land-atmosphere feedbacks in Central Europe.
Data and methods
Study area description
The primary target area consists of the northeast (NE) and southeast (SE) subregions of the Greater Alpine Region (43° N-50° N; 13° E-19.5° E; Fig. 1), which have been delineated by Auer et al. (2007) based on the regionalization of certain climatic variables. It was chosen to cover the region of interest (East-Central Europe), where precipitation projections of CMIP5 models show large spread for both summer (Fig. 2a) and winter (Fig. 2b). To ensure the robustness of results based on the primary target area, supplementary calculations were performed on an extended domain (43° N-57° N; 4° E-20° E; Fig. 1) corresponding to the East-Central European part of the area used in Vogel et al. (2018).
Location of the study area on an orography map. The purple and blue dashed lines represent the HISTALP Northeast (NE) and Southeast (SE) regions. The primary target area is selected as the rectangle overlapping the NE and SE regions, i.e., 43° N-50° N;13° E-19.5° E. The black rectangle represents the Central Europe (CEU) domain (extended target area) used to evaluate CMIP5 models against the NOAA twentieth century reanalysis
Time series of standardized (relative to 1971–2000 mean) seasonal mean precipitation projections of 31 CMIP5 models for 2021–2085 (31-year moving averaged) for the primary target area (43° N-50° N; 13° E-19.5° E) a for summer (June-July-August: JJA) and b for winter (December-January-February: DJF)
HISTALP instrumental and the NOAA twentieth century reanalysis data
For the basis of model assessment, we used monthly surface temperature (TS) and precipitation (PR) data from the HISTALP coarse resolution subregional mean (CRSM) series for the NE and SE subregions of the Greater Alpine Region (Fig. 1) for 1861–2005 (Auer et al. 2007; we refer to these data as observations). The CRSMs are arithmetic means of the homogenized anomaly series (reference period: 1961–1990) for the stations situated within the boundaries of the NE and SE subregions. We also utilize TS and PR data from the NOAA twentieth century reanalysis version 3 (Compo et al. 2011; Slivinski et al. 2019) for the assessment of model performance.
CMIP5 models and single-model initial condition large ensembles
For the analysis, we selected CMIP5 models that have so-called historical or future RCP8.5 simulations following the historical experimental design for 1861–2005 and the RCP8.5 forcing scenario for 2006–2100 (Lamarque et al. 2010; Taylor et al. 2012b). This resulted in the selection of 32 different model versions for the historical and 31 models for the future timeframe from 16 modeling centers worldwide (Table 1). In addition, we made use of the future (2006–2080) precipitation simulations of six SMILEs: Max Planck Institute Grand Ensemble (MPI-GE), Canadian Earth System Model Large Ensemble (CanESM-LE), Community Earth System Model Large Ensemble (CESM-LE), Geophysical Fluid Dynamics Laboratory Earth System Model version 2 Large Ensemble (GFDL_ESM2M-LE), Commonwealth Scientific and Industrial Research Large Ensemble (CSIRO-LE), and EC-EARTH Large Ensemble (EC_EARTH-LE). We also used the historical (1861–2005) precipitation simulations of two of these, the MPI-GE and EC_EARTH-LE (further details and references are found in Table 2).
Table 1 32 CMIP5 models used in the study (excluding NorESM1-M for the future timeframe). Expansions/definitions of the models are available online (https://www.ametsoc.org/PubsAcronymList)
Table 2 Large ensemble simulations used in the study
Ranking the individual CMIP5 models
As a preliminary step, model output was interpolated onto the same regular 1.5° grid, and anomalies (relative to 1961–1990 to match the HISTALP anomaly time series) were calculated for all the individual historical CMIP5 simulations. Boreal summer (June-July-August: JJA) and winter (December-January-February: DJF) averages were derived annually for both the CMIP5 historical (1861–2005) and future (2006–2100) simulations and the observations of TS and PR. Additionally, both observational and model data were smoothed with a centered 31-year moving average, mostly to account for multidecadal low-frequency variability (as is the standard practice to minimize the effect of internal variability in single model realizations; e.g., McCabe and Palecki 2006; Senftleben et al. 2020) and to ensure comparability with the GCM data with relatively coarse grid resolution. Data preparation resulted in area-averaged and smoothed time series for the two subregions (NE and SE) for each model, variable, and season along with the observed time series.
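As an illustration of the smoothing step, a centered 31-year moving average of an area-averaged seasonal series can be obtained as follows (a sketch assuming pandas; the anomaly values are random placeholders):

```python
import numpy as np
import pandas as pd

years = np.arange(1861, 2006)
jja_anom = pd.Series(np.random.default_rng(0).normal(size=years.size), index=years)  # placeholder JJA anomalies

smoothed = jja_anom.rolling(window=31, center=True).mean()   # centered 31-year moving average
```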
We used three statistics to assess the individual CMIP5 models: the root-mean-square error (RMSE) as the primary metric, the ratio of the temporal Pearson correlation coefficient to the mean absolute error (referred to as rank), and the Nash-Sutcliffe efficiency (NSE, Nash and Sutcliffe 1970), each calculated between the observed and simulated time series. Temporal correlation is included to measure the extent to which simulated long-term changes in PR and TS (the time series are smoothed with a 31-year moving average) are in phase with observations, as a model is expected to reproduce the observed low-frequency TS and PR changes. The NSE (Eq. 1) is calculated from the observed (obs) and simulated (sim) time series pairs as:
$$ NSE=1-\frac{\sum_{i=1}^{n}\left( obs_i - sim_i\right)^2}{\sum_{i=1}^{n}\left( obs_i-\overline{obs}\right)^2} $$
where n is the length of the time series and \( \overline{obs} \) denotes the time mean of the observed series. The NSE ranges from -∞ to 1, where 1 would correspond to a perfect observation-simulation match (which is not attainable in practice) and NSE = 0 indicates that the mean-square error of the modeled time series is comparable to the variance of the observed time series.
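As an illustration, the three statistics can be computed as follows (our Python sketch, not the original analysis code; in particular, the rank statistic is implemented here as the Pearson correlation divided by the mean absolute error, following the description above):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (Eq. 1)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated time series."""
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

def rank_statistic(obs, sim):
    """Temporal Pearson correlation divided by the mean absolute error."""
    r = np.corrcoef(obs, sim)[0, 1]
    mae = np.mean(np.abs(np.asarray(obs) - np.asarray(sim)))
    return r / mae
```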
For simplicity, we describe the ranking steps only for the RMSE; the calculations are identical for the other two statistics. First, the RMSE for each of the two seasons (JJA and DJF) was calculated for both variables (referred to as the TS and PR seasonal RMSE) for the two subregions separately. Then, the RMSE values were averaged over the two subregions and seasons for each variable (referred to as the TS and PR mean RMSE). To assess the overall performance of a model in reproducing the observed past hydroclimate variability in the target region, we introduce the grand-RMSE, the arithmetic mean of the TS and PR mean RMSE values. To ensure comparability between the RMSE of PR and TS, the mean RMSE values of both variables are first rescaled to the range 0–1 (Eq. 2) before being averaged into the grand-RMSE.
$$ RMSE_{scaled}=\frac{meanRMSE-\min_{m=1,\dots,M}(meanRMSE)}{\max_{m=1,\dots,M}(meanRMSE)-\min_{m=1,\dots,M}(meanRMSE)} $$
where m runs over the M = 32 CMIP5 models. Note that the rescaling is based on the maximum and minimum values of the mean RMSE so that the relative differences between the models' performances are preserved.
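The rescaling of Eq. 2 and the construction of the grand-RMSE amount to the following (illustrative Python with placeholder per-model scores):

```python
import numpy as np

def minmax_scale(scores):
    """Eq. 2: rescale per-model scores to [0, 1] while preserving relative spacing."""
    v = np.asarray(scores, float)
    return (v - v.min()) / (v.max() - v.min())

# one mean RMSE per model, already averaged over subregions (NE, SE) and seasons (JJA, DJF)
ts_mean_rmse = np.random.rand(32)   # placeholders for the 32 CMIP5 models
pr_mean_rmse = np.random.rand(32)

grand_rmse = 0.5 * (minmax_scale(ts_mean_rmse) + minmax_scale(pr_mean_rmse))
```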
Rank histogram to assess the performance of an ensemble
Additionally, to assess the performance of an ensemble as a whole, we apply the rank histogram to the year-to-year seasonal (JJA and DJF) averaged HISTALP and simulated data (Talagrand et al. 1997; Annan and Hargreaves 2010; Maher et al. 2019) for the two SMILEs with sufficiently long historical simulations (MPI-GE and EC_EARTH-LE) and for the CMIP5 ensemble. Consider an ensemble with n members. At each time step (1861–2005), we count the number of ensemble members whose value exceeds the observed value at that time step, which can range from count = 0 to count = n; the rank of the observation is then count + 1, so count = 0 gives rank = 1 and count = n gives rank = n + 1. We plot the histogram of the ranks and check for consistency with uniformity using a chi-squared test (Annan and Hargreaves 2010). If the ensemble underestimates the observed variability, the observations will frequently lie close to, or outside, the edges of the ensemble, resulting in a u-shaped rank histogram, whereas a well-performing ensemble yields a flat rank histogram.
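The rank histogram and the uniformity test can be sketched as below (illustrative Python; here the rank is based on the number of members below the observation, which gives a mirrored but statistically equivalent histogram to counting the members above it, and the inputs are placeholders):

```python
import numpy as np
from scipy.stats import chisquare

def rank_histogram(obs, ens):
    """obs: (time,), ens: (members, time); rank = 1 + number of members below obs."""
    n = ens.shape[0]
    ranks = (ens < obs[None, :]).sum(axis=0) + 1      # values from 1 to n + 1
    return np.bincount(ranks, minlength=n + 2)[1:]    # occupancy of ranks 1 .. n + 1

obs = np.random.randn(145)          # e.g. yearly JJA HISTALP values, 1861-2005
ens = np.random.randn(100, 145)     # e.g. a 100-member single-model large ensemble
hist = rank_histogram(obs, ens)
stat, p = chisquare(hist)           # expected frequencies default to a flat histogram
print(p < 0.01)                     # True -> significantly non-uniform (e.g. u-shaped)
```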
Using observations to constrain the CMIP5 ensemble
Ranking CMIP5 models based on their historical performance (1861–2005)
To begin with, we assess the historical performance of the 32 CMIP5 models against the HISTALP observations (1861–2005) using the ranking method described above. At first, the seasonal RMSE, rank, and NSE were calculated for the NE and SE subregions for both seasons separately, then averaged over the primary target area (Fig. 3). The 32 CMIP5 models show diverging performance in capturing past seasonal TS and PR variability (Fig. 3). Some models (e.g., FGOALS-s2; MRI-CGCM3) stand out from the others, suggesting that abandoning the "one model one vote" approach (Knutti 2010) is justified for the target area. The spread between the models' performances is larger in summer (Fig. 3a) than in winter (Fig. 3b); this discrepancy might be rooted in the facts that (i) summertime convective precipitation is more challenging for GCMs to capture (Dai 2006) and (ii) the regional surface temperature warming signal over the past decades is more pronounced in summer than in winter. For some of the models (e.g., CanESM2; CSIRO-Mk3.6) the seasonal rank is negative, along with higher seasonal RMSE values. A negative rank means a negative correlation between the observed and simulated time series, indicating that the model is out of phase with the long-term observed changes.
Surface temperature (TS: red crosses) and precipitation (PR: blue circles) seasonal ranks (uppermost panel), NSE (middle panel), and RMSE (lower panel) for the historical era (1861–2005) across 32 CMIP5 climate models (listed below the x-axis) a for summer (June-July-August: JJA) and b for winter (December-January-February: DJF)
In the next steps, we first average the seasonal statistics to obtain the mean RMSE, rank, and NSE (Fig. 4a) per variable, second, rescale them to the range 0–1 using Eq. 2, and third, average the rescaled mean RMSE, rank, and NSE values to obtain the grand-RMSE, -rank, and -NSE (Fig. 4b). The six models that performed above the 90th percentile of the CMIP5 ensemble on any of the three metrics (FGOALS-s2; IPSL-CM5B-LR; MPI-ESM-LR; MRI-CGCM3; MRI-ESM1; GISS-E2-R-CC) are selected as those that skillfully reproduce multidecadal TS and PR variability over the past ~ 150 years in East-Central Europe (Fig. 4b; Table 3). To give a more visual picture of the six top-performing models' past climate variability, 31-year moving averaged TS and PR time series for 1861–2005 are plotted against the HISTALP observations for both JJA and DJF for the two Greater Alpine subregions separately (Fig. S1). Overall, the models show a large spread in their historical simulations in the two subregions for both variables and seasons, which is visibly reduced among the six selected models (see the colored solid lines in Fig. S1).
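The selection step itself reduces to a percentile threshold (illustrative Python with placeholder scores; we assume here that "above the 90th percentile" means the top 10% for grand-rank and grand-NSE and, since lower error is better, the bottom 10% for grand-RMSE):

```python
import numpy as np

model_names = np.array([f"model_{i:02d}" for i in range(32)])   # placeholder names
grand_rank = np.random.rand(32)
grand_nse = np.random.rand(32)
grand_rmse = np.random.rand(32)

def best_models(scores, names, higher_is_better=True):
    """Return the models clearing the 90th-percentile skill threshold."""
    if higher_is_better:
        return set(names[scores >= np.percentile(scores, 90)])
    return set(names[scores <= np.percentile(scores, 10)])

# a model is kept if it clears the threshold on *any* of the three grand metrics
selected = (best_models(grand_rank, model_names)
            | best_models(grand_nse, model_names)
            | best_models(grand_rmse, model_names, higher_is_better=False))
```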
a Surface temperature (TS) and precipitation (PR) mean ((JJA + DJF)/2) ranks (upper panel), mean NSE (middle panel), and mean RMSE (lower panel) for East-Central Europe for the historical era (1861–2005) across 32 CMIP5 climate models (indicated below the x-axis). b Box-and-whiskers plot of the scaled [0; 1] grand-rank, grand-NSE, and grand-RMSE (indicated below the x-axis) showing the 10th and 90th percentiles of the 32 CMIP5 models' performances. Red circles show models above the 90th and below the 10th percentiles, respectively. The six models above the 90th percentile in the grand-rank, -NSE, or -RMSE are highlighted in green in panel a
Table 3 Scaled [0; 1] grand-RMSE, -NSE, and -rank of 32 CMIP5 models for East-Central Europe. Models performing above the 90th percentile of the CMIP5 ensemble are italicized
Validation of the ranking based on the NOAA twentieth century reanalysis
To account for possible obscuring effects of the moderate size of the primary target area on the selection of the best-performing models, we repeat the ranking, using only the RMSE statistic, for the extended domain (Sect. 2.1; Fig. 1) based on the gridded NOAA twentieth century reanalysis version 3 (Slivinski et al. 2019). The calculation method is identical to the one applied to the HISTALP records, except that the 32 CMIP5 models are evaluated against the gridded reanalysis product. In Fig. 5, we show the spatial distribution of the reanalysis-based TS/PR mean RMSE relative to the CMIP5 multi-model ensemble mean for the six previously selected models (Fig. 5a–l) and present the TS/PR mean RMSE and the grand-RMSE for each model, averaged over the extended target area (Fig. 1), as a box-and-whiskers plot (Fig. 5m). Spatial maps of the TS/PR mean RMSE for each of the 32 models are additionally shown in Fig. S2. Based on the grand-RMSE for the extended target area (Fig. 5m), only three of the previously selected six models exhibit similarly good overall performance; thus, we further reduce the set of selected models to MRI-CGCM3, MRI-ESM1, and FGOALS-s2 and refer to them as the constrained ensemble.
Spatial map of a–f surface temperature (TS) and g–l precipitation (PR) RMSE based on the NOAA twentieth century reanalysis (1861–2005) relative to the CMIP5 ensemble mean shown for the top 6-ranked models (based on historical instrumental data, see Methods) and m box-and-whiskers plot of the TS (violet) and PR (light blue) RMSE averaged for the Central European (CEU) domain (43°-57° N;4° E-20° E) and the average of the TS and PR RMSE values (Grand RMSE with gray) for 32 CMIP5 models each of which is marked as in the legend. The whiskers extend to the minimums and maximums. The median of each group is indicated with orange horizontal lines. The means are marked with ×
Rank histograms
Furthermore, since internal variability cannot be correctly assessed in a multi-model ensemble because of the initial condition problem and differences in model structures (Branstator and Teng 2010; Knutti 2010; Bódai and Tél 2012), it must be considered that internal variability may leave its fingerprint on our ranking; we therefore study rank histograms of the historical precipitation simulations of the MPI-GE and EC_EARTH-LE in the primary target area. Figure 6a–d shows that both SMILEs underestimate the observed summer and winter precipitation variability (the histograms are u-shaped), which is reinforced by the chi-squared tests indicating significant differences from uniformity (at the 99% confidence level). The CMIP5 multi-model ensemble shows rank histograms similar to the SMILEs' (Fig. 6e–f), except that its winter rank histogram does not differ significantly from a flat one. These results indicate that (i) conclusions based on the internal variability simulated by these two state-of-the-art SMILEs (and possibly by others as well) should be treated with caution and that (ii) observational constraints may indeed be helpful in revealing models with structural advances relative to other models. In the upcoming sections, we further elucidate these issues.
Rank histograms based on the year-to-year seasonal mean HISTALP observed precipitation in the target area for two large ensembles a–b MPI-GE and c–d EC-EARTH-LE, and the CMIP5 multi-model ensemble e–f
A possible source for a reduced projection spread: land-atmosphere couplings
We are particularly interested in what the future projections of the constrained model ensemble look like in East-Central Europe. We find that not only did the ranking result in a reduced spread in the historical simulations (Fig. S1), but the members of the constrained ensemble also show a reduced spread in their future projections relative to the CMIP5 ensemble mean for both summer (Fig. 7a) and winter (Fig. 7b). Moreover, the difference between the CMIP5 ensemble mean (28 models' mean: − 3.9%/decade) and the constrained ensemble mean (3 models' mean: − 0.1%/decade) future precipitation trend is significant based on a two-sample t test (99% confidence level). The three top-ranked models indicate less dry summer and wetter winter conditions in the upcoming decades not only in the primary target area but also over the extended domain, in parallel with a considerable surface temperature rise (Fig. 8). Members of the constrained CMIP5 ensemble indicate a − 0.7 to + 1%/decade summer and a + 1 to + 5%/decade winter precipitation change for East-Central Europe relative to 1971–2000 (Fig. 8). Examining the constrained ensemble members' future seasonal surface temperature projections, we find no noticeable differences relative to the CMIP5 ensemble mean; we therefore rule out the possibility that the discrepancy in future precipitation projections is due to a negative surface temperature bias in those models (Fig. 8).
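The trend comparison can be reproduced schematically as follows (illustrative Python; the precipitation series are placeholders, trends are expressed as a percentage of the 1971–2000 mean per decade, and the two trend samples are compared with a two-sample t test):

```python
import numpy as np
from scipy.stats import ttest_ind

def pct_trend_per_decade(pr, years, base=(1971, 2000), window=(2021, 2085)):
    """Least-squares linear trend of precipitation in % of the baseline mean per decade."""
    base_mean = pr[(years >= base[0]) & (years <= base[1])].mean()
    sel = (years >= window[0]) & (years <= window[1])
    slope_per_year = np.polyfit(years[sel], 100.0 * pr[sel] / base_mean, 1)[0]
    return slope_per_year * 10.0

years = np.arange(1861, 2101)
# placeholder smoothed seasonal precipitation series, one per model
constrained = [pct_trend_per_decade(1 + 0.1 * np.random.randn(years.size), years)
               for _ in range(3)]
unconstrained = [pct_trend_per_decade(1 + 0.1 * np.random.randn(years.size), years)
                 for _ in range(28)]
t, p = ttest_ind(unconstrained, constrained)   # significance of the mean-trend difference
```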
Time series of standardized (relative to 1971–2000 mean) seasonal mean precipitation (PR) for 2021–2085 (31-year moving averaged) for the members of the constrained CMIP5 ensemble (colored solid lines) and the mean of 31 CMIP5 models (thick solid gray line) in addition to the 31 individual models in CMIP5 (thin solid gray lines) for the primary target area (blue box on Fig. 8: 43° N-50° N;13° E-19.5° E) a for JJA and b for DJF
Spatial map of the linear trend (relative to 1971–2000 mean) of a–c and g–i surface temperature (TS: K/decade) and d–f and j–l precipitation (PR: %/decade) for 2021–2085 (31-year moving averaged) under RCP8.5 scenario in the members of the constrained CMIP5 ensemble (see Methods for ranking details) for JJA and DJF
Our results are partly at odds with previous expectations of extensive summer drying in the Central European region (Feng and Fu 2013; Sherwood and Fu 2014; Polade et al. 2015; Pfleiderer et al. 2019). One mechanism proposed for the enhanced summer aridification of the region is the moist lapse-rate feedback due to global warming (Brogli et al. 2019). A warmer atmosphere, following the Clausius-Clapeyron relation, can hold more moisture, which, during moist adiabatic vertical motions, allows enhanced latent heat release and thus upper-tropospheric warming. Altogether this results in increased dry atmospheric static stability, as the thermal stratification remains close to the moist adiabat during summer (Schneider 2007; Brogli et al. 2019). Another mechanism involving changes in atmospheric circulation regimes, such as the poleward-shifted subsidence zone accompanying the projected expansion of the Hadley cell under enhanced radiative forcing, has also been suggested to influence future hydroclimate in the region (Perez et al. 2014; Mann et al. 2018). Nevertheless, the inconclusive literature (e.g., Kröner et al. 2017) hinders a complete understanding of possible future precipitation changes in transitional climatic zones such as Central Europe.
Recent studies highlight a competing role for land-atmosphere interactions, and for the extent of their realistic representation in climate models, in determining future hydroclimate uncertainty in the Mediterranean and Central Europe, where soil moisture largely affects temperature and precipitation via the partitioning of net radiation into sensible and latent heat fluxes (Boberg and Christensen 2012; Lorenz et al. 2016; Vogel et al. 2018; Al-Yaari et al. 2019; Selten et al. 2020). It has also been proposed that it is not enough for a model to faithfully represent observed soil-atmosphere feedbacks, because convection, land-surface, and cloud parametrization schemes not only influence how soil moisture-precipitation feedbacks are handled in a model but in turn also affect soil moisture-temperature feedbacks (Christensen and Boberg 2012). This further underlines both the complexity and the importance of land-atmosphere interactions in determining future hydroclimate uncertainty in our target region.
Members of our constrained CMIP5 ensemble belong to the group of CMIP5 models identified by Vogel et al. (2018) as having (i) more fidelity in representing land-atmosphere couplings and (ii) less pronounced summer hot and dry extremes for Central Europe. A physical mechanism strongly connected to land-atmosphere feedbacks that might balance the decrease in precipitation during a future transition into semi-arid regions in Central Europe has also been suggested (Taylor et al. 2012a). In an early study, Dai (2006) showed that a previous version of MRI-CGCM3 (MRI-CGCM version 2.3.2a) captured observed global rainfall patterns better than other models, indicating that basic features rooted in the model physics (most likely the convective and stratiform precipitation parametrization schemes) can indeed be sources of intermodel spread.
These lines of evidence reinforce the idea of ranking CMIP5 models by their historical performance to constrain their future hydroclimate projections and suggest an important physical mechanism that can explain why our selected models perform better regionally. Furthermore, the presented results provide valuable implications for future RCM simulations and advocate research revisiting the fidelity of land-atmosphere feedbacks in RCM simulations, where the enhanced resolution allows a more detailed picture of regional feedback mechanisms.
Although the 32 GCMs differ in the external forcing components of their historical simulations, no pronounced differences are observable between the members of the constrained ensemble and the rest of the CMIP5 ensemble (not shown). We argue that the varying historical simulation skills are not primarily rooted in differences between the external forcing components, in line with previous studies (e.g., Reichler and Kim 2008). Nevertheless, we note that the choice of the external forcing scenario does influence future model projections (Santer et al. 2018), especially under a changing climate, when the external forcing components are time-dependent (Bódai et al. 2020; Haszpra et al. 2020a, b).
Placing future precipitation projections of the constrained ensemble in the context of SMILE projections
Based on the ranking, we identified a constrained CMIP5 multi-model ensemble that shows reduced spread in its historical and future precipitation projections, indicating less dry summer and wetter winter conditions in the upcoming decades (Figs. 7 and 8). We have also seen that land-atmosphere feedbacks may be of key importance in explaining why some models perform better than others. The advantage of including SMILE simulations in our study is that they provide an estimate (i) of the forced response (ensemble mean) of precipitation to greenhouse gas emissions and (ii) of all possible states allowed by internal variability in a given model (ensemble spread), which allows us to place the observationally constrained CMIP5 ensemble (the three top-ranked models) in the context of internal variability. What is more, with the inclusion of six SMILEs, we can compare the internal variability of projected precipitation across various models and thus obtain a more robust estimate of the future states of regional hydroclimate allowed by internal variability. We are aware of the caveats added by the coarse spatial and topographic resolution of the SMILEs; however, they currently provide our best estimate of projected hydroclimate uncertainty due to internal variability.
We plot the spatial map of the ensemble mean future precipitation projections' linear trends for Europe and the spread across all members of the ensembles as a box-and-whiskers plot for our primary target area (Fig. 1) for summer (Fig. 9) and winter (Fig. 10). In summer, all SMILE mean simulations show drier future conditions in East-Central Europe, indicating a − 2 to − 7%/decade precipitation decrease during the upcoming decades relative to 1971–2000 (Fig. 9), while the constrained CMIP5 ensemble mean trend indicates much less pronounced summer drying (− 0.1%/decade). However, the magnitudes of the ensemble mean projections and the ensemble spread vary considerably across the six SMILEs, which implies a role for model uncertainty in regulating future hydroclimate changes on top of internal variability. Furthermore, the constrained ensemble's mean future (2021–2085) precipitation trend (− 0.1%/decade) lies outside the interquartile range of the internal variability simulated by the six SMILEs ((− 8%, − 1%)/decade). The difference between the group of future precipitation trends spanned by all members of the six SMILEs (a total of 256 members) and that of the constrained ensemble (3 members) is significant based on a two-sample t test at the 99% confidence level (the means of the two groups' trends are − 4.8%/decade for the six SMILEs and − 0.1%/decade for the constrained ensemble).
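Placing the constrained ensemble against the pooled SMILE members involves only quantiles and the same t test (illustrative Python; the numbers below are placeholders chosen merely to mimic the reported magnitudes, not the actual model output):

```python
import numpy as np
from scipy.stats import ttest_ind

smile_trends = np.random.normal(-4.8, 2.5, size=256)   # pooled members of the six SMILEs
constrained_trends = np.array([-0.7, 0.3, 0.1])         # the three top-ranked CMIP5 models

q25, q75 = np.percentile(smile_trends, [25, 75])
outside_iqr = not (q25 <= constrained_trends.mean() <= q75)
t, p = ttest_ind(smile_trends, constrained_trends)
print(outside_iqr, p < 0.01)
```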
Above: a–f spatial map of the ensemble mean (forced component) linear trend (relative to 1971–2000) of summer (June-July-August: JJA) precipitation for 2021–2085 (31-year moving averaged) for the six SMILEs. Below: g box-and-whiskers plot (with the whiskers extending to 1.5 × interquartile range) of JJA precipitation linear trends (relative to 1971–2000) for 2021–2085 (31-year moving averaged) for the CMIP5 multi-model and the six SMILEs (indicated below the x-axis) for the primary target area (indicated by the red rectangles on a–f: 43° N-50° N; 13° E-19.5° E). The median of each ensemble is indicated with numbers above the boxes in addition to the orange lines. The means are marked with ×, while the outliers (extending 1.5 × interquartile range) are marked with +. Trend values of the members of the constrained CMIP5 ensemble are indicated with markers on the first box-and-whiskers
Above: a–f spatial map of the ensemble mean (forced component) linear trend (relative to 1971–2000) of winter (December-January-February: DJF) precipitation for 2021–2085 (31-year moving averaged) for the six SMILEs. Below: g box-and-whiskers plot (with the whiskers extending to 1.5 × interquartile range) of DJF precipitation linear trends (relative to 1971–2000) for 2021–2085 (31-year moving averaged) for the CMIP5 multi-model and the six SMILEs (indicated below the x-axis) for the primary target area (indicated by the red rectangles on a–f: 43° N-50° N; 13° E-19.5° E). The median of each ensemble is indicated with numbers above the boxes in addition to the orange lines. The means are marked with ×, while the outliers (extending 1.5 × interquartile range) are marked with +. Trend values of the members of the constrained CMIP5 ensemble are indicated with markers on the first box-and-whiskers
Since we used observations to constrain the CMIP5 ensemble, which resulted in the selection of models with more realistic representations of land-atmosphere feedbacks, we assume that the difference between the constrained ensemble's and the six SMILEs' future summer precipitation trends may be attributable to land-atmosphere coupling discrepancies between the models. Importantly, except for CESM1, the base models of the large ensemble simulations were either involved in the ranking or had their historical simulations evaluated (see Sect. 3.3). Thus, it is unlikely that the SMILE simulations would regionally outperform the members of the constrained CMIP5 ensemble in capturing observed precipitation variability. Although further efforts are needed to clarify this, these lines of evidence suggest less extreme summer drying in East-Central Europe and indicate that land-atmosphere coupling may play a key role in regulating future summer hydroclimate uncertainty, in line with several recent studies (Boberg and Christensen 2012; Vogel et al. 2018; Selten et al. 2020).
In winter, on the other hand, the six SMILE mean simulations show future regional precipitation changes ranging from − 0.2 to + 4.5%/decade relative to 1971–2000 (Fig. 10a–f). The SMILEs show a range (− 3.7 to + 7.1%/decade) of possible future winter precipitation conditions in our region similar to that of both the unconstrained and the constrained CMIP5 ensemble; however, the differences between the six SMILEs are discernible (Fig. 10g). Unlike in summer, the top three-ranked CMIP5 models' future winter precipitation trend values lie well within the 1.5 × interquartile range of the SMILE simulations (Fig. 10g). However, the relative role of internal variability compared with model structural differences, and the exact physical mechanism responsible for the spread either among the different ensembles or among the members of a SMILE, remain uncertain and need future work to untangle. For example, based on Fig. 10a–f, we note the importance of the exact geographical location of the simulated transition zone between regions with drier and wetter future conditions in the different models. This suggests that internal atmospheric circulation changes may play an important role (e.g., via regulating the extent of the northward expansion of the Hadley cell and thus the subsidence zone (Lu et al. 2007)) in determining the geographical location of the transition between projected drier and wetter conditions, which might also depend on the amount of greenhouse gases emitted in the future (Haszpra et al. 2020b). There is plenty of room for future studies in these directions, which we strongly encourage in the hope of a more comprehensive understanding of future hydroclimate uncertainties.
Summary and conclusions
In this paper, we applied a ranking method to account for the possible role of structural differences among 32 CMIP5 GCMs in regulating hydroclimate projection uncertainty. The assessment of the historical performance of the GCMs resulted in a constrained CMIP5 ensemble with a reduced spread in future seasonal precipitation projections for East-Central Europe. Moreover, the members of the constrained ensemble belong to a group of models previously identified as simulating land-atmosphere coupling more realistically (Vogel et al. 2018), and the mean of their future summer precipitation projections—indicating little-to-no change (− 0.1%/decade)—differs significantly from the unconstrained CMIP5 ensemble mean (− 3.9%/decade) and from the mean of the spread of six SMILEs (− 4.8%/decade). Altogether, these results suggest an important role for land-atmosphere coupling differences among climate models in regulating future summer hydroclimate uncertainty on top of the irreducible internal variability, and they call for caution when interpreting future summer precipitation projections of the state-of-the-art SMILE simulations. We urge coordinated efforts to further quantify the relative contributions of internal variability and model structural differences to future seasonal hydroclimate uncertainty in Central Europe.
Our results also shed light on how future efforts toward reducing hydroclimate uncertainty based on regional climate models may be organized. Recent studies note that RCMs driven by GCMs with more realistic precipitation variability are more likely to produce reliable precipitation projections (Syed et al. 2019). Therefore, the careful selection of driving GCMs for RCMs and the thorough evaluation of RCMs with respect to their land-atmosphere coupling feedbacks (e.g., soil moisture-temperature/precipitation couplings) may be a useful step toward alleviating RCM projection uncertainty; this physical constraint must also be taken into account before downscaling SMILEs to obtain regional ensemble simulations. In light of our results, we emphasize the possibility of less severe summer drying in the upcoming decades than previously thought and advocate the parallel application of SMILE simulations and multi-model ensembles when producing inputs for future policymaking.
All data are publicly available via the Earth System Grid Federation website (https://esgf-node.llnl.gov/projects/cmip5/) and the US CLIVAR Large Ensemble archive (http://www.cesm.ucar.edu/projects/community-projects/MMLEA/).
Ahmed K, Sachindra DA, Shahid S, Demirel MC, Chung ES (2019) Selection of multi-model ensemble of general circulation models for the simulation of precipitation and maximum and minimum temperature based on spatial assessment metrics. Hydrol Earth Syst Sci 23(11):4803–4824. https://doi.org/10.5194/hess-23-4803-2019
Al-Yaari A, Ducharne A, Cheruy F et al (2019) Satellite-based soil moisture provides missing link between summertime precipitation and surface temperature biases in CMIP5 simulations over conterminous United States. Sci Rep 9:1657. https://doi.org/10.1038/s41598-018-38309-5
Annan JD, Hargreaves JC (2010) Reliability of the CMIP3 ensemble. Geophys Res Lett 37:L02703
Auer I, Böhm R, Jurkovic A, Lipa W, Orlik A, Potzmann R, Schöner W, Ungersböck M, Matulla C, Briffa K, Jones P, Efthymiadis D, Brunetti M, Nanni T, Maugeri M, Mercalli L, Mestre O, Moisselin JM, Begert M, Müller-Westermeier G, Kveton V, Bochnicek O, Stastny P, Lapin M, Szalai S, Szentimrey T, Cegnar T, Dolinar M, Gajic-Capka M, Zaninovic K, Majstorovic Z, Nieplova E (2007) HISTALP—historical instrumental climatological surface time series of the greater Alpine region. Int J Climatol 27:17–46. https://doi.org/10.1002/joc.1377
Barnes EA, Polvani LM (2015) CMIP5 projections of arctic amplification, of the North American/North Atlantic circulation, and of their relationship. J Clim 28(13):5254–5271. https://doi.org/10.1175/JCLI-D-14-00589.1
Bartholy J, Pongrácz R (2007) Regional analysis of extreme temperature and precipitation indices for the Carpathian Basin from 1946 to 2001. Glob Planet Change 57(1–2):83–95. https://doi.org/10.1016/j.gloplacha.2006.11.002
Baxter I, Ding Q, Schweiger A et al (2019) How tropical Pacific surface cooling contributed to accelerated sea ice melt from 2007 to 2012 as ice is thinned by anthropogenic forcing. J Clim 32(24):8583–8602. https://doi.org/10.1175/JCLI-D-18-0783.1
Beniston M, Stephenson DB, Christensen OB, Ferro CAT, Frei C, Goyette S, Halsnaes K, Holt T, Jylhä K, Koffi B, Palutikof J, Schöll R, Semmler T, Woth K (2007) Future extreme events in European climate: an exploration of regional climate model projections. Clim Chang 37:71–95. https://doi.org/10.1007/s10584-006-9226-z
Berg A, Findell K, Lintner B, Giannini A, Seneviratne SI, van den Hurk B, Lorenz R, Pitman A, Hagemann S, Meier A, Cheruy F, Ducharne A, Malyshev S, Milly PCD (2016) Land–atmosphere feedbacks amplify aridity increase over land under global warming. Nat Clim Chang 6:869–874. https://doi.org/10.1038/nclimate3029
Boberg F, Christensen JH (2012) Overestimation of Mediterranean summer temperature projections due to model deficiencies. Nat Clim Chang 2:433–436. https://doi.org/10.1038/nclimate1454
Bódai T, Tél T (2012) Annual variability in a conceptual climate model: snapshot attractors, hysteresis in extreme events, and climate sensitivity. Chaos 22:023110. https://doi.org/10.1063/1.3697984
Bódai T, Drótos G, Herein M, Lunkeit F, Lucarini V (2020) The forced response of the El Niño–Southern Oscillation–Indian monsoon teleconnection in ensembles of Earth System Models. J Clim 33(6):2163–2182. https://doi.org/10.1175/jcli-d-19-0341.1
Boer GJ, Lambert SJ (2001) Second-order space-time climate difference statistics. Clim Dyn 17:213–218. https://doi.org/10.1007/PL00013735
Brands S, Herrera S, Fernández J, Gutiérrez JM (2013) How Well Do Cmip5 Earth System Models simulate present climate conditions in Europe and Africa? Clim Dyn 41:803–817. https://doi.org/10.1007/s00382-013-1742-8
Branstator G, Teng H (2010) Two limits of initial-value decadal predictability in a CGCM. J Clim 23(23):6292–6311. https://doi.org/10.1175/2010JCLI3678.1
Brogli R, Kröner N, Sørland SL, Lüthi D, Schär C (2019) The role of Hadley circulation and lapse-rate changes for the future European summer climate. J Clim 32(2):385–404. https://doi.org/10.1175/JCLI-D-18-0431.1
Cheruy F, Dufresne JL, Hourdin F, Ducharne A (2014) Role of clouds and land-atmosphere coupling in midlatitude continental summer warm biases and climate change amplification in CMIP5 simulations. Geophys Res Lett 41:6493–6500. https://doi.org/10.1002/2014GL061145
Christensen JH, Boberg F (2012) Temperature dependent climate projection deficiencies in CMIP5 models. Geophys Res Lett 39:L24705. https://doi.org/10.1029/2012GL053650
Compo GP, Whitaker JS, Sardeshmukh PD, Matsui N, Allan RJ, Yin X, Gleason BE, Vose RS, Rutledge G, Bessemoulin P, Brönnimann S, Brunet M, Crouthamel RI, Grant AN, Groisman PY, Jones PD, Kruk MC, Kruger AC, Marshall GJ, Maugeri M, Mok HY, Nordli Ø, Ross TF, Trigo RM, Wang XL, Woodruff SD, Worley SJ (2011) The twentieth century reanalysis project. Q J R Meteorol Soc 137:1–28. https://doi.org/10.1002/qj.776
Coppola E, Giorgi F, Rauscher SA, Piani C (2010) Model weighting based on mesoscale structures in precipitation and temperature in an ensemble of regional climate models. Clim Res 44:121–134. https://doi.org/10.3354/cr00940
Dai A (2006) Precipitation Characteristics in Eighteen Coupled Climate Models. J Clim 19(18):4605–4630. https://doi.org/10.1175/JCLI3884.1
Dai A, Fyfe J, Xie S et al (2015) Decadal modulation of global surface temperature by internal climate variability. Nat Clim Chang 5:555–559. https://doi.org/10.1038/nclimate2605
Dai A, Luo D, Song M, Liu J (2019) Arctic amplification is caused by sea-ice loss under increasing CO2. Nat Commun 10:121. https://doi.org/10.1038/s41467-018-07954-9
Deser C, Phillips A, Bourdette V, Teng H (2012) Uncertainty in climate change projections: the role of internal variability. Clim Dyn 38:527–546. https://doi.org/10.1007/s00382-010-0977-x
Deser C, Lehner F, Rodgers KB, Ault T, Delworth TL, DiNezio PN, Fiore A, Frankignoul C, Fyfe JC, Horton DE, Kay JE, Knutti R, Lovenduski NS, Marotzke J, McKinnon KA, Minobe S, Randerson J, Screen JA, Simpson IR, Ting M (2020) Insights from Earth system model initial-condition large ensembles and future prospects. Nat Clim Chang 10:277–286. https://doi.org/10.1038/s41558-020-0731-2
Ding Q, Wallace JM, Battisti DS, Steig EJ, Gallant AJE, Kim HJ, Geng L (2014) Tropical forcing of the recent rapid Arctic warming in northeastern Canada and Greenland. Nature 509:209–212. https://doi.org/10.1038/nature13260
Ding Q, Schweiger A, L'Heureux M, Steig EJ, Battisti DS, Johnson NC, Blanchard-Wrigglesworth E, Po-Chedley S, Zhang Q, Harnos K, Bushuk M, Markle B, Baxter I (2019) Fingerprints of internal drivers of Arctic sea ice loss in observations and model simulations. Nat Geosci 12:28–33. https://doi.org/10.1038/s41561-018-0256-8
Drótos G, Bódai T, Tél T (2015) Probabilistic concepts in a changing climate: a snapshot attractor picture. J Clim 28(8):3275–3288. https://doi.org/10.1175/JCLI-D-14-00459.1
Drótos G, Bódai T, Tél T (2016) Quantifying nonergodicity in nonautonomous dissipative dynamical systems: an application to climate change. Phys Rev E 94:022214. https://doi.org/10.1103/PhysRevE.94.022214
Drótos G, Bódai T, Tél T (2017) On the importance of the convergence to climate attractors. Eur Phys J Spec Top 226:2031–2038. https://doi.org/10.1140/epjst/e2017-70045-7
Dyer E, Washington R, Teferi Taye M (2019) Evaluating the CMIP5 ensemble in Ethiopia: creating a reduced ensemble for rainfall and temperature in Northwest Ethiopia and the Awash basin. Int J Climatol 40:2964–2985. https://doi.org/10.1002/joc.6377
Feng S, Fu Q (2013) Expansion of global drylands under a warming climate. Atmos Chem Phys 13:10081–10094. https://doi.org/10.5194/acp-13-10081-2013
Garfinkel CI, Adam O, Morin E, Enzel Y, Elbaum E, Bartov M, Rostkier-Edelstein D, Dayan U (2020) The role of zonally averaged climate change in contributing to intermodel spread in CMIP5 predicted local precipitation changes. J Clim 33(3):1141–1154. https://doi.org/10.1175/JCLI-D-19-0232.1
Gautam J, Mascaro G (2018) Evaluation of Coupled Model Intercomparison Project Phase 5 historical simulations in the Colorado River basin. Int J Climatol 38:3861–3877. https://doi.org/10.1002/joc.5540
Gleckler PJ, Taylor KE, Doutriaux C (2008) Performance metrics for climate models. J Geophys Res 113:D06104. https://doi.org/10.1029/2007JD008972
Harrison SP, Bartlein PJ, Izumi K, Li G, Annan J, Hargreaves J, Braconnot P, Kageyama M (2015) Evaluation of CMIP5 palaeo-simulations to improve climate projections. Nature Clim Chang 5:735–743. https://doi.org/10.1038/nclimate2649
Haszpra T, Herein M (2019) Ensemble-based analysis of the pollutant spreading intensity induced by climate change. Sci Rep 9:3896. https://doi.org/10.1038/s41598-019-40451-7
Haszpra T, Herein M, Bódai T (2020a) Investigating ENSO and its teleconnections under climate change in an ensemble view – a new perspective. Earth Syst Dynam 11:267–280. https://doi.org/10.5194/esd-11-267-2020
Haszpra T, Topál D, Herein M (2020b) On the time evolution of the Arctic Oscillation and related wintertime phenomena under different forcing scenarios in an ensemble approach. J Clim 33(8):3107–3124. https://doi.org/10.1175/JCLI-D-19-0004.1
Hawkins E, Sutton R (2009) The potential to narrow uncertainty in regional climate predictions. B Am Meteor Soc 90(8):1095–1108. https://doi.org/10.1175/2009BAMS2607.1
Hazeleger W, Severijns C, Semmler T, Ştefănescu S, Yang S, Wang X, Wyser K, Dutra E, Baldasano JM, Bintanja R, Bougeault P, Caballero R, Ekman AML, Christensen JH, van den Hurk B, Jimenez P, Jones C, Kållberg P, Koenigk T, McGrath R, Miranda P, van Noije T, Palmer T, Parodi JA, Schmith T, Selten F, Storelvmo T, Sterl A, Tapamo H, Vancoppenolle M, Viterbo P, Willén U (2010) EC-Earth: a seamless earth-system prediction approach in action. B Am Meteor Soc 91(10):1357–1364. https://doi.org/10.1175/2010BAMS2877.1
Herein M, Márfy J, Drótos G, Tél T (2016) Probabilistic concepts in intermediate-complexity climate models: a snapshot attractor picture. J Clim 29(1):259–272. https://doi.org/10.1175/JCLI-D-15-0353.1
Herein M, Drótos G, Haszpra T, Márfy J, Tél T (2017) The theory of parallel climate realizations as a new framework for teleconnection analysis. Sci Rep 7:44529. https://doi.org/10.1038/srep44529
Hirschi M, Seneviratne SI, Alexandrov V, Boberg F, Boroneant C, Christensen OB, Formayer H, Orlowsky B, Stepanek P (2010) Observational evidence for soil-moisture impact on hot extremes in southeastern Europe. Nat Geosci 4:17–21. https://doi.org/10.1038/ngeo1032
Huang J, Yu H, Guan X, Wang G, Guo R (2016) Accelerated dryland expansion under climate change. Nat Clim Chang 6:166–171. https://doi.org/10.1038/nclimate2837
Jeffrey S et al (2013) Australia's CMIP5 submission using the CSIRO-Mk3.6 model. Aust Meteorol Oceanogr J 63:1–13. https://doi.org/10.22499/2.6301.001
Kay JE, Deser C, Phillips A, Mai A, Hannay C, Strand G, Arblaster JM, Bates SC, Danabasoglu G, Edwards J, Holland M, Kushner P, Lamarque JF, Lawrence D, Lindsay K, Middleton A, Munoz E, Neale R, Oleson K, Polvani L, Vertenstein M (2015) The Community Earth System Model (CESM) large ensemble project: a community resource for studying climate change in the presence of internal climate variability. B Am Meteor Soc 96(8):1333–1349. https://doi.org/10.1175/BAMS-D-13-00255.1
Kirchmeier-Young MC, Zwiers FW, Gillett NP (2017) Attribution of extreme events in Arctic Sea ice extent. J Clim 30(2):553–571. https://doi.org/10.1175/JCLI-D-16-0412.1
Knutti R (2008) Should we believe model predictions of future climate change? Philos Trans R Soc A 366:4647–4664. https://doi.org/10.1098/rsta.2008.0169
Knutti R (2010) The end of model democracy? Clim Chang 102:395–404. https://doi.org/10.1007/s10584-010-9800-2
Knutti R, Sedláček J (2013) Robustness and uncertainties in the new CMIP5 climate model projections. Nat Clim Chang 3:369–373. https://doi.org/10.1038/nclimate1716
Knutti R, Sedlacek J, Sanderson B et al (2017) A climate model projection weighting scheme accounting for performance and interdependence. Geophys Res Lett 44:1909–1918. https://doi.org/10.1002/2016GL072012
Kröner N, Kotlarski S, Fischer E, Lüthi D, Zubler E, Schär C (2017) Separating climate change signals into thermodynamic, lapse-rate and circulation effects: theory and application to the European summer climate. Clim Dyn 48:3425–3440. https://doi.org/10.1007/s00382-016-3276-3
Lamarque JF, Bond TC, Eyring V, Granier C, Heil A, Klimont Z, Lee D, Liousse C, Mieville A, Owen B, Schultz MG, Shindell D, Smith SJ, Stehfest E, van Aardenne J, Cooper OR, Kainuma M, Mahowald N, McConnell JR, Naik V, Riahi K, van Vuuren DP (2010) Historical (1850–2000) gridded anthropogenic and biomass burning emissions of reactive gases and aerosols: methodology and application. Atmos Chem Phys 10:7017–7039. https://doi.org/10.5194/acp-10-7017-2010
L'Heureux M, Tippett MK, Kumar A et al (2017) Strong relations between ENSO and the Arctic Oscillation in the North American multimodel ensemble. Geophys Res Lett 44(11):654–662. https://doi.org/10.1002/2017GL074854
Lorenz EN (1963) Deterministic Nonperiodic Flow. J Atmos Sci 20(2):130–141. https://doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2
Lorenz R, Argüeso D, Donat MG, Pitman AJ, van den Hurk B, Berg A, Lawrence DM, Chéruy F, Ducharne A, Hagemann S, Meier A, Milly PCD, Seneviratne SI (2016) Influence of land-atmosphere feedbacks on temperature and precipitation extremes in the GLACE-CMIP5 ensemble. J Geophys Res Atmos 121:607–623. https://doi.org/10.1002/2015JD024053
Lovino MA, Müller OV, Berbery EH, Müller GV (2018) Evaluation of CMIP5 retrospective simulations of temperature and precipitation in northeastern Argentina. Int J Climatol 38:e1158–e1175. https://doi.org/10.1002/joc.5441
Lu J, Vecchi GA, Reichler T (2007) Expansion of the Hadley cell under global warming. Geophys Res Lett 34:L06805. https://doi.org/10.1029/2006GL028443
Maher N et al (2019) The Max Planck Institute Grand Ensemble – enabling the exploration of climate system variability. J Adv Model 11:2050–2069. https://doi.org/10.1029/2019MS001639
Mann ME et al (2018) Projected changes in persistent extreme summer weather events: the role of quasi-resonant amplification. Sci Adv 4(10):eaat3272. https://doi.org/10.1126/sciadv.aat3272
McCabe GJ, Palecki MA (2006) Multidecadal climate variability of global lands and oceans. Int J Climatol 26:849–865. https://doi.org/10.1002/joc.1289
Merrifield AL, Brunner L, Lorenz R et al (2019) A weighting scheme to incorporate large ensembles in multi-model ensemble projections. Earth Syst Dyn Discuss (in review). https://doi.org/10.5194/esd-2019-69
Min SK, Hense A (2006) A Bayesian approach to climate model evaluation and multi-model averaging with an application to global mean surface temperatures from IPCC AR4 coupled climate models. Geophys Res Lett 33:L08708. https://doi.org/10.1029/2006GL025779
Murphy JM, Sexton DMH, Barnett DN, Jones GS, Webb MJ, Collins M, Stainforth DA (2004) Quantification of modelling uncertainties in a large ensemble of climate change simulations. Nature 430:768–772. https://doi.org/10.1038/nature02771
Nash JE, Sutcliffe JV (1970) River flow forecasting through conceptual models part I - a discussion of principles. J Hydrol 10(3):280–290. https://doi.org/10.1016/0022-1694(70)90255-6
Olson R, An S, Fan Y et al (2019) A novel method to test non-exclusive hypotheses applied to Arctic ice projections from dependent models. Nat Commun 10:3016. https://doi.org/10.1038/s41467-019-10561-x
Perez J, Menendez M, Mendez FJ, Losada IJ (2014) Evaluating the performance of CMIP3 and CMIP5 global climate models over the north-east Atlantic region. Clim Dyn 43:2663–2680. https://doi.org/10.1007/s00382-014-2078-8
Pfleiderer P, Schleussner C, Kornhuber K et al (2019) Summer weather becomes more persistent in a 2 °C world. Nat Clim Chang 9:666–671. https://doi.org/10.1038/s41558-019-0555-0
Pieczka I, Pongrácz R, Szabóné André K, Kelemen FD, Bartholy J (2017) Sensitivity analysis of different parameterization schemes using RegCM4.3 for the Carpathian region. Theor Appl Climatol 130:1175–1188. https://doi.org/10.1007/s00704-016-1941-4
Polade S, Pierce D, Cayan D et al (2015) The key role of dry days in changing regional climate and precipitation regimes. Sci Rep 4:4364. https://doi.org/10.1038/srep04364
Reichler T, Kim J (2008) How well do coupled models simulate today's climate? B Am Meteor Soc 89(3):303–312. https://doi.org/10.1175/BAMS-89-3-303
Rodgers KB, Lin J, Frölicher TL (2015) Emergence of multiple ocean ecosystem drivers in a large ensemble suite with an Earth system model. Biogeosciences 12:3301–3320. https://doi.org/10.5194/bg-12-3301-2015
Rosenzweig C, Karoly D, Vicarelli M, Neofotis P, Wu Q, Casassa G, Menzel A, Root TL, Estrella N, Seguin B, Tryjanowski P, Liu C, Rawlins S, Imeson A (2008) Attributing physical and biological impacts to anthropogenic climate change. Nature 453:353–357. https://doi.org/10.1038/nature06937
Ruti PM, Somot S, Giorgi F, Dubois C, Flaounas E, Obermann A, Dell'Aquila A, Pisacane G, Harzallah A, Lombardi E, Ahrens B, Akhtar N, Alias A, Arsouze T, Aznar R, Bastin S, Bartholy J, Béranger K, Beuvier J, Bouffies-Cloché S, Brauch J, Cabos W, Calmanti S, Calvet JC, Carillo A, Conte D, Coppola E, Djurdjevic V, Drobinski P, Elizalde-Arellano A, Gaertner M, Galàn P, Gallardo C, Gualdi S, Goncalves M, Jorba O, Jordà G, L'Heveder B, Lebeaupin-Brossier C, Li L, Liguori G, Lionello P, Maciàs D, Nabat P, Önol B, Raikovic B, Ramage K, Sevault F, Sannino G, Struglia MV, Sanna A, Torma C, Vervatis V (2016) Med-CORDEX initiative for Mediterranean climate studies. B Am Meteor Soc 97(7):1187–1208. https://doi.org/10.1175/BAMS-D-14-00176.1
Sanderson BM, Knutti R, Caldwell P (2015) Addressing interdependency in a multimodel ensemble by interpolation of model properties. J Clim 28(13):5150–5170. https://doi.org/10.1175/JCLI-D-14-00361.1
Santer BD et al (2018) Human influence on the seasonal cycle of tropospheric temperature. Science 361(6399):eaas8806. https://doi.org/10.1126/science.aas8806
Schneider T (2007) The thermal stratification of the extratropical troposphere. In: Schneider T, Sobel AH (eds) The Global Circulation of the Atmosphere. Princeton University Press, pp 47–77
Schwingshackl C, Hirschi M, Seneviratne SI (2018) A theoretical approach to assess soil moisture–climate coupling across CMIP5 and GLACE-CMIP5 experiments. Earth Syst Dynam 9:1217–1234. https://doi.org/10.5194/esd-9-1217-2018
Screen JA, Simmonds I (2010) The central role of diminishing sea ice in recent Arctic temperature amplification. Nature 464:1334–1337. https://doi.org/10.1038/nature09051
Selten FM, Bintanja R, Vautard R, van den Hurk BJJM (2020) Future continental summer warming constrained by the present-day seasonal cycle of surface hydrology. Sci Rep 10:4721. https://doi.org/10.1038/s41598-020-61721-9
Seneviratne S, Lüthi D, Litschi M et al (2006) Land–atmosphere coupling and climate change in Europe. Nature 443:205–209. https://doi.org/10.1038/nature05095
Seneviratne SI, Wilhelm M, Stanelle T (2013) Impact of soil moisture-climate feedbacks on CMIP5 projections: First results from the GLACE-CMIP5 experiment. Geophys Res Lett 40:5212–5217. https://doi.org/10.1002/grl.50956
Senftleben D, Lauer A, Karpechko A (2020) Constraining uncertainties in CMIP5 projections of September Arctic sea ice extent with observations. J Clim 33(4):1487–1503. https://doi.org/10.1175/JCLI-D-19-0075.1
Sherwood S, Fu Q (2014) A drier future? Science 343(6172):737–739. https://doi.org/10.1126/science.1247620
Sheffield J, Barrett AP, Colle B, Nelun Fernando D, Fu R, Geil KL, Hu Q, Kinter J, Kumar S, Langenbrunner B, Lombardo K, Long LN, Maloney E, Mariotti A, Meyerson JE, Mo KC, David Neelin J, Nigam S, Pan Z, Ren T, Ruiz-Barradas A, Serra YL, Seth A, Thibeault JM, Stroeve JC, Yang Z, Yin L (2013) North American climate in CMIP5 experiments. Part I: evaluation of historical simulations of continental and regional climatology. J Clim 26(23):9209–9245. https://doi.org/10.1175/JCLI-D-12-00592.1
Sillmann J, Kharin VV, Zhang X, Zwiers FW, Bronaugh D (2013) Climate extremes indices in the CMIP5 multimodel ensemble: part 1. Model evaluation in the present climate. J Geophys Res Atmos 118:1716–1733. https://doi.org/10.1002/jgrd.50203
Sippel S, Meinshausen N, Fischer EM, Székely E, Knutti R (2020) Climate change now detectable from any single day of weather at global scale. Nat Clim Chang 10:35–41. https://doi.org/10.1038/s41558-019-0666-7
Slivinski LC, Compo GP, Whitaker JS, Sardeshmukh PD, Giese BS, McColl C, Allan R, Yin X, Vose R, Titchner H, Kennedy J, Spencer LJ, Ashcroft L, Brönnimann S, Brunet M, Camuffo D, Cornes R, Cram TA, Crouthamel R, Domínguez-Castro F, Freeman JE, Gergis J, Hawkins E, Jones PD, Jourdain S, Kaplan A, Kubota H, Blancq FL, Lee TC, Lorrey A, Luterbacher J, Maugeri M, Mock CJ, Moore GWK, Przybylak R, Pudmenzky C, Reason C, Slonosky VC, Smith CA, Tinz B, Trewin B, Valente MA, Wang XL, Wilkinson C, Wood K, Wyszyński P (2019) Towards a more reliable historical reanalysis: Improvements for version 3 of the Twentieth Century Reanalysis system. Q J R Meteorol Soc 145:2876–2908. https://doi.org/10.1002/qj.3598
Stainforth DA, Allen MR, Tredger ER, Smith LA (2007) Confidence, uncertainty and decision-support relevance in climate predictions. Phil Trans R Soc A 365:2145–2161. https://doi.org/10.1098/rsta.2007.2074
Suh MS, Oh SG, Lee DK, Cha DH, Choi SJ, Jin CS, Hong SY (2012) Development of new ensemble methods based on the performance skills of regional climate models over South Korea. J Clim 25(20):7067–7082. https://doi.org/10.1175/JCLI-D-11-00457.1
Swart NC, Fyfe JC, Hawkins E, Kay JE, Jahn A (2015) Influence of internal variability on Arctic sea ice trends. Nat Clim Chang 5:86–89. https://doi.org/10.1038/nclimate2483
Syed FS, Latif M, Al-Maashi A et al (2019) Regional climate model RCA4 simulations of temperature and precipitation over the Arabian Peninsula: sensitivity to CORDEX domain and lateral boundary conditions. Clim Dyn 53:7045–7064. https://doi.org/10.1007/s00382-019-04974-z
Talagrand O, Vautard R, Strauss B (1997) Evaluation of probabilistic prediction systems. Proc. ECMWF Workshop on Predictability, Reading, United Kingdom, ECMWF, 1–25, https://www.ecmwf.int/en/elibrary/12555-evaluation-probabilistic-prediction-systems
Taylor C, de Jeu R, Guichard F et al (2012a) Afternoon rain more likely over drier soils. Nature 489:423–426. https://doi.org/10.1038/nature11377
Taylor KE, Stouffer RJ, Meehl GA (2012b) An overview of CMIP5 and the experiment design. B Am Meteor Soc 93(4):485–498. https://doi.org/10.1175/BAMS-D-11-00094.1
Topál D, Ding Q, Mitchell J, Baxter I, Herein M, Haszpra T, Luo R, Li Q (2020) An internal atmospheric process determining summertime Arctic sea ice melting in the next three decades: lessons learned from five large ensembles and multiple CMIP5 climate simulations. J Clim 33(17):7431–7454. https://doi.org/10.1175/JCLI-D-19-0803.1
Verfaillie D, Favier V, Gallée H, Fettweis X, Agosta C, Jomelli V (2019) Regional modeling of surface mass balance on the Cook Ice Cap, Kerguelen Islands (49°S, 69°E). Clim Dyn 53:5909–5925. https://doi.org/10.1007/s00382-019-04904-z
Vogel MM, Zscheischler J, Seneviratne SI (2018) Varying soil moisture–atmosphere feedbacks explain divergent temperature extremes and precipitation projections in central Europe. Earth Syst Dyn 9:1107–1125. https://doi.org/10.5194/esd-9-1107-2018
Wang C, Zhang L, Lee SK, Wu L, Mechoso CR (2014) A global perspective on CMIP5 climate model biases. Nat Clim Chang 4:201–205. https://doi.org/10.1038/nclimate2118
Yapo ALM, Diawara A, Kouassi BK, Yoroba F, Sylla MB, Kouadio K, Tiémoko DT, Koné DI, Akobé EY, Yao KPAT (2020) Projected changes in extreme precipitation intensity and dry spell length in Côte d'Ivoire under future climates. Theor Appl Climatol 140:871–889. https://doi.org/10.1007/s00704-020-03124-4
Zeng X, Pielke RA, Eykholt R (1993) Chaos theory and its applications to the atmosphere. B Am Meteor Soc 74(4):631–644. https://doi.org/10.1175/1520-0477(1993)074<0631:CTAIAT>2.0.CO;2
We thank Tamás Bódai and two anonymous reviewers, in addition to the Editor and Ian Baxter, for their insightful comments, which helped to considerably improve the paper. The authors also acknowledge the ESGF site (https://esgf.llnl.gov) and the US CLIVAR Large Ensembles Working Group, in addition to the HISTALP database and the NOAA 20C Reanalysis project. We also acknowledge the support of the Hungarian Academy of Sciences "Lendület" program (LP2012-27/2012). This is contribution No. 68 of the 2 ka Palæoclimatology Research Group.
Fortran codes of the analysis are available upon request from D.T. ([email protected]).
Open access funding provided by ELKH Research Centre for Astronomy and Earth Sciences. D.T. was supported by the ÚNKP-19-3 New National Excellence Program of the Ministry for Innovation and Technology and grant NTP-NFTÖ-18 of the Ministry of Human Capacities. The work of I.G.H was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.
Institute for Geological and Geochemical Research, Research Centre for Astronomy and Earth Sciences, MTA Centre for Excellence, Budapest, H-1112, Hungary
Dániel Topál, István Gábor Hatvani & Zoltán Kern
Earth Research Institute, University of California, Santa Barbara, Santa Barbara, CA, USA
Dániel Topál
István Gábor Hatvani
Zoltán Kern
Z.K. provided the original idea for the paper. D.T. suggested the use of large ensembles and designed the experiment with input from I.G.H. D.T. processed the data, performed the calculations, created the figures, and wrote the manuscript with input from I.G.H. and Z.K. All authors took part in revising the manuscript.
Correspondence to Dániel Topál.
The authors declare that they have no conflict of interest.
Topál, D., Hatvani, I.G. & Kern, Z. Refining projected multidecadal hydroclimate uncertainty in East-Central Europe using CMIP5 and single-model large ensemble simulations. Theor Appl Climatol 142, 1147–1167 (2020). https://doi.org/10.1007/s00704-020-03361-7
Congratulations to Andrei from Tudor Vianu National College, Bucharest, Romania for this excellent solution.
(1) Plot the graph of the function $y=f(x)$ where $f(x) = \sin x +|\sin x|$. Differentiate the function and say where the derivative is defined and where it is not defined.
I observe that $\sin x$ takes positive values between $2k\pi$ and $(2k+1)\pi$ (for integer $k$), that is in the first two quadrants, and $\sin x$ takes negative values between $(2k+1)\pi$ and $(2k+2)\pi$, that is in the third and fourth quadrants. So
$$\eqalign{ f(x) &= 2\sin x \quad for\ 2k\pi \leq x \leq (2k+1)\pi \cr &= 0 \quad for \ (2k+1)\pi \leq x \leq (2k+2)\pi .}$$
The graph of $y = f(x)$ is shown below:
Now I calculate the first derivative of $f(x)$:
$$\eqalign{ f^\prime (x) &= 2\cos x \quad for\ 2k\pi < x < (2k+1)\pi \cr &= 0 \quad for\ (2k+1)\pi < x < (2k+2)\pi}$$
The derivative $f^\prime(x)$ is not defined at the points $x = k\pi$ (for any integer $k$), where the graph has no well-defined tangent.
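To see explicitly why the derivative fails to exist at these points, compare the one-sided slopes. At $x = 2k\pi$ the function is identically $0$ just to the left and equals $2\sin x$ just to the right, so

$$ \lim_{h\to 0^-}\frac{f(2k\pi + h)-f(2k\pi)}{h}=0 \qquad {\rm while} \qquad \lim_{h\to 0^+}\frac{f(2k\pi + h)-f(2k\pi)}{h}=2\cos (2k\pi) = 2, $$

and at $x=(2k+1)\pi$ the one-sided slopes are $-2$ and $0$; since the one-sided slopes disagree, $f^\prime$ does not exist there.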
(2) Now I express the function $f(x) = \sin x + \cos x$ in the form $f(x)=A\sin (x+\alpha)$, find $A$ and $\alpha$ and plot the graph of this function. Similarly I express the function $g(x) = \sin x - \cos x$ in the form $g(x) = B\sin (x +\beta)$ where $-\pi /2 < \beta < \pi /2$, and plot its graph on the same axes.
$$\eqalign { f(x) &= \sin x + \cos x = A\sin (x + \alpha) \cr &= A\sin x\cos\alpha + A\cos x \sin \alpha }$$
As this formula must be valid for any $x$, I obtain: $$A\sin \alpha = 1 \quad {\rm and} \ A\cos \alpha = 1$$ and hence $\tan \alpha = 1$, $\alpha = \pi/4$ and $A=\pm \sqrt 2$.
Comment: If $A$ has the meaning of an amplitude, $A$ is positive, and only the positive solution must be kept. This type of problem is typical for the composition of oscillations.
Hence the graph of this function is a sine graph with a phase shift of $\pi/4$, that is $f(x)=0$ when $x=k\pi - \pi/4$, it takes the value 1 when $x=2k\pi$ and $x=2k\pi + \pi/2$ and the value -1 when $x=(2k+1)\pi$ and $x=2k\pi - \pi/2$, has maximum values $(2k\pi +\pi/4, \sqrt 2)$ and minimum values $((2k+1)\pi + \pi/4, -\sqrt 2)$
In a similar manner I write $g(x) = \sin x - \cos x$:
$$\eqalign { g(x) &= \sin x - \cos x = B\sin (x + \beta) \cr &= B\sin x\cos\beta + B\cos x \sin \beta }$$

So I have $$B\sin \beta = -1 \quad {\rm and} \ B\cos \beta = 1$$ and hence $\tan \beta = -1$, $\beta = -\pi/4$ and $B=\sqrt 2$. Hence the graph of this function is a sine graph with a phase shift of $-\pi/4$, that is $g(x)=0$ when $x=k\pi + \pi/4$, it takes the value 1 when $x=(2k+1)\pi$ and $x=2k\pi + \pi/2$ and the value $-1$ when $x=2k\pi$ and $x=2k\pi - \pi/2$, and it has maximum values $((2k+1)\pi - \pi/4, \sqrt 2)$ and minimum values $(2k\pi - \pi/4, -\sqrt 2)$.
(3) Now, I calculate the function $f(x) = \sin x + |\cos x|$ and plot the graph.
$$\eqalign{ f(x) &= \sin x + \cos x = \sqrt 2 \sin(x + \pi /4) \ for \ 2k\pi -\pi/2\leq x \leq 2k\pi +\pi/2 \cr &= \sin x - \cos x = \sqrt 2 \sin(x - \pi /4)\ for \ 2k\pi + \pi/2 \leq x \leq 2k\pi + 3\pi/2 .}$$
The derivative is not defined at $x= k\pi + \pi/2$ and for other values of $x$ it is:
$$\eqalign{ f^\prime(x) &= \cos x - \sin x \ {\rm for} \ 2k\pi -\pi/2< x < 2k\pi +\pi/2 \cr &= \cos x + \sin x \ {\rm for} \ 2k\pi + \pi/2 < x < 2k\pi + 3\pi/2 .}$$
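As a quick check, at the joining points $x = 2k\pi + \pi/2$ the two branches agree in value, since $\sqrt 2 \sin(x+\pi/4) = \sqrt 2 \sin(x - \pi/4) = 1$ there, but their slopes do not:

$$ \lim_{x\to (2k\pi + \pi/2)^-} f^\prime(x) = \cos\tfrac{\pi}{2} - \sin\tfrac{\pi}{2} = -1 \qquad {\rm and} \qquad \lim_{x\to (2k\pi + \pi/2)^+} f^\prime(x) = \cos\tfrac{\pi}{2} + \sin\tfrac{\pi}{2} = 1, $$

so $f$ is continuous but not differentiable at $x = 2k\pi + \pi/2$; an analogous calculation applies at $x = 2k\pi + 3\pi/2$.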
Event-to-event intensification of the hydrologic cycle from 1.5 °C to a 2 °C warmer world
Gavin D. Madakumbura1,
Hyungjun Kim ORCID: orcid.org/0000-0003-1083-84163,
Nobuyuki Utsumi4,
Hideo Shiogama ORCID: orcid.org/0000-0001-5476-21485,
Erich M. Fischer ORCID: orcid.org/0000-0003-1931-67376,
Øyvind Seland7,
John F. Scinocca8,
Daniel M. Mitchell ORCID: orcid.org/0000-0002-0117-34869,
Yukiko Hirabayashi10 &
Taikan Oki ORCID: orcid.org/0000-0003-4067-46783
Scientific Reports volume 9, Article number: 3483 (2019) Cite this article
The Paris agreement was adopted to hold the global average temperature increase to well below 2 °C and pursue efforts to limit it to 1.5 °C. Here, we investigate the event-to-event hydroclimatic intensity, where an event is a pair of adjacent wet and dry spells, under future warming scenarios. According to a set of targeted multi-model large ensemble experiments, event-wise intensification will significantly increase globally for an additional 0.5 °C warming beyond 1.5 °C. In high latitudinal regions of the North American continent and Eurasia, this intensification is likely to involve overwhelming increases in wet spell intensity. Western and Eastern North America will likely experience more intense wet spells with negligible changes of dry spells. For the Mediterranean region, enhancement of dry spells seems to be dominating compared to the decrease in wet spell strength, and this will lead to an overall event-wise intensification. Furthermore, the extreme intensification could be 10 times stronger than the mean intensification. The high damage potential of such drastic changes between flood and drought conditions poses a major challenge to adaptation, and the findings suggest that risks could be substantially reduced by achieving a 1.5 °C target.
The Paris agreement of 2015 was adopted at the 21st Conference of the Parties of the United Nations Framework Convention on Climate Change following concurrence to hold the global average temperature increase to well below 2 °C and pursue efforts to limit it to 1.5 °C above pre-industrial levels by the year 21001,2,3. Since then, climate scientists have been engaging in efforts to investigate the impacts of an additional half degree warming from 1.5 °C to 2 °C. After a tremendous effort, a special report was produced by the Intergovernmental Panel on Climate Change (IPCC) on the impacts and greenhouse gas emission pathways related to the 1.5 °C global warming target4. Global warming is highly likely to surpass the 1.5 °C target under emission scenarios based on current policies, and stronger climate action than the current pledges made under the Paris Agreement will be required to limit global warming to 1.5 °C5,6. Studies have already discussed the impacts of limiting the warming to 1.5 °C in many areas of earth system sciences7,8,9,10. However, changes in many aspects of natural phenomena on Earth are still uncertain between a 1.5 °C and 2 °C target, and such changes need to be quantified.
Human-induced global warming has contributed to an increase in the magnitude and frequency of climate extremes11. Global warming has the potential to change the frequency and the intensity of precipitation events by intensifying the hydrologic cycle12,13,14,15,16. Precipitation change can manifest itself as rain events becoming more frequent and more intense, more frequent and less intense, or less frequent and more intense17. This can lead to a hydroclimatic intensification with increased consecutive dry days or increased precipitation intensity, or both16,17. From a thermodynamic perspective, this hydroclimatic intensification is mainly linked to the increase in the atmospheric water holding capacity according to the Clausius–Clapeyron (C–C) relation, the increase in evapotranspiration with rising temperatures and the imbalance in the rate of increase of these variables16. By using global and regional climate model experiments, studies have shown that intensification of the water cycle is a consistent and ubiquitous signature of 21st century greenhouse-induced global warming for medium to high warming scenarios16,17. However, such hydroclimatic intensification assessments for lower warming targets such as 1.5 °C and 2 °C have not been conducted yet.
In regard to the daily precipitation, there are days with precipitation (wet spells) and days where no significant precipitation occur (dry spells). The number of these wet and dry spells and their severities are naturally interconnected and potentially related to extreme hydroclimatic events such as droughts and floods. Droughts are naturally associated with sustained periods of little to no precipitation, i.e., extended dry spells. Flood events can arise though short intense storms, and also from continuous periods of heavy or moderate precipitation, which correspond to intensified and/or extended wet spells. Intensification of adjacent dry and wet spells together has the potential to transform conditions into prolonged droughts followed by extreme flooding and vice versa, such as the switch from extreme drought to severe flooding that occurred in California during the recent past18. Such events are even suggested to be increased in California in a higher global warming scenario in an inter-annual context19. However, the changes in the frequency and intensity of wet and dry spells and their interconnectivity at a sub-seasonal to seasonal scale, which can form adverse conditions, are not well understood.
This study was conducted to investigate the global water cycle intensification, with an emphasis on changes in the intensity and frequency of wet and dry spells, which can be expected in 1.5 °C and 2 °C warmer worlds. Analyses were performed at the intra-annual scale, and extreme conditions were assessed as well. For the analyses, we utilized four atmospheric general circulation model (AGCM) experiments from the project titled "half a degree additional warming, prognosis and projected impacts" (HAPPI)2,3. With the models MIROC5, NorESM, CanAM4, and CAM4, three sets of scenarios were employed, namely, a historical scenario for the period 2006–2015 (ALL) and 1.5 °C and 2 °C equilibrium warming scenarios for a 10-year period in the beginning of 22nd century (hypothetically for the 2106–2115 period). Daily precipitation output from 100 ensemble members per scenario per model was used. Inspired by an earlier work16, here, we propose the "event-to-event hydrological intensification index" (E2E), which combines normalized "aggregated precipitation intensity" (API) and "dry spell length" (DSL), to capture the interconnectivity of adjacent dry and wet spells and the intensification of their phase shifts (see Methods for more details). Governing processes that change the wet and dry spell intensity and frequency are likely to be interconnected12,14,16. This will result in changes of DSL and precipitation intensity in an interrelated manner. In an intensified hydrologic cycle, either both variables will increase when the mean precipitation does not change significantly, or the increase in one variable will overwhelm the change in the other when the mean precipitation changes16. E2E provides an integrated assessment of these variables and such assessments of hydroclimatic intensification have been demonstrated to give ubiquitous, and enhanced signals of the hydrologic cycle's response to global warming than individual metrics such as the DSL and precipitation intensity16,17.
Figure 1 shows the multi-model mean change of the E2E between 2 °C and 1.5 °C climates. Probability density functions indicate that there will be a clear increase in the E2E with warming (Fig. S5). The zonal mean indicates that the tropics will face a weakening of event-to-event variability, while mid latitudes will experience a peak, which disappears in the high latitudes. For the additional 0.5 °C warming, a significant decrease in the E2E can be seen over the eastern part of Greenland, Central America, Amazon, Sahara, and East, and South Africa, as well as the Tibetan Plateau. North America, North East Brazil, Southeastern South America, the Mediterranean region, Europe, North, East, and South East Asia, and Southern Australia show a significant increase in the E2E. To understand the changes in total precipitation and dry days during wet and dry spells, the changes were decomposed into the changes in the intensity and the frequency of the spells (see Methods). The box-whisker plots in Fig. 1 show the multi-model ensemble results for the regional mean calculated for each ensemble. Globally, the ensemble mean precipitation showed a total increase of around 110 mm per decade (equivalent to 11 mm per year), which is about a 1.5% increase from 1.5 °C ensemble mean. This increase in precipitation amount is about 1.4% of recent estimations of the annual mean terrestrial precipitation20. Change in frequency of wet spells contributed to 20 mm of that change and the increase in the API contributed to the rest. The multi-model ensemble range of the precipitation change was mostly positive. Total dry days during dry spells showed no apparent change with a small multi-model ensemble range, and this finding was suggestive of a good model agreement. Frequency of dry spells was found to increase slightly while contributing to an increase of around 5 days per decade, but the decrease in the DSL compensates for that. Here, we use regional domains from the IPCC special report on extremes (IPCC SREX regions)21 to investigate regional changes in the East North America (ENA), Amazon (AMZ), South Europe/Mediterranean (MED), North Asia (NAS), and East Asia (EAS) regions. Results for the other SREX regions are included in the Supplementary Materials. With the additional 0.5 °C warming, the AMZ domain averaged mean precipitation showed a decrease and the number of dry days showed an increase. The number of events increased with shorter wet spells and extended dry spells, and both the intensity (DSL) and frequency terms contributed to the increase in total dry days. The decrease in the API was dominant in the AMZ compared to the increase in the DSL, which in turn led the decrease in the E2E. Central America behaved fairly similar to the AMZ (Fig. S3). An area with a significant decrease in the E2E was detected in the Southeastern part of Western Africa and the area north of South Africa, where there were increasing numbers of events with a decrease in the mean wet spell length and mean precipitation and therefore a decreasing API. A significant decrease in the DSL further contribute to the decrease of the E2E. The HAPPI multi-model ensembles have shown that the West African region will experience a decrease in the rainy season length with the additional 0.5 °C warming, which potentially is due to an anomalous migration of the Intertropical Convergence Zone towards the northern equatorial Atlantic region22, which is consistent with the decrease in wet spell lengths observed here. 
Among the regions with a significant increase in the E2E, the ENA experienced an increase in the API. The mean precipitation increased by about 100 mm per decade with no apparent change in the mean total dry days. The decrease in wet spells contributed to a slight decrease in precipitation with a dominating increase in the mean API of about 150 mm. The decrease in dry spell occurrence and increase in mean DSL of about 10 days per decade have offset each other. The significant increase in the E2E in the MED region with the additional 0.5 °C warming was caused by an increase in the DSL along with a decrease in the API. This behavior was common for other Mediterranean climate regions in South America, South Africa, and Australia. In the MED region, total mean precipitation decreased by about 100 mm per decade, while total mean dry days increased by about 20 days per decade. The inter-model ensemble spread for the change in total dry days was smaller, which suggests a robust signal. Occurrence of events decreased with shorter wet spells and longer dry spells, and therefore, a decrease (increase) in the API (DSL) was observed. This could be due to the increased moisture divergence owing to the establishment of quasi-stationary subtropical high-pressure systems with the warming, which has the potential to increase dry days and to decrease precipitation frequency in Mediterranean climate regions23. The EAS region showed a significant increase in the E2E due to the increase in the API even with the DSL decreasing. The mean total precipitation increased by about 180 mm per decade, mainly due to the increase in the API. This increase in the API in the Asian–Australian monsoon regions can be attributed to the increase in summer monsoon precipitation shown in the HAPPI AGCMs24,25. Total mean dry days decreased slightly where the decrease from the DSL change governed compared to the increase from the frequency term. The NAS region behaved similar to the EAS region under the warming climate.
Global spatial map and zonal mean of the 2 °C minus 1.5 °C E2E multi-model ensemble mean of the HAPPI data3 (95% significant level is stippled). Black boxes represent the IPCC AR5 reference regions (http://www.ipcc-data.org/guidelines/pages/ar5_regions.html). Box-whisker plots show the area averaged (land only) difference between 2 °C and 1.5 °C climates (2 °C minus 1.5 °C) for the total precipitation (dP), frequency term (Frq), and intensity term (Int) during wet spells in mm per decade (in the middle panel) globally and over the East North America (ENA), Amazon (AMZ), South Europe/Mediterranean (MED), and East Asia (EAS) regions. The bottom panel is the same as the middle panel but for dry spells where dD is the total number of dry days and the unit is dry days per decade. Box-whisker plots indicate the 10th, 25th, 50th, 75th, and 90th percentiles, and the other points are outliers.
Extreme conditions of the E2E change with warming are shown in Fig. 2 as the 99th percentile value (P99). Spatial patterns and the zonal mean distribution of the E2E P99 were very similar to those of the E2E mean. Spatially, the change due to 0.5 °C warming is about 10 times larger in P99 than in the mean. However, the area fraction with a significant difference shows a reduction, globally. Peak values of the probability distribution of the global mean of P99 anomalies in 1.5 °C and 2 °C climates increased around 10-fold compared to the mean E2E (Fig. S5). The mean of the P99 anomaly distribution increased by about 63%, globally, from 1.5 °C to 2 °C (Tables S1 and S2). A higher skewness is an indicator of an increase in tail end values of the distribution, which could occur through increased extreme DSL or API or both. For instance, regions where the change in DSL makes a higher contribution to the change in E2E, such as MED, are suggested to have more extreme events with larger DSL when they show an increased positive skewness in the E2E P99 anomaly distribution. This is consistent with changes shown in P99 of DSL and API (Fig. S6). A statistically distinguishable (p value < 0.01) clear positive shift in P99 anomaly distributions can be seen between 1.5 °C and 2 °C globally and regionally for many regions such as ENA, MED, NAS, and EAS. The AMZ region with the decreasing E2E experienced a decrease in the peak for 2 °C compared to that for 1.5 °C and an elongated negative tail where 2 °C results had a higher frequency for the E2E range from −0.6 to −1.0.
Global spatial map and zonal mean of the 2 °C minus 1.5 °C E2E multi-model ensemble 99th percentile (P99) of the HAPPI data3 (95% significant level is stippled). Black boxes represent the IPCC AR5 reference regions (http://www.ipcc-data.org/guidelines/pages/ar5_regions.html). Histograms and kernel density estimations (KDE; thick line) are of the area averaged (land only) E2E P99 distribution in each region for the historical (ALL) period and 1.5 °C (15) and 2 °C (20) future scenarios globally and over the East North America (ENA), Amazon (AMZ), South Europe/Mediterranean (MED), and East Asia (EAS) regions. The x-axis shows the E2E P99 values, and the y-axis shows the probability. ALL, 1.5 °C, and 2 °C scenarios are shown by green, blue, and red colors, respectively. The bin width of the P99 histograms was set to 0.1.
This study demonstrates that intensification of the hydrologic cycle will occur with the projected warming, and the emphasis was placed on the sub-seasonal to seasonal variability of combined wet and dry spell characteristics for the additional half degree warming from 1.5 °C to 2 °C. Although some regional studies argued coupled climate ocean–atmospheric internal variability can be important for simulating realistic extreme conditions such as drought26,27, the utilization of multiple models and large ensemble experiments, which has merits such as reduced individual model inherent uncertainties and incorporation of large natural variability, represented the global patterns in accord with previous studies9,17. Based on the multi-model large ensemble AGCM experiments, we showed that warming from 1.5 °C to 2 °C will cause an escalation in the intensification of event-to-event variability in terms of magnitude. The results presented here clearly suggest extreme dry and wet events will increasingly co-occur in an event such as the switch from extreme drought to severe flooding in California during the recent past18, and most recently, the 2018 flood in Japan, which was followed by one of the most intense heatwaves the country has ever faced. At least, in terms of disaster mitigation and water security, there would be significant benefits to limiting global warming to 1.5 °C to dampen the intensified event-to-event variability to which our society will likely be exposed more frequently under the business-as-usual warming.
HAPPI simulations
We used the MIROC5, NorESM, CanAM4, and CAM4 models and three sets of scenarios, namely, a historical scenario (for the 2006–2015 period) and 1.5 °C and 2 °C equilibrium warming scenarios (for a 10-year period at the beginning of the 22nd century, hypothetically the 2106–2115 period). The ALL scenario is forced by observations. Forcing and boundary conditions of the 1.5 °C warming scenario correspond to those of the year 2095 of the representative concentration pathway (RCP) 2.6 of the Coupled Model Intercomparison Project Phase 528. Similar conditions are used for the 2 °C warming scenario, except for greenhouse gases, sea surface temperature and sea ice forcing, which are taken as a weighted combination of RCP2.6 and RCP4.5 scenarios. Further details are given in the HAPPI overview paper3.
Derivation of E2E and the utilization of the HAPPI AGCMs
We derived the "event-to-event hydrological intensification index" (E2E) as follows. First, wet and dry days were demarcated by using the precipitation threshold 1 mm/day. We defined each consecutive wet spell and dry spell as a single event (Fig. S1). The number of these events can change temporally and can represent intra-annual conditions, which will reflect the event-to-event intensification. For each event, we calculate the dry spell length (DSL) as the consecutive number of dry days and the total daily precipitation during the wet spell, which is called the "aggregated precipitation intensity" (API) throughout this study. The E2E is the event-to-event intensification index. The DSL and API values were normalized by their 10 year historical (i.e., the ALL simulation) annual average before calculating the E2E (Eq. 1). The mean of the API can be computed by Eq. 2. Here, P is the annual total precipitation during wet days and nw is the number of wet spells, which is equal (or different by 1) to the number of events (number of dry spells).
$${\rm{E2E}}={\rm{API}}\times {\rm{DSL}}$$
$${\rm API}_{\rm mean}=\frac{P}{n_{w}}$$
In Fig. S1, we demonstrate the derivation of the event-wise E2E by combining spell 1 with 2, 3 with 4, and so on. We further checked the sensitivity of the E2E by shifting the position of one spell, i.e., by combining 2 with 3, 4 with 5, and so on (will use the term E2E#2 for this from hereon). By using the global GPCP-1DD daily precipitation data set29 for the period 1 October 2006–1 October 2015, we derived the observed DSL, API, E2E, and E2E#2. Fig. S1 shows the E2E and E2E#2 results, which indicate that for a 10 year period, they will give similar results for the mean conditions.
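To make the construction of the index concrete, the sketch below (in Python; our own illustration, not the authors' processing code) derives event-wise DSL, API and E2E from a single daily precipitation series, pairing spell 1 with 2, 3 with 4, and so on, and normalising by prescribed reference means in place of the 10-year ALL-period averages.

```python
import numpy as np

def e2e_events(precip, api_ref, dsl_ref, wet_thresh=1.0):
    """Compute event-wise E2E from a daily precipitation series (mm/day).

    An event is an adjacent wet-spell/dry-spell pair; API is the total
    precipitation of the wet spell, DSL the length of the dry spell, and
    both are normalised by reference means (api_ref, dsl_ref) before
    multiplication, following Eq. (1).
    """
    wet = precip >= wet_thresh
    # split the series into runs of consecutive wet or dry days
    change = np.flatnonzero(np.diff(wet.astype(int))) + 1
    runs = np.split(np.arange(precip.size), change)

    spells = [("wet" if wet[r[0]] else "dry", r) for r in runs]
    events = []
    # pair spell 1 with 2, 3 with 4, ... (adjacent wet/dry pairs)
    for first, second in zip(spells[0::2], spells[1::2]):
        pair = dict([first, second])
        if "wet" not in pair or "dry" not in pair:
            continue
        api = precip[pair["wet"]].sum()
        dsl = pair["dry"].size
        events.append((api / api_ref) * (dsl / dsl_ref))
    return np.array(events)

# toy example with synthetic rainfall (reference means are arbitrary here)
rng = np.random.default_rng(0)
p = rng.gamma(0.3, 8.0, size=3650)           # ~10 years of daily precipitation
e2e = e2e_events(p, api_ref=20.0, dsl_ref=5.0)
print(e2e.mean(), np.percentile(e2e, 99))
```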
Daily precipitation output from 100 ensemble members per scenario of each model was used. Initially, the event-wise DSL, API, and E2E were calculated for each ensemble (i.e., for 10 years) in their original model resolution. For the analysis of the extreme cases of hydroclimatic intensity, the 99th percentile (P99) of E2E was then obtained along with the DSL and API components of that event. This resulted in 100 values for each parameter (i.e., P99 of E2E, etc.) per model per experiment (ALL, 1.5 °C, and 2 °C). Before combining these parameters for the multi-model analysis, results were regridded into a 1-degree resolution and concatenated to calculate the multi-model data (i.e., 400 values per experiment for each grid). When deriving the probability density distributions, to remove the model inherent biases for each experiment of each model, the ensemble mean value of the ALL experiment was removed before regridding and concatenating. For instance, in the P99 E2E values of the ALL, 1.5 °C, and 2 °C experiments with the MIROC model, the ensemble mean of ALL from the same model was deducted from all values. Afterward, the anomalies were obtained. Comparison between modeled and observed variables are shown in Fig. S2.
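The bias-removal and concatenation step could be sketched as follows (a minimal illustration with hypothetical array shapes; in practice this is applied per grid point after regridding to the common 1-degree grid):

```python
import numpy as np

def anomalies(p99, experiments=("ALL", "1.5C", "2.0C")):
    """Remove each model's ALL ensemble mean, then concatenate the models.

    p99[model][experiment] is assumed to hold the 100 ensemble values of the
    P99 E2E at one grid point; the result has 4 models x 100 members = 400
    anomaly values per experiment.
    """
    out = {exp: [] for exp in experiments}
    for model, runs in p99.items():
        bias = runs["ALL"].mean()          # model-inherent bias estimate
        for exp in experiments:
            out[exp].append(runs[exp] - bias)
    return {exp: np.concatenate(vals) for exp, vals in out.items()}

# toy data standing in for the four HAPPI AGCMs
rng = np.random.default_rng(1)
p99 = {m: {e: rng.normal(2.0 + i, 0.3, 100)
           for i, e in enumerate(("ALL", "1.5C", "2.0C"))}
       for m in ("MIROC5", "NorESM", "CanAM4", "CAM4")}
anom = anomalies(p99)
print({e: round(v.mean(), 2) for e, v in anom.items()})
```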
Intensity-frequency decomposition
The total change in the wet day precipitation (total dry days) during each decade of each ensemble was investigated in the context of the frequency and intensity of the wet (dry) events. Here, the frequency is the number of wet/dry spells and the intensity is the API (DSL) for wet (dry) spells. For wet spells, frequency–intensity decomposition is as follows. If the total precipitation (P) can be represented as a combination of the mean precipitation intensity (I, that is the mean API for wet spells) and mean frequency (n), i.e., as P = n.I, then change in the total precipitation from 1.5 °C to 2 °C warming can be decomposed as follows:
$$\Delta P = P^{\prime} - P = (n+\Delta n)(I+\Delta I) - nI = \Delta n\cdot I + n\cdot\Delta I + \Delta n\cdot\Delta I$$
P′ is the precipitation under 2 °C conditions, and P, n, and I are the parameters under 1.5 °C conditions; Δ represents the change between 1.5 °C and 2 °C climates. Here, the Δn.I term represents the change due to the frequency change and n.ΔI represents the change due to the intensity change. Δn.I and n.ΔI will be called the frequency term and intensity term from now on30. This decomposition was conducted for precipitation larger than 1 mm/day (i.e., precipitation during wet days) in warming scenarios 1.5 °C and 2 °C. For dry spells, we can replace P with the total dry days (D) and I is equal to the mean DSL. We found that the covariance term was negligible during our analysis (Fig. S3).
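A worked numerical example of this decomposition (our own toy numbers, not values from the HAPPI ensembles):

```python
import numpy as np

def decompose(n1, i1, n2, i2):
    """Split the change in total precipitation P = n * I between two climates
    into frequency, intensity and covariance terms."""
    dn, di = n2 - n1, i2 - i1
    freq_term = dn * i1          # Δn · I
    int_term = n1 * di           # n · ΔI
    cov_term = dn * di           # Δn · ΔI (found to be negligible in the text)
    return freq_term, int_term, cov_term

# toy numbers: 40 wet spells of mean API 25 mm become 41 spells of 27 mm,
# so ΔP = 41*27 - 40*25 = 107 mm, split as 25 + 80 + 2
print(decompose(40, 25.0, 41, 27.0))
```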
Significance tests
A two-tailed Student's t-test was applied to calculate the statistical significance shown in the spatial figures of Figs 1, 2 and Fig. S3. The statistical significance of the probability density functions was assessed using a two-sided Kolmogorov–Smirnov test.
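As a minimal example of the two tests named above (SciPy calls on synthetic anomaly samples, not the actual HAPPI output):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
e2e_15 = rng.normal(0.00, 0.05, 400)   # e.g. 1.5 °C P99 anomalies at one grid point
e2e_20 = rng.normal(0.02, 0.05, 400)   # e.g. 2 °C P99 anomalies

t_stat, p_t = stats.ttest_ind(e2e_20, e2e_15)      # two-tailed Student's t-test
ks_stat, p_ks = stats.ks_2samp(e2e_20, e2e_15)     # two-sided Kolmogorov-Smirnov test
print(f"t-test p = {p_t:.3f}, KS p = {p_ks:.3f}")
```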
UNFCCC. Adoption of the Paris Agreement FCCC/CP/2015/L.9/Rev.1, http://unfccc.int/resource/docs/2015/cop21/eng/l09r01.pdf, United Nations Framework Convention on Climate Change (2015).
Mitchell, D. et al. Realizing the impacts of a 1.5 °C warmer world. Nat. Clim. Change 6, 735–737 (2016).
Mitchell, D. et al. Half a degree additional warming, prognosis and projected impacts (HAPPI): background and experimental design. Geosci. Model Dev. 10, 571 (2017).
IPCC Global Warming of 1.5 °C (eds Masson-Delmotte, V. et al.) (World Meteorological Organization, 2018).
Rogelj, J. et al. Paris Agreement climate proposals need a boost to keep warming well below 2 °C. Nature 534, 631 (2016).
Rogelj, J. et al. In Global Warming of 1.5 °C (eds Masson-Delmotte, V. et al.) Ch. 2 (World Meteorological Organization, 2018).
Kraaijenbrink, P. D. A., Bierkens, M. F. P., Lutz, A. F. & Immerzeel, W. W. Impact of a global temperature rise of 1.5 degrees Celsius on Asia's glaciers. Nature 549, 257 (2017).
King, A. D., Karoly, D. J. & Henley, B. J. Australian climate extremes at 1.5 °C and 2 °C of global warming. Nat. Clim. Change 7, 412–416 (2017).
Lehner, F. et al. Projected drought risk in 1.5 °C and 2 °C warmer climates. Geophys. Res. Lett. 44, 7419–7428 (2017).
Döll, P. et al. Risks for the global freshwater system at 1.5 °C and 2 °C global warming. Environ. Res. Lett. 13, 044038 (2018).
IPCC Climate Change 2013: The Physical Science Basis (eds Stocker, T. F. et al.) (Cambridge Univ. Press, 2013).
Trenberth, K. E. Conceptual framework for changes of extremes of the hydrological cycle with climate change. Clim. Change 42, 327–339 (1999).
Allen, M. R. & Ingram, W. J. Constraints on the future changes in climate and the hydrological cycle. Nature 419, 224–232 (2002).
Trenberth, K. E., Dai, A., Rasmussen, R. & Parsons, D. The changing character of precipitation. Bull. Am. Meteorol. Soc. 84, 1205–1217 (2003).
Trenberth, K. E. Changes in precipitation with climate change. Clim. Res. 47, 123–138 (2011).
Giorgi, F. et al. Higher hydroclimatic intensity with global warming. J. Clim. 24, 5309–5324 (2011).
Giorgi, F., Coppola, E. & Raffaele, F. A consistent picture of the hydroclimatic response to global warming from multiple indices: Models and observations. J. Geophys. Res. Atmos. 119, 11–695 (2014).
Wang, S. Y. S., Yoon, J. H., Becker, E. & Gillies, R. California from drought to deluge. Nat. Clim. Change 7, 465–468 (2017).
Swain, D. L., Langenbrunner, B., Neelin, J. D. & Hall, A. Increasing precipitation volatility in twenty-first-century California. Nat. Clim. Change 8, 427 (2018).
Park, K. J., Yoshimura, K., Kim, H. & Oki, T. Chronological Development of Terrestrial Mean Precipitation. Bull. Am. Meteorol. Soc. 98, 2411–2428 (2017).
IPCC Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (eds Field, C. B. et al.) (Cambridge Univ. Press, 2012).
Saeed, F. et al. Robust changes in tropical rainy season length at 1.5 °C and 2 °C. Environ. Res. Lett. 13, 064024 (2018).
Polade, S. D., Gershunov, A., Cayan, D. R., Dettinger, M. D. & Pierce, D. W. Precipitation in a warming world: Assessing projected hydro-climate changes in California and other Mediterranean climate regions. Sci. Rep. 7, 10783 (2017).
Chevuturi, A., Klingaman, N. P., Turner, A. G. & Hannah, S. Projected Changes in the Asian‐Australian Monsoon Region in 1.5 °C and 2.0 °C Global‐Warming Scenarios. Earth's Future 6, 339–358 (2018).
Lee, D. et al. Impacts of half a degree additional warming on the Asian summer monsoon rainfall characteristics. Environ. Res. Lett. 13, 044033 (2018).
Coats, S. et al. Internal ocean-atmosphere variability drives megadroughts in Western North America. Geophys. Res. Lett. 43, 9886–9894 (2016).
Seager, R., Kushnir, Y., Herweijer, C., Naik, N. & Velez, J. Modeling of tropical forcing of persistent droughts and pluvials over western North America: 1856–2000. J. Clim. 18, 4065–4088 (2005).
Taylor, K. E., Stouffer, R. J. & Meehl, G. A. An overview of CMIP5 and the experiment design. Bull. Am. Meteorol. Soc. 93, 485–498 (2012).
Huffman, G. J. et al. Global precipitation at one-degree daily resolution from multisatellite observations. J. Hydrometeorol. 2, 36–50 (2001).
Utsumi, N., Kim, H., Kanae, S. & Oki, T. Which weather systems are projected to cause future changes in mean and extreme precipitation in CMIP5 simulations? J. Geophys. Res. Atmos. 121, 10,522–10,537 (2016).
H.K., Y.H. and H.S. are supported by Integrated Research Program for Advancing Climate Models (TOUGOU program) from the Ministry of Education, Culture, Sports, Science and Technology, Japan. H.K. acknowledges Grant‐in‐Aid for Scientific Research (18KK0117) from JSPS. H.K. and T.O. acknowledge support by Grant-in-Aid for Specially promoted Research 16H06291 from JSPS. N.U. is supported by JSPS Overseas Research Fellowships. The GSWP3 is archived and provided under the framework of the Data Integration and Analysis System (DIAS) funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT).
Gavin D. Madakumbura
Present address: Department of Atmospheric and Oceanic Sciences, University of California, Los Angeles, Los Angeles, CA, USA
Department of Civil Engineering, The University of Tokyo, Tokyo, Japan
Institute of Industrial Science, The University of Tokyo, Tokyo, Japan
Hyungjun Kim & Taikan Oki
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
Nobuyuki Utsumi
Center for Global Environmental Research, National Institute for Environmental Studies, Tsukuba, Japan
Hideo Shiogama
Institute for Atmospheric and Climate Science, ETH Zurich, Universitätstrasse 16, 8092, Zurich, Switzerland
Erich M. Fischer
Norwegian Meteorological Institute, Oslo, Norway
Øyvind Seland
Canadian Centre for Climate Modelling and Analysis, Environment and Climate Change Canada, University of Victoria, Victoria, V8W 2Y2, Canada
John F. Scinocca
School of Geographical Sciences, University of Bristol, Bristol, UK
Daniel M. Mitchell
Department of Civil Engineering, Shibaura Institute of Technology, 3-7-5 Toyosu, Koto-ku, Tokyo, Japan
Yukiko Hirabayashi
Hyungjun Kim
Taikan Oki
H.K. conceived the idea of the study. G.D.M. and H.K. conducted the analyses and developed the manuscript. D.M.M., H.S., E.M.F., Ø.S., J.F.S. conducted the HAPPI AGCM experiments. N.U., H.S., E.M.F., Y.H. and T.O. contributed to the analyses and the interpretation by provision of comments and feedback.
Correspondence to Hyungjun Kim.
Madakumbura, G.D., Kim, H., Utsumi, N. et al. Event-to-event intensification of the hydrologic cycle from 1.5 °C to a 2 °C warmer world. Sci Rep 9, 3483 (2019). https://doi.org/10.1038/s41598-019-39936-2
CLRS Solutions 24.2 Single-source shortest paths in directed acyclic graphs
Run $\text{DAG-SHORTEST-PATHS}$ on the directed graph of Figure 24.5, using vertex $r$ as the source.
$d$ values:
$$ \begin{array}{cccccc} r & s & t & x & y & z \\ \hline 0 & \infty & \infty & \infty & \infty & \infty \\ 0 & 5 & 3 & \infty & \infty & \infty \\ 0 & 5 & 3 & 11 & \infty & \infty \\ 0 & 5 & 3 & 10 & 7 & 5 \\ 0 & 5 & 3 & 10 & 7 & 5 \\ 0 & 5 & 3 & 10 & 7 & 5 \end{array} $$
$\pi$ values:
$$ \begin{array}{cccccc} r & s & t & x & y & z \\ \hline \text{NIL} & \text{NIL} & \text{NIL} & \text{NIL} & \text{NIL} & \text{NIL} \\ \text{NIL} & r & r & \text{NIL} & \text{NIL} & \text{NIL} \\ \text{NIL} & r & r & s & \text{NIL} & \text{NIL} \\ \text{NIL} & r & r & t & t & t \\ \text{NIL} & r & r & t & t & t \\ \text{NIL} & r & r & t & t & t \end{array} $$
Suppose we change line 3 of $\text{DAG-SHORTEST-PATHS}$ to read
3 for the first |V| - 1 vertices, taken in topologically sorted order
Show that the procedure would remain correct.
When we reach vertex $v$, the last vertex in the topological sort, it must have $out\text-degree$ $0$. Otherwise there would be an edge pointing from a later vertex to an earlier vertex in the ordering, a contradiction. Thus, the body of the for-loop of line 4 is never entered for this final vertex, so we may as well not consider it.
The PERT chart formulation given above is somewhat unnatural. In a more natural structure, vertices would represent jobs and edges would represent sequencing constraints; that is, edge $(u, v)$ would indicate that job $u$ must be performed before job $v$. We would then assign weights to vertices, not edges. Modify the $\text{DAG-SHORTEST-PATHS}$ procedure so that it finds a longest path in a directed acyclic graph with weighted vertices in linear time.
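A solution is not worked out here; one natural modification (sketched below in Python as our own illustration, not the textbook's pseudocode) is to initialise each vertex's value with its own weight and to relax edges with a maximum instead of a minimum while scanning the vertices in topologically sorted order, which keeps the running time at $O(V + E)$. Tracking predecessors during relaxation would recover the path itself.

```python
def topological_sort(adj):
    """Return the vertices of a DAG in topologically sorted order (DFS-based)."""
    order, seen = [], set()
    def visit(u):
        if u in seen:
            return
        seen.add(u)
        for v in adj[u]:
            visit(v)
        order.append(u)
    for u in adj:
        visit(u)
    return list(reversed(order))

def longest_path_weight(adj, weight):
    """Maximum total vertex weight over all paths in a vertex-weighted DAG.

    Analogue of DAG-SHORTEST-PATHS: process vertices in topological order
    and relax edges with max instead of min; O(V + E) overall.
    """
    order = topological_sort(adj)
    best = {u: weight[u] for u in order}   # best[u] = heaviest path ending at u
    for u in order:
        for v in adj[u]:
            best[v] = max(best[v], best[u] + weight[v])
    return max(best.values())

# example PERT-style chart: jobs with durations as vertex weights
adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
weight = {"a": 3, "b": 2, "c": 5, "d": 1}
print(longest_path_weight(adj, weight))    # 9, via a -> c -> d
```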
Give an efficient algorithm to count the total number of paths in a directed acyclic graph. Analyze your algorithm.
We will compute the total number of paths by counting, for each vertex $v$, the number of paths with at least one edge that end at $v$, which will be stored in an attribute $v.paths$. Assume that initially we have $v.paths = 0$ for all $v \in V$. Since every vertex adjacent to $u$ occurs later in the topological sort, $u.paths$ already has its final value when $u$ is processed, so each edge $(u, v)$ contributes the $u.paths$ paths ending at $u$ extended by $(u, v)$, plus the single-edge path $(u, v)$ itself. (If single-vertex paths are also counted, add $|V|$ to the result.) Topological sort takes $O(V + E)$ and the nested for-loops take $O(V + E)$, so the total runtime is $O(V + E)$.
PATHS(G)
    topologically sort the vertices of G
    for each vertex u, taken in topologically sorted order
        for each v ∈ G.Adj[u]
            v.paths = u.paths + 1 + v.paths
    return the sum of all paths attributes
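For reference, a runnable transcription of this procedure (our Python version, reusing the DFS-based topological sort from the sketch in the previous exercise):

```python
def count_paths(adj):
    """Total number of paths with at least one edge in a DAG.

    paths[v] accumulates the number of paths ending at v: every path
    ending at u is extended by the edge (u, v), plus (u, v) itself.
    Runs in O(V + E).
    """
    order = topological_sort(adj)          # helper defined in the sketch above
    paths = {v: 0 for v in order}
    for u in order:
        for v in adj[u]:
            paths[v] += paths[u] + 1
    return sum(paths.values())

adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(count_paths(adj))    # 6: a->b, a->c, b->d, c->d, a->b->d, a->c->d
```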
Short-term stratospheric ozone fluctuations observed by GROMOS microwave radiometer at Bern
Lorena Moreira ORCID: orcid.org/0000-0002-4791-85001,
Klemens Hocke1,2 &
Niklaus Kämpfer1,2
Earth, Planets and Space volume 70, Article number: 8 (2018) Cite this article
The ground-based millimeter wave ozone spectrometer (GROMOS) has been continually measuring middle atmospheric ozone volume mixing ratio profiles above Bern, Switzerland (\(46.95^{\circ }\hbox {N}\), \(7.44^{\circ }\hbox {E}\), 577 m), since 1994 in the frame of NDACC. The high temporal resolution of GROMOS (30 min) allows the analysis of short-term fluctuations. The present study analyses the temporal perturbations, ranging from 1 to 8 h, observed in stratospheric ozone from June 2011 to May 2012. The short-term fluctuations of stratospheric ozone are within 5%, and GROMOS appears to have relative amplitudes stable over time smaller than 2% at 10 hPa (32 km). The strongest natural fluctuations of stratospheric ozone (about 1% at 10 hPa) above Bern occur during winter due to displacements and deformations of the polar vortex towards mid-latitudes.
Even though ozone is a minor constituent in the atmosphere, it is a component of major interest. Stratospheric ozone filters most of the UV-B radiation from the Sun and thus allows life on Earth. High-resolution observations of minor constituents in the middle atmosphere often reveal small-scale perturbations (present in the horizontal, in the vertical and in time) overlaid upon the mean background distribution (Eckermann et al. 1998). Atmospheric waves were identified as an important source of such variability (Zhu and Holton 1986; Appenzeller and Holton 1997; Eckermann et al. 1998; Calisesi et al. 2001; Fritts and Alexander 2003; Moustaoui et al. 2003; Hocke et al. 2006; Noguchi et al. 2006; Flury et al. 2009; Chane et al. 2016). For example, the wintertime Arctic middle atmosphere is characterised by the presence of large amplitude planetary Rossby waves that often interact with the stratospheric polar vortex and trigger sudden stratospheric warming (SSW) events (Chandran et al. 2013). These events are characterised by a sudden increase in the temperature and a reversal of the zonal wind. The effect in ozone at mid-latitudes is depletion in the lower and upper stratosphere. The depletion in lower stratospheric ozone is due to transport of ozone poor air from the polar vortex, whereas the ozone depletion in the upper stratosphere is caused by the sudden increase in temperature (Flury et al. 2009). Gravity waves (GW) may also play a role in forcing of SSW when propagating into the stratosphere as a consequence of variations in the tropopause jet during instabilities in the upper troposphere (Flury et al. 2010; Yamashita et al. 2010). Mid-latitude gravity waves produce periodic fluctuations of ozone volume mixing ratio in the upper stratosphere and lower mesosphere (Hocke et al. 2006) possibly due to vertical advection of air parcels by gravity waves (Zhu and Holton 1986; Eckermann et al. 1998). Another example is, for instance, horizontal mixing processes through the transport barriers, either by small-scale structures (filaments or laminae) or by large-scale structures (streamers), are thought to play a significant role in the variability of atmospheric trace gases in the middle stratosphere (Krüger et al. 2005). Atmospheric soundings at middle and high latitudes frequently disclose enhancements or depletions of lower stratospheric ozone confined to narrow vertical layers (Eckermann et al. 1998). These structures are known as filaments or laminae, and are mainly observed at the edge of the polar vortex in the lower stratosphere (tropopause–25 km), generated by planetary Rossby wave breaking (Eckermann et al. 1998; Krüger et al. 2005). Regarding the large-scale structures, there are two types of stratospheric streamers: the tropical–subtropical streamer and the polar vortex streamer (Waugh 1993; Offermann et al. 1999; Manney et al. 2001; Eyring et al. 2003; Krüger et al. 2005). These structures transport tropical–subtropical and polar vortex air masses into mid-latitudes more frequently during Arctic winter. The breaking of planetary Rossby waves at the edge of the polar vortex (polar vortex streamers) or at the edge of the tropics (tropical–subtropical streamers) seems to be linked with the transport of air masses into mid-latitudes (Krüger et al. 2005). The observed effect in ozone is the meridional mixing of ozone (Eyring et al. 2003). 
The stratospheric streamers occur preferentially at higher altitudes above 20 km, in the middle stratosphere, in contrast to filaments (laminae) which occur below 25 km, in the lower stratosphere (Krüger et al. 2005). Nevertheless, a streamer can eventually develop into a filament-like structure (Waugh 1993; Krüger et al. 2005).
Many of the instruments used for measuring the composition of the atmosphere make use of the spectral properties of its constituent gases (Parrish 1994). Millimeter wave radiometry is a well-established tool for the monitoring of atmospheric species. It is a passive remote sensing technique which detects radiation emitted by rotational transitions of molecules in the atmosphere. The spectral analysis of the pressure broadened lines emitted by the species under analysis permits the retrieval of the vertical profile from the lower stratosphere to the mesosphere (20–70 km) (Kämpfer 1995). The technique enables day-round measurements, as there is no reliance on the Sun as a source, and works under nearly all weather conditions. In addition, microwave radiometry provides high temporal resolution. In the present study, we take advantage of this feature and analyse the short-term fluctuations (1–4 h) of stratospheric ozone measured by the GROMOS radiometer. The aim of this analysis is to initiate a new field of study regarding short-term stratospheric perturbations in trace constituents, ozone in our case, since we are not aware of any other studies on this topic, except for Hocke et al. (2006), but that study is restricted to mesospheric fluctuations. The characterisation of the short-term ozone fluctuations can lead to a better understanding of the role of atmospheric waves and nonlinear wave-wave interactions in inducing perturbations in trace gas profiles.
"The GROMOS radiometer" section describes the GROMOS instrument along with an overview of the retrieval method, which has been modified to enable the study of these small-scale perturbations. In "Method" section is explained the method used for this purpose. "Results and discussion" section shows the results obtained along with a discussion on the geophysical causes of short-term mid- and upper stratospheric ozone fluctuations. And finally "Conclusions" section offers some concluding remarks.
The GROMOS radiometer
GROMOS (ground-based millimeter wave ozone spectrometer) was constructed by the Institute of Applied Physics of the University of Bern. The instrument has been operated in Bern (\(46.95^{\circ }\hbox {N}\), \(7.44^{\circ }\hbox {E}\), 577 m) since November 1994 in the scope of the network for the detection of atmospheric composition change (NDACC). NDACC is an international global network of more than 80 stations making high-quality measurements of atmospheric composition that began official operations in 1991 (Mazière et al. 2017). The GROMOS microwave radiometer detects the thermal emission of the pressure broadened rotational transition of ozone at 142.175 GHz. The spectrum measured by the instrument is given in terms of brightness temperature. The brightness temperature is the physical temperature at which a perfect blackbody would emit the same power as it is measured by the instrument. For a review of technical details on the instrument, we refer to, for instance, Moreira et al. (2015) or Peter (1997).
Retrieval technique
The shape of the spectral line measured by the instrument contains information on the vertical distribution of the emitting molecule, because of the pressure broadening (Parrish 1994). Therefore, the vertical distribution of ozone VMR can be retrieved from the observed emission line shape by means of radiative transfer in a model atmosphere and an optimal estimation method. The atmospheric radiative transfer simulator (ARTS2) (Eriksson et al. 2011) is used as forward model to simulate atmospheric radiative transfer and calculate an ozone spectrum for the modelled atmosphere by using an a priori ozone profile. The inversion is done through the accompanying MATLAB package Qpack2 (Eriksson et al. 2005), which uses the optimal estimation method (OEM) (Rodgers 1976) to derive the best estimate of the vertical profile by combining the measured and modelled spectra. During the inversion process, a priori information is required. The a priori ozone profiles are from a monthly climatology from reanalysis data of the European Centre for Medium-range Weather Forecasts (ECMWF) up to 70 km and extended above by a monthly ozone climatology from observations close to Bern of the satellite microwave limb sounder Aura/MLS. In the present study, we use the retrieval version 111 which is optimised for the retrieval of short-term ozone fluctuations since we take into account uncertainties of the retrieved ozone resulting from the tropospheric opacity as described later in more detail. The a priori covariance matrix of retrieval version 111 is 2 ppm for the diagonal elements, and the values decay exponentially with a correlation length of 3 km for the off-diagonal elements.
Figure 1 shows the a priori (green line–left panel) and the retrieved profile (blue line–left panel) recorded at noon the 28 August 2011 obtained by the retrieval version 111. The averaging kernels (AVKs) and the area of the AVKs, the measurement response (MR), are represented in the middle panel. The AVKs are multiplied by 4 in order to be displayed along with the MR (red line–middle panel). The AVK-lines are the grey lines except for some altitude levels, which are in different colours: orange for 20 km, green for 30 km, magenta for 40 km, cyan for 50 km, black for 60 km and blue for 70 km. The a priori contribution to the retrieved profile can be estimated by 1 minus the area of the AVKs, the so-called measurement response (MR–middle panel). A reliable altitude range for the retrieval is considered where the MR is larger than 0.8, which corresponds to an a priori contribution smaller than 20%. The vertical resolution (cyan line–right panel) is quantified by the full width at half maximum of the AVKs. The vertical resolution of GROMOS lies from 10 to 18 km in the stratosphere and up to 20 km in the middle mesosphere. The same panel shows the altitude peak (magenta line–right) of the corresponding kernels, and as it can be observed in the coloured AVK-lines, the AVKs peak at its nominal altitude for the considered vertical range.
The signal from the stratosphere detected by the instrument is attenuated by the troposphere, mainly due to water vapour content. Tropospheric water vapour significantly influences the measured ozone spectrum by increasing the continuum emission and attenuating the stratospheric ozone emission line. Accordingly, it is important to correct the measured spectra for the tropospheric effect. The tropospheric correction depends upon the opacity of the troposphere. The transmission factor:
$$\begin{aligned} e^{-\tau }=\frac{T_{\rm b}(z_{0})-T_{\rm trop}}{T_{\rm b}(z_{\rm trop})-T_{\rm trop}} \end{aligned}$$
$$\begin{aligned} T_{\rm trop}=T_{\rm surface}+\Delta T \end{aligned}$$
where \(\tau\) is the tropospheric opacity that can be calculated from the wings of the measured spectrum where the wings are about 0.5 GHz away from the ozone line centre. \(T_{\rm trop}\) is the mean tropospheric temperature (Eq. 2), which is estimated according to a linear model proposed by Ingold et al. (1998), considering the surface air temperature (\(T_{\rm surface}\)) measured at a nearby weather station and a temperature offset \(\Delta T=-\,10.4\) K, depending on the frequency range (142 GHz) and on the altitude (577 m.a.s.l). \(T_{\rm b}(z_{\rm trop})\) is the brightness temperature that the radiometer would measure at the tropopause level. \(T_{\rm b}(z_{0})\) is the brightness temperature measured at ground level, and it is estimated from the off-resonance emission at the wings of the spectrum. The brightness temperature of the wings corresponds to the continuum emission due to tropospheric oxygen and water vapour. The larger (smaller) the tropospheric opacity or the smaller (larger) the transmittance of the signal through the troposphere, the larger (smaller) the tropospheric correction. The correction of the tropospheric contribution consists of scaling the amplitude of the measured line spectrum, as if it would be measured at tropopause level in an isothermal troposphere with a mean tropospheric temperature, \(T_{\rm trop}\).
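For illustration, the transmission factor of Eq. (1) and the scaling of the spectrum to tropopause level could look as follows in Python. This is a minimal sketch rather than GROMOS processing code: the approximation of the wing brightness temperature at tropopause level by the cosmic microwave background (about 2.7 K), as well as all numerical values in the example, are our own assumptions.

```python
import numpy as np

T_BG = 2.7  # assumed background brightness temperature at the line wing (K)

def tropospheric_correction(tb_ground, tb_wing, t_surface, delta_t=-10.4):
    """Scale a measured spectrum to tropopause level (Eqs. 1 and 2).

    tb_ground : measured brightness temperature spectrum at ground level (K)
    tb_wing   : off-resonance (wing) brightness temperature used in Eq. (1)
    """
    t_trop = t_surface + delta_t                       # Eq. (2)
    # Eq. (1) applied at the wing, where the stratospheric line contribution
    # is small, so T_b(z_trop) is approximated by the background temperature
    transmission = (tb_wing - t_trop) / (T_BG - t_trop)
    tau = -np.log(transmission)
    # invert the radiative transfer of an isothermal troposphere
    tb_tropopause = (tb_ground - t_trop * (1.0 - transmission)) / transmission
    return tb_tropopause, tau

tb = np.array([90.0, 60.0, 45.0])    # toy spectrum: line centre and wings (K)
tb_corr, tau = tropospheric_correction(tb, tb_wing=45.0, t_surface=288.0)
print(tau, tb_corr)
```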
Nevertheless, because of the tropospheric correction the noise in the spectrum is magnified. In Fig. 2 are shown two measurements of the GROMOS spectra binned in frequency, one for a clear sky case and another for a cloudy sky case. The GROMOS spectra are binned in time and frequency. The binning in time is 30 min. The fast Fourier transform spectrometer (FFTS) has around 32,768 channels, and after the binning in frequency the number of points in frequency is 54 with higher frequency resolution in the line centre compared to the line wings. In the clear sky case, the brightness temperature \((T_{\rm b})\) is around 90 K at the peak (142.175 GHz), whereas in the cloudy sky case the brightness temperature is quite high, 200 K at the peak and 192 K on the wings. Therefore, when at a glance the sky is cloudy, the instrument measures higher brightness temperatures. In the second row are represented the spectra \((T'_{\rm b})\) of both cases after the application of the tropospheric correction. The consequence of a larger tropospheric correction is what we observe in the third row, where the error of the brightness temperature \((\Delta T'_{\rm b})\) for both cases is represented. \(\Delta T'_{\rm b}\) in the cloudy sky case is larger than in the clear sky case, due to the amplification of the error by the tropospheric correction. Accordingly, the retrieval error of the measurements is larger when the tropospheric correction is larger, i.e. smaller tropospheric transmittance. Normally, the GROMOS retrieval is performed with a constant error of \(\Delta T'_{\rm b}=0.8\hbox{K}\), but retrieval version 111 of this study has a variable error depending on the tropospheric transmission:
$$\begin{aligned} \Delta T'_{\rm b}=0.5 {\rm K} +\frac{\Delta T_{\rm b}}{e^{-\tau }} \end{aligned}$$
the error of the measured brightness temperature, \(\Delta T_{\rm b}\), is given by the radiometer equation:
$$\begin{aligned} \Delta T_{\rm b}=\frac{ T_{\rm b}+T_{\rm rec}}{\sqrt{\Delta f \cdot t_{\rm int}}} \end{aligned}$$
The radiometer equation gives the resolution of the radiation measured, which is determined by the bandwidth of the individual spectrometer channels (\(\Delta f\)), by the integration time (\(t_{\rm int}\)) and by the total power measured by the spectrometer. A constant error of 0.5 K is considered as a systematic bias of the spectra, due to spectroscopic errors and the water vapour continuum. As it is shown in Fig. 2, the error of the brightness temperature (\(\Delta T_{\rm b}\)) is of the order of a few Kelvins in the line centre and 0.5 K in the line wings of the spectrum. Therefore, the measurement noise (\(\Delta T'_{\rm b}\)) depends on the frequency due to different spectral binning and on the tropospheric transmittance. \(\Delta T'_{\rm b}\) is larger in the ozone line centre since the binning is only over a few channels at the ozone line centre, while the binning in the line wings is over several hundred channels or more. Thus, the thermal noise is reduced in the ozone line wings by averaging over a high number of channels. This is a more realistic approach for the retrieval than considering a constant measurement noise, resulting in an improvement in the retrieved ozone VMR in the lower stratosphere.
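As an illustration of Eqs. (3) and (4), the frequency-dependent measurement noise could be sketched as below. The receiver noise temperature of about 2500 K is quoted later in the text; the channel bandwidths, the effective integration time and the transmittance used in the example are assumed values, not GROMOS settings.

```python
import numpy as np

def measurement_noise(tb, t_rec, bandwidth, t_int, transmission, bias=0.5):
    """Per-channel noise of the tropospherically corrected spectrum.

    Radiometer equation (Eq. 4) amplified by the tropospheric correction and
    offset by a constant systematic bias term (Eq. 3); bandwidth is the
    effective width of the binned channel in Hz, t_int in seconds.
    """
    d_tb = (tb + t_rec) / np.sqrt(bandwidth * t_int)   # Eq. (4)
    return bias + d_tb / transmission                  # Eq. (3), transmission = exp(-tau)

# toy example: a narrow bin at the line centre vs. a broadly averaged wing bin
print(measurement_noise(tb=90.0, t_rec=2500.0, bandwidth=10e3, t_int=900.0, transmission=0.85))
print(measurement_noise(tb=45.0, t_rec=2500.0, bandwidth=1e9,  t_int=900.0, transmission=0.85))
```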
The present study analyses the short-term fluctuations in stratospheric ozone measured by the GROMOS radiometer from June 2011 to May 2012. We selected this time interval since it covers a full year with the winter in the centre. Further, the disturbance of the Northern polar vortex was relatively simple in winter 2011/2012 which is mainly characterised by a minor sudden stratospheric warming in mid-January 2012 (Chandran et al. 2013). Thus, the attribution between the polar vortex disturbance and the behaviour of the short-term ozone fluctuations above Bern is easier in this year compared to other years.
We have used the standard deviation, calculated after removing the linear trend, of 8 consecutive ozone profiles within a time window of about 4 h as a proxy for the strength of the fluctuations. The standard deviation is a measure that is used to quantify the amount of dispersion of a set of data from its mean. The deviation is higher when the data are spread out from the mean, and in our case indicates stronger fluctuations. Since the sampling rate is 30 min, oscillations with periods of 1 h (Nyquist period) to about 8 h will contribute to the calculated standard deviation. An example of these standard deviations is presented in Fig. 3. The first panel of Fig. 3 shows the ozone VMR (ppm) through the blue line, the mean of 8 consecutive ozone values (red line) and the standard deviation calculated every 8 ozone values after the removal of the linear trend (red area) at 10 hPa for a time interval of nearly 2 days at the beginning of June 2011. The second panel of Fig. 3 displays the tropospheric transmittance observed for the same interval as the upper panel. This tropospheric transmittance corresponds to cloudy sky cases, in which tropospheric corrections were performed. We cannot find a clear relation between the tropospheric transmittance (green line) and the fluctuations (red area). Later, we quantify that the uncertainty of the ozone retrieval depends only marginally on the tropospheric transmission. We conclude from Fig. 3 that GROMOS measurements are stable over time with a standard deviation around 2% (0.15 ppm) at 10 hPa (32 km). Generally, the relative amplitudes are stable over time within 5% in the stratosphere (from 20 to 50 km altitude). In addition, Fig. 3 shows that the amplitudes of natural short-term ozone fluctuations are smaller than 2% at 10 hPa for the time interval shown, since the fluctuations also contain the influence of the random retrieval error. The resulting time series are due to natural short-term fluctuations and to some random retrieval errors. The random retrieval error includes the thermal noise on the spectra due to the receiver noise, which propagates into the ozone profiles. Unfortunately, this contribution is impossible to discriminate from the retrieval error; nevertheless, we did not find any artificial periodicity in the temporal range of our study (from 1 up to 8 h). Since the mid-latitude stratosphere is known to be quiet during summer, it can be assumed that the retrieval error is mainly due to the random retrieval error (thermal receiver noise) in summer.
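A minimal sketch of this fluctuation proxy (our own Python illustration, applied to a synthetic ozone series rather than GROMOS data, with the block handling simplified):

```python
import numpy as np

def detrended_std(o3, block=8):
    """Standard deviation of consecutive blocks of `block` ozone values
    (about 4 h for a 30 min sampling), after removing a linear trend."""
    out = []
    for k in range(0, o3.size - block + 1, block):
        y = o3[k:k + block]
        t = np.arange(block)
        trend = np.polyval(np.polyfit(t, y, 1), t)
        out.append(np.std(y - trend))
    return np.array(out)

# toy series: background + a 3 h wave (in ppm) + retrieval noise
t = np.arange(0, 48, 0.5)                       # hours, 30 min sampling
o3 = (7.5 + 0.07 * np.sin(2 * np.pi * t / 3)
      + np.random.default_rng(3).normal(0, 0.1, t.size))
print(detrended_std(o3))
```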
We focus our study on the 10 hPa pressure level in the middle stratosphere where the ozone retrieval is most reliable since the ozone volume mixing ratio is high at 10 hPa, and the influence of the water vapour continuum is smaller at 10 hPa than in the lower stratosphere at 50 hPa. The goal is to investigate whether natural short-term ozone fluctuations can occur exceeding the 2% standard deviation level. These disturbances are believed to occur naturally in the stratosphere primarily due to atmospheric waves propagating from the troposphere into the stratosphere during winter, since the winds are eastward at all altitude levels, and also because the winter stratosphere is more dynamically driven than radiatively driven (Zhu and Holton 1986; Appenzeller and Holton 1997; Eckermann et al. 1998; Calisesi et al. 2001; Fritts and Alexander 2003; Moustaoui et al. 2003; Hocke et al. 2006; Noguchi et al. 2006; Flury et al. 2009; Chane et al. 2016).
The short-term stratospheric ozone fluctuations (\(\sigma _{\rm strato}\)) can also be affected by the tropospheric correction, since the retrieval error of the measurements is larger when the tropospheric correction is larger. This contribution can be estimated; therefore, we have calculated the influence of the tropospheric correction in the retrieved profiles. To bring about this requirement, we consider during the retrieval procedure that the brightness temperature error depends on the transmission factor (Eq. 3). In Fig. 4, we can observe the effect of the tropospheric transmittance in the random retrieval error of the profile, which is provided by the optimal estimation method (the smoothing error is not considered), at different pressure levels for the period from June 2011 to May 2012. The retrieval error is smaller when the tropospheric transmittance is larger. However, this effect is smaller than 0.02 ppm at 10 hPa. The green lines are the mean values of the retrieval error and of the tropospheric transmittance, and the red lines are the values of the second-degree polynomial regression of both variables evaluated at the tropospheric transmittance values.
In order to quantify the fluctuations generated by the tropospheric correction (\(\bar{\sigma }_{\rm retrieval}\)), we performed a second-degree polynomial regression (\(\sigma _{\rm retrieval}(t)=p_{1} t^{2}+p_{2}t+p_{3}\)) between the retrieval error (\(\sigma _{\rm retrieval}\)) and the tropospheric transmittance (t). The resulting coefficients (\(p_{1}, p_{2}, p_{3}\)) are evaluated at the mean of 8 consecutive values of the tropospheric transmittance (\(\bar{t}\)) to obtain \(\bar{\sigma }_{\rm retrieval}(\bar{t})=p_{1} \bar{t}^{2}+p_{2}\bar{t}+p_{3}\). Although \(\bar{\sigma }_{\rm retrieval}\) plays practically no role and could be neglected, we consider it in the calculation of the stratospheric fluctuations \(\left( \sigma _{\rm strato}=\sqrt{\sigma _{\rm total}^{2}-\bar{\sigma }_{\rm retrieval}^{2}}\right)\).
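The regression and the quadrature subtraction described above could be sketched as follows (again our own illustration with synthetic numbers; the polynomial fit and the block averaging follow the description in the text):

```python
import numpy as np

def natural_fluctuation(sigma_total, retrieval_error, transmission, block=8):
    """Remove the transmission-dependent part of the retrieval noise.

    A 2nd-degree polynomial is fitted between retrieval error and tropospheric
    transmittance, evaluated at the mean transmittance of each block of 8
    profiles, and subtracted in quadrature from the total standard deviation
    of that block.
    """
    p = np.polyfit(transmission, retrieval_error, 2)
    n_full = transmission.size // block * block
    t_mean = transmission[:n_full].reshape(-1, block).mean(axis=1)
    sigma_retr = np.polyval(p, t_mean)
    return np.sqrt(np.maximum(sigma_total**2 - sigma_retr**2, 0.0))

# toy data: 2 days of transmittance, per-profile retrieval errors and
# per-block total standard deviations (all in ppm except transmittance)
rng = np.random.default_rng(4)
trans = rng.uniform(0.3, 0.95, 96)
retr_err = 0.05 + 0.03 * (1 - trans) + rng.normal(0, 0.005, trans.size)
sigma_tot = rng.uniform(0.08, 0.15, trans.size // 8)
print(natural_fluctuation(sigma_tot, retr_err, trans))
```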
Figure 5 shows in the first panel the total short-term stratospheric ozone relative fluctuations contained in the GROMOS data in magenta (\(\sigma _{\rm total}\)), which is covered by the stratospheric relative fluctuations (\(\sigma _{\rm strato}\)) in blue, as they are practically identical, and the relative fluctuations caused by the tropospheric opacity (\(\bar{\sigma }_{\rm retrieval}\)) in red, in per cent at 10 hPa (32 km) for the period from June 2011 to May 2012. \(\bar{\sigma }_{\rm retrieval}\) is rather small, and hence the tropospheric transmission has only a minor effect, via the retrieval noise, on the observed temporal fluctuations of stratospheric ozone. Nevertheless, a random retrieval error is still included in \(\sigma _{\rm strato}\). Therefore, the blue line is the sum of the natural ozone fluctuations and the unknown influence of the random noise on the ozone time series. However, we learn from Fig. 5 that an upper limit of around 2% can be set on this contribution. In addition, there seems to be a temporal evolution of \(\sigma _{\rm strato}\), which is more likely due to natural ozone fluctuations, since the receiver noise is constant in time, usually around 2500 K. We show the ozone fluctuations in the middle stratosphere because at this level the random retrieval error is only about 2%, while the disturbed polar vortex edge often reaches mid-latitudes. Thus, we expect enhanced ozone fluctuations above Switzerland during times of a disturbed polar vortex. We can state from Fig. 5 that the strongest fluctuations occur during December and January, with an increase in \(\sigma _{\rm strato}\) of about 1%. Up to now, the magnitude of short-term ozone fluctuations was unknown. Our study shows that the relative standard deviation of short-term ozone fluctuations in the vicinity of the polar vortex is about 1%, i.e. short-term ozone fluctuations in the mid-stratosphere are quite small even at the polar vortex edge. We obtained similar results for other years.
The second panel shows the ozone VMR in ppm at 10 hPa (32 km), allowing its behaviour to be followed during the period under assessment. The standard deviations and the ozone VMR displayed in the first and second panels of Fig. 5 have been smoothed in time by a moving average over an interval of 3–4 days. The third panel shows the geopotential height (GPH) at 10 hPa (32 km) from June 2011 to May 2012 from ECMWF reanalysis data. The stratospheric ozone fluctuations (\(\sigma _{\rm strato}\)) are larger when the GPHs are lower above our location. These lower GPHs are associated with deformations and southward excursions of the polar vortex, which are caused by planetary Rossby wave activity (Calisesi et al. 2001). The fourth panel of Fig. 5 shows the vertical wind from ECMWF reanalysis data. The strong vertical wind oscillations during December and January occur presumably because of planetary Rossby wave breaking. In fact, Chandran et al. (2013) have reported a minor sudden stratospheric warming (SSW) in mid-January 2012. This SSW is also seen in the bottom panel, which shows an increase in the relative fluctuations of potential vorticity at 10 hPa above Bern. It is considered a minor SSW since the zonal mean wind reversal did not reach the 10 hPa (32 km) level. However, we can observe its effect in the short-term stratospheric ozone fluctuations (first panel of Fig. 5) and also in ozone at 10 hPa from ECMWF operational data (Fig. 6). Figure 6 displays plots of potential vorticity, temperature and ozone from ECMWF operational data at 10 hPa for 15 January 2012. From the potential vorticity plot, we know that the polar vortex is shifted southward and Bern is located inside the polar vortex. We notice an increase of about 1% in the stratospheric ozone fluctuations during winter, when the polar vortex makes its incursions towards the mid-latitudes. Enhanced relative standard deviations, i.e. stronger fluctuations (Fig. 5), are found when GROMOS is measuring inside the polar vortex (Fig. 6). Thus, this minor SSW seems to be the reason for the enhancement of short-term stratospheric ozone fluctuations during January 2012.
Nevertheless, the short-term stratospheric ozone fluctuations do not exceed 3% on average. We would have expected stronger amplitudes at the polar vortex edge during times of breaking planetary Rossby waves.
The short-term perturbations in stratospheric ozone were investigated using the data recorded by the GROMOS ground-based microwave radiometer at Bern from June 2011 to May 2012. In the present study, the retrieval takes into account the variable noise of the tropospherically corrected spectra. Accordingly, we can estimate the influence of the random retrieval error on the temporal ozone fluctuations. These ozone fluctuations depend only weakly on the tropospheric transmittance. We find that the effect of tropospheric transmittance on the retrieval error is less than 0.02 ppm at 10 hPa. The contribution to the stratospheric fluctuations is due to the random retrieval error (about 2% at 10 hPa) and to natural short-term ozone fluctuations. We find that during times of a disturbed vortex the short-term ozone fluctuations can reach a standard deviation of about 1% superposed on the random retrieval error. This is a new result which quantifies the magnitude of short-term ozone fluctuations in the wintertime mid-stratosphere at mid-latitudes.
Example of a retrieved (blue line) and an a priori (green line) ozone VMR profile, AVKs (grey and colour lines in the middle panel), the measurement response (MR) (red line in the middle panel), vertical resolution (cyan line in the right panel) and altitude peak (magenta line in the right panel) of the GROMOS retrieval version 111 for 28 August 2011 with an integration time of 30 min. The x-axis label "altitude" of the right panel also applies to the vertical resolution
Binned spectra of a clear sky case and a cloudy sky case. In the first row are represented the brightness temperatures for both cases, whereas in the second row are the spectra corrected for the tropospheric contribution. The third row shows the brightness temperature error for both cases
Ozone VMR (ppm) is represented by the blue line with a time resolution of 30 min as a function of day and day fraction in June 2011. The mean of 8 consecutive ozone values is the red line, and the red area is the standard deviation calculated every 8 values after removing its linear trend at 10 hPa for the time interval of nearly 2 days at the beginning of June 2011. The second panel shows the tropospheric transmittance observed for the same interval as the upper panel
Scatter plot of the tropospheric transmittance and retrieval error at different pressure levels during the time interval from June 2011 to May 2012. The green lines are the mean values of both variables and the red lines are the values of the second-degree polynomial regression of both variables evaluated at the tropospheric transmittance values
Standard deviation of relative short-term ozone fluctuations, ozone VMR, geopotential height, vertical wind and relative fluctuation of potential vorticity at 10 hPa above Bern from June 2011 to May 2012. The standard deviations and the ozone VMR have been smoothed in time by a moving average over an interval of 3–4 days. The magenta line of \(\sigma _{\rm total}\) is just below the blue line of \(\sigma _{\rm strato}\)
ECMWF plots of potential vorticity, temperature and ozone at 10 hPa for the northern hemisphere, for the 15 January 2012. The black dot shows the location of Bern, Switzerland
Appenzeller C, Holton JR (1997) Tracer lamination in the stratosphere: a global climatology. J Geophys Res Atmos 102(D12):13555–13569. https://doi.org/10.1029/97JD00066
Calisesi Y, Wernli H, Kämpfer N (2001) Midstratospheric ozone variability over Bern related to planetary wave activity during the winters 1994–1995 to 1998–1999. J Geophys Res Atmos 106(D8):7903–7916. https://doi.org/10.1029/2000JD900710
Chandran A, Garcia RR, Collins RL, Chang LC (2013) Secondary planetary waves in the middle and upper atmosphere following the stratospheric sudden warming event of January 2012. Geophys Res Lett 40(9):1861–1867. https://doi.org/10.1002/grl.50373
De Mazière M, Thompson AM, Kurylo MJ, Wild J, Bernhard G, Blumenstock T, Hannigan J, Lambert J-C, Leblanc T, McGee TJ, Nedoluha G, Petropavlovskikh I, Seckmeyer G, Simon PC, Steinbrecht W, Strahan S, Sullivan JT (2017) The Network for the Detection of Atmospheric Composition Change (NDACC): history, status and perspectives. Atmos Chem Phys Discuss 2017:1–40. https://doi.org/10.5194/acp-2017-402
Eckermann SD, Gibson-Wilde DE, Bacmeister JT (1998) Gravity wave perturbations of minor constituents: a parcel advection methodology. J Atmos Sci 55(24):3521–3539. https://doi.org/10.1175/1520-0469(1998)055%3c3521:GWPOMC%3e2.0.CO;2
Eriksson P, Jiménez C, Buehler SA (2005) Qpack, a general tool for instrument simulation and retrieval work. J Quant Spectrosc Radiat Transf 91(1):47–64. https://doi.org/10.1016/j.jqsrt.2004.05.050
Eriksson P, Buehler SA, Davis CP, Emde C, Lemke O (2011) Arts, the atmospheric radiative transfer simulator, version 2. J Quant Spectrosc Radiat Transf 112(10):1551–1558. https://doi.org/10.1016/j.jqsrt.2011.03.001
Eyring V, Dameris M, Grewe V, Langbein I, Kouker W (2003) Climatologies of subtropical mixing derived from 3D models. Atmos Chem Phys 3(4):1007–1021. https://doi.org/10.5194/acp-3-1007-2003
Flury T, Hocke K, Haefele A, Kämpfer N, Lehmann R (2009) Ozone depletion, water vapor increase, and PSC generation at midlatitudes by the 2008 major stratospheric warming. J Geophys Res Atmos. https://doi.org/10.1029/2009JD011940
Flury T, Hocke K, Kämpfer N, Wu DL (2010) Enhancements of gravity wave amplitudes at midlatitudes during sudden stratospheric warmings in 2008. Atmos Chem Phys Discuss 10:29971–29995. https://doi.org/10.5194/acpd-10-29971-2010
Fritts DC, Alexander MJ (2003) Gravity wave dynamics and effects in the middle atmosphere. Rev Geophys. https://doi.org/10.1029/2001RG000106
Hocke K, Kämpfer N, Feist DG, Calisesi Y, Jiang JH, Chabrillat S (2006) Temporal variance of lower mesospheric ozone over Switzerland during winter 2000/2001. Geophys Res Lett. https://doi.org/10.1029/2005GL025496
Ingold T, Peter R, Kämpfer N (1998) Weighted mean tropospheric temperature and transmittance determination at millimeter-wave frequencies for ground-based applications. Radio Sci 33(4):905–918. https://doi.org/10.1029/98RS01000
Kämpfer N (1995) Microwave remote sensing of the atmosphere in Switzerland. Opt Eng 34(8):2413–2424. https://doi.org/10.1117/12.205666
Krüger K, Langematz U, Grenfell JL, Labitzke K (2005) Climatological features of stratospheric streamers in the FUB-CMAM with increased horizontal resolution. Atmos Chem Phys 5(2):547–562. https://doi.org/10.5194/acp-5-547-2005
Manney GL, Michelsen HA, Bevilacqua RM, Gunson MR, Irion FW, Livesey NJ, Oberheide J, Riese M, Russell JM, Toon GC, Zawodny JM (2001) Comparison of satellite ozone observations in coincident air masses in early November 1994. J Geophys Res Atmos 106(D9):9923–9943. https://doi.org/10.1029/2000JD900826
Ming FC, Vignelles D, Jegou F, Berthet G, Renard J-B, Gheusi F, Kuleshov Y (2016) Gravity-wave effects on tracer gases and stratospheric aerosol concentrations during the 2013 ChArMEx campaign. Atmos Chem Phys Discuss. https://doi.org/10.5194/acp-2015-889
Moreira L, Hocke K, Eckert E, von Clarmann T, Kämpfer N (2015) Trend analysis of the 20-year time series of stratospheric ozone profiles observed by the GROMOS microwave radiometer at Bern. Atmos Chem Phys 15(19):10999–11009. https://doi.org/10.5194/acp-15-10999-2015
Moustaoui M, Teitelbaum H, Valero FPJ (2003) Ozone laminae inside the antarctic vortex produced by poleward filaments. Q J R Meteorol Soc 129(594):3121–3136. https://doi.org/10.1256/qj.03.19
Noguchi K, Imamura T, Oyama KI, Bodeker GE (2006) A global statistical study on the origin of small-scale ozone vertical structures in the lower stratosphere. J Geophys Res Atmos. https://doi.org/10.1029/2006JD007232
Offermann D, Grossmann K-U, Barthol P, Knieling P, Riese M, Trant R (1999) Cryogenic Infrared Spectrometers and Telescopes for the Atmosphere (CRISTA) experiment and middle atmosphere variability. J Geophys Res Atmos 104(D13):16311–16325. https://doi.org/10.1029/1998JD100047
Parrish A (1994) Millimeter-wave remote sensing of ozone and trace constituents in the stratosphere. Proc IEEE 82(12):1915–1929. https://doi.org/10.1109/5.338079
Peter R (1997) The ground-based millimeter-wave ozone spectrometer-gromos. IAP research report, University of Bern, Bern, Switzerland
Rodgers CD (1976) Retrieval of atmospheric temperature and composition from remote measurements of thermal radiation. Rev Geophys 14(4):609–624. https://doi.org/10.1029/RG014i004p00609
Waugh DW (1993) Subtropical stratospheric mixing linked to disturbances in the polar vortices. Nature 365(6446):535–537
Yamashita C, Liu HL, Chu X (2010) Gravity wave variations during the 2009 stratospheric sudden warming as revealed by ECMWF-T799 and observations. Geophys Res Lett. https://doi.org/10.1029/2010GL045437
Zhu X, Holton JR (1986) Photochemical damping of inertio-gravity waves. J Atmos Sci 43:2578–2584. https://doi.org/10.1175/1520-0469
KH performed the retrieval of the GROMOS measurements. LM carried out the data analysis and prepared the manuscript. NK is the principal investigator of the radiometry project. All authors have contributed to the interpretation of the results.
Acknowledgements
This work was supported by the Swiss National Science Foundation under Grant 200020-160048 and MeteoSwiss GAW Project: "Fundamental GAW parameters measured by microwave radiometry".
Institute of Applied Physics, University of Bern, Bern, Switzerland
Lorena Moreira, Klemens Hocke & Niklaus Kämpfer
Oeschger Centre for Climate Change Research, University of Bern, Bern, Switzerland
Klemens Hocke & Niklaus Kämpfer
Lorena Moreira
Klemens Hocke
Niklaus Kämpfer
Correspondence to Lorena Moreira.
Moreira, L., Hocke, K. & Kämpfer, N. Short-term stratospheric ozone fluctuations observed by GROMOS microwave radiometer at Bern. Earth Planets Space 70, 8 (2018). https://doi.org/10.1186/s40623-017-0774-4
Stratospheric ozone
Atmospheric variability | CommonCrawl |
Dynamic properties of independent chromatin domains measured by correlation spectroscopy in living cells
Malte Wachsmuth1,
Tobias A. Knoch2 &
Karsten Rippe3
Genome organization into subchromosomal topologically associating domains (TADs) is linked to cell-type-specific gene expression programs. However, dynamic properties of such domains remain elusive, and it is unclear how domain plasticity modulates genomic accessibility for soluble factors.
Here, we combine and compare a high-resolution topology analysis of interacting chromatin loci with fluorescence correlation spectroscopy measurements of domain dynamics in single living cells. We identify topologically and dynamically independent chromatin domains of ~1 Mb in size that are best described by a loop-cluster polymer model. Hydrodynamic relaxation times and gyration radii of domains are larger for open (161 ± 15 ms, 297 ± 9 nm) than for dense chromatin (88 ± 7 ms, 243 ± 6 nm) and increase globally upon chromatin hyperacetylation or ATP depletion.
Based on the domain structure and dynamics measurements, we propose a loop-cluster model for chromatin domains. It suggests that the regulation of chromatin accessibility for soluble factors displays a significantly stronger dependence on factor concentration than search processes within a static network.
The three-dimensional organization of chromosomes of eukaryotic interphase cells is emerging as an important parameter for the regulation of genomic function [1–4]. Beyond the mere storage of genetic information, the spatial structure fosters its compaction, replication and transcription on all scales ranging from the single base pair (bp) to ~100 Mbp of a whole chromosome. Chromatin interaction maps obtained by the chromatin conformation capture (3C) assay [5, 6] and derived methods like 5C, Hi-C [7] or T2C [8] provide detailed genome-wide information on the three-dimensional organization of the mammalian genome for cell ensembles [9–12] or even single cells [13]. These analyses suggest that the genome is organized into distinct topologically associating domains (TADs) [3, 11, 14]. They partition the genome into repressive and active chromatin regions, also referred to as subchromosomal domains [15, 16], as concluded from a number of microscopy studies on the topology of active gene clusters [17–19] or from the timing differences between early- and late-replicating DNA loci [20]. Notably, the spatial segregation of the genome into chromatin regions with different gene expression status is not simply the result of transcriptional activity. Rather, spatial chromatin organization actively participates in shaping cellular functions [4, 21–24]. Yet, details of the folding of the nucleosome chain into subchromosomal domains or TADs and entire chromosomes remain largely elusive. For the chromatin fiber, a variety of models covering a broad range from unordered and less compact to regular and more compacted states have been suggested [25–27], and likewise, for the higher-order folding of the fiber there is experimental evidence for both more ordered loop- or rosette-like [12, 28–31] and less ordered, e.g., fractal globule-like topologies [10].
Despite the impressive advancements in the field, details on the organization and dynamic properties of chromatin in single living cells are elusive. However, the plasticity of chromatin organization is a central determinant of genome function as it modulates access of factors to the genome and targets them to biologically active subcompartments [32]. In addition to large-scale chromosomal movements [33], local chromatin dynamics are mostly studied by tracking of few genomic loci and chromatin-associated or chromatin-embedded molecules and particles as reviewed previously [34–37]. The resulting translocation data can be quantified as mean-squared displacement (MSD) versus time curves to extract apparent velocities or diffusion coefficients. These studies revealed spatially confined movements of tagged chromatin loci as intuitively evident for a segment of a polymer without center-of-mass translocation [38–40]. However, extending this approach to a systematic analysis of endogenous chromatin loci faces a number of limitations. Imaging-based techniques typically require the labeling of specific genomic regions using repetitive, e.g., lacO operator arrays integrated into the genome at random or defined positions [41]. These arrays are big compared to the dimensions of the structures under investigation and potentially alter their architecture. Furthermore, this approach is limited in its time resolution to the image acquisition time, which is typically in the range of 50 ms or higher. At the molecular level, methods like fluorescence recovery after photobleaching (FRAP), continuous photobleaching (CP) and fluorescence correlation spectroscopy (FCS) provide information on the binding of proteins to chromatin and on their mobility within the chromosomal environment on the microsecond to minute time scale [42, 43]. However, with these methods no information on the dynamics of nucleosome chains and higher-order domains has yet been obtained. While biophysical polymer models have been widely used to quantitatively describe and directly or inversely compare 3D chromatin structure to experimental data as reviewed recently [44, 45], they mostly do not include dynamics. Thus, our current knowledge is lacking both experimental information and theoretical treatment of the conformational dynamics of chromatin in vivo that is important for the understanding of the differential readout of DNA sequence information or interactions between different genomic loci.
In a number of studies, intramolecular dynamics have been investigated by FCS [46, 47]. By uncoupling the center-of-mass diffusion from higher-order relaxation modes via trapping or tracking [48, 49], a series representation of relaxation modes was obtained to describe the internal dynamics of double-stranded DNA in vitro [49–51]. In this manner, the MSD of polymer segments can be described as confined diffusion relative to the center of mass. When taking into account hydrodynamic interactions, molecules like long DNA chains with a sufficiently large ratio of contour to persistence length, i.e., 'soft' polymers, show Zimm relaxation behavior [52].
Here, we combine for the first time the topological interpretation of 3C-derived data from large ensembles of fixed cells with the measurement of mesoscale chromatin dynamics in individual living cells. We confirm the formation of loop clusters in TADs from contact probability maps (5C, T2C) from other studies ([11, 53], NCBI GEO accession GSE35721) pointing to rosettes as a prominent structural feature of such topologically independent domains. By applying FCS, we measured chromatin dynamics extracted from fluorescence intensity fluctuations by exploiting the linker histone variant H1.0 tagged with EGFP (H1-EGFP) as a proxy for chromatin movement. H1 is particularly suited for this purpose since it decorates chromatin globally and reflects its density but binds only transiently [54, 55] such that photobleached molecules are constantly replaced by fluorescent ones. We found distinct chromatin relaxation times, hallmarking the presence of dynamically and topologically independent chromatin units with an average genomic content of ~1 Mb. Treatment of cells with trichostatin A (TSA) and azide-induced ATP depletion resulted in decelerated relaxations, revealing chromatin decondensation and compaction, respectively, hence delivering insight into factors that change chromatin dynamics. Based on the experimental data, an analytical polymer model was developed. It correctly describes both the contact probability maps from 3C-based ensemble analysis and the internal dynamics of chromatin domains observed by FCS. We hypothesize that these domains might be TADs. From the dynamic properties measured, we infer that the different time scales of structural reorganization and particle dynamics provide an additional regulatory layer for targeting soluble nuclear factors to chromatin subcompartments.
A loop-cluster substructure domain model shows good agreement with experimental 5C and T2C data
To gain insight into the topological organization of chromatin, we applied a simple domain and peak detection approach to 5C data of a 4.5-Mb region containing the Xist gene crucial for X inactivation in female mouse embryonic stem cells [11] and T2C data of a 2.2-Mb region of the IGF/H19 locus in human HB2 cells [8]. Figure 1a shows the analysis of the experimental 5C data set for which we confirmed the existence of TAD-like domains such as the highlighted ~1.1-Mb region that emerged as square-shaped regions of increased internal contact probability as expected [7, 11, 14, 56]. A one-dimensional projection over the whole domain region yielded primary peaks corresponding to genomic sites involved in loop formation (Additional file 1: Fig. S1). Orthogonal local projections around each so-determined peak revealed all partner sites with which it interacts to form loops. We obtained 17 primary peaks within this domain (Additional file 1: Fig. S2). Most of them also emerged in the local projections, strongly indicating that this domain consisted to a significant extent of an intricately tied loop cluster such as a rosette. We followed the same procedure for an experimental T2C data set from Knoch et al. [53] (Fig. 1b). Again, we found domains such as the highlighted ~0.95-Mb region and 15 primary peaks within this domain (Additional file 1: Fig. S3), most of which also emerged in the local projections, again indicating a rosette-like loop-cluster organization of the domain.
5C and T2C analysis and polymer modeling. a Genomic contact probability matrix for experimental 5C data [11]. The black square highlights a domain that is studied further. The dashed profile shows how the non-redundant triangular representation was extracted. We could identify loop bases (circles) with higher (black) or smaller (gray) significance. The 1D plot represents the global projection of the highlighted domain. Arrows indicate identified loop bases. The extracted loops allowed us to simulate and visualize an exemplary configuration and to compute \(R_{g}\). b Same as a, but for experimental T2C data [53]. c The different chromatin domain conformations probed in this study to model the FCS data: blob, globule, loop and loop cluster. The radius of gyration \(R_{g}\) (gray circle) of domains depends on physical parameters, solvent conditions and the topology of the underlying chromatin fiber. It determines the characteristic time constants of the internal relaxation kinetics observed in this study. d Same as a, but for a model configuration of the loop-cluster conformation under theta-solvent conditions (see Additional file 1: Supplementary Text). e Same as a, but for a model configuration of the globular conformation
Domain configurations are well described with a quantitative polymer model
While these examples support the notion of loop-induced domain formation, also less ordered crumpled, globular or ordinary domain structures were suggested previously [10, 12, 44]. Accordingly, we derived a quantitative polymer model that describes 4 different domain topologies to comprehensively cover the previously proposed features of chromatin domain organization (Fig. 1c; Additional file 1: Fig. S4): Scaling laws from polymer theory [57] suggest that chromatin adopts the shape of a chain of topologically and dynamically independent domains under the semi-dilute conditions met in mammalian interphase nuclei (see Additional file 1: Supplementary Text for more details). Thus, we first assumed the formation of such blobs, i.e., globular subchains of the full chromosome that are significantly shorter and behave like independent, almost self-penetrating molecules (so-called theta-solvent conditions where repulsive and attractive segment–segment interactions compensate each other), connected with a linker. Second, the formation of space-filling fractal or crumpled globules [10, 44] was evaluated. Third, we assumed the formation of single or rosette-like branched loops [29, 30, 58, 59] under theta-solvent conditions. Fourth, the same topology was used, but under so-called good-solvent conditions where the excluded volume interaction between segments dominates and the structure appears swollen as compared to theta-solvent conditions. The physical contour length L of the chromatin fiber contained in the domain is directly related to DNA content and density, and the persistence length l p is a measure for the fiber flexibility. Together with the number of contained loops f, these parameters determine the radius of gyration R g , which characterizes the volume effectively occupied by the domain—Additional file 1: Eq. S14, S20, S22, S24—according to Eq. 1:
$$R_{g}^{2} = \left\{ \begin{array}{ll} \dfrac{L\,l_{p}}{6}\left(\dfrac{2f-1}{f^{2}}\right) & \text{loop-rosette conformation, theta-solvent conditions,}\\[2ex] \dfrac{L^{6/5}\,l_{p}^{4/5}}{9.59}\left(\dfrac{1.92f-0.92}{f^{11/5}}\right) & \text{loop-rosette conformation, good-solvent conditions,}\\[2ex] \dfrac{L^{2/3}\,l_{p}^{4/3}}{1.76} & \text{globular conformation,}\\[2ex] \dfrac{L\,l_{p}}{3} & \text{blob conformation.} \end{array}\right.$$
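For orientation, Eq. 1 can be evaluated numerically. The sketch below, in which the parameter values in the usage comment are purely illustrative, returns the radius of gyration for each of the four conformations (lengths in nanometres):

```python
import math

def radius_of_gyration(L, lp, f=1, conformation="loop_theta"):
    """Radius of gyration (same unit as L and lp) according to Eq. 1.

    L  : contour length of the chromatin fiber in the domain
    lp : persistence length
    f  : number of loops per domain (loop conformations only)
    """
    if conformation == "loop_theta":        # loop rosette, theta solvent
        rg2 = L * lp / 6.0 * (2.0 * f - 1.0) / f**2
    elif conformation == "loop_good":       # loop rosette, good solvent
        rg2 = L**1.2 * lp**0.8 / 9.59 * (1.92 * f - 0.92) / f**2.2
    elif conformation == "globule":         # crumpled/fractal globule
        rg2 = L**(2.0 / 3.0) * lp**(4.0 / 3.0) / 1.76
    elif conformation == "blob":            # blob / linear chain
        rg2 = L * lp / 3.0
    else:
        raise ValueError("unknown conformation")
    return math.sqrt(rg2)

# e.g. a ~1-Mb domain: 1e6 bp / 191 bp per nucleosome / 4.5 nucleosomes
# per 11 nm gives a contour length of roughly 12,800 nm
# print(radius_of_gyration(12800.0, 100.0, f=9, conformation="loop_theta"))
```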
An estimation of stochastic contact probabilities—Additional file 1: Eq. S25—directly allowed to compute 5C-/T2C-like contact probability maps. Figure 1d, e shows such maps for both the theta-solvent loop-cluster and the globular conformation (Additional file 1: Supplementary Text), i.e., for a 5-Mb stretch comprising 4 rosette-like loop clusters and 4 globular domains, respectively, linked with a relaxed chromatin stretch. Here too, domains emerged as square-shaped regions of increased internal contact probability. The highlighted rosette domain in Fig. 1d was computed assuming 10 loops (three with positional noise). Applying the same analysis as above allowed us to quantitatively retrieve the topological details used for the simulation: Some ties were found in both projection directions, others, especially those with positional noise, less reliably in only one direction. Using the topology retrieved, we performed Monte Carlo (MC) simulations of the domain (with one example visualized, Fig. 1d) to yield its radius of gyration of ~240 nm. The globular domain model yielded a smaller radius of gyration of 210 nm but was incompatible with the experimental data since no peaks were detected (Fig. 1e). To further validate the analysis and simulation pipeline, we used the topology obtained from the experimental 5C and T2C data to re-calculate the experimental contact probability maps, which were in good agreement with the initial ones (Additional file 1: Figs. 5, 6). From MC simulations, we found a radius of gyration of ~240 nm for the domain highlighted in the 5C data set and of ~220 nm in the T2C data set (Fig. 1a, b). In summary, a much better agreement with the experimental data was found for the loop-cluster model than for the globular domain model.
Chromatin fiber dynamics can be evaluated with FCS of transiently bound linker histone
5C and T2C analyses yield structural information from large ensembles of fixed cells. However, the dynamic properties of the observed domains remain elusive. Therefore, we measured chromatin dynamics with FCS using the approach depicted in Fig. 2. The dynamics of linker histone H1-EGFP were determined in the cytoplasm, in less chromatin-dense areas in the nucleus referred to as 'euchromatin' and in denser chromatin regions in the nuclear and in the nucleolar periphery referred to as 'heterochromatin' in the following [60] (Fig. 2a; Additional file 1: Supplementary Text, Fig. S7 for details on classification). In the cytoplasm, we obtained a fast decay with a characteristic diffusion coefficient of D ≈ 20 µm2 s−1 that we assigned to free diffusion of H1.0 (Fig. 2b). Inside the nucleus, the autocorrelation functions (ACFs) decayed bimodally. The first component decayed within 1 ms owing to a freely diffusive fraction. The second, slower decaying contribution was about two orders of magnitude slower, between ~90 and ~160 ms, depending on the previously defined nuclear subcompartment used for the measurement. We assigned these slower decays to chromatin-associated movements (Fig. 2c): Distinct relaxation times of chromatin measured by FCS clearly indicated the existence of topologically and dynamically independent chromatin units of a certain scale. The detailed analysis of H1.0–chromatin interactions with FRAP and FCS experiments as well as FCS measurements of H2A and H2B core histones (see below) further corroborated this. Processes that occur at times above 1 s, like photobleaching or cellular movements, were not detected in FCS due to the short effective measurement time (Additional file 1: Supplementary Text, Fig. S8). Thus, combining FCS measurements with hydrodynamic polymer models should enable us to extract the size of these domains as well as their topologies and physical properties (Fig. 1c).
Observation and interpretation of chromatin dynamics seen with FCS. a MCF7 cell stably expressing H1-EGFP with typical localizations for FCS measurements used throughout this study. b Typical ACFs obtained at the different locations, showing fast decay due to free diffusion in the cytoplasm and slower decay in the nuclear and nucleolar periphery and even slower decay in euchromatin. c Different regimes of the ACFs correspond to different processes: A fast initial decay results from free H1.0 diffusion, followed by a slow decay due to chromatin-associated diffusion or relaxation, whose time constant depends on R g . Slower processes such as photobleaching do not show up
Both transient chromatin-binding modes of H1.0 are slower than fluctuations seen by FCS
To further rule out that the relaxations in FCS were association–dissociation events, we precisely quantified transient chromatin binding of H1.0 labeled with EGFP with fluorescence recovery after photobleaching (FRAP) experiments. We bleached a strip through the cell nucleus (Fig. 3a) in non-, TSA- and azide-treated cells. The mobility of H1-EGFP was analyzed by fitting the bleach profile (Fig. 3b; Additional file 1: Fig. S11) with Additional file 1: Eq. S91 to follow its broadening as given by its width σ. From linear regressions of σ 2 plotted versus time, apparent diffusion coefficients of D app = (10 ± 5)·10−3 µm2 s−1, (12 ± 4)·10−3 µm2 s−1 and (10 ± 3)·10−3 µm2 s−1 were derived (non-, TSA- and azide-treated; Fig. 3c). These values were at least two orders of magnitude smaller than those for free H1-EGFP (D ≈ 20 µm2 s−1) and at least one order of magnitude larger than the apparent diffusion coefficient of chromatin loci obtained by tracking [35]. Thus, the apparent diffusion process represents coupled diffusion and binding as reported previously [61]. Inspecting the integrated fluorescence intensity in the bleached region over time revealed that the expected intensity change calculated for diffusive redistribution using these D app values differed significantly from the experimentally observed behavior (Fig. 3d; Additional file 1: Fig. S12). Therefore, at least two different binding states must be present, with D app comprising the kinetics of the faster one. Accordingly, the intensity change was fitted with the uncoupled diffusion and binding model given in Additional file 1: Eq. S92. It includes fast free diffusion for which recovery is already complete at the first postbleach time point. The second term covers fast binding and diffusion, while slow dissociation was taken into account separately [62, 63]. This yielded free diffusive fractions of 6 ± 3, 11 ± 4 and 18 ± 12 % and slow dissociation rates of (8.8 ± 2.6)·10−3 s−1, (13.7 ± 6.3)·10−3 s−1 and (12.2 ± 2.3)·10−3 s−1 for non-, TSA-, and azide-treated cells, respectively.
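As an aside, the broadening analysis can be illustrated with a minimal sketch that assumes simple one-dimensional Gaussian broadening of the bleach profile, σ²(t) = σ₀² + 2·D_app·t; the study itself uses the profile model of Additional file 1: Eq. S91, so this is only an approximation of that procedure, and the numbers in the usage comment are illustrative.

```python
import numpy as np

def apparent_diffusion_coefficient(times_s, sigma_um):
    """Apparent diffusion coefficient from the broadening of a bleach profile.

    Assumes one-dimensional Gaussian broadening, sigma^2(t) = sigma0^2 + 2*D*t;
    the full profile model used in the study is Additional file 1: Eq. S91.
    """
    slope, _intercept = np.polyfit(np.asarray(times_s),
                                   np.asarray(sigma_um)**2, 1)
    return slope / 2.0   # in um^2/s if sigma is in um and time in s

# e.g. profile widths measured every 5 s after the bleach
# print(apparent_diffusion_coefficient([0, 5, 10, 15], [1.0, 1.05, 1.10, 1.14]))
```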
Photobleaching analysis of H1.0-chromatin binding. a Imaging FRAP experiment of H1-EGFP expressed in an MCF7 cell. Strip B (red) is bleached into the nucleus. The redistribution is followed over time and analyzed in different ways. b Averaging along the direction of the long strip dimension A (blue in a), plotting the profile perpendicularly in direction P and normalizing to the prebleach distribution (Additional file 1: Fig. S11) provided time-dependent profiles. They were fitted with Additional file 1: Eq. S91 to yield the MSD over time. c From a linear fit, apparent diffusion coefficients around 10−3 µm2 s−1 were extracted. d However, the apparent diffusion model, already comprising a fast reaction–diffusion scheme, did not explain exhaustively the intensity time trace obtained by averaging over the bleach region B in a. It required additional fast diffusive, transiently binding and immobilized fractions of the molecules for comprehensive modeling of the recovery data. However, a closed expression for a full reaction–diffusion scheme with two immobilization states cannot be derived. e We used continuous fluorescence photobleaching (CP), for which a closed expression with two bound states existed and which also allowed to address more specifically the localization types used in this study. This yielded a short-lived (residence time ~1 s) and a long-lived (~2 min) type of immobilization, whose fractions and detailed properties depended on localization and treatment of the cells with ATP or azide. f Globally fitting point FRAP experiments featuring bleach times series confirmed the CP and imaging FRAP results. g Resulting model of H1.0 binding: molecules bind to the DNA entry–exit sites of nucleosomes with rate k on. Either they rapidly dissociate again with rate k off,1, or they engage with rate k switch to the longer-lived conformation, from which they dissociate eventually with rate k off,2
As an independent confirmation of the above results and to extract also the faster dissociation rate, we conducted a continuous photobleaching (CP) analysis (Fig. 3e). The much higher spatial resolution of CP allowed to address local differences in H1-EGFP mobility. Fitting CP curves with Additional file 1: Eq. S93 confirmed the existence of two chromatin-binding states. The analysis yielded fast dissociation rates of 1.05 ± 0.13 s−1 in heterochromatin and 0.76 ± 0.21 s−1 in euchromatin of non-treated cells and fractions of 18 ± 2 and 31 ± 9 %, respectively, of the molecules in this association state. Point FRAP (Fig. 3f) confirmed these results by performing series of experiments acquired at single spots in euchromatin with different lengths of the bleach segment [43]. The resulting dissociation rates of (8.2 ± 3.5)·10−3 s−1 and 0.83 ± 0.20 s−1 for the two binding states were in good agreement with the above findings.
Using this and the previously reported presence of two DNA binding domains in H1 [64], we suggest the following model (Fig. 3g): One binding domain of H1.0 interacts with the entry–exit site of DNA at the nucleosome and either dissociates quickly or engages the second domain to form a longer-lived binding state, from which it dissociates again later. Deriving the rate equations for the different binding states allowed us to calculate the remaining parameters in differently treated cells [65] and in euchromatin and heterochromatin (Additional file 1: Eq. S95; Table 1): The residence time of H1.0 in the short-lived binding state was ~1 s, whereas the average residence time on chromatin was ~4 s. Thus, the fluctuations observed with FCS with relaxation times of ~100 ms did not result from association/dissociation events but rather from chromatin dynamics. Despite our purely intensity-based distinction of euchromatin and heterochromatin, we found a higher effective affinity of H1.0 to heterochromatin as expected [66].
Table 1 Properties of histone H1.0 binding to chromatin obtained with FRAP and CP
FCS measurements of core histones H2A and H2B confirm chromatin fluctuations with ~100 ms relaxation times
To confirm that the ~100 ms relaxation times indeed represent chain dynamics and not unbinding events or photophysical effects of the fluorescent protein domains, we repeated the measurements in HeLa cells stably expressing histone H2B–mCherry fusions and transiently expressing H2A–EGFP fusions at a ratio of ~5 % to the corresponding endogenous protein [60]. As expected, both the spatial chromatin distribution and the relaxation times were virtually the same for both histones (Fig. 4a). The measured values for nuclear relaxation times were in excellent agreement with H1.0 measurements, which are elucidated in detail in the following section. Fitting the ACFs with model functions for chromatin relaxation based on the comprehensive set of 4 polymer models (Eq. 3) allowed us to quantify the differences between the intranuclear positions studied: In heterochromatin, we obtained 83±7 and 94±6 ms for H2A–EGFP and H2B–mCherry, respectively, as first-order mode relaxation time under theta-solvent conditions (see next section for details and Table 2 for good-solvent and globular conditions). Corresponding values in euchromatin were approximately twofold slower with 165±11 and 174±10 ms, respectively, in contrast to the expectation that in lower density regions, relaxations would be faster. Importantly, the fluctuations showed a pronounced cross-correlation due to the co-diffusion of H2A and H2B simultaneously integrated into nucleosomes and chromatin. In contrast, there was no cross-correlation in the cytoplasm as expected. These observations corroborate our conclusion that chromatin dynamics are the source of the observed fluctuations. It can be ruled out that they are due to blinking of fluorescent protein domains because this would not result in a cross-correlated signal. Furthermore, the cross-correlation cannot result from spectral cross-talk because this would yield high cross-correlation in the cytoplasm, too.
FCS analysis of chromatin dynamics. a HeLa cell expressing H2A–EGFP (transient) and H2B–mCherry (stable). The correlation plots show H2A–EGFP ACFs (green), H2B–mCherry ACFs (red) and their CCF (black) acquired in the nucleus (euchromatin—3) and in the cytoplasm (4), revealing significant cross-correlation in the nucleus, but not in the cytoplasm. Fitting them with a relaxation model for loop-rosette-structured polymers under theta-solvent conditions yielded a significant difference in relaxation time distribution between hetero-(1/2) and euchromatin (3) both for H2A (ch1) and H2B (ch2). b Untreated MCF7 cell expressing H1-EGFP. At the three positions (nuclear periphery—1, blue; nucleolar periphery—2, purple; euchromatin—3, orange), the corresponding ACFs were acquired. Fitting them like in a (res—residuals) yielded a significant difference in relaxation time distribution between hetero- (1, 2) and euchromatin (3). c Same as b, but cells were treated with TSA, resulting in globally increased relaxation times without significant differences between 1, 2 and 3. d Same as b, but cells were ATP-depleted, resulting in globally increased relaxation times without significant differences between 1, 2 and 3
Table 2 Dynamic and structural parameters of histone-FP-labeled chromatin domains obtained with FCS at different nuclear localizations
Polymer relaxation modes seen by autocorrelation analysis reflect persistence length, mass density and topology of chromatin domains
To decompose the autocorrelation analysis into parameters that describe features of polymer domains, the Rouse–Zimm model was applied for a quantitative characterization of domain dynamics [52]. Independent relaxation modes represent distinct characteristic times \(\tau_{p}\) and amplitudes \(a_{p} = \langle {\mathbf{X}}_{p}^{2} \rangle\) that are observable in the FCS experiments. These parameters depend on topology, solvent conditions, viscosity \(\eta_{s}\), temperature T, Boltzmann constant \(k_{B}\) and radius of gyration \(R_{g}\) (see Additional file 1: Supplementary Text for more details):
$$\begin{array}{ll}
\tau_{1} \approx 6.111\,\dfrac{\eta_{s} R_{g}^{3}}{k_{B} T}, \quad \tau_{p} = \dfrac{\tau_{1}}{p^{3/2}}, \quad a_{p} \approx 0.152\,\dfrac{R_{g}^{2}}{p^{2}}, & \text{loop-rosette conformation, theta-solvent conditions,}\\[2ex]
\tau_{1} \approx 4.114\,\dfrac{\eta_{s} R_{g}^{3}}{k_{B} T}, \quad \tau_{p} = \dfrac{\tau_{1}}{p^{17/20}}, \quad a_{p} \approx 0.172\,\dfrac{R_{g}^{2}}{p^{9/4}}, & \text{loop-rosette conformation, good-solvent conditions,}\\[2ex]
\tau_{1} \approx 7.151\,\dfrac{\eta_{s} R_{g}^{3}}{k_{B} T}, \quad \tau_{p} = \dfrac{\tau_{1}}{p}, \quad a_{p} \approx 0.236\,\dfrac{R_{g}^{2}}{p^{5/3}}, & \text{globular conformation,}\\[2ex]
\tau_{1} \approx 5.849\,\dfrac{\eta_{s} R_{g}^{3}}{k_{B} T}, \quad \tau_{p} = \dfrac{\tau_{1}}{p^{3/2}}, \quad a_{p} \approx 0.152\,\dfrac{R_{g}^{2}}{p^{2}}, & \text{blob/linear conformation; mode number } p = 1,2,3,\ldots
\end{array}$$
These relaxations result in local concentration fluctuations of segments even when the center-of-mass translocation is negligible. An obvious way to study such fluctuations is their evaluation by autocorrelation analysis as conducted for FCS measurements. Relaxation modes are independent of each other and have exponentially decaying position correlation functions [52]. Thus, each mode is represented by a diffusion process in a harmonic potential, which is an Ornstein–Uhlenbeck process, the simplest example of a stationary Markovian process with Gaussian probability distribution at all times [67]. To this theoretical framework, the FCS formalism was applied [68, 69] (Additional file 1: Supplementary Text), yielding the autocorrelation function
$$G\left(\tau\right) \propto a_{p}\left[\left(1 + \frac{1 - \exp\left[-\tau/\tau_{p}\right]}{\upsilon}\right)^{-1}\left(1 + \frac{1 - \exp\left[-\tau/\tau_{p}\right]}{\kappa^{2}\upsilon}\right)^{-1/2} - \left(1 + \frac{1}{\upsilon}\right)^{-1}\left(1 + \frac{1}{\kappa^{2}\upsilon}\right)^{-1/2}\right].$$
Here, \(\upsilon = \tau_{D}/\tau_{p}\) is the ratio of diffusion correlation and relaxation time and \(\kappa = z_{0}/w_{0}\) the structure parameter (Methods). Polymer relaxation was thus modeled by summing Eq. 3 over \(p = 1,2,3, \ldots\). The relaxation time \(\tau_{1}\) from a fit of the model function to experimental data yielded the radii of gyration according to Eq. 2, with the nuclear solvent viscosity determined independently (Additional file 1: Supplementary Text). For known genomic content, a well-defined relationship between chromatin persistence length, mass density and domain topology such as the number of loops in a cluster/rosette can be established. Thus, the formalism links structural domain parameters from 3C-derived methods with dynamic features measured by FCS.
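To illustrate how Eqs. 2 and 3 enter the fitting procedure, the sketch below builds a model ACF for the loop-cluster conformation under theta-solvent conditions by summing the first relaxation modes, and inverts a first-mode relaxation time into a radius of gyration. The effective nuclear viscosity, structure parameter and diffusion correlation time appearing in the usage comments are assumed, illustrative values, not those determined in the study.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def mode_acf(tau, tau_p, upsilon, kappa):
    """Single-mode contribution to G(tau) according to Eq. 3 (unnormalized)."""
    decay = 1.0 - np.exp(-tau / tau_p)
    term1 = (1.0 + decay / upsilon) ** -1 * (1.0 + decay / (kappa**2 * upsilon)) ** -0.5
    term2 = (1.0 + 1.0 / upsilon) ** -1 * (1.0 + 1.0 / (kappa**2 * upsilon)) ** -0.5
    return term1 - term2

def relaxation_acf(tau, tau1, rg, tau_diff, kappa=5.0, n_modes=10):
    """Sum of modes for the loop-cluster/theta-solvent case of Eq. 2:
    tau_p = tau1 / p**1.5 and a_p ~ 0.152 * rg**2 / p**2."""
    g = np.zeros_like(tau, dtype=float)
    for p in range(1, n_modes + 1):
        tau_p = tau1 / p**1.5
        a_p = 0.152 * rg**2 / p**2
        g += a_p * mode_acf(tau, tau_p, tau_diff / tau_p, kappa)
    return g

def rg_from_tau1(tau1, eta_s, T=310.0):
    """Invert tau1 ~ 6.111 * eta_s * Rg^3 / (kB*T) (theta-solvent case)."""
    return (tau1 * KB * T / (6.111 * eta_s)) ** (1.0 / 3.0)

# e.g. tau1 = 0.161 s with an assumed effective nuclear viscosity of 5 mPa s
# print(rg_from_tau1(0.161, eta_s=5e-3))   # -> Rg in metres, a few hundred nm
```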
FCS measurements of chromatin dynamics reveal different states of domain organization in hetero- and euchromatin
Fitting the ACFs with the polymer models (Eq. 1–3) allowed us to quantitatively determine chromatin relaxation times and other polymer parameters at different intranuclear positions and conditions (Fig. 4b; Table 2): In heterochromatin, e.g., at the nuclear or the nucleolar periphery, we obtained 90 ± 6 and 78 ± 6 ms, respectively, as first-order mode relaxation time under theta-solvent conditions. In the rest of the nucleus, in euchromatin, we measured 161 ± 15 ms, i.e., approximately twofold larger values. Independent of the actual topological conformation, this can only be explained by a weaker local confinement of euchromatin due to a lesser degree of domain compaction, because a purely chromatin density-driven relaxation would be faster in euchromatin than in heterochromatin. In other words, comparing the relaxation with the oscillation of a bead on a string, the oscillation time is longer for a weaker string. Thus, the more open and less compact euchromatin can be compared to a weaker, more open string and the more compact heterochromatin to a stronger, more compact one.
After treatment of the cells with TSA, chromatin became hyperacetylated and adopted a decondensed state of the chromatin fiber [70, 71]. This process resulted in a homogeneous nuclear morphology and chromatin density distribution (Fig. 4c). The differences in chromatin relaxation at different nuclear loci vanished. The relaxations slowed down to time constants of 292 ± 34 ms at peripheral and 307 ± 37 ms at central nuclear positions (under theta-solvent conditions; Fig. 4c; see Table 2 for a summary of the different conformations). These values were even higher than those measured for euchromatin of untreated cells and indicated a further reduction in local confinement and an increased genomic content of domains.
The dynamics changed numerically similarly upon ATP depletion after treatment of the cells with azide. Here, however, the chromatin distribution became more aggregated with a less homogeneous morphology (Fig. 4d). The differences in chromatin relaxation vanished and the relaxations slowed down, resulting in time constants of 303 ± 51 ms in peripheral and 278 ± 43 ms in central positions (theta-solvent conditions, Fig. 4d; see Table 2 for a summary of the different conformations). This and the structural differences as seen in the images argue for increased sizes of domains due to agglutination effects. Interestingly, fundamentally different processes—decondensation and aggregation—result in the same effect of effective growth of independent domains. However, in the former case, the domains are distributed more and in the latter case less homogeneously than in untreated cells.
FCS measurements of chromatin dynamics identify 1-Mb-sized dynamic domains
From the observed relaxation times, the radii of gyration of dynamic domains could be extracted according to Eq. 2 for loop-cluster topologies under theta-solvent conditions, for the same under good-solvent conditions, for globular conformations and for blobs. For untreated cells, this resulted for heterochromatin in 240 ± 6 nm and for euchromatin in 297 ± 9 nm (theta-solvent conditions, Fig. 5a; see Table 2 for a summary of the different conformations). Next, from fluorescence images we extracted chromatin densities in euchromatin of 91 ± 1 % and in heterochromatin of 156 ± 5 % of the mean nucleosome concentration of 100–140 µM [60, 72, 73] (Additional file 1: Supplementary Text, Fig. S7). In combination with a nucleosomal repeat length of 191 bp [72, 74], this enabled us to transform the domain volume determined from the radius of gyration into genomic content (Additional file 1: Eq. S10): We obtained 700–1120 and 830–1160 kb for hetero- and euchromatin, respectively, for blobs and loop clusters under theta-solvent conditions, 1230–1830 and 1470–2050 kb for loop clusters under good-solvent conditions, and 710–1050 and 840–1170 kb for globules.
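The conversion from radius of gyration to genomic content can be written out explicitly. In the sketch below the effective domain volume is approximated as a sphere of radius R_g, which is a simplifying assumption (the study uses Additional file 1: Eq. S10 instead); the nucleosomal repeat length of 191 bp is taken from the text, and the values in the usage comment are illustrative.

```python
import math

AVOGADRO = 6.02214076e23

def genomic_content_kb(rg_nm, nucleosome_conc_uM, repeat_bp=191,
                       volume_factor=4.0 * math.pi / 3.0):
    """Genomic content (kb) of a domain of gyration radius rg_nm.

    rg_nm              : radius of gyration in nm
    nucleosome_conc_uM : local nucleosome concentration in micromolar
    repeat_bp          : nucleosomal repeat length in bp
    volume_factor      : relates Rg^3 to the effective domain volume
                         (a sphere of radius Rg is assumed here)
    """
    volume_litres = volume_factor * (rg_nm * 1e-9) ** 3 * 1e3   # m^3 -> L
    nucleosomes = volume_litres * nucleosome_conc_uM * 1e-6 * AVOGADRO
    return nucleosomes * repeat_bp / 1e3

# e.g. euchromatin: Rg ~ 297 nm at ~0.91 x 120 uM mean nucleosome concentration
# print(genomic_content_kb(297.0, 0.91 * 120.0))
```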
Physical properties and dynamics of domain structure. a Radii of gyration for the three different localization classes as extracted from the FCS data in Fig. 4 for untreated cells. The given range covers the results from the blob and the loop-cluster conformation under theta-solvent conditions and from the globular conformation and reveals differences in local domain size between euchromatin and heterochromatin. b MSD plots for typical chromatin segments in hetero- (het) and euchromatin (eu) calculated (straight lines) using the loop-rosette model under theta-solvent conditions and the radii of gyration from a and extracted from typical FCS measurements (symbols), showing confined diffusion on the 100 ms and 100 nm time and length scale. c Same as a for TSA-treated and ATP-depleted cells, respectively, showing that the domain size increased to similar values upon perturbation of chromatin structure. d Chromatin mass density versus the number of loops per domain and the fiber persistence length calculated for the loop-cluster conformation under theta-solvent conditions. Highlighted areas represent the parameter subspace in agreement with previous studies. e Same as d, but for the globular and the blob conformation and thus without dependence on loop number
For the good-solvent loop-cluster topology, the genomic content of domains was significantly larger than the previously observed 500–1000 kb for subchromosomal domains/TADs [11, 14], i.e., the assumption of good-solvent conditions would lead to a pronounced overestimation of domain size. Accordingly, the loop-cluster conformation under theta-solvent conditions was considered for further analysis. For this description, only minor excluded volume effects are present and thus a high structural flexibility on the level of the chain of nucleosomes. The blob and the globular polymer conformation would fit the TAD genome content but not the experimental interaction data from the 5C and T2C analysis as discussed above.
The polymer models predict a confined movement of chromatin segments relative to the center of mass of a domain, which is stationary on the time scale under consideration. Using the relaxation times obtained for the theta-solvent model, we calculated the MSD curves of a genomic site in euchromatin and heterochromatin (Fig. 5b), which clearly showed confinement of translocations and agreed well with experimental ones extracted directly from ACFs of exemplary measurements in euchromatin and heterochromatin according to Additional file 1: Eq. S83. Furthermore, the calculated MSDs corresponded well with previous studies of chromatin translocations [38–40, 72] and thus confirm our approach.
Hyperacetylation and ATP depletion differentially affect chromatin dynamics and alter the radius of gyration of domains
Chromatin hyperacetylation due to TSA treatment of the cells slowed down chromatin relaxation, as apparent from a similarly increased radius of gyration at peripheral (R g = 362 ± 14 nm) and central nuclear positions (R g = 368 ± 15 nm) under theta-solvent conditions (Fig. 5c; Table 2). With a homogeneous nucleosome concentration of 100–140 µM, the genomic size of dynamic domains was 1650–2610 kb (Table 2), i.e., twofold larger than in untreated cells. This corroborates the view that hyperacetylation induces a larger-scale rearrangement of chromatin toward a more uniform conformation [70, 71] and the notion of discriminable compact and passive domains [56] whose differences vanish upon TSA treatment.
For ATP-depleted cells, radii of gyration increased to 367 ± 21 nm at peripheral and 356 ± 18 nm at central nuclear positions (Fig. 5c; Table 2). We obtained 2680–4100 and 1430–2160 kb for peripheral and central positions, respectively, when using the same mean nucleosome concentrations as for untreated cells. This suggests that in contrast to hyperacetylation, ATP depletion affects euchromatin and heterochromatin differentially as reflected by the increased heterogeneity in the images possibly due to agglutination of domains and increased packing density of nucleosomes.
Local compaction of chromatin is determined by its flexibility, mass density and topology
To characterize the organization of the chromatin fiber into domains, a set of structural and physical parameters is required: the persistence length, the mass density and, in the case of looping, the number of loops per domain. We found that only certain combinations of the properties comply with the observed radius of gyration and genomic content. Figure 5d shows the relationship of number of loops per domain, chromatin persistence length and linear mass density computed for hetero- and euchromatin for loop clusters under theta-solvent conditions using Eq. 1 and a nucleosomal repeat length of 191 bp [72, 74]. The encircled area covers the parameter range compatible with previous knowledge [27, 74–77], i.e., a mass density of 0.5–6 nucleosomes/11 nm, a persistence length of 10–200 nm and up to 20 loops. A possible chromatin conformation with 9 loops per domain has a mass density of 4.5 nucleosomes/11 nm and a persistence length of 110 nm for euchromatin and 5.5 nucleosomes/11 nm and 100 nm for heterochromatin in very good agreement with Knoch et al. [53]. For a globular domain structure, the relation of persistence length and linear mass density computed for hetero- and euchromatin is depicted in Fig. 5e. Again, the marked area highlights the accessible part of parameter space and reveals a range of possible combinations, e.g., a mass density of 4.5 nucleosomes/11 nm and a persistence length of 55 nm for euchromatin and 5.5 nucleosomes/11 nm and 45 nm for heterochromatin. For both examples, the heterochromatin fiber would be more compacted but also locally more flexible. In contrast, for a blob-like domain structure, the relation of persistence length and mass density (Fig. 5e) does not overlap with previously obtained values, i.e., a purely generically formed chain-of-blob topology does not provide enough topological compaction. Thus, only the globule and the loop-cluster model agree with our observations for domain size and genomic content and only the latter with the 5C and T2C data.
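The parameter scan underlying Fig. 5d can be reproduced by inverting Eq. 1 for the theta-solvent loop-cluster case: for a chosen loop number and linear mass density, the contour length follows from the genomic content, and the persistence length is then fixed by the measured radius of gyration. This is a minimal sketch with illustrative inputs in the usage comment, not the exact calculation behind the figure.

```python
def persistence_length_nm(rg_nm, genomic_bp, loops, nucl_per_11nm,
                          repeat_bp=191):
    """Persistence length implied by Eq. 1 (loop cluster, theta solvent).

    rg_nm         : measured radius of gyration in nm
    genomic_bp    : genomic content of the domain in bp
    loops         : number of loops f per domain
    nucl_per_11nm : linear mass density in nucleosomes per 11 nm of fiber
    """
    nucleosomes = genomic_bp / repeat_bp
    contour_nm = nucleosomes / nucl_per_11nm * 11.0        # fiber contour length L
    # Eq. 1: Rg^2 = L*lp/6 * (2f-1)/f^2  ->  lp = 6*Rg^2*f^2 / (L*(2f-1))
    return 6.0 * rg_nm**2 * loops**2 / (contour_nm * (2.0 * loops - 1.0))

# e.g. euchromatin-like inputs: Rg ~ 297 nm, ~1 Mb, 9 loops, 4.5 nucleosomes/11 nm
# print(persistence_length_nm(297.0, 1.0e6, 9, 4.5))
```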
Comparison of Fig. 5d with Fig. 1a, b showed that the large number of loops found for the ~1-Mb domains matched well with a persistence length of ~100 nm when assuming a mass density of ~4 nucleosomes/11 nm. Thus, FCS dynamics measurements allowed us to detect dynamically independent subchromosomal domains, whereas 5C and T2C data allowed us to detect topologically independent domains; identifying the two with each other enabled us to extract their size, genomic content, topology and the average physical properties of the underlying chromatin fiber.
Local chromatin dynamics determine genome accessibility
From the initial linear increase in the MSD (Fig. 5b), an apparent diffusion coefficient of ~0.1 µm² s⁻¹ of chromatin segments could be extracted, together with a segment concentration of 10⁴–10⁵ µm⁻³ (Fig. 5d, e). From these parameters, the frequency of collisions of a given genomic site with other sites inside a topological domain could be estimated [78]: Intradomain collisions occur at a rate of ~100 collisions/s, whereas interdomain collisions are at least 100-fold less frequent. Therefore, contacts between genomic sites showing up in 3C-derived methods must be physically stable and long-lived enough not to be disrupted by the rapid local movements of the chromatin fiber, rendering stable looping a highly probable mechanism of domain formation.
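As a rough consistency check of this collision-rate estimate, a Smoluchowski-type diffusion-limited encounter rate k = 4πDRc can be evaluated with the diffusion coefficient and segment concentrations quoted above. The capture radius R below is an assumed placeholder of the order of the fiber radius, and this toy calculation is not the full treatment of [78].

```python
import math

# Smoluchowski diffusion-limited encounter rate k = 4*pi*D*R*c (toy estimate).
D = 0.1    # apparent diffusion coefficient of chromatin segments, µm^2/s (from the MSD)
R = 0.01   # assumed capture radius, µm (~10 nm) -- placeholder, not from the paper
for c in (1e4, 1e5):                 # segment concentrations, µm^-3
    k = 4.0 * math.pi * D * R * c    # encounters per second
    print(f"c = {c:.0e} µm^-3  ->  ~{k:.0f} collisions/s")
```

With the lower concentration this gives on the order of 10² collisions/s, consistent with the intradomain estimate quoted above.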
The confined diffusion of chromatin segments (Fig. 5b) translates into pronounced volume fluctuations of the domains on the time scale of the observed relaxation times. These volume fluctuations are of the same order of magnitude as the volume itself, i.e., on the order of 0.1 µm³ (Additional file 1: Eq. S9). The time it takes soluble factors to cross a volume the size of a domain by diffusion is a few milliseconds and thus much shorter than the relaxation time on the 100 ms time scale. Thus, the short-term accessibility of the domains for a single molecule is given by the statically occupied volume (Fig. 6). Many lacunae and corrals in the chromatin environment [42, 79] are devoid of scarce factors, so that locally their effective concentration can be significantly smaller than the mean. For abundant molecules or complexes, however, accessibility is defined by the fluctuation-induced maximum accessible volume. Thus, domains are adiabatically replenished with molecules or complexes to the mean concentration everywhere except for the net chromatin volume. Therefore, diffusion-limited reactions such as transcription factor binding to DNA are expected to display a more-than-linear dependence on factor concentration, in contrast to the case of soluble binding partners [78].
Static and dynamic chromatin domain accessibility. Model for domain accessibility of heterochromatin, euchromatin, TSA-treated and ATP-depleted chromatin. A certain volume fraction of the domain (blue) is not accessible (checkered) for soluble factors (red) due to steric hindrance. The domain is virtually static on time scales long enough for a single molecule to roam the domain volume by diffusion (~1 ms; light red), resulting in an inaccessible volume significantly larger than the net fiber volume. The single molecule is highly unlikely to return to a previously visited domain, so it only senses the static, snapshot-like inaccessible volume. Thus, accessibility for low-abundance molecules is defined by the apparently static conformation during the ms passage time. This effect is more pronounced for compact heterochromatin than for open euchromatin. On the time scale of domain reorganization (~100–200 ms), molecules can search different domain areas such that the effectively inaccessible volume decreases toward the net fiber volume (including 'classical' excluded volume effects). Accordingly, high-abundance molecules effectively sense a significantly higher accessible volume, i.e., accessibility depends on molecular concentration in addition to the binding reaction itself. Moreover, it is determined by the size of the molecule or complex (arrows in 1D plots), confirming previous findings on static chromatin accessibility. Thus, formation of domains consisting of dynamic loops provides an additional degree of freedom to differentially regulate chromatin accessibility. Chemical modifications and chromatin remodeling processes take place on significantly longer time scales, so that access for the required molecules can be regulated by domain and loop dynamics
We calculated the accessible volume fraction according to Additional file 1: Eq. S74, S75 for euchromatin and heterochromatin as well as for TSA-treated cells, assuming both static and fluctuating domain sizes (Fig. 6). The accessibility limit, i.e., the molecular radius at which accessibility was reduced to 50 %, was approximately twofold larger for dynamic domains than for static ones. Assuming an effective chromatin fiber diameter of 14 nm and a mass density of 1.6 nucleosomes/11 nm, the limit was 5 and 10 nm for heterochromatin, 10 and 20 nm for euchromatin, and 15 and 30 nm for TSA-treated cells for low- and high-abundance particles, respectively. This agreed well with previous results on chromatin accessibility [42, 71, 80] and showed that the fluctuations of the domains provide differential genome access with a nonlinear dependence on particle size and concentration.
The results presented here provide a missing link between chromatin organization maps, which reveal the subchromosomal domain structure at steady state from 3C-type analyses, and the dynamic properties of these compartments measured here by FCS. 3C-derived methods such as 5C, Hi-C or T2C, as well as light microscopy measurements by fluorescence in situ hybridization (FISH) [1, 7, 11, 14, 29, 81], yield more or less direct information about the relation between genomic and spatial distance at steady state. These have been used to evaluate physical models of three-dimensional chromatin organization [5, 29, 75–77, 82–85]. Applying a simple peak detection algorithm to exemplary experimental 5C and T2C data makes the presence of loops and loop clusters apparent, corroborating previous models and findings. From our analysis, we conclude that the highly dynamic nature of the domains observed in our study provides an additional constraint on three-dimensional modeling of chromatin structure from 3C-type data: A high contact probability can only result from sufficiently stable physical contact between two loci, because otherwise the pronounced fluctuations would effectively segregate them. We estimate that the lifetime of chromatin interactions must exceed a few seconds, i.e., be significantly longer than the observed relaxation time, to be detected by chromosome conformation capture techniques. Moreover, the frequently occurring intradomain collisions of genomic sites are not rate limiting for contact formation between them. So far, one could only conclude that the interactions persisted for a significant fraction of the cross-linking incubation time of a few minutes [86, 87]. To our knowledge, this aspect has not been considered previously for the interpretation of 3C-like data.
Chromatin dynamics have mostly been studied by time-lapse microscopy and tracking or bleaching of spatially defined loci [35, 36, 40, 41]. While the time dependence of the MSD derived in these experiments provides evidence for the existence of distinct topological domains, it is difficult to draw quantitative conclusions about the underlying chromatin structure, especially on time scales below one second. With our FCS-based methods, on the other hand, we detected characteristic chromatin domain relaxation times on the order of 100 ms from measurements of the nuclear H1-EGFP signal (as well as of the chromatin-incorporated core histones H2A and H2B). Furthermore, we developed an analytical Rouse–Zimm-based model that allows polymer features to be derived from these data. Different conformations with topologies ranging from generically formed blobs via crumpled or fractal globules to loop-cluster/rosette formations can be represented to derive corresponding physical properties such as persistence length and fiber density. In conjunction with the 5C/T2C analyses, we conclude that the dynamics of topological domains are best described by a clustered loop model in a theta solvent, with radii of gyration of the domains of ~300 nm in euchromatin and ~240 nm in heterochromatin and a genomic content of ~0.8–1.2 Mb in the unperturbed state. We suggest assigning these domains to previously reported subchromosomal domains [15, 16] or TADs [11, 14], which have emerged as a general pattern for chromatin organization in vertebrates [1, 3] and have been further confirmed by recent low-noise high-resolution T2C data [53]. They feature a typical size of ~1 Mb. Our data are in excellent agreement with previous studies that tracked chromatin foci [38–40, 72] and with persistence lengths and mass densities inferred from other studies [5, 27, 74–77]. We conclude that our observations are independent and methodologically complementary quantitative evidence for dynamically and topologically independent domains that define both the structural and the dynamic properties of chromatin on the 1 Mb scale. In TSA-treated cells, euchromatin and heterochromatin become indistinguishable and both domain volume and genomic content increase, indicating a significant rearrangement of domains, possibly owing to alternative remodeling following transcription and replication. In ATP-depleted cells, however, chromatin becomes more aggregated and both domain volume and genomic content increase, here possibly due to arrested transcription and chromatin remodeling.
Physical interactions between genomic loci via chromatin loops are important for the repression and activation of genes in the three-dimensional nuclear environment [4, 21, 23]. While the stability of loops is crucial for the robustness of gene expression patterns, the plasticity of domains and their potential for reorganization are key for gene up- or down-regulation in response to cellular stimuli [24, 35]. The highly dynamic nature of chromatin on size scales of up to 1 Mb observed here, with a typical locus spatially fluctuating by ~100 nm within ~100 ms, facilitates fast rearrangement of three-dimensional topologies. In addition, as depicted in Fig. 6, it increases the effective chromatin accessibility, in good quantitative agreement with previous results: More compact heterochromatic domains have a larger inaccessible volume fraction than more open euchromatic ones. This effect additionally depends on the size of the molecules or complexes trying to access the genome [42, 71, 88]. Molecular diffusion is fast enough to roam a complete domain within a few milliseconds, during which the domain itself appears static. Relaxation of domains in the 100 ms range affects genome access in a nonlinear, protein concentration-dependent manner: Highly abundant molecules at several 100 nM concentrations 'fill' the fluctuating domain so that a larger volume fraction than for a static TAD becomes adiabatically accessible. In contrast, for low-abundance molecules, encounters with specific loci within a domain are not only diffusion-limited but also further impeded by transient occlusion of binding sites; such molecules sense a higher inaccessible volume fraction. As a result, domain dynamics introduce an additional factor for nuclear target search. The concentration-dependent differential accessibility of this process leads to largely different search times compared to a static chromatin network. Furthermore, it allows for locus-specific variations, as relaxation times differ between heterochromatin and euchromatin and additionally depend on reversible chromatin modifications such as TSA-induced hyperacetylation. Thus, by integrating the structural features of chromatin domains with their dynamic properties, we reveal an additional regulatory layer for target search processes in the nucleus that may contribute to establishing cell-type-specific gene expression programs.
In this study, we present a missing link between chromatin organization maps, which reveal the subchromosomal domain structure at steady state from 3C-type analyses, and the dynamic properties of these compartments measured here by FCS. Both the 5C/T2C and the FCS results suggest that chromatin is organized into topologically and dynamically independent domains of ~300 nm radius in euchromatin and ~240 nm in heterochromatin with a genomic content of ~0.8–1.2 Mb, confirming numerous previous results. Loops/loop clusters as domain-forming features are required to match the measured level of compaction and the observed features of the 5C/T2C data. In addition to these structural aspects, the dynamics of domains in different epigenetic states suggest that the regulation of chromatin accessibility for soluble factors displays a significantly stronger dependence on factor concentration than search processes within a static network.
The plasmid vector with the autofluorescent histone H1.0-GFP was constructed as described [89]. The human histone gene for H1.0 (GenBank M87841) was amplified by PCR and inserted into the SalI–BamHI site of the promoterless plasmid pECFP-1 (Clontech, Mountain View, CA, USA). The HindIII fragment of simian virus 40 (SV40) was inserted in reverse direction into the HindIII site of the multiple cloning site of pECFP-1, and the ECFP sequence was replaced with EGFP. The resulting construct pSV-HIII-H1.0-EGFP expresses a 440-amino-acid fusion protein from the early SV40 promoter and consists of the human H1.0 gene, a 7-amino-acid linker and the C-terminal EGFP domain. This plasmid was introduced into MCF7 cells with Lipofectamine (Life Technologies, Carlsbad, CA, USA), and a stable monoclonal cell line was selected with 500 µg/ml G418 (Life Technologies). H1.0-expressing cells as well as non-transfected MCF7 cells were grown in RPMI 1640 (Life Technologies) supplemented with 10 % FCS in a humidified atmosphere under 5 % CO2 at 37 °C. HeLa cells stably expressing H2B–mCherry and transiently expressing H2A–EGFP were generated as described elsewhere [90].
For microscopy, cells were allowed to attach for at least 24 h in Nunc LabTek chambered coverglasses (Nalge Nunc, Rochester, NY, USA) or in MatTek glass-bottom dishes (MatTek, Ashland, MA, USA) before the experiments. For TSA treatment, cells were allowed to attach for at least 24 h in chambered coverglasses and then incubated with 100 ng/ml TSA (Sigma-Aldrich, St. Louis, MO, USA) for 15–20 h before the experiments. For Na-azide treatment, cells were allowed to attach for at least 24 h in chambered coverglasses and then incubated with 10 mM Na-azide for 20 min. Experiments were then performed within 40 min.
Confocal fluorescence microscopy images, FRAP image series, CP data, point FRAP data and FCS data were acquired with a Leica TCS SP2 AOBS FCS and with a Leica TCS SP5 AOBS FCS (Leica Microsystems, Mannheim, Germany) equipped with a 63×/1.2NA water immersion lens or with a Zeiss LSM 510 ConfoCor2 system (Carl Zeiss AIM, Jena, Germany) equipped with a 40×/1.2NA water immersion lens. For H1-EGFP, we used the 488 nm line of an Argon laser for excitation and a detection band-pass window of 500–550 nm. For imaging, photomultiplier tubes were used. For CP, point FRAP and FCS, avalanche photodiode single-photon counting detectors were used. Live cells were maintained at 37 °C on the microscopes using either a PeCon stage heating system (PeCon, Erbach, Germany), a Life Cell Imaging stage heating system (LCI, Seoul, South Korea) or an EMBL incubation box (EMBL-EM, Heidelberg, Germany).
Imaging FRAP, point FRAP, CP
For imaging FRAP, a rectangular strip bleach region was defined. Acquisition of 10 prebleach images (time resolution 0.6 s) was followed by two bleach frames, 10 postbleach images (time resolution 0.6 s) and an additional 40 postbleach images (time resolution 6 s). The data were then processed as described elsewhere [91, 92] to yield the mean intensity recovery curve integrated over the bleach region. This was then fitted with Additional file 1: Eq. S92, resulting in three different fractions, a diffusion coefficient and a dissociation rate. Alternatively, an average projection along the direction of the longer dimension of the bleach strip was plotted as a profile along the other direction for all time points studied. Appropriate normalization steps [64, 92] (Additional file 1: Fig. S11) yielded profile plots that were then fitted with Additional file 1: Eq. S91 to yield an apparent diffusion coefficient.
Point FRAP and CP data were acquired as described elsewhere [43, 93, 94]. CP data were fitted with Additional file 1: Eq. S93 to yield two independent dissociation rates and corresponding fractions. Point FRAP data were fitted as described in Im et al. [43], but with two binding states.
Fluorescence correlation spectroscopy
FCS data were acquired for 30–60 s at cellular positions selected in confocal images. A frequently encountered problem of FCS, especially in living samples, is slow but pronounced signal fluctuations, e.g., due to bulk photobleaching [43, 93–95] (Additional file 1: Fig. S8). Fluctuations contribute to the resulting correlation function (CF) weighted with the square of their brightness, so that slow fluctuations often completely obscured the contributions from single diffusing molecules and rendered further evaluation impossible. To overcome this obstacle, raw fluorescence intensity traces were saved to disk and then processed using the FluctuationAnalyzer software [90], written in our laboratory in C++ and LabVIEW (National Instruments, Austin, TX, USA), which uses a local average approach in which the CF is calculated over a small time window Θ and subsequently averaged over the complete trace length T according to
$$\begin{aligned} G_{kl}\left(t^{\prime},\tau\right) &= \frac{\left\langle \delta F_{k}(t)\,\delta F_{l}(t+\tau)\right\rangle_{t^{\prime},\Theta}}{\left\langle F_{k}(t)\right\rangle_{t^{\prime},\Theta}\left\langle F_{l}(t)\right\rangle_{t^{\prime},\Theta}} \quad\text{with}\quad \left\langle\ldots\right\rangle_{t^{\prime},\Theta} = \frac{1}{\Theta}\int_{t^{\prime}}^{t^{\prime}+\Theta} \mathrm{d}t\,\ldots\;, \\ G_{kl}(\tau) &= \left\langle G(t,\tau)\right\rangle \quad\text{with}\quad \left\langle\ldots\right\rangle = \left\langle\ldots\right\rangle_{0,T}\;. \end{aligned}$$
Here, k, l = 1, 2 represent the two available detection channels. For k = l = 1, 2, the autocorrelation function (ACF) of channel 1 or 2 is obtained, whereas k = 1, l = 2 yields the cross-correlation function (CCF). A good yet subjective criterion for a proper choice of the window size is a smooth transition of the CF to zero. More systematically, we fitted the data with the appropriate model functions, Eqs. 3 and 5. When we found a range of window sizes over which, e.g., the relaxation time obtained from the fit was independent of the window size, we selected a window size within that range. Otherwise, the data were not taken into consideration.
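A stripped-down Python sketch of this local-average approach is given below: the correlation function is computed within consecutive windows of length Θ and the window results are averaged, which suppresses slow drifts such as bulk photobleaching. It is a simplified illustration rather than the FluctuationAnalyzer implementation, and the window size, bin width and synthetic trace are arbitrary placeholders.

```python
import numpy as np

def windowed_acf(F, window, max_lag):
    """Average of correlation functions computed over consecutive windows of length Theta.

    F       : 1D array of intensity values in equidistant time bins
    window  : window length Theta (in bins)
    max_lag : maximum lag tau (in bins), must be smaller than window
    Returns G(tau) for tau = 1 .. max_lag, normalized by the window mean squared.
    """
    acfs = []
    for start in range(0, len(F) - window + 1, window):
        seg = F[start:start + window]
        mean = seg.mean()
        if mean == 0:
            continue
        dF = seg - mean
        G = np.array([np.mean(dF[:window - lag] * dF[lag:])
                      for lag in range(1, max_lag + 1)]) / mean**2
        acfs.append(G)
    return np.mean(acfs, axis=0)

# Synthetic example: slow bleaching trend plus fast fluctuations (placeholder data)
rng = np.random.default_rng(0)
t = np.arange(200_000)
trace = 100.0 * np.exp(-t / 150_000) + rng.normal(0.0, 5.0, t.size)
G = windowed_acf(trace, window=5_000, max_lag=500)
```

Because each window is normalized by its own mean, the slow exponential trend contributes little to the averaged correlation function, which is the intended effect of the local-average scheme.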
To fit FCS data of the diffusive fraction of histone molecules and of free EGFP, we used the standard fit function modeling free anomalous diffusion and fluorescent protein-like blinking [96]
$$G_{kl}(\tau) = \frac{1}{N}\left[1 - \Theta_{T} + \Theta_{T}\exp\left(-\frac{\tau}{\tau_{T}}\right)\right]\cdot\left[1 + \left(\frac{\tau}{\tau_{D}}\right)^{\alpha}\right]^{-1}\left[1 + \frac{1}{\kappa^{2}}\left(\frac{\tau}{\tau_{D}}\right)^{\alpha}\right]^{-1/2}$$
where N is the number of molecules in the focal volume, \(\Theta_{T}\) the fraction of molecules in a non-fluorescent state with lifetime \(\tau_{T}\), \(\tau_{D} = w_{0}^{2}/(4D)\) the diffusion correlation time, α the anomaly parameter and \(\kappa = z_{0}/w_{0}\) the ratio of the axial to the lateral focal radius. Fitting FCS data with a chromatin relaxation model is described above.
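A minimal sketch of fitting this model to a measured ACF with SciPy might look as follows; the fixed structure parameter κ, the starting values and the parameter bounds are illustrative assumptions, not the settings used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

KAPPA = 5.0  # assumed fixed axial-to-lateral focus ratio z0/w0 (placeholder)

def fcs_anomalous(tau, N, theta_T, tau_T, tau_D, alpha):
    """Anomalous-diffusion plus blinking FCS model (equation above) with fixed kappa."""
    blink = 1.0 - theta_T + theta_T * np.exp(-tau / tau_T)
    diff = (1.0 + (tau / tau_D) ** alpha) ** -1 \
         * (1.0 + (tau / tau_D) ** alpha / KAPPA ** 2) ** -0.5
    return blink * diff / N

def fit_acf(tau_s, G_exp):
    """Fit a measured ACF; tau_s are lag times in seconds, G_exp the correlation amplitudes."""
    p0 = [5.0, 0.2, 1e-4, 1e-2, 0.8]                    # [N, theta_T, tau_T, tau_D, alpha]
    bounds = ([0.1, 0.0, 1e-6, 1e-5, 0.1],
              [1e3, 1.0, 1e-2, 1.0, 1.2])
    popt, pcov = curve_fit(fcs_anomalous, tau_s, G_exp, p0=p0, bounds=bounds)
    return popt, np.sqrt(np.diag(pcov))
```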
Numerical modeling of chromatin conformations
For the visualization and the analysis of static physical properties of chromatin, we simulated chains as beads occupying sites on a three-dimensional cubic lattice with a grid constant of a = 30 nm. Neighboring beads along the chain were connected by segments and could occupy any of the 26 surrounding lattice sites, resulting in a mean distance or bond length of \(b = \sqrt{2}\,a = 42\text{ nm}\), corresponding to 2500 bp when assuming 60 bp/nm, or 3.5 nucleosomes/11 nm and a 195 bp nucleosomal repeat length. The grid constant was set to an assumed fiber diameter of 30 nm. Double occupancy of sites was suppressed to ensure self-avoidance of the chain. In general, chains were modeled as a sequence of loops and linear stretches. Properties such as radii of gyration were calculated according to their respective definitions. Calculations were implemented in Python 3.3, and renderings were generated using the VPython module.
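A stripped-down version of such a lattice-chain construction and of the radius-of-gyration calculation is sketched below in Python. It implements only a self-avoiding random walk with 26-neighbor steps on the cubic lattice and the standard R_g definition, not the loop/linear-stretch compositions or the VPython rendering described above.

```python
import numpy as np

A = 30.0  # lattice constant in nm (assumed fiber diameter)
# The 26 possible steps to neighboring lattice sites
MOVES = np.array([(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1)
                  for k in (-1, 0, 1) if (i, j, k) != (0, 0, 0)])

def self_avoiding_chain(n_beads, rng=np.random.default_rng(1)):
    """Grow a self-avoiding chain on the cubic lattice (simple restart-on-dead-end strategy)."""
    chain = [(0, 0, 0)]
    occupied = {chain[0]}
    while len(chain) < n_beads:
        candidates = [tuple(np.add(chain[-1], m)) for m in MOVES]
        free = [c for c in candidates if c not in occupied]
        if not free:                       # dead end: restart the whole walk
            chain, occupied = [(0, 0, 0)], {(0, 0, 0)}
            continue
        nxt = free[rng.integers(len(free))]
        chain.append(nxt)
        occupied.add(nxt)
    return np.array(chain, dtype=float) * A   # bead positions in nm

def radius_of_gyration(coords):
    center = coords.mean(axis=0)
    return np.sqrt(np.mean(np.sum((coords - center) ** 2, axis=1)))

chain = self_avoiding_chain(400)   # ~400 beads of 2.5 kb each, i.e. ~1 Mb of chromatin
print(f"R_g = {radius_of_gyration(chain):.0f} nm")
```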
Calculation of genomic contact probability maps
We calculated genomic contact probability maps for simulated chromatin conformation using Additional file 1: Eq. S25 and the algorithm described in the Additional file 1: Supplementary Text. Data were saved as matrices with a resolution of 2.5 kb. For the configurations used in Fig. 1d, e we used the following parameters:
Figure 1d: theta-solvent loop-rosette conformation; lin(x)—linear stretch of x kb; dom(y)—domain of y kb consisting of a set of loops; loop(z)—looped stretch of z kb; loops with multiple numbers were varied synchronously in length and then averaged to generate variation in loop length. lin(100) – dom (1000) [loop(166) – loop(167) – loop(166) – loop(167) – loop(166)] – lin(150) – dom (1300) [loop(100/125/150/175/200) – loop(95) – loop(90) – loop(85) – loop(120/145/170/195/220) – loop(150) – loop(125) – loop(115) – loop(160) – loop(150)] – lin(150) – dom(1000) [loop(185) – loop(120) – loop(95) – loop(120) – loop(235) – loop(245)] – lin(50) – dom(1100) [loop(138) – loop(160) – loop(95) – loop(170) – loop(160) – loop(183) – loop(128) – loop(68)] – lin(150)
Figure 1e: globular conformation; lin(x)—linear stretch of x kb; dom(y)—domain of y kb consisting of a globular stretch; glob(z)—globular stretch of z kb. lin(100) – dom(1000) [glob(1000)] – lin(150) – dom(1300) [glob(1300)] – lin(150) – dom(1000) [glob(1000)] – lin(50) – dom(1100) [glob(1100)] – lin(150)
Analysis of genomic contact probability maps
To detect peaks in the two-dimensional contact probability maps, both experimental and simulated data were imported into a software module written in LabVIEW. It allowed the data to be interpolated to a resolution of 2.5 kb and symmetrized. After manually selecting a domain region, easily recognizable as a square area of increased contact probabilities (Fig. 1a, b, d, e), the diagonal and its vicinity of ±30–75 kb (±12–30 data points of 2.5 kb) were removed. A one-dimensional average of a maximum and a mean projection (Additional file 1: Fig. S1) yielded a one-dimensional profile, to which a peak detection algorithm based on parabolic fitting to continuous stretches of 30 kb (12 data points) was applied. Maxima above 80 % of the profile average were accepted as peak locations.
Then, local average projections in a 25- to 30-kb vicinity of each peak were calculated (Additional file 1: Fig. S1), to which the same peak detection algorithm was applied. Thus, for each peak detected in this way, a pair of genomic sites with high interaction probability could be obtained, corresponding to a loop base. Pairs detected in both directions featured a higher recognition probability and were marked with black circles (Fig. 1a, b, d, e); those detected with lower probability, i.e., only in one direction, were marked with gray circles. This approach corresponds to an effective thresholding of distances instead of using their values [97], which is justified by the dynamic nature of the domains, and it is applied to non-corrected, smoothed data similar to Giorgetti et al. [28]. The binarization is especially robust against bias effects, which are not completely known even though corrections can be applied [98, 99].
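The profile-based peak detection can be summarized by the short Python sketch below: symmetrize the selected region, mask the diagonal vicinity, combine maximum and mean projections into a one-dimensional profile, fit parabolas over ~30 kb windows and keep maxima above 80 % of the profile average. It is a simplified re-implementation (in Python rather than LabVIEW) that only uses the nominal bin width and window sizes quoted above.

```python
import numpy as np

BIN_KB = 2.5  # map resolution after interpolation

def detect_profile_peaks(contact_map, diag_exclude_kb=50, fit_window_kb=30):
    """1D peak detection on a symmetrized contact-probability sub-matrix.

    contact_map : square numpy array for one manually selected domain region
    Returns indices (bins) of candidate loop-base positions along the domain.
    """
    m = 0.5 * (contact_map + contact_map.T)            # symmetrize
    n = m.shape[0]
    excl = int(diag_exclude_kb / BIN_KB)
    i, j = np.indices(m.shape)
    m = np.where(np.abs(i - j) <= excl, np.nan, m)     # remove diagonal and its vicinity

    # average of maximum and mean projection -> one-dimensional profile
    profile = 0.5 * (np.nanmax(m, axis=0) + np.nanmean(m, axis=0))

    half = int(fit_window_kb / BIN_KB) // 2
    threshold = 0.8 * np.nanmean(profile)
    peaks = []
    for k in range(half, n - half):
        window = profile[k - half:k + half + 1]
        if np.any(np.isnan(window)):
            continue
        x = np.arange(window.size)
        a, b, c = np.polyfit(x, window, 2)             # parabolic fit over ~30 kb
        vertex = -b / (2 * a) if a < 0 else None       # maximum only for concave fits
        if vertex is not None and abs(vertex - half) < 1 and profile[k] > threshold:
            peaks.append(k)
    return np.array(peaks)
```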
Belmont AS. Large-scale chromatin organization: the good, the surprising, and the still perplexing. Curr Opin Cell Biol. 2014;26:69–78.
Bickmore WA. The spatial organization of the human genome. Annu Rev Genom Hum Genet. 2013;14:67–84.
Gibcus JH, Dekker J. The hierarchy of the 3D genome. Mol Cell. 2013;49:773–82.
Rouquette J, Cremer C, Cremer T, Fakan S. Functional nuclear architecture studied by microscopy: present and future. Int Rev Cell Mol Biol. 2010;282:1–91.
Dekker J, Rippe K, Dekker M, Kleckner N. Capturing chromosome conformation. Science. 2002;295:1306–11.
Tolhuis B, Palstra RJ, Splinter E, Grosveld F, de Laat W. Looping and interaction between hypersensitive sites in the active beta-globin locus. Mol Cell. 2002;10:1453–65.
Dekker J, Marti-Renom MA, Mirny LA. Exploring the three-dimensional organization of genomes: interpreting chromatin interaction data. Nat Rev Genet. 2013;14:390–403.
Kolovos P, van de Werken HJ, Kepper N, Zuin J, Brouwer RW, Kockx CE, Wendt KS, van Ijcken WF, Grosveld F, Knoch TA. Targeted chromatin capture (T2C): a novel high resolution high throughput method to detect genomic interactions and regulatory elements. Epigenet Chromatin. 2014;7:10.
Jin F, Li Y, Dixon JR, Selvaraj S, Ye Z, Lee AY, Yen C-A, Schmitt AD, Espinoza CA, Ren B. A high-resolution map of the three-dimensional chromatin interactome in human cells. Nature. 2014;503:290–4.
Lieberman-Aiden E, van Berkum NL, Williams L, Imakaev M, Ragoczy T, Telling A, Amit I, Lajoie BR, Sabo PJ, Dorschner MO, et al. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science. 2009;326:289–93.
Nora EP, Lajoie BR, Schulz EG, Giorgetti L, Okamoto I, Servant N, Piolot T, van Berkum NL, Meisig J, Sedat J, et al. Spatial partitioning of the regulatory landscape of the X-inactivation centre. Nature. 2012;485:381–5.
Rao SS, Huntley MH, Durand NC, Stamenova EK, Bochkov ID, Robinson JT, Sanborn AL, Machol I, Omer AD, Lander ES, Aiden EL. A 3D map of the human genome at kilobase resolution reveals principles of chromatin looping. Cell. 2014;159:1665–80.
Nagano T, Lubling Y, Stevens TJ, Schoenfelder S, Yaffe E, Dean W, Laue ED, Tanay A, Fraser P. Single-cell Hi-C reveals cell-to-cell variability in chromosome structure. Nature. 2013;502:59–64.
Dixon JR, Selvaraj S, Yue F, Kim A, Li Y, Shen Y, Hu M, Liu JS, Ren B. Topological domains in mammalian genomes identified by analysis of chromatin interactions. Nature. 2012;485:376–80.
Cremer T, Cremer T, Cremer C, Cremer C. Chromosome territories, nuclear architecture and gene regulation in mammalian cells. Nat Rev Genet. 2001;2:292–301.
Misteli T. Beyond the sequence: cellular organization of genome function. Cell. 2007;128:787–800.
Müller WG, Rieder D, Karpova TS, John S, Trajanoski Z, McNally JG. Organization of chromatin and histone modifications at a transcription site. J Cell Biol. 2007;177:957–67.
Shopland LS, Johnson CV, Byron M, McNeil J, Lawrence JB. Clustering of multiple specific genes and gene-rich R-bands around SC-35 domains: evidence for local euchromatic neighborhoods. J Cell Biol. 2003;162:981–90.
Verschure PJ, van Der Kraan I, Manders EM, van Driel R. Spatial relationship between transcription sites and chromosome territories. J Cell Biol. 1999;147:13–24.
Pope BD, Ryba T, Dileep V, Yue F, Wu W, Denas O, Vera DL, Wang Y, Hansen RS, Canfield TK, et al. Topologically associating domains are stable units of replication-timing regulation. Nature. 2014;515:402–5.
Joffe B, Leonhardt H, Solovei I. Differentiation and large scale spatial organization of the genome. Curr Opin Genet Dev. 2010;20:562–9.
Kolovos P, Knoch TA, Grosveld FG, Cook PR, Papantonis A. Enhancers and silencers: an integrated and simple model for their function. Epigenet Chromatin. 2012;5:1.
Nora EP, Dekker J, Heard E. Segmental folding of chromosomes: a basis for structural and regulatory chromosomal neighborhoods? BioEssays. 2013;35:818–28.
Sexton T, Cavalli G. The role of chromosome domains in shaping the functional genome. Cell. 2015;160:1049–59.
Li G, Zhu P. Structure and organization of chromatin fiber in the nucleus. FEBS Lett. 2015;589(20 Pt A):2893–904. doi:10.1016/j.febslet.2015.04.023.
Maeshima K, Hihara S, Eltsov M. Chromatin structure: does the 30-nm fibre exist in vivo? Curr Opin Cell Biol. 2010;22:291–7.
Stehr R, Schöpflin R, Ettig R, Kepper N, Rippe K, Wedemann G. Exploring the conformational space of chromatin fibers and their stability by numerical dynamic phase diagrams. Biophys J. 2010;98:1028–37.
Giorgetti L, Galupa R, Nora EP, Piolot T, Lam F, Dekker J, Tiana G, Heard E. Predictive polymer modeling reveals coupled fluctuations in chromosome conformation and transcription. Cell. 2014;157:950–63.
Jhunjhunwala S, van Zelm MC, Peak MM, Cutchin S, Riblet R, van Dongen JJ, Grosveld FG, Knoch TA, Murre C. The 3D structure of the immunoglobulin heavy-chain locus: implications for long-range genomic interactions. Cell. 2008;133:265–79.
Münkel C, Langowski J. Chromosome structure described by a polymer model. Phys Rev E. 1998;57:5888–96.
Naumova N, Imakaev M, Fudenberg G, Zhan Y, Lajoie BR, Mirny LA, Dekker J. Organization of the mitotic chromosome. Science. 2013;342:948–53.
Cremer T, Cremer M, Hubner B, Strickfaden H, Smeets D, Popken J, Sterr M, Markaki Y, Rippe K, Cremer C. The 4D nucleome: evidence for a dynamic nuclear landscape based on co-aligned active and inactive nuclear compartments. FEBS Lett. 2015;589(20 Pt A):2931–43. doi:10.1016/j.febslet.2015.05.037.
Gerlich D, Beaudouin J, Kalbfuss B, Daigle N, Eils R, Ellenberg J. Global chromosome positions are transmitted through mitosis in mammalian cells. Cell. 2003;112:751–64.
Belmont AS. Visualizing chromosome dynamics with GFP. Trends Cell Biol. 2001;11:250–7.
Wachsmuth M, Caudron-Herger M, Rippe K. Genome organization: balancing stability and plasticity. Biochim Biophys Acta. 2008;1783:2061–79.
Pederson T. Repeated TALEs: visualizing DNA sequence localization and chromosome dynamics in live cells. Nucleus. 2014;5:1–4.
Dion V, Gasser SM. Chromatin movement in the maintenance of genome stability. Cell. 2013;152:1355–64.
Jegou T, Chung I, Heuvelmann G, Wachsmuth M, Görisch SM, Greulich-Bode K, Boukamp P, Lichter P, Rippe K. Dynamics of telomeres and promyelocytic leukemia nuclear bodies in a telomerase negative human cell line. Mol Biol Cell. 2009;20:2070–82.
Levi V, Ruan Q, Plutz M, Belmont AS, Gratton E. Chromatin dynamics in interphase cells revealed by tracking in a two-photon excitation microscope. Biophys J. 2005;89:4275–85.
Lucas JS, Zhang Y, Dudko OK, Murre C. 3D trajectories adopted by coding and regulatory DNA elements: first-passage times for genomic interactions. Cell. 2014;158:339–52.
Chen B, Gilbert LA, Cimini BA, Schnitzbauer J, Zhang W, Li GW, Park J, Blackburn EH, Weissman JS, Qi LS, Huang B. Dynamic imaging of genomic loci in living human cells by an optimized CRISPR/Cas system. Cell. 2013;155:1479–91.
Baum M, Erdel F, Wachsmuth M, Rippe K. Retrieving the intracellular topology from multi-scale protein mobility mapping in living cells. Nat Commun. 2014;5:4494.
Im K-B, Schmidt U, Kang MS, Lee JY, Bestvater F, Wachsmuth M. Diffusion and binding analyzed with combined point FRAP and FCS. Cytometry A. 2013;83:876–89.
Halverson JD, Smrek J, Kremer K, Grosberg AY. From a melt of rings to chromosome territories: the role of topological constraints in genome folding. Rep Prog Phys. 2014;77:022601.
Rosa A, Zimmer C. Computational models of large-scale genome architecture. Int Rev Cell Mol Biol. 2014;307:275–349.
Lumma D, Keller S, Vilgis T, Radler JO. Dynamics of large semiflexible chains probed by fluorescence correlation spectroscopy. Phys Rev Lett. 2003;90:218301.
Shusterman R, Alon S, Gavrinyov T, Krichevsky O. Monomer dynamics in double- and single-stranded DNA polymers. Phys Rev Lett. 2004;92:048303.
Cohen AE, Moerner WE. Principal-components analysis of shape fluctuations of single DNA molecules. Proc Natl Acad Sci USA. 2007;104:12622–7.
McHale K, Mabuchi H. Precise characterization of the conformation fluctuations of freely diffusing DNA: beyond Rouse and Zimm. J Am Chem Soc. 2009;131:17901–7.
Petrov EP, Ohrt T, Winkler RG, Schwille P. Diffusion and segmental dynamics of double-stranded DNA. Phys Rev Lett. 2006;97:258101.
Tothova J, Brutovsky B, Lisy V. Monomer dynamics in single- and double-stranded DNA coils. Eur Phys J E Soft Matter. 2007;24:61–7.
Doi M, Edwards SF. The theory of polymer dynamics. Oxford: Oxford University Press; 1986.
Knoch TA, Wachsmuth M, Kepper N, Lesnussa M, Abuseiris A, Ali Imam AM, Kolovos P, Zuin J, Kockx CEM, Brouwer RWW, van de Werken HJG, van IJcken WFJ, Wendt KS, Grosveld FG. The detailed 3D multi-loop aggregate/rosette chromatin architecture and functional dynamic organization of the human and mouse genomes. Epigenet Chromatin. 2016. doi:10.1186/s13072-016-0089-x.
Misteli T, Gunjan A, Hock R, Bustin M, Brown DT. Dynamic binding of histone H1 to chromatin in living cells. Nature. 2000;408:877–81.
Lever MA, Th'ng JP, Sun X, Hendzel MJ. Rapid exchange of histone H1.1 on chromatin in living human cells. Nature. 2000;408:873–6.
Tanay A, Cavalli G. Chromosomal domains: epigenetic contexts and functional implications of genomic compartmentalization. Curr Opin Genet Dev. 2013;23:197–203.
De Gennes PG. Dynamics of entangled polymer solutions. II. Inclusion of hydrodynamic interactions. Macromolecules. 1976;9:594–8.
Erenpreisa J. Large rossettes: the element of the suprachromonemal organisation of interphase cell nucleus (Russ.). Proc Latv Acad Sci. 1989;7:68–71.
Sachs RK, van den Engh G, Trask B, Yokota H, Hearst JE. A random-walk/giant-loop model for interphase chromosomes. Proc Natl Acad Sci USA. 1995;92:2710–4.
Weidemann T, Wachsmuth M, Knoch TA, Müller G, Waldeck W, Langowski J. Counting nucleosomes in living cells with a combination of fluorescence correlation spectroscopy and confocal imaging. J Mol Biol. 2003;334:229–40.
Carrero G, Crawford E, Hendzel MJ, de Vries G. Characterizing fluorescence recovery curves for nuclear proteins undergoing binding events. Bull Math Biol. 2004;66:1515–45.
Brown DT, Izard T, Misteli T. Mapping the interaction surface of linker histone H10 with the nucleosome of native chromatin in vivo. Nat Struct Mol Biol. 2006;13:250–5.
Catez F, Ueda T, Bustin M. Determinants of histone H1 mobility and chromatin binding in living cells. Nat Struct Mol Biol. 2006;13:305–10.
Stasevich TJ, Mueller F, Brown DT, McNally JG. Dissecting the binding mechanism of the linker histone in live cells: an integrated FRAP analysis. EMBO J. 2010;29:1225–34.
Raghuram N, Carrero G, Stasevich TJ, McNally JG, Th'ng J, Hendzel MJ. Core histone hyperacetylation impacts cooperative behavior and high-affinity binding of histone H1 to chromatin. Biochemistry. 2010;49:4420–31.
Harshman SW, Young NL, Parthun MR, Freitas MA. H1 histones: current perspectives and challenges. Nucleic Acids Res. 2013;41:9593–609.
van Kampen NG. Stochastic processes in physics and chemistry. Amsterdam: Elsevier; 1992.
Capoulade J, Wachsmuth M, Hufnagel L, Knop M. Quantitative fluorescence imaging of protein diffusion and interaction in living cells. Nat Biotechnol. 2011;29:835–9.
Wachsmuth M. Fluoreszenzfluktuationsmikroskopie: Entwicklung Eines Prototyps, Theorie Und Messung Der Beweglichkeit Von Biomolekülen Im Zellkern. Ruprecht-Karls-Universität Heidelberg, Fakultät für Physik und Astronomie; 2001.
Tóth KF, Knoch TA, Wachsmuth M, Stöhr M, Frank-Stöhr M, Bacher CP, Müller G, Rippe K. Trichostatin A induced histone acetylation causes decondensation of interphase chromatin. J Cell Sci. 2004;117:4277–87.
Görisch SM, Wachsmuth M, Fejes Tóth K, Lichter P, Rippe K. Histone acetylation increases chromatin accessibility. J Cell Sci. 2005;118:5825–34.
Hihara S, Pack C-G, Kaizu K, Tani T, Hanafusa T, Nozaki T, Takemoto S, Yoshimi T, Yokota H, Imamoto N, et al. Local nucleosome dynamics facilitate chromatin accessibility in living mammalian cells. Cell Rep. 2012;2:1645–56.
Zeskind BJ, Jordan CD, Timp W, Trapani L, Waller G, Horodincu V, Ehrlich DJ, Matsudaira P. Nucleic acid and protein mass mapping by live-cell deep-ultraviolet microscopy. Nat Methods. 2007;4:567–9.
Kepper N, Foethke D, Stehr R, Wedemann G, Rippe K. Nucleosome geometry and internucleosomal interactions control the chromatin fiber conformation. Biophys J. 2008;95:3692–705.
Bystricky K, Heun P, Gehlen L, Langowski J, Gasser SM. Long-range compaction and flexibility of interphase chromatin in budding yeast analyzed by high-resolution imaging techniques. Proc Natl Acad Sci USA. 2004;101:16495–500.
Cook PR, Marenduzzo D. Entropic organization of interphase chromosomes. J Cell Biol. 2009;186:825–34.
Rosa A, Becker NB, Everaers R. Looping probabilities in model interphase chromosomes. Biophys J. 2010;98:2410–9.
Berg OG, von Hippel PH. Facilitated target location in biological systems. J Biol Chem. 1989;264:675–8.
Di Rienzo C, Piazza V, Gratton E, Beltram F, Cardarelli F. Probing short-range protein Brownian motion in the cytoplasm of living cells. Nat Commun. 2014;5:5891.
Bancaud A, Lavelle C, Huet S, Ellenberg J. A fractal model for nuclear organization: current evidence and biological implications. Nucleic Acids Res. 2012;40:8783–92.
van de Corput MP, de Boer E, Knoch TA, van Cappellen WA, Quintanilla A, Ferrand L, Grosveld FG. Super-resolution imaging reveals three-dimensional folding dynamics of the beta-globin locus upon gene activation. J Cell Sci. 2012;125:4630–9.
Barbieri M, Chotalia M, Fraser J, Lavitas L-M, Dostie J, Pombo A, Nicodemi M. Complexity of chromatin folding is captured by the strings and binders switch model. Proc Natl Acad Sci USA. 2012;109:16173–8.
Baù D, Sanyal A, Lajoie BR, Capriotti E, Byron M, Lawrence JB, Dekker J, Marti-Renom MA. The three-dimensional folding of the α-globin gene domain reveals formation of chromatin globules. Nat Struct Mol Biol. 2011;18:107–14.
Hu M, Deng K, Qin Z, Dixon J, Selvaraj S, Fang J, Ren B, Liu JS. Bayesian inference of spatial organizations of chromosomes. PLoS Comput Biol. 2013;9:e1002893.
Meluzzi D, Arya G. Recovering ensembles of chromatin conformations from contact probabilities. Nucleic Acids Res. 2012;41:63–75.
Dostie J, Dekker J. Mapping networks of physical interactions between genomic elements using 5C technology. Nat Protoc. 2007;2:988–1002.
Orlando V. Mapping chromosomal proteins in vivo by formaldehyde-crosslinked-chromatin immunoprecipitation. Trends Biochem Sci. 2000;25:99–104.
Dross N, Spriet C, Zwerger M, Muller G, Waldeck W, Langowski J. Mapping eGFP oligomer mobility in living cell nuclei. PLoS ONE. 2009;4:e5041.
Knoch TA. Approaching the three-dimensional organization of the human genome. Ruprecht-Karls-Universität Heidelberg, Fakultät für Physik und Astronomie; 2002.
Wachsmuth M, Conrad C, Bulkescher J, Koch B, Mahen R, Isokane M, Pepperkok R, Ellenberg J. High-throughput fluorescence correlation spectroscopy enables analysis of proteome dynamics in living cells. Nat Biotechnol. 2015;33:384–9.
Wachsmuth M. Molecular diffusion and binding analyzed with FRAP. Protoplasma. 2014;251:373–82.
Wachsmuth M, Weisshart K. Fluorescence photobleaching and fluorescence correlation spectroscopy: two complementary technologies to study molecular dynamics in living cells. In: Shorte SL, Frischknecht F, editors. Imaging cellular and molecular biological functions. Berlin: Springer; 2007. p. 179–228.
Schmidt U, Im K-B, Benzing C, Janjetovic S, Rippe K, Lichter P, Wachsmuth M. Assembly and mobility of exon-exon junction complexes in living cells. RNA. 2009;15:862–76.
Wachsmuth M, Weidemann T, Müller G, Hoffmann-Rohrer UW, Knoch TA, Waldeck W, Langowski J. Analyzing intracellular binding and diffusion with continuous fluorescence photobleaching. Biophys J. 2003;84:3353–63.
Ries J, Bayer M, Csucs G, Dirkx R, Solimena M, Ewers H, Schwille P. Automated suppression of sample-related artifacts in Fluorescence correlation spectroscopy. Opt Express. 2010;18:11073–82.
Wachsmuth M, Waldeck W, Langowski J. Anomalous diffusion of fluorescent probes inside living cell nuclei investigated by spatially-resolved fluorescence correlation spectroscopy. J Mol Biol. 2000;298:677–89.
Lesne A, Riposo J, Roger P, Cournac A, Mozziconacci J. 3D genome reconstruction from chromosomal contacts. Nat Methods. 2014;11:1141–3.
Imakaev M, Fudenberg G, McCord RP, Naumova N, Goloborodko A, Lajoie BR, Dekker J, Mirny LA. Iterative correction of Hi-C data reveals hallmarks of chromosome organization. Nat Methods. 2012;9:999–1003.
Yaffe E, Tanay A. Probabilistic modeling of Hi-C contact maps eliminates systematic biases to characterize global chromosomal architecture. Nat Genet. 2011;43:1059–65.
MW and KR conceived the project. MW, KR and TAK wrote the paper. TAK acquired T2C data. MW performed experiments, wrote software, evaluated results and developed theory. All authors read and approved the final manuscript.
We are indebted to Thibaud Jegou for the cell line that stably expresses H1.0-EGFP, and to Jutta Bulkescher as well as Birgit Koch for help with cell culture. We are grateful to Katalin Fejes-Tóth and A.M.A. Imam for critical comments. We would like to thank Kerstin Wendt for supporting the T2C experiments. This work was supported by EMBL research funding.
Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Meyerhofstrasse 1, 69117, Heidelberg, Germany
Malte Wachsmuth
Biophysical Genomics Group, Department of Cell Biology and Genetics, Erasmus Medical Center, Dr. Molewaterplein 50, 3015 GE, Rotterdam, The Netherlands
Tobias A. Knoch
Research Group Genome Organization and Function, Deutsches Krebsforschungszentrum (DKFZ) & BioQuant, Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
Karsten Rippe
Correspondence to Malte Wachsmuth.
Additional file
13072_2016_93_MOESM1_ESM.pdf
Additional file 1. Additional documentation.
Wachsmuth, M., Knoch, T.A. & Rippe, K. Dynamic properties of independent chromatin domains measured by correlation spectroscopy in living cells. Epigenetics & Chromatin 9, 57 (2016). https://doi.org/10.1186/s13072-016-0093-1
Chromatin structure
Polymer model
Chromatin conformation capture carbon copy (5C)
Targeted chromatin capture (T2C)
Fluorescence correlation spectroscopy (FCS)
Quantitative microscopy
Biomarkers improve prediction of 30-day unplanned readmission or mortality after paediatric congenital heart surgery
Jeremiah R. Brown, Meagan E. Stabler, Devin M. Parker, Luca Vricella, Sara Pasquali, JoAnna K. Leyenaar, Andrew R. Bohm, Todd MacKenzie, Chirag Parikh, Marshall L. Jacobs, Jeffrey P. Jacobs, Allen D. Everett
Journal: Cardiology in the Young, First View
To evaluate the association between novel pre- and post-operative biomarker levels and 30-day unplanned readmission or mortality after paediatric congenital heart surgery.
Children aged 18 years or younger undergoing congenital heart surgery (n = 162) at Johns Hopkins Hospital from 2010 to 2014 were enrolled in the prospective cohort. Collected novel pre- and post-operative biomarkers included soluble suppression of tumorigenicity 2, galectin-3, N-terminal prohormone of brain natriuretic peptide, and glial fibrillary acidic protein. A model based on clinical variables from the Society of Thoracic Surgeons database was developed and evaluated against two augmented models.
Unplanned readmission or mortality within 30 days of cardiac surgery occurred among 21 (13%) children. The clinical model augmented with pre-operative biomarkers demonstrated a statistically significant improvement over the clinical model alone, with an area under the receiver-operating characteristic curve of 0.754 (95% confidence interval: 0.65–0.86) compared to 0.617 (95% confidence interval: 0.47–0.76; p-value: 0.012). The clinical model augmented with pre- and post-operative biomarkers also demonstrated a significant improvement over the clinical model alone, with an area under the receiver-operating characteristic curve of 0.802 (95% confidence interval: 0.72–0.89; p-value: 0.003).
Novel biomarkers add significant predictive value when assessing the likelihood of unplanned readmission or mortality after paediatric congenital heart surgery. Further exploration of the utility of these novel biomarkers during the pre- or post-operative period to identify early risk of mortality or readmission will aid in determining the clinical utility and application of these biomarkers into routine risk assessment.
Early conversion of classic Fontan conversion may decrease term morbidity: single centre outcomes
David Blitzer, Asma S. Habib, John W. Brown, Adam C. Kean, Jiuann-Huey I. Lin, Mark W. Turrentine, Mark D. Rodefeld, Jeremy L. Herrmann, William Aaron Kay
The initial classic Fontan utilising a direct right atrial appendage to pulmonary artery anastomosis led to numerous complications. Adults with such complications may benefit from conversion to a total cavo-pulmonary connection, the current standard palliation for children with univentricular hearts.
A single institution, retrospective chart review was conducted for all Fontan conversion procedures performed from July, 1999 through January, 2017. Variables analysed included age, sex, reason for Fontan conversion, age at Fontan conversion, and early mortality or heart transplant within 1 year after Fontan conversion.
A total of 41 Fontan conversion patients were identified. Average age at Fontan conversion was 24.5 ± 9.2 years. Dominant left ventricular physiology was present in 37/41 (90.2%) patients. Right-sided heart failure occurred in 39/41 (95.1%) patients and right atrial dilation was present in 33/41 (80.5%) patients. The most common causes for Fontan conversion included atrial arrhythmia in 37/41 (90.2%), NYHA class II HF or greater in 31/41 (75.6%), ventricular dysfunction in 23/41 (56.1%), and cirrhosis or fibrosis in 7/41 (17.1%) patients. Median post-surgical follow-up was 6.2 ± 4.9 years. Survival rates at 30 days, 1 year, and greater than 1-year post-Fontan conversion were 95.1, 92.7, and 87.8%, respectively. Two patients underwent heart transplant: the first within 1 year of Fontan conversion for heart failure and the second at 5.3 years for liver failure.
Fontan conversion should be considered early when atrial arrhythmias become common rather than waiting for severe heart failure to ensue, and Fontan conversion can be accomplished with an acceptable risk profile.
Patients with laboratory evidence of West Nile virus disease without reported fever
K. Landry, I. B. Rabe, S. L. Messenger, J. K. Hacker, M. L. Salas, C. Scott-Waldron, D. Haydel, E. Rider, S. Simonson, C. M. Brown, S. C. Smole, D. F. Neitzel, E. K. Schiffman, A. K. Strain, S. Vetter, M. Fischer, N. P. Lindsey
Journal: Epidemiology & Infection / Volume 147 / 2019
Published online by Cambridge University Press: 17 June 2019, e219
In 2013, the national surveillance case definition for West Nile virus (WNV) disease was revised to remove fever as a criterion for neuroinvasive disease and require at most subjective fever for non-neuroinvasive disease. The aims of this project were to determine how often afebrile WNV disease occurs and assess differences among patients with and without fever. We included cases with laboratory evidence of WNV disease reported from four states in 2014. We compared demographics, clinical symptoms and laboratory evidence for patients with and without fever and stratified the analysis by neuroinvasive and non-neuroinvasive presentations. Among 956 included patients, 39 (4%) had no fever; this proportion was similar among patients with and without neuroinvasive disease symptoms. For neuroinvasive and non-neuroinvasive patients, there were no differences in age, sex, or laboratory evidence between febrile and afebrile patients, but hospitalisations were more common among patients with fever (P < 0.01). The only significant difference in symptoms was for ataxia, which was more common in neuroinvasive patients without fever (P = 0.04). Only 5% of non-neuroinvasive patients did not meet the WNV case definition due to lack of fever. The evidence presented here supports the changes made to the national case definition in 2013.
Calculation and Measurement of Integral Reflection Coefficient Versus Wavelength of "Real" Crystals on an Absolute Basis
D. B. Brown, M. Fatemi, L. S. Birks
Journal: Advances in X-ray Analysis / Volume 17 / 1973
A method for calculation of the integral reflection coefficient of crystals of intermediate perfection is introduced. This method can greatly reduce the experimental effort for the selection and calibration of crystals. It also serves as a conceptual framework for studies of mosaic block structure and of crystal modification. Good agreement between calculated and experimental values of the integral reflection coefficient is shown for (a) LiF crystals of two degrees of perfection, (b) elastically bent quartz, and (c) 001, 005, 006, and 007 diffraction from KAP. Zachariasen's division of crystals into two types is extended. It is concluded that the integral reflection coefficients for 200 LiF cannot be raised to the ideally imperfect limiting values.
An Experimental Evaluation of the Atomic Number Effect
L. Parobek, J. D. Brown
A method for measuring the atomic number effect is developed using a sandwich sample technique. The depth distributions of x-ray production, ϕ(ρz) curves, have been measured for a zinc tracer in aluminum, copper, silver and gold matrices at 30, 25, 20 and 15 keV. The ϕ(ρz) curves were measured using a Cambridge Microscan 5 in which the electron beam is normal to the sample surface and the x-ray take-off angle is 75°.
Samples with low concentrations of copper (∼1 weight %) in aluminum, nickel, silver and gold were prepared. For each alloy system (for example, Cu–Al), three different concentrations of copper were prepared. The intensity ratios of the sample to the pure element (standard) for each system have been plotted against concentration. At such low concentrations of copper, the relation between this ratio and concentration is linear. The slopes of the curves have been compared to the equivalent factors obtained as ratios of the areas under the F(ρz) curves for aluminum, silver and gold to the area under the F(ρz) curve for copper, respectively. The F(ρz) curves are obtained from the ϕ(ρz) curves; F(ρz) = ϕ(ρz) exp(−μρz csc ψ), where μ is the mass absorption coefficient.
Comparisons are made between these experimental data and the current methods of calculating the atomic number effect.
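As an illustration of the quantities compared in this abstract, the short Python sketch below converts a depth distribution ϕ(ρz) into the emitted distribution F(ρz) = ϕ(ρz) exp(−μρz csc ψ) and forms a ratio of the areas under two F(ρz) curves. The ϕ(ρz) shape, the mass absorption coefficients and the depth grid are placeholders, not the measured data of the study; only the take-off angle of 75° is taken from the text.

```python
import numpy as np

def emitted_distribution(phi, rho_z, mu, psi_deg):
    """F(rho*z) = phi(rho*z) * exp(-mu * rho*z * csc(psi))."""
    csc_psi = 1.0 / np.sin(np.radians(psi_deg))
    return phi * np.exp(-mu * rho_z * csc_psi)

# Placeholder depth distribution on a mass-depth grid (g/cm^2); mu values in cm^2/g
rho_z = np.linspace(0.0, 1.5e-3, 200)
phi = (1.0 + 1.5 * rho_z / 1.5e-3) * np.exp(-(rho_z / 6e-4) ** 2)   # toy phi(rho z)

F_matrix = emitted_distribution(phi, rho_z, mu=500.0, psi_deg=75.0)
F_standard = emitted_distribution(phi, rho_z, mu=300.0, psi_deg=75.0)

# Ratio of areas under F(rho z), analogous to comparing a matrix against the standard
ratio = np.trapz(F_matrix, rho_z) / np.trapz(F_standard, rho_z)
print(f"area ratio = {ratio:.3f}")
```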
The pattern of symptom change during prolonged exposure therapy and present-centered therapy for PTSD in active duty military personnel
Lily A. Brown, Joshua D. Clapp, Joshua J. Kemp, Jeffrey S. Yarvis, Katherine A. Dondanville, Brett T. Litz, Jim Mintz, John D. Roache, Stacey Young-McCaughan, Alan L. Peterson, Edna B. Foa, For the STRONG STAR Consortium
Published online by Cambridge University Press: 17 September 2018, pp. 1-10
Few studies have investigated the patterns of posttraumatic stress disorder (PTSD) symptom change in prolonged exposure (PE) therapy. In this study, we aimed to understand the patterns of PTSD symptom change in both PE and present-centered therapy (PCT).
Participants were active duty military personnel (N = 326, 89.3% male, 61.2% white, 32.5 years old) randomized to spaced-PE (S-PE; 10 sessions over 8 weeks), PCT (10 sessions over 8 weeks), or massed-PE (M-PE; 10 sessions over 2 weeks). Using latent profile analysis, we determined the optimal number of PTSD symptom change classes over time and analyzed whether baseline and follow-up variables were associated with class membership.
Five classes, namely rapid responder (7–17%), steep linear responder (14–22%), gradual responder (30–34%), non-responder (27–33%), and symptom exacerbation (7–13%) classes, characterized each treatment. No baseline clinical characteristics predicted class membership for S-PE and M-PE; in PCT, more negative baseline trauma cognitions predicted membership in the non-responder v. gradual responder class. Class membership was robustly associated with PTSD, trauma cognitions, and depression up to 6 months after treatment for both S-PE and M-PE but not for PCT.
Distinct profiles of treatment response emerged that were similar across interventions. By and large, no baseline variables predicted responder class. Responder status was a strong predictor of future symptom severity for PE, whereas response to PCT was not as strongly associated with future symptoms.
Using Sub-Sampling/Inpainting to Control the Kinetics and Observation Efficiency of Dynamic Processes in Liquids
N. D. Browning, B. L. Mehdi, A. Stevens, M. E. Gehm, L. Kovarik, N. Jiang, H. Mehta, A. Liyu, S. Reehl, B. Stanfill, L. Luzzi, K. MacPhee, L. Bramer
Journal: Microscopy and Microanalysis / Volume 24 / Issue S1 / August 2018
Pulsatile hyperglycemia increases insulin secretion but not pancreatic β-cell mass in intrauterine growth-restricted fetal sheep
B. H. Boehmer, L. D. Brown, S. R. Wesolowski, W. W. Hay, P. J. Rozance
Journal: Journal of Developmental Origins of Health and Disease / Volume 9 / Issue 5 / October 2018
Impaired β-cell development and insulin secretion are characteristic of intrauterine growth-restricted (IUGR) fetuses. In normally grown late gestation fetal sheep pancreatic β-cell numbers and insulin secretion are increased by 7–10 days of pulsatile hyperglycemia (PHG). Our objective was to determine if IUGR fetal sheep β-cell numbers and insulin secretion could also be increased by PHG or if IUGR fetal β-cells do not have the capacity to respond to PHG. Following chronic placental insufficiency producing IUGR in twin gestation pregnancies (n=7), fetuses were administered a PHG infusion, consisting of 60 min, high rate, pulsed infusions of dextrose three times a day with an additional continuous, low-rate infusion of dextrose to prevent a decrease in glucose concentrations between the pulses or a control saline infusion. PHG fetuses were compared with their twin IUGR fetus, which received a saline infusion for 7 days. The pulsed glucose infusion increased fetal arterial glucose concentrations an average of 83% during the infusion. Following the 7-day infusion, a square-wave fetal hyperglycemic clamp was performed in both groups to measure insulin secretion. The rate of increase in fetal insulin concentrations during the first 20 min of a square-wave hyperglycemic clamp was 44% faster in the PHG fetuses compared with saline fetuses (P<0.05). There were no differences in islet size, the insulin+ area of the pancreas and of the islets, and β-cell mass between groups (P>0.23). Chronic PHG increases early phase insulin secretion in response to acute hyperglycemia, indicating that IUGR fetal β-cells are functionally responsive to chronic PHG.
P102: A quality improvement project: identifying and managing latent safety threats though a zone wide emergency department in-situ multidiscipline simulation program
L. Mews, D. O'Dochartaigh, M. Chan, T. Brown, A. Robb, W. Ma
Journal: Canadian Journal of Emergency Medicine / Volume 20 / Issue S1 / May 2018
Published online by Cambridge University Press: 11 May 2018, p. S93
Introduction: High-fidelity in-situ simulation has been found to detect system deficiencies, equipment failures, and conditions predisposing to medical errors, also known as latent safety threats (LSTs). What is not well reported is whether these LSTs are effectively managed. As part of an ongoing quality improvement project, multidisciplinary in-situ simulations were conducted across emergency departments (EDs) in the Edmonton zone with the aim of identifying LSTs and subsequently managing them to improve patient care. Methods: In 2017, simulations were conducted at EDs in the Edmonton Zone (N=10). Following each simulation, a cross-sectional, survey-based assessment tool was completed by participants to identify LSTs. These LSTs were shared with the site clinical nurse educator and/or site manager, and a management plan was made. Two- to six-month follow-up was made to track progress. For reporting, LSTs were grouped into themes, and progress on LSTs was coded as either resolved, ongoing, or not managed. Results: A total of 112 LSTs were identified through 18 separate simulations. The most commonly identified LSTs were: resuscitation resource required (n=23), lack of staff training (21), equipment not immediately available (20), IT resource required (8), medication not immediately available (6), staff requiring familiarization (5), medication resource required (5), IT issue (4), large equipment needed (4), small equipment needed (4), lack of staff resource (3), medication needed (3), equipment malfunction (2), environment cluttered (2), non-appropriate resource removed (2). Site follow-up identified a total of 52 LSTs that were resolved and 60 LSTs with ongoing work to manage them. No occurrences of LSTs not being managed were identified. Conclusion: Simulation was used to effectively identify LSTs. Creating a structured plan and follow-up allowed many LSTs to be resolved and effectively managed. In 2018, simulation will reassess whether LSTs remain.
Magnetothermodynamics: measurements of the thermodynamic properties in a relaxed magnetohydrodynamic plasma
M. Kaur, L. J. Barbano, E. M. Suen-Lewis, J. E. Shrock, A. D. Light, D. A. Schaffner, M. B. Brown, S. Woodruff, T. Meyer
Journal: Journal of Plasma Physics / Volume 84 / Issue 1 / February 2018
Published online by Cambridge University Press: 19 February 2018, 905840114
We have explored the thermodynamics of compressed magnetized plasmas in laboratory experiments, and we call these studies 'magnetothermodynamics'. The experiments are carried out in the Swarthmore Spheromak eXperiment device. In this device, a magnetized plasma source is located at one end, and a closed conducting can is installed at the other end. We generate parcels of magnetized plasma and observe their compression against the end wall of the conducting cylinder. The plasma parameters such as density, temperature and magnetic field are measured during compression using HeNe laser interferometry, ion Doppler spectroscopy and a linear ${\dot{B}}$ probe array, respectively. To identify the instances of ion heating during compression, a PV diagram is constructed using measured density, temperature and a proxy for the volume of the magnetized plasma. Different equations of state are analysed to evaluate the adiabatic nature of the compressed plasma. A three-dimensional resistive magnetohydrodynamic code (NIMROD) is employed to simulate the twisted Taylor states and shows stagnation against the end wall of the closed conducting can. The simulation results are consistent with what we observe in our experiments.
Follow Up of GW170817 and Its Electromagnetic Counterpart by Australian-Led Observing Programmes
I. Andreoni, K. Ackley, J. Cooke, A. Acharyya, J. R. Allison, G. E. Anderson, M. C. B. Ashley, D. Baade, M. Bailes, K. Bannister, A. Beardsley, M. S. Bessell, F. Bian, P. A. Bland, M. Boer, T. Booler, A. Brandeker, I. S. Brown, D. A. H. Buckley, S.-W. Chang, D. M. Coward, S. Crawford, H. Crisp, B. Crosse, A. Cucchiara, M. Cupák, J. S. de Gois, A. Deller, H. A. R. Devillepoix, D. Dobie, E. Elmer, D. Emrich, W. Farah, T. J. Farrell, T. Franzen, B. M. Gaensler, D. K. Galloway, B. Gendre, T. Giblin, A. Goobar, J. Green, P. J. Hancock, B. A. D. Hartig, E. J. Howell, L. Horsley, A. Hotan, R. M. Howie, L. Hu, Y. Hu, C. W. James, S. Johnston, M. Johnston-Hollitt, D. L. Kaplan, M. Kasliwal, E. F. Keane, D. Kenney, A. Klotz, R. Lau, R. Laugier, E. Lenc, X. Li, E. Liang, C. Lidman, L. C. Luvaul, C. Lynch, B. Ma, D. Macpherson, J. Mao, D. E. McClelland, C. McCully, A. Möller, M. F. Morales, D. Morris, T. Murphy, K. Noysena, C. A. Onken, N. B. Orange, S. Osłowski, D. Pallot, J. Paxman, S. B. Potter, T. Pritchard, W. Raja, R. Ridden-Harper, E. Romero-Colmenero, E. M. Sadler, E. K. Sansom, R. A. Scalzo, B. P. Schmidt, S. M. Scott, N. Seghouani, Z. Shang, R. M. Shannon, L. Shao, M. M. Shara, R. Sharp, M. Sokolowski, J. Sollerman, J. Staff, K. Steele, T. Sun, N. B. Suntzeff, C. Tao, S. Tingay, M. C. Towner, P. Thierry, C. Trott, B. E. Tucker, P. Väisänen, V. Venkatraman Krishnan, M. Walker, L. Wang, X. Wang, R. Wayth, M. Whiting, A. Williams, T. Williams, C. Wolf, C. Wu, X. Wu, J. Yang, X. Yuan, H. Zhang, J. Zhou, H. Zovaro
Journal: Publications of the Astronomical Society of Australia / Volume 34 / 2017
Published online by Cambridge University Press: 20 December 2017, e069
The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor.
Unusually high illness severity and short incubation periods in two foodborne outbreaks of Salmonella Heidelberg infections with potential coincident Staphylococcus aureus intoxication
J. H. NAKAO, D. TALKINGTON, C. A. BOPP, J. BESSER, M. L. SANCHEZ, J. GUARISCO, S. L. DAVIDSON, C. WARNER, M. G. McINTYRE, J. P. GROUP, N. COMSTOCK, K. XAVIER, T. S. PINSENT, J. BROWN, J. M. DOUGLAS, G. A. GOMEZ, N. M. GARRETT, H. A. CARLETON, B. TOLAR, M. E. WISE
Journal: Epidemiology & Infection / Volume 146 / Issue 1 / January 2018
Published online by Cambridge University Press: 06 December 2017, pp. 19-27
We describe the investigation of two temporally coincident illness clusters involving salmonella and Staphylococcus aureus in two states. Cases were defined as gastrointestinal illness following two meal events. Investigators interviewed ill persons. Stool, food and environmental samples underwent pathogen testing. Alabama: Eighty cases were identified. Median time from meal to illness was 5·8 h. Salmonella Heidelberg was identified from 27 of 28 stool specimens tested, and coagulase-positive S. aureus was isolated from three of 16 ill persons. Environmental investigation indicated that food handling deficiencies occurred. Colorado: Seven cases were identified. Median time from meal to illness was 4·5 h. Five persons were hospitalised, four of whom were admitted to the intensive care unit. Salmonella Heidelberg was identified in six of seven stool specimens and coagulase-positive S. aureus in three of six tested. No single food item was implicated in either outbreak. These two outbreaks were linked to infection with Salmonella Heidelberg, but additional factors, such as dual aetiology that included S. aureus or the dose of salmonella ingested may have contributed to the short incubation periods and high illness severity. The outbreaks underscore the importance of measures to prevent foodborne illness through appropriate washing, handling, preparation and storage of food.
Non-Destructive Characterization of UO2+x Nuclear Fuels
Reeju Pokharel, Donald W. Brown, Bjørn Clausen, Darrin D. Byler, Timothy L. Ickes, Kenneth J. McClellan, Robert M. Suter, Peter Kenesei
Journal: Microscopy Today / Volume 25 / Issue 6 / November 2017
Published online by Cambridge University Press: 27 October 2017, pp. 42-47
Averages and moments associated to class numbers of imaginary quadratic fields
D. R. Heath-Brown, L. B. Pierce
Journal: Compositio Mathematica / Volume 153 / Issue 11 / November 2017
For any odd prime $\ell$ , let $h_{\ell }(-d)$ denote the $\ell$ -part of the class number of the imaginary quadratic field $\mathbb{Q}(\sqrt{-d})$ . Nontrivial pointwise upper bounds are known only for $\ell =3$ ; nontrivial upper bounds for averages of $h_{\ell }(-d)$ have previously been known only for $\ell =3,5$ . In this paper we prove nontrivial upper bounds for the average of $h_{\ell }(-d)$ for all primes $\ell \geqslant 7$ , as well as nontrivial upper bounds for certain higher moments for all primes $\ell \geqslant 3$ .
Milliarcsecond Polarization Properties of Several BL Lacertae Objects
D. C. Gabuzda, D. H. Roberts, J. F. C. Wardle, L. F. Brown
Journal: Symposium - International Astronomical Union / Volume 129 / 1988
We discuss the λ6 cm total intensity and polarization structures of a number of BL Lacertae objects at milliarcsecond resolution. 0235+164 was unresolved and weakly polarized at each of two epochs a year apart; each of the other objects displays structure in polarized flux. 0735+178 and 1749+096 can be adequately modeled by two or three point components—a "core" plus one or two "knots." The core components were moderately polarized (≃ 5%), while "knots" may be polarized at 8% or more, consistent with these components being optically thin. Preliminary results for BL Lac indicate that the total intensity structure can be modeled well by a set of four gaussian components; the polarization structure is complex, but is dominated by the northernmost knot in the jet.
Global Fringe Fitting for Polarization VLBI
L. F. Brown, D. H. Roberts
We present a global fringe fitting technique for the polarized fringes in VLBI. The standard search method imposes a signal-to-noise (SNR) limit on usable data. In our method the search procedure is circumvented and the SNR limitation removed.
Evolution of the Milliarcsecond Polarization Structure of the Superluminal Quasar 3C345
J. F. C. Wardle, D. H. Roberts, L. F. Brown, D. C. Gabuzda
The λ6 cm milliarcsecond polarization structure of 3C345 has been determined at three epochs between December 1981 and March 1984. The knots C2, C3, and C4 all showed changes as they moved away from the core, which remained virtually unpolarized.
Educational Outreach by Avocational Paleontologists and Citizen Scientist for National Fossil Day—Junior Paleontologist Educational Kits
Paul R. Roth, B. Alex Kittle, Vincent L. Santucci, Russell D. Brown
Journal: The Paleontological Society Special Publications / Volume 13 / 2014
Published online by Cambridge University Press: 26 July 2017, p. 155
#cutting: Non-suicidal self-injury (NSSI) on Instagram
R. C. Brown, T. Fischer, A. D. Goldwich, F. Keller, R. Young, P. L. Plener
Journal: Psychological Medicine / Volume 48 / Issue 2 / January 2018
Print publication: January 2018
Social media presents an important means for social interaction, especially among adolescents, with Instagram being the most popular platform in this age-group. Pictures and communication about non-suicidal self-injury (NSSI) can frequently be found on the internet.
During 4 weeks in April 2016, n = 2826 (from n = 1154 accounts) pictures which directly depicted wounds on Instagram were investigated. Those pictures, associated comments, and user accounts were independently rated for content. Associations between characteristics of pictures and comments as well as weekly and daily trends of posting behavior were analyzed.
Most commonly, pictures depicted wounds caused by cutting on arms or legs and were rated as mild or moderate injuries. Pictures with increasing wound grades and those depicting multiple methods of NSSI generated elevated amounts of comments. While most comments were neutral or empathic with some offering help, few comments were hostile. Pictures were mainly posted in the evening hours, with a small peak in the early morning. While there was a slight peak of pictures being posted on Sundays, postings were rather evenly spread across the week.
Pictures of NSSI are frequently posted on Instagram. Social reinforcement might play a role in the posting of more severe NSSI pictures. Social media platforms need to take appropriate measures for preventing online social contagion.
Low-Dose and In-Painting Methods for (Near) Atomic Resolution STEM Imaging of Metal Organic Frameworks (MOFs)
B. L. Mehdi, A. J. Stevens, P. Moeck, A. Dohnalkova, A. Vjunov, J. L. Fulton, D. M. Camaioni, O. K. Farha, J. T. Hupp, B. C. Gates, J. A. Lercher, N. D. Browning
Journal: Microscopy and Microanalysis / Volume 23 / Issue S1 / July 2017
What would the effects be on Earth if Jupiter was turned into a star?
In Clarke's book 2010, the monolith and its brethren turned Jupiter into the small star nicknamed Lucifer. Ignoring the reality that we won't have any magical monoliths appearing in our future, what would the effects be on Earth if Jupiter was turned into a star?
At its closest and furthest:
How bright would the "back-side" of the earth be with light from Lucifer?
How much heat would the small star generate on earth?
How many days or months would we actually have night when we circled away behind the sun?
How much brighter would the sun-side of earth be when Lucifer and the sun both shine on the same side of the planet?
Tags: star, the-sun, light, jupiter, heat
While this is an interesting question, I don't know if there's a proper way to answer it. Jupiter's mass is far less than that of the smallest brown dwarfs, also dubbed "failed stars". Brown dwarfs don't have enough mass to sustain hydrogen fusion, and don't emit a whole lot of light. I don't think that there's any way that you could realistically do the calculations for a Jupiter-star scenario, because of the impossibility of it beginning hydrogen fusion. Still, it's an interesting idea. – HDE 226868♦ Aug 6 '14 at 17:06
Okay, I relent. +1 for an interesting idea. – HDE 226868♦ Aug 11 '14 at 18:15
Anyone looking for a good physics expert's opinion on this, look here. Note: I don't advertise that site as a whole, only this particular posting. – Incnis Mrsi Sep 10 '16 at 21:44
Jupiter can burn as brightly as you want it to depending on how much mass you add to it. If you somehow put a very massive core at the center of Jupiter, the total mass of the system would determine how much fusion can take place. It can probably range from a supernova if you put in a neutron star just below the Chandrasekhar limit inside to a very weak red dwarf if you just add enough mass to make fusion start. – A. C. Oct 13 '17 at 22:57
How do you know we won't have any "magical" monoliths appearing in the future? It's as good of a scenario of first contact as any. – Jack R. Woods Jun 14 '18 at 0:22
Before I start, I'll admit that I've criticized the question based on its improbability; however, I've been persuaded otherwise. I'm going to try to do the calculations based on completely different formulas than I think have been used; I hope you'll stay with me as I work it out.
Let's imagine that Lucifer becomes a main-sequence star - in fact, let's call it a low-mass red dwarf. Main-sequence stars follow the mass-luminosity relation:
$$\frac{L}{L_\odot} = \left(\frac{M}{M_\odot}\right)^a$$
Where $L$ and $M$ are the star's luminosity and mass, and $L_\odot$ and $M_\odot$ are the luminosity and mass of the Sun. For stars with $M < 0.43M_\odot$, $a$ takes the value of 2.3. Now we can plug in Jupiter's mass ($1.8986 \times 10 ^{27}$ kg) into the formula, as well as the Sun's mass ($1.98855 \times 10 ^ {30}$ kg) and luminosity ($3.846 \times 10 ^ {26}$ watts), and we get
$$\frac{L}{3.846 \times 10 ^ {26}} = \left(\frac{1.8986 \times 10 ^ {27}}{1.98855 \times 10 ^ {30}}\right)^{2.3}$$
This becomes $$L = \left(\frac{1.8986 \times 10 ^ {27}}{1.98855 \times 10 ^ {30}}\right)^{2.3} \times 3.846 \times 10 ^ {26}$$
which then becomes
$$L = 4.35 \times 10 ^ {19}$$ watts.
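As a quick numerical check (my addition, not part of the original answer), here is a minimal Python sketch of the same mass-luminosity estimate, using only the constants quoted above:

```python
# Minimal sketch of the mass-luminosity estimate above (constants from the answer).
M_sun = 1.98855e30   # kg
L_sun = 3.846e26     # W
M_jup = 1.8986e27    # kg
a = 2.3              # exponent used for M < 0.43 M_sun

L_lucifer = L_sun * (M_jup / M_sun) ** a
print(f"L = {L_lucifer:.2e} W")   # ~4.3e19 W, matching the figure above
```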
Now we can work out the apparent brightness of Lucifer, as seen from Earth. For that, we need the formula
$$m = m_\odot - 2.5 \log \left(\frac {L}{L_\odot}\left(\frac {d_\odot}{d}\right) ^ 2\right)$$
where $m$ is the apparent magnitude of the star, $m_\odot$ is the apparent magnitude of the Sun, $d_\odot$ is the distance to the Sun, and $d$ is the distance to the star. Now, $m_\odot = -26.73$ and $d_\odot$ is 1 (in astronomical units). $d$ varies. Jupiter is about 5.2 AU from the Sun, so at its closest distance to Earth, it would be ~4.2 AU away. We plug these numbers into the formula, and find
$$m = -6.25$$
which is far less bright than the Sun. Now, when Jupiter is farthest away from Earth, it is ~6.2 AU away. We plug that into the formula, and find

$$m \approx -5.40$$

which is dimmer still - although, of course, Jupiter would be completely blocked by the Sun. Still, for finding the apparent magnitude of Jupiter at some distance from Earth, we can change the above formula to
$$m = -26.73 - 2.5 \log \left(\frac {4.35 \times 10 ^ {19}}{3.846 \times 10 ^ {26}}\left(\frac {1}{d}\right) ^ 2\right)$$
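As a sanity check (again my addition, not part of the original answer), the generalised formula can be evaluated at the two distances used above:

```python
import math

def apparent_mag(L_star, d_au, m_sun=-26.73, L_sun=3.846e26):
    """Apparent magnitude at d_au astronomical units, with the Sun at 1 AU as reference."""
    return m_sun - 2.5 * math.log10((L_star / L_sun) * (1.0 / d_au) ** 2)

L_lucifer = 4.35e19  # W, from the mass-luminosity estimate above
for d in (4.2, 6.2):
    print(f"{d} AU: m = {apparent_mag(L_lucifer, d):.2f}")
# 4.2 AU: m = -6.25;  6.2 AU: m = -5.40
```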
By comparison, the Moon can have an average apparent magnitude of -12.74 at full moon - much brighter than Lucifer. The apparent magnitude of both bodies can, of course, change - Jupiter by transits of its moon, for example - but these are the optimal values.
While the above calculations really don't answer most parts of your question, I hope it helps a bit. And please, correct me if I made a mistake somewhere. LaTeX is by no means my native language, and I could have gotten something wrong.
The combined brightness of Lucifer and the Sun would depend on the angle of the Sun's rays and Lucifer's rays. Remember how we have different seasons because of the tilt of the Earth's axis? Well, the added heat would have to do with the tilt of Earth's and Lucifer's axes relative to one another. I can't give you a numerical result, but I can add that I hope it wouldn't be too much hotter than it is now, as I'm writing this!
Second Edit
Like I said in a comment somewhere on this page, the mass-luminosity relation really only works for main-sequence stars. If Lucifer was not on the main sequence. . . Well, then none of my calculations would be right.
a CVn
It's an interesting answer! It sounds as though there would be very little effect in regards to extra light or temperature. – Maelish Aug 11 '14 at 17:55
In answer to the edit you made to your comment: Yep. Not a big difference. At least, not on Earth. An interesting follow-up would be to see if it could indeed cause conditions on Europa to change in favor of life. – HDE 226868♦ Aug 11 '14 at 20:42
@HDE 226868 Just for fun, have you thought any more about what it would take to make Europa habitable for the aliens (I know, it depends on the alien)? Jupiter couldn't get "too hot" obviously. I love A. C. Clarke, but he did need to ignore science for the sake of the story sometimes (i.e. humans wouldn't survive in Jupiter's orbit due to the magnetic field). – Jack R. Woods Jul 8 '17 at 15:22
I think it's a fun question, if impossible. The only way to turn Jupiter into a star that's even remotely practical is to add to its mass. Ignoring brown dwarfs, which are very limited in energy output, to get a red dwarf going you'd need to add at least 75-80 or so Jupiter masses (a bit more than 24,000 Earth masses). You'd want to add a fair percentage of hydrogen, but some rocky debris wouldn't hurt the mix.
Anyway, assuming the impossible is done, there are several things to consider. The greater gravity (75-80 times) would significantly alter all the planets' orbits. Predicting exactly how is hard, but with that much more mass the planets' orbits, certainly all the inner ones, would wobble a lot more, and some might get pulled completely out of their orbits, likely thrown out of the solar system.
You might think that the planets nearer to Jupiter would be the most affected, but it really has more to do with tidal synch than anything else. Any of the 4 inner planets could get tugged into a new orbit. You'd also likely see the Earth's orbit elongate in resonance with Jupiter, perhaps increasing the ice age/ice melt cycle. Precise answers are hard, and none of these things would happen over one orbit, but over time, certainly. Orbital changes to all the inner planets, and perhaps Saturn as well, would be inevitable if Jupiter became a red dwarf. Imagine if Saturn were pulled closer to the Earth, into an orbit between Mars and Jupiter, or Mercury were pulled out past the Earth. Odds are it wouldn't hit us, but we might want to keep an eye on it.
http://en.wikipedia.org/wiki/Stability_of_the_Solar_System#Mercury.E2.80.93Jupiter_1:1_resonance
The 2nd thing to consider is magnetism and solar flares. Young stars tend to spin very fast due to conservation of angular momentum when the stars form, and this creates enormous magnetic fields and huge solar flares, much bigger than we get from the Sun. It's strange to think that a tiny red dwarf, about four times as far from us as the Sun, would create solar flares worth worrying about, but it's possible. Whether it would need high angular momentum for this to happen, I'm not sure, but we could see larger solar flares from star-Jupiter than from the Sun.
http://en.wikipedia.org/wiki/Flare_star
Brightness, heat and visibility were covered above, but I'll touch on that. A brightness of -6.25 would be 5-6 times brighter than Venus, and you'd see it at night, when Venus isn't seen in peak darkness, so it would be significantly brighter than any other star/planet in the sky, but significantly less bright than the moon; you couldn't make your way with just that star's light the way you can see things around you in moonlight. But when I run the numbers, I think it would be quite a bit brighter than that.
Mass to luminosity goes roughly as the power of 3.5 - a quick estimate - so let's say the red dwarf has a mass of 80 Jupiters. That's 0.076 Suns. 0.076^3.5 = about 1/8,000, so at 4.2 times as far away at the closest point (square of that) and 1/8,000th as luminous, we're looking at about 1/140,000 times the light we get from the sun - not very much, and likely less than that in its early stages and because the smaller stars tend to fall off the relation, so let's estimate 1/200,000 - 1/300,000 the apparent brightness of the sun as a ballpark estimate. That's not enough to heat the earth at all, but it's still a little brighter than the full moon, which is about 1/400,000 the brightness of the sun. It would be enough light to see your way around, but I wouldn't want to try to read by it. It would also be distinctly reddish light, not the white light we're used to getting from the day or night sky.
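A small sketch of the arithmetic in the paragraph above (my addition, not userLTK's); the final 1/200,000-1/300,000 figure reflects the author's extra discount for young, small stars, which isn't modelled here:

```python
# Check of the ballpark numbers above.
L_ratio = 0.076 ** 3.5            # ~1/8,000 of the Sun's luminosity
flux_vs_sun = L_ratio / 4.2 ** 2  # inverse-square dilution at ~4.2 AU
print(f"1/{1 / flux_vs_sun:,.0f} of sunlight")       # ~1/146,000, the 'roughly 1/140,000' above
print(f"{flux_vs_sun * 400000:.1f}x the full moon")  # full moon is ~1/400,000 of sunlight
```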
Finally, size: a red dwarf star of 80 Jupiter masses would actually be slightly smaller than Jupiter due to gravitational compression, so it would appear like a planet - not quite a point in the sky, but almost a point - a bit brighter than the full moon and red. That's likely bright enough to see during the day too. I don't think it would be hard to look at or hurt your eyes, but it would shine like a tiny bright red flashlight in the distance.
http://www.space.com/21420-smallest-star-size-red-dwarf.html
I don't think I like star-Jupiter. Lets not plan on doing this. :-)
userLTK
Ignoring the impossibility of Jupiter going solar:
Assume that Jupiter turns into a duplicate of the Sun in terms of energy output. Energy transmitted to the Earth follows an inverse-square law. Since Jupiter is, at best, 4 times farther from the Earth than the Sun is, Jupiter will supply the Earth with, at most, 1/16 the energy that the Sun supplies, for an increase of a bit more than 6% at the most.
By comparison, between aphelion and perihelion, the Sun-Earth distance increases from around 147 million kilometers to around 152 million kilometers. This implies a seasonal energy input change of about 7%, that we experience now every year...
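A quick check of both inverse-square comparisons (my addition, not part of the original answer):

```python
# Sun-like Jupiter at >= 4x the Sun's distance, and the existing seasonal variation.
print((1 / 4) ** 2)            # 0.0625 -> at most ~6% extra energy input
print((152 / 147) ** 2 - 1)    # ~0.07  -> the ~7% aphelion/perihelion variation
```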
DJohnM
And I'm pretty sure Lucifer's energy output was far less than the Sun's, so the increase would be even smaller. – Keith Thompson Aug 7 '14 at 20:41
While this is a good answer, I'd be a lot more concerned about how the change in Jupiter's mass would affect the orbits of the planets, since the system would now effectively be a binary. – jean May 11 '17 at 19:07
In reality, Jupiter doesn't have nearly enough mass to initiate stellar ignition or sustain it if we could somehow start it going.
Even the smallest star would require on the order of some 80 to 90 times the mass of Jupiter just to put out a faint red glow.
Even to become a brown dwarf proto-star, Jupiter would require a mass increase on the order of at least 10-fold or so.
Lucifer is simply not possible unless Jupiter collides with something to provide the extra mass it needs to go stellar and even then, it would be a red dwarf at best and quite faint, like a red-hot nail glowing in the dark.
But one can dream.
JayT
Just a correction to make: a brown dwarf isn't a proto-star; it's a "failed star" - that is, it began as a proto-star but simply didn't have the mass necessary to enter the main sequence. I hate to be a nit-picker, though. +1 for a good, logical explanation. – HDE 226868♦ Aug 10 '14 at 16:08
Sun-Earth distance: 1AU
Earth-Jupiter distance (at the conjunction): 4AU
So Lucifer would be four times farther from Earth than the Sun at its nearest (six times at its farthest), and at the same time it is roughly a thousand times less massive. This is approximately 40 times more light than the full moon, concentrated in a tiny point on the sky.
Envite
I'm not sure this answers the question. And @Envite, how does your link prove anything? – HDE 226868♦ Aug 6 '14 at 19:31
@HDE226868 The link is the reference for the mass relation between Sun and Jupiter. – Envite Aug 7 '14 at 7:42
Right, @Envite, but mass and size aren't necessarily correlated. And Jupiter still doesn't have anywhere near enough mass to begin fusion. – HDE 226868♦ Aug 7 '14 at 13:50
Look, I feel that the whole exercise is futile. If Jupiter turned into a star - even a red dwarf - we would have a lot of problems with gravity. The solar system would become unstable, and there's a chance that some of the planets would be flung out of the solar system. We can't calculate the energy output because we can only guess at what type of star Jupiter would become, and we can't come up with any definite answer. There are dozens of possibilities; not a single one of them has any more merit than any of the others. Does the book specify? – HDE 226868♦ Aug 7 '14 at 13:53
@HDE226868 Absolutely false. We will not have any problems with gravity if Jupiter "magically" (as expressed by the OP) becomes a star with its own mass. – Envite Aug 7 '14 at 14:34
Overall, the studies listed in Table 1 vary in ways that make it difficult to draw precise quantitative conclusions from them, including their definitions of nonmedical use, methods of sampling, and demographic characteristics of the samples. For example, some studies defined nonmedical use in a way that excluded anyone for whom a drug was prescribed, regardless of how and why they used it (Carroll et al., 2006; DeSantis et al., 2008, 2009; Kaloyanides et al., 2007; Low & Gendaszek, 2002; McCabe & Boyd, 2005; McCabe et al., 2004; Rabiner et al., 2009; Shillington et al., 2006; Teter et al., 2003, 2006; Weyandt et al., 2009), whereas others focused on the intent of the user and counted any use for nonmedical purposes as nonmedical use, even if the user had a prescription (Arria et al., 2008; Babcock & Byrne, 2000; Boyd et al., 2006; Hall et al., 2005; Herman-Stahl et al., 2007; Poulin, 2001, 2007; White et al., 2006), and one did not specify its definition (Barrett, Darredeau, Bordy, & Pihl, 2005). Some studies sampled multiple institutions (DuPont et al., 2008; McCabe & Boyd, 2005; Poulin, 2001, 2007), some sampled only one (Babcock & Byrne, 2000; Barrett et al., 2005; Boyd et al., 2006; Carroll et al., 2006; Hall et al., 2005; Kaloyanides et al., 2007; McCabe & Boyd, 2005; McCabe et al., 2004; Shillington et al., 2006; Teter et al., 2003, 2006; White et al., 2006), and some drew their subjects primarily from classes in a single department at a single institution (DeSantis et al., 2008, 2009; Low & Gendaszek, 2002). With few exceptions, the samples were all drawn from restricted geographical areas. Some had relatively high rates of response (e.g., 93.8%; Low & Gendaszek 2002) and some had low rates (e.g., 10%; Judson & Langdon, 2009), the latter raising questions about sample representativeness for even the specific population of students from a given region or institution.
It was a productive hour, sure. But it also bore a remarkable resemblance to the normal editing process. I had imagined that the magical elixir coursing through my bloodstream would create towering storm clouds in my brain which, upon bursting, would rain cinematic adjectives onto the page as fast my fingers could type them. Unfortunately, the only thing that rained down were Google searches that began with the words "synonym for"—my usual creative process.
One symptom of Alzheimer's disease is a reduced brain level of the neurotransmitter called acetylcholine. It is thought that an effective treatment for Alzheimer's disease might be to increase brain levels of acetylcholine. Another possible treatment would be to slow the death of neurons that contain acetylcholine. Two drugs, Tacrine and Donepezil, are both inhibitors of the enzyme (acetylcholinesterase) that breaks down acetylcholine. These drugs are approved in the US for treatment of Alzheimer's disease.
That said, there are plenty of studies out there that point to its benefits. One study, published in the British Journal of Pharmacology, suggests brain function in elderly patients can be greatly improved after regular dosing with Piracetam. Another study, published in the journal Psychopharmacology, found that Piracetam improved memory in most adult volunteers. And another, published in the Journal of Clinical Psychopharmacology, suggests it can help students, especially dyslexic students, improve their nonverbal learning skills, like reading ability and reading comprehension. Basically, researchers know it has an effect, but they don't know what or how, and pinning it down requires additional research.
The demands of university studies, career, and family responsibilities leave people feeling stretched to the limit. Extreme stress actually interferes with optimal memory, focus, and performance. The discovery of nootropics and vitamins that make you smarter has provided a solution to help college students perform better in their classes and professionals become more productive and efficient at work.
First half at 6 AM; second half at noon. Wrote a short essay I'd been putting off and napped for 1:40 from 9 AM to 10:40. This approach seems to work a little better as far as the aboulia goes. (I also bother to smell my urine this time around - there's a definite off smell to it.) Nights: 10:02; 8:50; 10:40; 7:38 (2 bad nights of nasal infections); 8:28; 8:20; 8:43 (▆▃█▁▂▂▃).
Caffeine keeps you awake, which keeps you coding. It may also be a nootropic, increasing brain-power. Both desirable results. However, it also inhibits vitamin D receptors, and as such decreases the body's uptake of this much-needed vitamin. OK, that's not so bad, you're not getting the maximum dose of vitamin D. So what? Well, by itself caffeine may not cause you any problems, but combined with cutting off a major source of the vitamin - the production via sunlight - you're leaving yourself open to deficiency in double-quick time.
Evidence in support of the neuroprotective effects of flavonoids has increased significantly in recent years, although to date much of this evidence has emerged from animal rather than human studies. Nonetheless, with a view to making recommendations for future good practice, we review 15 existing human dietary intervention studies that have examined the effects of particular types of flavonoid on cognitive performance. The studies employed a total of 55 different cognitive tests covering a broad range of cognitive domains. Most studies incorporated at least one measure of executive function/working memory, with nine reporting significant improvements in performance as a function of flavonoid supplementation compared to a control group. However, some domains were overlooked completely (e.g. implicit memory, prospective memory), and for the most part there was little consistency in terms of the particular cognitive tests used making across study comparisons difficult. Furthermore, there was some confusion concerning what aspects of cognitive function particular tests were actually measuring. Overall, while initial results are encouraging, future studies need to pay careful attention when selecting cognitive measures, especially in terms of ensuring that tasks are actually sensitive enough to detect treatment effects.
The beneficial effects as well as the potentially serious side effects of these drugs can be understood in terms of their effects on the catecholamine neurotransmitters dopamine and norepinephrine (Wilens, 2006). These neurotransmitters play an important role in cognition, affecting the cortical and subcortical systems that enable people to focus and flexibly deploy attention (Robbins & Arnsten, 2009). In addition, the brain's reward centers are innervated by dopamine neurons, accounting for the pleasurable feelings engendered by these stimulants (Robbins & Everett, 1996).
Starting from the studies in my meta-analysis, we can try to estimate an upper bound on how big any effect would be, if it actually existed. One of the most promising null results, Southon et al 1994, turns out to be not very informative: if we punch in the number of kids, we find that they needed a large effect size (d=0.81) before they could see anything:
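The original power calculation is not reproduced in this excerpt; a minimal sketch of that kind of analysis, using an illustrative 25 children per group (a placeholder, not the study's actual sample size), might look like this:

```python
# Minimum detectable effect size for a two-group comparison at 80% power.
# nobs1=25 per group is an illustrative placeholder, not Southon et al's actual n.
from statsmodels.stats.power import TTestIndPower

d_min = TTestIndPower().solve_power(effect_size=None, nobs1=25, ratio=1.0,
                                    alpha=0.05, power=0.80,
                                    alternative='two-sided')
print(round(d_min, 2))  # ~0.81: only a large effect would have been detectable
```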
Many of the positive effects of cognitive enhancers have been seen in experiments using rats. For example, scientists can train rats on a specific test, such as maze running, and then see if the "smart drug" can improve the rats' performance. It is difficult to see how many of these data can be applied to human learning and memory. For example, what if the "smart drug" made the rat hungry? Wouldn't a hungry rat run faster in the maze to receive a food reward than a non-hungry rat? Maybe the rat did not get any "smarter" and did not have any improved memory. Perhaps the rat ran faster simply because it was hungrier. Therefore, it was the rat's motivation to run the maze, not its increased cognitive ability that affected the performance. Thus, it is important to be very careful when interpreting changes observed in these types of animal learning and memory experiments.
We reached out to several raw material manufacturers and learned that Phosphatidylserine and Huperzine A are in short supply. We also learned that these ingredients can be pricey, incentivizing many companies to cut corners. A company has to have the correct ingredients in the correct proportions in order for a brain health formula to be effective. We learned that not just having the two critical ingredients was important – but, also that having the correct supporting ingredients was essential in order to be effective.
None of that has kept entrepreneurs and their customers from experimenting and buying into the business of magic pills, however. In 2015 alone, the nootropics business raked in over $1 billion dollars, and web sites like the nootropics subreddit, the Bluelight forums, and Bulletproof Exec are popular and packed with people looking for easy ways to boost their mental performance. Still, this bizarre, Philip K. Dick-esque world of smart drugs is a tough pill to swallow. To dive into the topic and explain, I spoke to Kamal Patel, Director of evidence-based medical database Examine.com, and even tried a few commercially-available nootropics myself.
When comparing supplements, consider products with a score above 90% to get the greatest benefit from smart pills to improve memory. Additionally, we consider the reviews that users send to us when scoring supplements, so you can determine how well products work for others and use this information to make an informed decision. Every month, our editor puts her name on that month's best smart pill, in terms of results and value offered to users.
After 7 days, I ordered a kg of choline bitartrate from Bulk Powders. Choline is standard among piracetam-users because it is pretty universally supported by anecdotes about piracetam headaches, has support in rat/mice experiments27, and also some human-related research. So I figured I couldn't fairly test piracetam without some regular choline - the eggs might not be enough, might be the wrong kind, etc. It has a quite distinctly fishy smell, but the actual taste is more citrus-y, and it seems to neutralize the piracetam taste in tea (which makes things much easier for me).
Stimulants are the smart drugs most familiar to people, starting with widely-used psychostimulants caffeine and nicotine, and the more ill-reputed subclass of amphetamines. Stimulant drugs generally function as smart drugs in the sense that they promote general wakefulness and put the brain and body "on alert" in a ready-to-go state. Basically, any drug whose effects reduce drowsiness will increase the functional IQ, so long as the user isn't so over-stimulated they're shaking or driven to distraction.
Many of the most popular "smart drugs" (Piracetam, Sulbutiamine, Ginkgo Biloba, etc.) have been around for decades or even millennia but are still known only in medical circles or among esoteric practitioners of herbal medicine. Why is this? If these compounds have proven cognitive benefits, why are they not ubiquitous? How come every grade-school child gets fluoride for the development of their teeth (despite fluoride's being a known neurotoxin) but not, say, Piracetam for the development of their brains? Why does the nightly news slant stories to appeal more to a fear-of-change than the promise of a richer cognitive future?
Recent developments include biosensor-equipped smart pills that sense the appropriate environment and location to release pharmacological agents. Medimetrics (Eindhoven, Netherlands) has developed a pill called IntelliCap with drug reservoir, pH and temperature sensors that release drugs to a defined region of the gastrointestinal tract. This device is CE marked and is in early stages of clinical trials for FDA approval. Recently, Google announced its intent to invest and innovate in this space.
That is, perhaps light of the right wavelength can indeed save the brain some energy by making it easier to generate ATP. Would 15 minutes of LLLT create enough ATP to make any meaningful difference, which could possibly cause the claimed benefits? The problem here is like that of the famous blood-glucose theory of willpower - while the brain does indeed use up more glucose while active, high activity uses up very small quantities of glucose/energy which doesn't seem like enough to justify a mental mechanism like weak willpower.↩
As I am not any of the latter, I didn't really expect a mental benefit. As it happens, I observed nothing. What surprised me was something I had forgotten about: its physical benefits. My performance in Taekwondo classes suddenly improved - specifically, my endurance increased substantially. Before, classes had left me nearly prostrate at the end, but after, I was weary yet fairly alert and happy. (I have done Taekwondo since I was 7, and I have a pretty good sense of what is and is not normal performance for my body. This was not anything as simple as failing to notice increasing fitness or something.) This was driven home to me one day when in a flurry before class, I prepared my customary tea with piracetam, choline & creatine; by the middle of the class, I was feeling faint & tired, had to take a break, and suddenly, thunderstruck, realized that I had absentmindedly forgot to actually drink it! This made me a believer.
Adaptogens are plant-derived chemicals whose activity helps the body maintain or regain homeostasis (equilibrium between the body's metabolic processes). Almost without exception, adaptogens are available over-the-counter as dietary supplements, not controlled drugs. Well-known adaptogens include Ginseng, Kava Kava, Passion Flower, St. John's Wort, and Gotu Kola. Many of these traditional remedies border on being "folk wisdom," and have been in use for hundreds or thousands of years, and are used to treat everything from anxiety and mild depression to low libido. While these smart drugs work in many different ways (their commonality is their resultant function within the body, not their chemical makeup), it can generally be said that the cognitive boost users receive is mostly a result of fixing an imbalance in people with poor diets, body toxicity, or other metabolic problems, rather than directly promoting the growth of new brain cells or neural connections.
Taken together, these considerations suggest that the cognitive effects of stimulants for any individual in any task will vary based on dosage and will not easily be predicted on the basis of data from other individuals or other tasks. Optimizing the cognitive effects of a stimulant would therefore require, in effect, a search through a high-dimensional space whose dimensions are dose; individual characteristics such as genetic, personality, and ability levels; and task characteristics. The mixed results in the current literature may be due to the lack of systematic optimization.
The soft gels are very small; one needs to be a bit careful - Vitamin D is fat-soluble and overdose starts in the range of 70,000 IU, so it would take at least 14 pills, and it's unclear where problems start with chronic use. Vitamin D, like many supplements, follows a U-shaped response curve (see also Melamed et al 2008 and Durup et al 2012) - too much can be quite as bad as too little. Too little, though, is likely very bad. The previously cited studies with high acute doses worked out to <1,000 IU a day, so they may reassure us about the risks of a large acute dose but not tell us much about smaller chronic doses; the mortality increases due to too-high blood levels begin at ~140nmol/l, and reading anecdotes online suggests that 5k IU daily doses tend to put people well below that (around 70-100nmol/l). I probably should get a blood test to be sure, but I have something of a needle phobia.
If this is the case, this suggests some thoughtfulness about my use of nicotine: there are times when use of nicotine will not be helpful, but times where it will be helpful. I don't know what makes the difference, but I can guess it relates to over-stimulation: on some nights during the experiment, I had difficult concentrating on n-backing because it was boring and I was thinking about the other things I was interested in or working on - in retrospect, I wonder if those instances were nicotine nights.
So what's the catch? Well, it's potentially addictive for one. Anything that messes with your dopamine levels can be. And Patel says there are few long-term studies on it yet, so we don't know how it will affect your brain chemistry down the road, or after prolonged, regular use. Also, you can't get it very easily, or legally for that matter, if you live in the U.S. It's classified as a schedule IV controlled substance. That's where Adrafinil comes in.
Some data suggest that cognitive enhancers do improve some types of learning and memory, but many other data say these substances have no effect. The strongest evidence for these substances is for the improvement of cognitive function in people with brain injury or disease (for example, Alzheimer's disease and traumatic brain injury). Although "popular" books and companies that sell smart drugs will try to convince you that these drugs work, the evidence for any significant effects of these substances in normal people is weak. There are also important side-effects that must be considered. Many of these substances affect neurotransmitter systems in the central nervous system. The effects of these chemicals on neurological function and behavior is unknown. Moreover, the long-term safety of these substances has not been adequately tested. Also, some substances will interact with other substances. A substance such as the herb ma-huang may be dangerous if a person stops taking it suddenly; it can also cause heart attacks, stroke, and sudden death. Finally, it is important to remember that products labeled as "natural" do not make them "safe."
The smart pill industry has popularized many herbal nootropics. Most of them first appeared in Ayurveda and traditional Chinese medicine. Ayurveda is a branch of natural medicine originating from India. It focuses on using herbs as remedies for improving quality of life and healing ailments. Evidence suggests our ancestors were on to something with this natural approach.
Theanine can also be combined with caffeine, as both of them work in synergy to increase memory, reaction time, and mental endurance. The best part about Theanine is that it is one of the safest nootropics and is readily available in the form of capsules. A natural option would be to use an excellent green tea brand which consists of tea grown in the shade, because then Theanine would be abundantly present in it.
The use of cognitive enhancers by healthy individuals sparked debate about ethics and safety. Cognitive enhancement by pharmaceutical means was considered a form of illicit drug use in some places, even while other cognitive enhancers, such as caffeine and nicotine, were freely available. The conflict therein raised the possibility for further acceptance of smart drugs in the future. However, the long-term effects of smart drugs on otherwise healthy brains were unknown, delaying safety assessments.
Please browse our website to learn more about how to enhance your memory. Our blog contains informative articles about the science behind nootropic supplements, specific ingredients, and effective methods for improving memory. Browse through our blog articles and read and compare reviews of the top rated natural supplements and smart pills to find everything you need to make an informed decision.
The chemical Huperzine-A (Examine.com) is extracted from a moss. It is an acetylcholinesterase inhibitor (instead of forcing out more acetylcholine like the -racetams, it prevents acetylcholine from breaking down). My experience report: One for the null hypothesis files - Huperzine-A did nothing for me. Unlike piracetam or fish oil, after a full bottle (Source Naturals, 120 pills at 200μg each), I noticed no side-effects, no mental improvements of any kind, and no changes in DNB scores from straight Huperzine-A.
The first night I was eating some coconut oil, I did my n-backing past 11 PM; normally that damages my scores, but instead I got 66/66/75/88/77% (▁▁▂▇▃) on D4B and did not feel mentally exhausted by the end. The next day, I performed well on the Cambridge mental rotations test. An anecdote, of course, and it may be due to the vitamin D I simultaneously started. Or another day, I was slumped under apathy after a promising start to the day; a dose of fish & coconut oil, and 1 last vitamin D, and I was back to feeling chipper and optimist. Unfortunately I haven't been testing out coconut oil & vitamin D separately, so who knows which is to thank. But still interesting.
The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reportedly moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So $\frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512$! The experiment probably used up no more than an hour or two total.
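The value-of-information arithmetic in that paragraph can be written out explicitly; this sketch simply restates the formula above with the author's numbers as inputs:

```python
import math

# Value-of-information estimate from the paragraph above.
annual_cost   = 200    # $/year saved/spent if Adderall turned out to be worth using
discount_rate = 0.05   # 5% annual discount rate (perpetuity via ln(1.05))
p_decision    = 0.50   # prior probability the result changes the decision
info_quality  = 0.25   # penalty for the informal, unpowered design

voi = (annual_cost / math.log(1 + discount_rate)) * p_decision * info_quality
print(round(voi))  # 512, matching the figure in the text
```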
Instead of buying expensive supplements, Lebowitz recommends eating heart-healthy foods, like those found in the MIND diet. Created by researchers at Rush University, MIND combines the Mediterranean and DASH eating plans, which have been shown to reduce the risk of heart problems. Fish, nuts, berries, green leafy vegetables and whole grains are MIND diet staples. Lebowitz says these foods likely improve your cognitive health by keeping your heart healthy.
Gibson and Green (2002), talking about a possible link between glucose and cognition, wrote that research in the area …is based on the assumption that, since glucose is the major source of fuel for the brain, alterations in plasma levels of glucose will result in alterations in brain levels of glucose, and thus neuronal function. However, the strength of this notion lies in its common-sense plausibility, not in scientific evidence… (p. 185).
These are the most popular nootropics available at the moment. Most of them are the tried-and-tested and the benefits you derive from them are notable (e.g. Guarana). Others are still being researched and there haven't been many human studies on these components (e.g. Piracetam). As always, it's about what works for you and everyone has a unique way of responding to different nootropics.
The Nature commentary is ivory tower intellectualism at its best. The authors state that society must prepare for the growing demand of such drugs; that healthy adults should be allowed drugs to enhance cognitive ability; that this is "morally equivalent" and no more unnatural than diet, sleep, or the use of computers; that we need an evidence-based approach to evaluate the risks; and that we need legal and ethical policies to ensure fair and equitable use.
On the other metric, suppose we removed the creatine? Dropping 4 grams of material means we only need to consume 5.75 grams a day, covered by 8 pills (compared to 13 pills). We save 5,000 pills, which would have cost $45 and also don't spend the $68 for the creatine; assuming a modafinil formulation, that drops our $1761 down to $1648 or $1.65 a day. Or we could remove both the creatine and modafinil, for a grand total of $848 or $0.85 a day, which is pretty reasonable.
Caffeine (Examine.com; FDA adverse events) is of course the most famous stimulant around. But consuming 200mg or more a day, I have discovered the downside: it is addictive and has a nasty withdrawal - headaches, decreased motivation, apathy, and general unhappiness. (It's a little amusing to read academic descriptions of caffeine addiction9; if caffeine were a new drug, I wonder what Schedule it would be in and if people might be even more leery of it than modafinil.) Further, in some ways, aside from the ubiquitous placebo effect, caffeine combines a mix of weak performance benefits (Lorist & Snel 2008, Nehlig 2010) with some possible decrements, anecdotally and scientifically:
That study is also interesting for finding benefits to chronic piracetam+choline supplementation in the mice, which seems connected to a Russian study which reportedly found that piracetam (among other more obscure nootropics) increased secretion of BDNF in mice. See also Drug heuristics on a study involving choline supplementation in pregnant rats.↩
Noopept is a nootropic that belongs to the ampakine family. It is known for promoting learning, boosting mood, and improving logical thinking. It has been popular as a study drug for a long time but has recently become a popular supplement for improving vision. Users report seeing colors more brightly and feeling as if their vision is more vivid after taking noopept.
"I am nearly four years out from my traumatic brain injury and I have been through 100's of hours of rehabilitation therapy. I have been surprised by how little attention is given to adequate nutrition for recovering from TBI. I'm always looking for further opportunities to recover and so this book fell into the right hands. Cavin outlines the science and reasoning behind the diet he suggests, but the real power in this book comes when he writes, "WE." WE can give our brains proper nutrition. Now I'm excited to drink smoothies and eat breakfasts that look like dinners! I will recommend this book to my friends.
Took pill #6 at 12:35 PM. Hard to be sure. I ultimately decided that it was Adderall because I didn't have as much trouble as I normally would in focusing on reading and then finishing my novel (Surface Detail) despite my family watching a movie, though I didn't notice any lack of appetite. Call this one 60-70% Adderall. I check the next evening and it was Adderall.
(I was more than a little nonplussed when the mushroom seller included a little pamphlet educating one about how papaya leaves can cure cancer, and how I'm shortening my life by decades by not eating many raw fruits & vegetables. There were some studies cited, but usually for points disconnected from any actual curing or longevity-inducing results.)
NGF may sound intriguing, but the price is a dealbreaker: at suggested doses of 1-100μg (NGF dosing in humans for benefits is, shall we say, not an exact science), and a cost from sketchy suppliers of $1210/100μg/$470/500μg/$750/1000μg/$1000/1000μg/$1030/1000μg/$235/20μg. (Levi-Montalcini was presumably able to divert some of her lab's production.) A year's supply then would be comically expensive: at the lowest doses of 1-10μg using the cheapest sellers (for something one is dumping into one's eyes?), it could cost anywhere up to $10,000.
Sleep itself is an underrated cognition enhancer. It is involved in enhancing long-term memories as well as creativity. For instance, it is well established that during sleep memories are consolidated-a process that "fixes" newly formed memories and determines how they are shaped. Indeed, not only does lack of sleep make most of us moody and low on energy, cutting back on those precious hours also greatly impairs cognitive performance. Exercise and eating well also enhance aspects of cognition. It turns out that both drugs and "natural" enhancers produce similar physiological changes in the brain, including increased blood flow and neuronal growth in structures such as the hippocampus. Thus, cognition enhancers should be welcomed but not at the expense of our health and well being.
My first dose on 1 March 2017, at the recommended 0.5ml/1.5mg was miserable, as I felt like I had the flu and had to nap for several hours before I felt well again, requiring 6h to return to normal; after waiting a month, I tried again, but after a week of daily dosing in May, I noticed no benefits; I tried increasing to 3x1.5mg but this immediately caused another afternoon crash/nap on 18 May. So I scrapped my cytisine. Oh well.
Oxiracetam is one of the 3 most popular -racetams; less popular than piracetam but seems to be more popular than aniracetam. Prices have come down substantially since the early 2000s, and stand at around 1.2g/$ or roughly 50 cents a dose, which was low enough to experiment with; key question, does it stack with piracetam or is it redundant for me? (Oxiracetam can't compete on price with my piracetam pile stockpile: the latter is now a sunk cost and hence free.)
This tendency is exacerbated by general inefficiencies in the nootropics market - they are manufactured for vastly less than they sell for, although the margins aren't as high as they are in other supplement markets, and not nearly as comical as illegal recreational drugs. (Global Price Fixing: Our Customers are the Enemy (Connor 2001) briefly covers the vitamin cartel that operated for most of the 20th century, forcing food-grade vitamins prices up to well over 100x the manufacturing cost.) For example, the notorious Timothy Ferriss (of The Four-hour Work Week) advises imitators to find a niche market with very high margins which they can insert themselves into as middlemen and reap the profits; one of his first businesses specialized in… nootropics & bodybuilding. Or, when Smart Powders - usually one of the cheapest suppliers - was dumping its piracetam in a fire sale of half-off after the FDA warning, its owner mentioned on forums that the piracetam was still profitable (and that he didn't really care because selling to bodybuilders was so lucrative); this was because while SP was selling 2kg of piracetam for ~$90, Chinese suppliers were offering piracetam on AliBaba for $30 a kilogram or a third of that in bulk. (Of course, you need to order in quantities like 30kg - this is more or less the only problem the middlemen retailers solve.) It goes without saying that premixed pills or products are even more expensive than the powders.
Modafinil is a eugeroic, or 'wakefulness promoting agent', intended to help people with narcolepsy. It was invented in the 1970s, but was first approved by the American FDA in 1998 for medical use. Recent years have seen its off-label use as a 'smart drug' grow. It's not known exactly how Modafinil works, but scientists believe it may increase levels of histamines in the brain, which can keep you awake. It might also inhibit the dissipation of dopamine, again helping wakefulness, and it may help alertness by boosting norepinephrine levels, contributing to its reputation as a drug to help focus and concentration.
Nootropics – sometimes called smart drugs – are compounds that enhance brain function. They're becoming a popular way to give your mind an extra boost. According to one Telegraph report, up to 25% of students at leading UK universities have taken the prescription smart drug modafinil [1], and California tech startup employees are trying everything from Adderall to LSD to push their brains into a higher gear [2].
A direct method of moving planes for fully nonlinear nonlocal operators and applications
Yuxia Guo and Shaolong Peng
Department of Mathematics, Tsinghua University, Beijing, 100084, China
* Corresponding author: Yuxia Guo
Received: July 2020. Published: November 2020.
Fund Project: The first author is supported by NSFC (No. 11771235) and the second author is supported by NSFC (No. 11971049)
In this paper, we are concerned with the following generalized fully nonlinear nonlocal operators:
$ F_{s,m}(u(x)) = c_{N,s} m^{ \frac{N}{2}+s} \, P.V. \int_{\mathbb{R}^{N}} \frac{G(u(x)-u(y))}{|x-y|^{ \frac{N}{2}+s}} K_{ \frac{N}{2}+s}(m|x-y|)\,dy + m^{2s}u(x), $
with $ s\in (0,1) $ and mass $ m>0 $. By establishing various maximum principles and using the direct method of moving planes, we prove the monotonicity, symmetry and uniqueness of solutions to the fully nonlinear nonlocal equation in the unit ball, in $ \mathbb{R}^{N} $, in $ \mathbb{R}^{N}_{+} $, and in a coercive epigraph domain $ \Omega $ in $ \mathbb{R}^N $.
Keywords: Fully nonlinear nonlocal operators, Direct methods of moving planes, Maximal principle, Liouville theorem, Symmetry and Monotonicity.
Mathematics Subject Classification: Primary: 35R11; Secondary: 35B06, 35B53.
Citation: Yuxia Guo, Shaolong Peng. A direct method of moving planes for fully nonlinear nonlocal operators and applications. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020462
Example input (comma-separated values): $X : 5, 8, 12, 15, 20$
Mean = 12
Sample Standard Deviation = 5.8737
Population Standard Deviation = 5.2536
Empirical Rule Calculator measures the share of values that fall within a specified number of standard deviations from the mean. It is an online statistics and probability tool that requires a data set $X$ and determines the percentage of values around the mean for standard deviation widths of $\sigma$, $2\sigma$ and $3\sigma$.
Enter a data set $X$ (observed values) in the box. These values must be real numbers or variables and may be separated by commas. The values can be copied from a text document or a spreadsheet.
Press the "CALCULATE" button to make the computation.
The empirical rule calculator determines if a given data set follows a normal distribution by checking whether $68\%$, $95\%$ and $99.7\%$ of the data fall within $\sigma$, $2\sigma$ and $3\sigma$ of the mean, respectively.
Input: A list of real numbers separated by comma;
Output: Intervals of real numbers $(\mu-\sigma, \mu+\sigma)$, $(\mu-2\sigma, \mu+2\sigma)$, $(\mu-3\sigma, \mu+3\sigma)$ or $(\bar{X}-s_X, \bar{X}+s_X)$, $(\bar{X}-2s_X, \bar{X}+2s_X)$, $(\bar{X}-3s_X, \bar{X}+3s_X)$
Empirical rule calculator gives us the stepwise procedure and insight into every step of calculation. Before the final result of empirical rule is derived, it calculates the mean and standard deviation. These values can be of benefit for further solving of problems and applications.
Empirical Rule Population Formula: If the data set is a population then,
Empirical Rule at 68\% falls between $\mu-\sigma$ and $\mu+\sigma$
Empirical Rule at 95\% falls between $\mu-2\sigma$ and $\mu+2\sigma$
Empirical Rule at $99.7\%$ falls between $\mu-3\sigma$ and $\mu+3\sigma$
where the population mean is $\mu=\frac {1}{N}\sum_{i=1}^Nx_i$ and
population standard deviation is $\sigma=\sqrt{\frac{\sum_{i=1}^N(x_i-\mu)^2}{N}}$
Empirical Rule Sample Formula: If the data set is a sample then,
Empirical Rule at 68\% falls between $\bar{X}-s_X$ and $\bar{X}+s_X$
Empirical Rule at 95\% falls between $\bar{X}-2s_X$ and $\bar{X}+2s_X$
Empirical Rule at $99.7\%$ falls between $\bar{X}-3s_X$ and $\bar{X}+3s_X$
where the sample mean is $\bar X=\frac {1}{n}\sum_{i=1}^nx_i$ and
sample standard deviation is $s_X=\sqrt{\frac{\sum_{i=1}^n(x_i-\bar{X})^2}{n-1}}$
What is Empirical Rule?
We can measure the percentage of data that is a certain distance from the mean no matter what the standard deviation of the set is. If the mean is $\mu=0$ and the standard deviation is $\sigma=1$ the curve is called a standard normal distribution. In this case, the values of $x$ represent the number of standard deviations distanced from the mean.
The empirical rule, also known as the $68-95-99.7$ rule or three sigma ($3\sigma$) rule, states that the percentages of data in a normal distribution within $\sigma=1$, $\sigma=2$ and $\sigma=3$ standard deviations of the mean are approximately $68\%$, $95\%$ and $99.7\%$, respectively.
The Bell shaped curve, Bell curve or Gaussian function is used to represent the distribution of probability density function. The mean determines the location of the center of the Bell curve, and the standard deviation determines the height and width of the Bell curve. Note that the total area under the curve is 1. The term $68-95-99.7$ in normal distribution represents $68.27\%$, $95.45\%$ and $99.73\%$ of data around the mean for the width of $\sigma$, $2\sigma$ and $3\sigma$, respectively. In other words, for a normally distributed random variable $X$ with the population mean $\mu$ and the population standard deviation $\sigma$, it holds that $$\begin{align} &P(\mu-\sigma < X < \mu+\sigma)\approx 0.68\\ &P(\mu-2\sigma < X < \mu+2\sigma)\approx0.95\\ &P(\mu-3\sigma < X < \mu+3\sigma)\approx 0.997\end{align}$$
Therefore, the standard deviation has the following characteristics:
About 68% of the data lie within one standard deviation of the mean;
About 95% of the data lie within two standard deviations of the mean;
About 99.7% of the data lie within three standard deviations of the mean
How to Calculate Empirical Rule?
To find the empirical rule we need to follow next steps
Find the number of the given data set, $n$;
Find the population or sample mean of the set as the sum of the values divided by $n$;
Find the population or sample standard deviation.
If the data set is a population then,
Empirical Rule at $68\%$ falls between $\mu-\sigma$ and $\mu+\sigma$
Empirical Rule at $95\%$ falls between $\mu-2\sigma$ and $\mu+2\sigma$
Empirical Rule at $99.7\%$ falls between $\mu-3\sigma$ and $\mu+3\sigma$
If the data set is a sample then,
Empirical Rule at $68\%$ falls between $\bar{X}-s_X$ and $\bar{X}+s_X$
Empirical Rule at $95\%$ falls between $\bar{X}-2s_X$ and $\bar{X}+2s_X$
Empirical Rule at $99.7\%$ falls between $\bar{X}-3s_X$ and $\bar{X}+3s_X$
This calculator shows the result for the sample $X : 5, 8, 12, 15, 20$. For any other set of data, just supply a list of numbers and click on the "CALCULATE" button. The grade school students may use this Empirical rule calculator to generate the work, verify the results derived by hand or do their homework problems efficiently.
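As a cross-check, the same computation can be scripted directly. The sketch below is illustrative (the function name empirical_rule is ours, not part of the calculator); it reproduces the mean and standard deviations reported for the sample above and prints the three empirical-rule intervals.

```python
# Minimal sketch of the empirical-rule computation for the sample X: 5, 8, 12, 15, 20.
import math

def empirical_rule(data, population=False):
    n = len(data)
    mean = sum(data) / n
    ss = sum((x - mean) ** 2 for x in data)
    # Population formula divides by N; sample formula divides by n - 1.
    sd = math.sqrt(ss / n) if population else math.sqrt(ss / (n - 1))
    # The 68-95-99.7 intervals around the mean.
    return mean, sd, [(mean - k * sd, mean + k * sd) for k in (1, 2, 3)]

mean, sd, intervals = empirical_rule([5, 8, 12, 15, 20])
print(mean, round(sd, 4))  # 12.0 5.8737
for pct, (lo, hi) in zip((68, 95, 99.7), intervals):
    print(f"About {pct}% of the data falls between {lo:.2f} and {hi:.2f}")
```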
Practice Problems for Empirical Rule
In statistical data analysis, when working with standard deviation calculations, there are many occasions where the user may need to find the percentage of data falling around the mean. Applications of the empirical rule are to determine whether a data set follows a normal distribution and to determine the probability of certain data occurring. The Empirical Rule is very important in statistics for forecasting, i.e. when obtaining all of the data is difficult or impossible. The Empirical Rule has applications in hypothesis testing, process control, creating work standards, and in many real-life situations. For instance, in hypothesis testing, the empirical rule finds confidence intervals. These intervals are used in testing to check if outcomes from a change are the result of the normal process or are different from the starting behavior.
The heights of students in a class have a normal distribution with a mean of $180$ centimeters and a standard deviation of $5$ centimeters. Find the interval that contains about $95\%$ of the students.
The weights of cats have a normal distribution with a mean of $3.6$ pounds and a standard deviation of $0.4$ pounds. What percent of cats weigh between $2.8$ and $4.8$ pounds?
A special type of dog lives an average of $3.1$ years with a standard deviation of $0.60$ years. Find the probability of a dog living less than $2.52$ years, if the lifespans of these dogs are normally distributed.
The empirical rule Calculator, formula, and practice problems would be very useful for grade school students (K-12 education) primarily in statistical and probability problems. Because many natural phenomena have approximately the normal distribution, some real-life situations can be solved by using this concept.
Dynamic graphs, community detection, and Riemannian geometry
Craig Bakker ORCID: orcid.org/0000-0002-0083-40001,
Mahantesh Halappanavar1 &
Arun Visweswara Sathanur1
A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time – dynamic community detection – and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include producing a Riemannian least-squares regression method for working with noisy data and developing support methods, such as spectral sparsification, to improve the scalability of our current methods.
Graphs and dynamic community detection
Community detection is an important activity in graph analytics with applications in numerous scientific and technological domains (Girvan and Newman 2002). Given a graph G=(V,E) with weight function w:E→ℜ+, the goal of community detection (or graph clustering) is to partition the vertex set V into an arbitrary number of disjoint subsets of V called communities (or clusters) such that the vertices within a community are tightly connected with each other but sparsely connected with the rest of the graph. Clustering on G can be represented as C(G), which is a unique mapping of each vertex to a community. We restrict our work here to undirected, unweighted graphs and to the disjoint partitioning of vertices into communities. For a detailed treatment of this topic, the reader is referred to the work by Fortunato (2010).
The relationships between entities in domains such as sociology, finance, cybersecurity and biology are most naturally modeled with the use of graphs. The inherently dynamic nature of such data (Fenn et al. 2012) leads to dynamic graph representations. A dynamic graph changes over time through the addition and deletion of vertices and edges. A snapshot of this graph, G n , consists of the vertices and edges that are active at a given time step n. Modifications from time n to n+1 are represented by ΔG n . Clustering can be performed at each time step, C(G n ), and as the graph evolves, so do its communities. Temporal communities can undergo several different transitions: growth via addition of new vertices, contraction via deletion of vertices, merging of two or more communities, splitting of a community into two or more communities, birth and death of a community, and resurgence or reappearance of a community after a period of time. Efficiently detecting these transitions is a challenging problem.
The problem of dynamic community detection has received significant interest in the academic literature (Cazabet and Amblard 2014). Current approaches for dynamic community detection broadly fall under two headings: incremental community detection and global community detection. The approaches in the first category focus on the systematic propagation of communities through time, whereas the approaches in the second category attempt to simultaneously optimize for multiple metrics on several snapshots of data. Stability of computation and accuracy of results are the fundamental limitations of the incremental approaches, while memory (space) and computational requirements are the main limitations of the global approaches (Cazabet and Amblard 2014). Incremental approaches are fundamentally combinatorial in nature (Tantipathananandh and Berger-Wolf 2011; Nguyen et al. 2014) and involve methods to track communities through time. The stochastic nature of these algorithms makes these methods unstable leading to inaccurate results. Mucha et al. (2010) build on the seminal work of Lambiotte et al. (2014) for community detection in dynamic multiplex networks by specializing null models in terms of stability under Laplacian dynamics.
Motivation for a Riemannian framework
There is a well-developed suite of methods for community detection in static graphs, but it is not always clear how to extend those methods to dynamic graphs in a way that captures the time-varying nature of those graphs' communities. The challenge is to develop methods that vary continuously in time, like the graphs themselves, between snapshots. Moreover, if existing methods are extended through time, it will be beneficial to do so in a way that provides new insight or analytical tools as well. With that in mind, we propose a Riemannian geometry approach that views dynamic graphs (and thus dynamic communities) through the lens of Laplacian dynamics on a matrix manifold. Riemannian geometry provides ways of calculating quantities such as distances between Laplacians and trajectory speeds on the matrix manifold. As such, it provides a clear and consistent way of representing graph dynamics. This framework is also modular with respect to existing static community detection methods.
In this paper, we provide the background theory needed to describe dynamic graphs in terms of Laplacian dynamics on matrix manifolds. The primary contribution of this paper is to bring existing theory to bear on a new application area – dynamic community detection. We use Riemannian geometry to interpolate between snapshots of dynamic graphs (using geodesics) and to calculate averages of those snapshots; we explicitly show the formulae for performing these calculations. The interpolated and average graphs are then amenable to existing static community detection methods. This allows us to use a consistent approach to track community behaviour both between snapshots, via interpolation, and across snapshots, via averaging.
Simply transferring previously derived formulae would not allow us to consider disconnected graphs, however, so our contributions also include a way of transforming disconnected graphs so that they are amenable to the matrix manifold tools. Using both synthetic and experimental graph data, we experimentally evaluate two different kinds of geodesics. We identify their strengths, as compared with entry-wise linear interpolation, and also discuss their weaknesses. Finally, we derive interpolation and extrapolation error bounds for both geodesics (shown in the Appendix) and identify promising avenues of future research in this area.
Our framework enables more accurate prediction of community transitions by building interpolated graphs between snapshots, global community detection through data aggregation, and prediction of future behaviour through extrapolation from given snapshots. We describe the basics of our framework in the "Riemannian geometry and dynamic graphs" section, show how it can be applied to dynamic clustering in "A Riemannian framework for dynamic community detection" section, and compare the Riemannian methods with an entry-wise linear approach on synthetic and real network data in the "Computational experiments" section.
The novelty of our approach arises primarily from the application of Riemannian geometry to dynamic graphs. When combined with existing spectral methods, this also provides a new interpretation of community splitting and merging as bifurcations in a gradient flow dynamical system (see the "Dynamic spectral clustering" section). To the best of our knowledge, the Riemannian framework presented in this paper is the first of its kind; it is our intent that the research community build from and extend this work to enable features of dynamic community detection not currently considered here.
Riemannian geometry and dynamic graphs
Riemannian geometry and matrix manifolds
Differential geometry deals with mathematics on manifolds; manifolds are spaces that are locally Euclidean (i.e., flat), but generally non-Euclidean globally (Boothby 1986). A Riemannian manifold is a type of manifold that has a metric associated with each point on the manifold. The traditional methods for calculating angles and distances in flat spaces have to be modified on manifolds to account for manifold curvature, and the metric is an integral part of those modifications on Riemannian manifolds.
A key part of Riemannian geometry, for the purposes of this paper, is the geodesic. Geodesics are the equivalent of straight lines in curved spaces. A geodesic is (locally) the shortest path between two points. Great circles on a sphere are examples of geodesics on a curved manifold. Consider a flight from Vancouver, Canada to London, England: the two cities are at similar latitudes, so on a Mercator projection map, the shortest flight would seem to be a straight West-to-East trajectory. In reality, however, flights between the two cities traverse the Pole because that is a shorter route – it is the great circle route. The discrepancy is due to the curvature of the Earth, which is distorted on a flat map. From another perspective, a geodesic is the path that a particle on a manifold would take if it were not subject to external forcing; a geodesic with constant speed has zero acceleration.
Riemannian geometry can be applied to matrix manifolds. The Grassman and Stiefel manifolds are perhaps the most frequently encountered matrix manifolds in differential geometry because they have closed-form solutions for quantities such as geodesics (Absil et al. 2007). Pennec et al. (2006) developed a metric for the manifold of symmetric positive-definite matrices with corresponding expressions for distances, geodesics, and tangent vector inner products in closed form. These formulae are valuable because even when there is a well-defined metric on a manifold, distances and geodesics between points do not usually have closed-form expressions. Such quantities have to be solved for numerically. Working on this matrix manifold, when appropriate, can be useful: matrix symmetry provides a reduction in effective dimension, and properties such as symmetry and positive-definiteness are automatically preserved.
Bonnabel and Sepulchre (2009) extended this framework to include symmetric positive-semidefinite matrices. The extension essentially worked by decomposing a positive-semidefinite matrix into a nullspace component (a Grassman manifold) and a positive-definite component, which could then use the existing metric.
Graph Laplacians and Riemannian geometry
Researchers have previously used non-Euclidean geometries to investigate graphs (Krioukov et al. 2009, 2010). That work has then been applied to large-scale networks such as the internet (Boguná et al. 2010). The approach described in this paper differs in a subtle but meaningful way. In those papers, the mappings used treat graph nodes as points in a hyperbolic space. Our present work, however, treats the entire graph as a single point in a non-Euclidean space.
The work of Bonnabel and Sepulchre (2009) combined with that of Pennec et al. (2006) enables us to consider graph Laplacians as points on a manifold of positive-semidefinite matrices. Each graph is a point, and thus a time-indexed sequence of graphs forms a trajectory on the manifold. This, in turn, means that we can calculate quantities such as trajectory velocities, distances between graphs (represented by manifold distances between their respective points), and relevant geodesics.
Given that we are interested in dynamic community detection, the Laplacian is a natural object to work with. The Laplacian uniquely defines a graph (up to self-loops), and there is already a known connection between the Laplacian spectrum and community structure (Newman 2010). Previous work in dynamic community detection (e.g., Mucha et al. (2010)) has also worked with the Laplacian. Graph Laplacians have a certain structure that make them amenable to the Riemannian geometry techniques presented here as well: Laplacians are symmetric (for undirected graphs) and positive-semidefinite. Adjacency matrices, for example, are generally indefinite and thus would not be suitable for use with the matrix manifolds described here.
We chose to work with the combinatorial Laplacian, L=D−A, because it has a constant nullspace for connected graphs (Newman 2010). This constant nullspace makes the geometric calculations much simpler than they would be otherwise. It is possible to use other Laplacians, such as the normalized Laplacian. If these Laplacians do not have constant nullspaces, though, the interpolation involves extra calculations (detailed by Bonnabel and Sepulchre (2009)). Assuming no self-loops, the combinatorial Laplacian also has the virtue of being easy to convert into an adjacency matrix. That being said, as long as a Laplacian is symmetric positive-semidefinite and has a constant nullspace dimension (for connected graphs), it is possible to calculate geodesic interpolations for that Laplacian.
There are two other relevant considerations we wish to address here. Firstly, the Laplacians of unweighted graphs constitute a discrete (and therefore sparse) subset of the matrix manifold. As such, any continuous trajectory will contain weighted graphs. Secondly, directed graphs do not have symmetric Laplacians, and thus they cannot be considered within this framework without symmetrizing them somehow (e.g., by ignoring the directionality of edges). For the purpose of community detection, though, edge direction may not be important.
A Riemannian framework for dynamic community detection
There are two primary components to our framework. The first involves modelling and analyzing the dynamic behaviour of the graph prior to any community detection. For this, we show how to calculate an average graph from a collection of snapshots (for use in a time-averaged community detection) and how to interpolate between time-indexed graph snapshots (for seeing how the graph evolves over time). In the Appendix, we derive and analyze bounds on the interpolation error in terms of distance on the manifold.
The second component consists of applying community detection methods to the dynamic graph. In this paper, we will focus on spectral methods, because they have convenient properties under continuous Laplacian dynamics, and the Louvain method (Blondel et al. 2008), because of its computational speed and ability to handle disconnected graphs. However, the Riemannian geometry methods do not require using any one particular community detection method.
Graph interpolation and averaging
We begin with interpolation between two snapshots. It is possible to do this using an entry-wise linear approach, \(L(t) = (1-t)L_A + tL_B\), but there are good reasons not to use this approach.
Firstly, the Laplacians for a given dynamic graph all exist on a matrix manifold. For the trajectory L(t) on that manifold, though, the trajectory speed is not constant, the trajectory direction is not constant, and it is not the shortest path from L A to L B . It is precisely analogous to the Mercator projection map example given earlier – moving at a constant velocity (i.e., constant speed and direction) on the map would not correspond to moving at a constant velocity on the earth because of the earth's curvature. Experimentally, we have observed that the linear interpolation begins and ends its trajectory moving very quickly while the bulk of its trajectory moves relatively slowly. The difference between maximum and minimum velocities can be orders of magnitude, depending on the size of the graph and the distance between the two graphs being interpolated.
Secondly, in connected graphs, the product of the Laplacian's non-zero eigenvalues (i.e., the determinant of the positive-definite component) is concave along the linearly interpolated trajectory. If the two points are far enough apart, this product will go through a maximum between the two points. This maximum can, again, be orders of magnitude greater than the product at either endpoint; like the trajectory velocity, this variation will depend on the size of the graphs in question and their distance apart. The geodesic interpolation, however, provides a linear variation in the product of the eigenvalues. Pennec et al. (2006) comment on this in more detail. For a graph, this product relates directly, by Kirchhoff's matrix tree theorem, to the number of spanning trees in the graph (Harris et al. 2008). In other words, the linear interpolation increases the overall connectivity of the graph between snapshots.
Finally, the linear interpolation cannot always be used for extrapolation. All of the interpolated Laplacians are positive-semidefinite, but it is easy to provide examples where the extrapolation quickly becomes indefinite.
Instead, we propose using geodesic interpolation. A geodesic interpolation trajectory has a constant velocity, produces an eigenvalue product that varies linearly between endpoints that are connected graphs, and can be extrapolated indefinitely without leaving the manifold of positive-semidefinite manifolds (with constant nullspace dimension). Following Bonnabel and Sepulchre (2009), we show how to calculate this geodesic between two snapshots of a given dynamic graph.
Consider the Laplacian L at a point. It can be represented with its eigendecomposition:
$$ L = \left[ \begin{array}{cc} \alpha &\xi \end{array} \right] \left[ \begin{array}{cc} D &0 \\ 0 &0 \end{array} \right] \left[ \begin{array}{c} \alpha^{T}\\ \xi^{T} \end{array} \right] = \alpha D \alpha^{T} $$
where the columns of α span the range of L. Moreover, the nullspace, ξ, is always parallel to (1,1,…,1), and thus span(α) is constant even though α may not be, in general.
Consider the geodesic between L A and L B . We can calculate the SVD of \(\alpha _{B}^{T} \alpha _{A}\):
$$ \alpha_{B}^{T} \alpha_{A} = O_{B} \sigma_{AB} O_{A}^{T} $$
The diagonal matrix \(\sigma_{AB}\) has the principal angles between the subspaces spanned by \(\alpha_A\) and \(\alpha_B\) as its diagonal entries. Since those subspaces are the same, \(\sigma_{AB} = I\) for any two Laplacians. We then calculate \(U_A = \alpha_A O_A\) and \(U_B = \alpha_B O_B\). Since \(\sigma_{AB}\) is constant, \(U_A = U_B\), and U is constant for all points on the geodesic; α and O are not constant, though. Furthermore, we can use the same U matrix for any Laplacian of a given dynamic graph without affecting our calculations, because the span of U is constant. We calculate \(R = U^T L U\) for \(L_A\) and \(L_B\). The geodesic from \(L_A\) at t=0 to \(L_B\) at t=1 is then
$$\begin{array}{*{20}l} R (t) &= R_{A}^{\frac{1}{2}} \exp \left(t \ln R_{A}^{-\frac{1}{2}} R_{B} R_{A}^{-\frac{1}{2}} \right) R_{A}^{\frac{1}{2}} \end{array} $$
$$\begin{array}{*{20}l} L(t) &= U_{A} R (t) U_{A}^{T} = U_{B} R (t) U_{B}^{T} \end{array} $$
If there are multiple time-sequenced snapshots, this method can be used to do a piecewise geodesic interpolation with t being shifted and scaled appropriately. Note that the constant Laplacian nullspace means that we can work solely with the R components of L and ignore the Grassman component. We can also extrapolate with this geodesic simply by continuing the trajectory for t>1.
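The following sketch illustrates this interpolation with NumPy and SciPy. It assumes connected, undirected graphs (so the nullspace is one-dimensional), and the function names (reduce_laplacian, ai_geodesic) are ours rather than the authors'. Evaluating the geodesic at t > 1 gives the corresponding extrapolation.

```python
# Sketch of the affine-invariant geodesic between two combinatorial Laplacians (Eqs. 1-4).
import numpy as np
from scipy.linalg import eigh, expm, logm, sqrtm

def reduce_laplacian(L, U=None, tol=1e-9):
    """Return U whose columns span range(L), and the reduced matrix R = U^T L U."""
    if U is None:
        w, V = eigh(L)
        U = V[:, w > tol]  # drop the constant nullspace direction
    return U, U.T @ L @ U

def ai_geodesic(L_A, L_B, t):
    """Point at parameter t on the geodesic from L_A (t = 0) to L_B (t = 1)."""
    U, R_A = reduce_laplacian(L_A)
    _, R_B = reduce_laplacian(L_B, U)      # the same U works for both snapshots
    A_half = np.real(sqrtm(R_A))
    A_half_inv = np.linalg.inv(A_half)
    R_t = A_half @ expm(t * logm(A_half_inv @ R_B @ A_half_inv)) @ A_half
    return U @ np.real(R_t) @ U.T          # back to an n x n Laplacian
```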
If we are interested in the average behaviour of a dynamic graph, we can calculate the least-squared-distance mean (the Karcher mean) of a set of graph snapshots. To do this, we use the R matrices derived from the graph Laplacians as before; each graph i has a matrix R i associated with it, and we want to determine the 'average' matrix S for N snapshots. We then list the sum-of-squared-distance function, the distance function itself, and the gradient of the squared distance (Pennec et al. 2006), respectively:
$$\begin{array}{*{20}l} f (S) &= \frac{1}{2N} \sum_{i} f^{(i)} = \frac{1}{2N} \sum_{i} d^{2} \left(S,R_{i}\right) \end{array} $$
$$\begin{array}{*{20}l} d^{2} \left(S,R_{i}\right) &= \left\| \ln S^{-\frac{1}{2}} R_{i} S^{-\frac{1}{2}} \right\|^{2} \end{array} $$
$$\begin{array}{*{20}l} \nabla_{S} f^{(i)} (S) &= - 2 \ln_{S} R_{i} \end{array} $$
We use iterated gradient descent to calculate the mean:
$$ S_{k+1} = S_{k}^{\frac{1}{2}} \exp \left[ \frac{1}{N} \sum_{i} \ln \left(S_{k}^{-\frac{1}{2}} R_{i} S_{k}^{-\frac{1}{2}} \right) \right] S_{k}^{\frac{1}{2}} $$
According to Pennec et al. (2006), this usually converges quickly.
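A sketch of this iteration is given below, reusing the reduced matrices R_i from the geodesic computation above; the stopping tolerance and iteration cap are illustrative choices of ours.

```python
# Sketch of the iterated gradient-descent Karcher mean of Eq. 8.
import numpy as np
from scipy.linalg import expm, logm, sqrtm

def karcher_mean(Rs, max_iters=50, tol=1e-10):
    S = Rs[0]                               # initial guess: the first snapshot
    for _ in range(max_iters):
        S_half = np.real(sqrtm(S))
        S_half_inv = np.linalg.inv(S_half)
        # Mean of the data points mapped to the tangent space at S.
        T = sum(logm(S_half_inv @ R @ S_half_inv) for R in Rs) / len(Rs)
        S = S_half @ expm(T) @ S_half
        if np.linalg.norm(T) < tol:         # tangent-space mean ~ 0: converged
            break
    return np.real(S)
```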
Alternative Riemannian geometries
Riemannian geometry centers around the Riemannian metric – changing the metric entails changing properties of the manifold (such distances and geodesics). The current metric can be described as affine-invariant (Pennec et al. 2006), but it is not the only metric that could be used for the space of positive-definite matrices. We could also use a log-Euclidean metric as described by Arsigny et al. (2007). The primary reason to consider using the log-Euclidean metric instead of the affine-invariant one is computational cost: the formulae for distances and geodesics are simpler and easier to calculate for the log-Euclidean metric. Those distance and geodesic formulae are, respectively,
$$\begin{array}{*{20}l} d^{2} \left(S, R_{i} \right) &= \left\| \ln R_{i} - \ln S \right\|^{2} \end{array} $$
$$\begin{array}{*{20}l} R (t) &= \exp \left(\left(1-t\right) \ln R_{A} + t \ln R_{B} \right) \end{array} $$
Another computationally beneficial feature of the log-Euclidean metric is the closed-form expression that it has for calculating the mean of a set of matrices:
$$ S = \exp \left(\frac{1}{N} \sum_{i} \ln R_{i} \right) $$
To utilize these formulae for interpolating between graphs, we would simply replace Eq. 6 with Eq. 9, Eq. 3 with Eq. 10, and the iterated process in Eq. 8 with a single evaluation of Eq. 11. There are other expressions that are simpler to evaluate for the affine-invariant metric, but those quantities may not be needed, and the different invariance properties of each metric may be valuable in different circumstances.
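For comparison, the log-Euclidean counterparts of the geodesic and the mean reduce to the short closed-form sketch below (function names are ours).

```python
# Sketch of the log-Euclidean geodesic (Eq. 10) and mean (Eq. 11); no iteration needed.
import numpy as np
from scipy.linalg import expm, logm

def le_geodesic(R_A, R_B, t):
    return np.real(expm((1.0 - t) * logm(R_A) + t * logm(R_B)))

def le_mean(Rs):
    return np.real(expm(sum(logm(R) for R in Rs) / len(Rs)))
```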
On a practical level, the two metrics generally produce similar interpolations (Arsigny et al. 2007): the spectrum of the affine-invariant interpolations tends to be slightly more isotropic than that produced by the log-Euclidean interpolations, but both interpolate determinants linearly between interpolation points (see the "Graph interpolation and averaging" section). For the rest of this paper, we will distinguish the geodesics and means calculated with the two methods as being either affine-invariant (AI) geodesics or log-Euclidean (LE) geodesics.
Disconnected graphs
The methods described in this paper currently assume that the graph in question is connected and remains so at all points of interest. As they stand, they could potentially handle a graph with a constant number of disconnected components (which would correspond to the Laplacian nullspace having a constant dimension), but this does not significantly improve the method's generality. In order to be widely applicable, the interpolation methods need to be able to handle changing connectivity.
We can accommodate this by using a bias term with, potentially, a thresholding procedure. For a given adjacency matrix A, we add to each off-diagonal entry a bias term ε/n, where ε≪1 and n is the number of vertices in the graph, to produce a biased adjacency matrix \(\tilde {A}\) (which is now connected). We then construct a biased Laplacian matrix from \(\tilde {A}\), perform the interpolation on the biased Laplacian and subtract ε/n from each off-diagonal entry of the adjacency matrices produced by the biased interpolation. If need be, we can then apply a threshold to the resulting adjacency matrices or round those matrices to an appropriate number of decimal places. This approach essentially replaces the Laplacian's λ=0 eigenvalues with λ=ε.
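A minimal sketch of this biasing step is shown below, assuming NumPy arrays for the adjacency matrices; the default ε = 10^−6 follows the value used in our experiments, and the helper names are illustrative.

```python
# Sketch of the bias/unbias procedure for disconnected graphs.
import numpy as np

def bias_adjacency(A, eps=1e-6):
    """Add eps/n to every off-diagonal entry and build the biased Laplacian."""
    n = A.shape[0]
    A_tilde = A + (eps / n) * (np.ones((n, n)) - np.eye(n))
    L_tilde = np.diag(A_tilde.sum(axis=1)) - A_tilde
    return A_tilde, L_tilde

def unbias_adjacency(A_tilde, eps=1e-6, decimals=None):
    """Remove the bias from an interpolated adjacency matrix; optionally round."""
    n = A_tilde.shape[0]
    A = A_tilde - (eps / n) * (np.ones((n, n)) - np.eye(n))
    return np.round(A, decimals) if decimals is not None else A
```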
Empirically, we found that this approach did not significantly change the interpolated trajectories for connected graphs while also producing reasonable results for disconnected graphs. If we consider the properties of the Riemannian metrics discussed in this paper, we can see why adding this small bias would not significantly disturb a geodesic trajectory. With these metrics, matrices with zero or infinite eigenvalues essentially exist at infinity. For matrices with finite eigenvalues greater than zero, the distances between matrices are relative and directly tied to the matrices' spectra. For example, the distance from λ = 10^−6 to λ = 10^−5 is comparable to the distance from λ = 1 to λ = 10. This means that a geodesic, which is a minimum-distance path between points, will not significantly alter the part of the spectrum associated with λ = ε values unless it is absolutely necessary to do so in order to reach the destination. Moreover, adding a fully connected graph with edge weights of ε would not meaningfully change the community structure because of the separation of scales (presuming a very small value of ε).
In our computational experiments, we found that ε = 10^−6 provided a good balance between avoiding ill-conditioning and keeping ε small, but even increasing ε to 10^−3 did not change the interpolation significantly. As we increased ε, though, we found that the geodesic interpolations approached the trajectory of the linear interpolation; at, say, ε = 10^6, they were almost identical. This, too, makes sense: as the eigenvalues become uniformly larger, the manifold becomes flatter, and the differences between the data points become smaller. The flatter the manifold, the closer the geodesic is to the linear interpolation. However, the geodesic interpolation is still guaranteed to remain positive definite, and the linear interpolation is not. This suggests that if the linear interpolation were more desirable in a particular application but the application also called for the use of extrapolation, then using a geodesic with a large bias term could provide the desired capabilities.
Dynamic spectral clustering
It is possible to use spectral clustering with the first non-trivial eigenvector for community detection, but this method can be improved upon by using multiple eigenvectors (Boccaletti et al. 2006). This approach is convenient for continuous Laplacian dynamics because as long as the eigenvalues are distinct, we can expect the eigenvectors and eigenvalues to vary smoothly with smooth changes in L. If the eigenvalues of the eigenvectors in question are not distinct, then the eigenvectors are not uniquely defined, and if eigenvalues whose eigenvectors are being used for spectral clustering cross during the course of a trajectory, the spectral clustering may experience a discontinuous jump. Disconnected graphs can provide exactly this kind of behaviour (e.g., with multiple zero eigenvalues). Moreover, if the number of disconnected components is not constant, then it will not suffice simply to consider the first m non-zero eigenvalues, for the set of such eigenvalues will not be constant.
Assume that the graphs are connected, that there is an ordering of the eigenvalues of L such that \(\lambda_i \leq \lambda_{i+1}\), \(\lambda_1 = 0\), and that eigenvector \(\xi_{(i)}\) is associated with \(\lambda_i\). We can then plot each of the graph nodes in \(\Re^n\), where node k has coordinates given by \(\left(\xi^{k}_{(2)}, \xi^{k}_{(3)}, \ldots, \xi^{k}_{(n+1)}\right)\), and use clustering techniques to identify communities.
One way of identifying and tracking communities is through defining a kernel for the nodes. Summing over all of the nodes then produces a density function. The maxima of that density function correspond to cluster centroids, and the separatrices between maxima define community boundaries in the (reduced) eigenspace. With a symmetric Gaussian kernel, this density function would be
$$\begin{array}{*{20}l} f \left(\mathbf{x}\right) &= \sum \limits_{k} N \left(\mathbf{x} - \mathbf{y}_{(k)}, \sigma \right) \end{array} $$
$$\begin{array}{*{20}l} N \left(\mathbf{x} - \mathbf{y}_{(k)}, \sigma \right) &= \exp \left(-\frac{\left\|\mathbf{x} - \mathbf{y}_{(k)}\right\|^{2}}{2 \sigma^{2}} \right) \end{array} $$
where \(y^{l}_{(k)} = \xi ^{k}_{\left (l-1\right)}\). Other kernels could be used, but this provides an easily differentiable density function, and the magnitude of the kernel is not very important – what matters is the relative changes in density, not the function's absolute value. See an example of this in the spectral plot shown in Fig. 1. The format of Fig. 1 is used for all other spectral plots in this paper.
2-D spectral plot of graph nodes. The graph nodes are plotted as points, the contours show the magnitude of the density function, and the horizontal and vertical axes correspond to the ξ(2) and ξ(3) components, respectively. This particular plot shows two distinct communities with one node at approximately (-0.09,-0.11) that does not belong very strongly to either community and a cluster of points around (-0.07,0.05) that seems close to forming its own community
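Plots in the format of Fig. 1 can be generated along the lines of the sketch below, which evaluates the density function of Eqs. 12 and 13 on a grid over the first two non-trivial eigenvector coordinates; the kernel width and grid resolution are illustrative values of ours.

```python
# Sketch of the Gaussian-kernel density surface (Eqs. 12 and 13) over the
# 2-D spectral embedding given by the first two non-trivial eigenvectors.
import numpy as np
from scipy.linalg import eigh

def spectral_density(L, sigma=0.02, grid=200):
    w, V = eigh(L)
    Y = V[:, 1:3]                      # node coordinates (xi_2, xi_3)
    lo, hi = Y.min() - 0.05, Y.max() + 0.05
    xs = np.linspace(lo, hi, grid)
    X1, X2 = np.meshgrid(xs, xs)
    f = np.zeros_like(X1)
    for y in Y:                        # sum of symmetric Gaussian kernels
        f += np.exp(-((X1 - y[0]) ** 2 + (X2 - y[1]) ** 2) / (2 * sigma ** 2))
    return Y, X1, X2, f                # scatter Y and contour f to reproduce Fig. 1
```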
Changes in the graph's communities can then be seen as changes in the density function. The density of a cluster is proportionate to the magnitude of the density function at the peak (i.e., the cluster centroid). Community growth and contraction can be seen by points traversing community boundaries (i.e., separatrices). Birth and death correspond to the emergence or disappearance of a peak in the density function. Merging and splitting correspond to the merging and splitting, respectively, of the density function peaks. This splitting and merging correspond very closely to pitchfork bifurcations in dynamical systems; more precisely, the pitchfork bifurcation happens to the gradient flow \(\dot {\mathbf {x}} = \nabla f\). Birth and death also correspond to pitchfork bifurcations, but this is not as immediately obvious. It is a corollary of the Poincaré-Hopf theorem: creating a new maximum results in the creation of additional saddle points and/or minima (Domokos et al. 2012). To identify death, merging, or splitting, we can track the Hessian of f. If it becomes singular at a point, that is an indication of a potential bifurcation there. Birth may be identified in the same way, but searching the space for such a phenomenon may be more difficult than simply tracking known maxima and monitoring the Hessian at those points.
Once the spectrum has been plotted, techniques such as k-means clustering can identify communities. This should produce a sufficient approximation of the separatrices between maxima. However, if two eigenvectors are used, it may even be easier to identify communities visually.
Computational experiments
Implementation and testing procedure
To demonstrate our methods, we initially created a series of graph snapshots using a synthetic graph process. The dataset was created by generating two Erdős-Rényi (ER) random graphs with 100 nodes each, representing distinct communities, with an edge probability of \(p_E = 0.15\) for both. We then began connecting the nodes belonging to the two communities through an inter-community edge probability \(p_{int} \ll p_E\); we increased \(p_{int}\) all the way to \(p_E\) to simulate the merging of the distinct communities. Once the merger was complete, we gradually decreased \(p_{int}\) to simulate the splitting of a large community into smaller ones.
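A sketch of this generation process is shown below, assuming NetworkX is available; the parameter names follow the text (p_E, p_int), and the particular sweep of p_int values is illustrative.

```python
# Sketch of the synthetic merge/split process: two ER communities joined by
# a varying inter-community edge probability p_int.
import random
import networkx as nx

def two_block_snapshot(n=100, p_E=0.15, p_int=0.01, seed=0):
    rng = random.Random(seed)
    G = nx.disjoint_union(nx.erdos_renyi_graph(n, p_E, seed=seed),
                          nx.erdos_renyi_graph(n, p_E, seed=seed + 1))
    for u in range(n):                  # add inter-community edges
        for v in range(n, 2 * n):
            if rng.random() < p_int:
                G.add_edge(u, v)
    return G

# Sweeping p_int up to p_E and back down yields the merge-then-split sequence.
snapshots = [two_block_snapshot(p_int=p) for p in (0.01, 0.05, 0.10, 0.15, 0.05)]
```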
To test our methods on real-world data, we used proteomics data produced by Mitchell et al. (2013). Networks were produced by identifying subnetworks of upregulated proteins (p<0.05 and fold change > 1.5 compared to uninfected mocks) from the overall human protein-protein interaction network (Keshava Prasad et al. 2008). The network data indicates time-varying linkages between different proteins in human lung epithelial cells that have been infected by the Severe Acute Respiratory Syndrome corona virus (SARS-CoV). The proteomics network formed a relatively sparse, highly disconnected graph of 576 nodes, and we used the data snapshots at t=24,30,36,48,54,60, and 72, where t is the number of post-infection hours. Because this graph is disconnected (and severely so), we use the bias approach described in the "Disconnected graphs" section.
We implemented our methods in Python, making particular use of the matrix exponential and logarithm functions in the SciPy package. To evaluate the interpolation and averaging results for the synthetic network, we recorded connectivity measurements, spectral snapshots from interpolated and averaged Laplacians, and the total number of communities in the interpolated and averaged Laplacians. To measure connectivity, we used the logarithm (for scaling purposes) of the product of the non-zero Laplacian eigenvalues as mentioned in the "Graph interpolation and averaging" section. For the spectral snapshots, we used the eigenvectors corresponding to the first two non-trivial eigenvalues to produce plots as described in the "Dynamic spectral clustering" section. These snapshots provided an evaluation that was more qualitative than quantitative. We then used the Louvain method to perform community detection. The graph snapshots are provided in Additional file 1, and the code implementing the methods is provided in Additional file 2.
The spectral snapshots and connectivity measurements were not as useful for the proteomics network because the proteomics network was highly disconnected, but the Louvain method was still applicable for community detection. To investigate the interpolation and averaging of community structure for this network, we tracked the total number of communities, the total number of communities with at least five members, community similarity, and graph energy. Because the network was highly disconnected, the Louvain method produced many small or single-member communities. Tracking the number of communities above a certain size helped to reduce the amount of noise due to that effect. By community similarity, we mean not just the number of communities but the composition of those communities as well. It can be difficult to measure the degree of similarity between two graphs' community structures when there are many communities and the community labelling is not consistent, but we can look at the pairwise similarity with the Rand index (Rand 1971).
The Rand index works by using a baseline or ground truth case, considering every distinct pair of nodes, and determining whether or not they are in the same community. It then looks at these same pairs in another graph of interest. If, for a given pair of nodes, the nodes are either in the same community as each other in both graphs or not in the same community as each other in both graphs, that pair gets a score of 1; otherwise it gets a score of 0, indicating a dissimilarity between the community structures of the two graphs. Summing the results over all pairs and dividing by the number of pairs yields a score between 0 and 1, where 1 indicates that the two graphs' community structures are identical. The smaller the value, the less similar the structures are.
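A direct, if unoptimized, pairwise implementation of this index is sketched below; the function and variable names are ours.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Pairwise agreement between two community assignments over the same nodes."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# identical partitions up to relabelling score 1.0
print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0
```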
Given that we had no ground truth between the data snapshots, we instead looked at the changes in this metric from one snapshot to the next. Ideally, there would be a steady change in this value between points – a sawtooth pattern over the course of the whole interpolation – as we measured how the interpolation differed from the most recent data snapshot. Finally, to measure network connectivity, we used graph energy instead of a Laplacian eigenvalue product. The energy of a graph, E, is defined as the sum of the absolute values of the eigenvalues of the adjacency matrix. Given that it is bounded by the number of edges, m, in an unweighted graph (Brualdi 2006), we can also use it to bound the number of edges:
$$ 2 \sqrt{m} \leq E \leq 2m \Rightarrow \frac{1}{2} E \leq m \leq \frac{1}{4} E^{2} $$
and thus it gives us information about both graph spectra and graph connectivity.
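In code, the energy and the implied edge-count bounds are straightforward to compute (a sketch assuming a NetworkX graph):

```python
import numpy as np
import networkx as nx

def graph_energy(g):
    """Sum of the absolute values of the adjacency-matrix eigenvalues."""
    A = nx.to_numpy_array(g)
    return float(np.sum(np.abs(np.linalg.eigvalsh(A))))

def edge_bounds(energy):
    """Edge-count bounds implied by 2*sqrt(m) <= E <= 2m."""
    return 0.5 * energy, 0.25 * energy ** 2
```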
For both sets of data, we used thresholding on the edge weights to get unweighted graph equivalents. This procedure, and especially the threshold value used, was more impactful on the proteomics data than on the synthetic data.
Synthetic graph results
The graph spectral snapshots are shown in Fig. 2, and we can clearly see the expected merger and separation of two communities there. We can now interpolate from the third to the fourth data snapshot and then from the fourth to the fifth data snapshot to further investigate this community merger and separation. Snapshots from the AI geodesic interpolation are shown in Fig. 3; the results from the linear and LE geodesic interpolations were almost identical to these. Increasing the temporal resolution would become increasingly cumbersome for presentation in a printed format. However, the method does lend itself well to video presentations of the dynamic community behaviour (see Additional file 3 for an example).
Synthetic graph spectral plots, frames 1-7. The spectral plots of the synthetic data snapshots are presented in order from left to right, and top to bottom. They show two communities that are stable and separate except for the merger shown in the fourth frame. There are also nodes that do not associate closely with any community at various points in time
Synthetic graph interpolation. The interpolation's frames are presented from left to right, and top to bottom. At the top left, the first frame is the third data snapshot, the sixth frame is the fourth data snapshot, and the eleventh frame is the fifth data snapshot; the interpolated frames are taken at evenly spaced time intervals between the data snapshots. The interpolated frames show a clear progression of community merging and splitting as well as some outliers that do not seem strongly attached to any community
Additional file 3: A video of the spectral plots created with the AI geodesic interpolation on the synthetic graph data progressing through the data snapshots in order from the initial to the final frame. (MP4 1966 kb)
In Fig. 4, we can see how the graph connectivity changes over time. The geodesic curves both interpolate the eigenvalue product linearly between points, whereas the linear interpolation is slightly concave. For this dynamic graph, the data points are relatively close to each other, and thus the geodesic and linear interpolations are very similar. If we interpolate between t=0, t=3, and t=6, we can see the distinction more clearly, as in Fig. 5.
Logarithm of product of non-zero eigenvalues over time, synthetic graphs. The connectivity results are shown for the interpolations that do not use a threshold (top) and the interpolations that use a threshold of 0.5 (bottom)
Logarithm of product of non-zero eigenvalues over time with longer interpolation window (no threshold), synthetic graphs. Interpolating between graphs that are 'farther apart' leads to a more apparent distinction between the geodesic and linear interpolations. The AI and LE geodesic are still indistinguishable with regards to the connectivity measure, however
Thresholding gives us a piecewise constant graph. The graph dynamics consist of an edge addition phase followed by an edge subtraction phase, so the thresholding parameter simply determines when a given edge's entry flips from 0 to 1 (or vice versa). If we were to use a finer time resolution, we might see a slight difference between the linear and geodesic interpolations with respect to when this transition happens, but the basic behaviour would remain the same.
In performing community detection, we found that the geodesic interpolations produced adjacency matrices with negative entries. Almost all of these entries were on the order of 0.001 to 0.01, and none were larger than 0.1. Negative edges need not be a barrier for community detection (e.g., see Traag and Bruggeman (2009)), but they can cause problems for the Louvain method, so in doing community detection, we simply set these entries to 0. This was only necessary for community detection on graphs that did not use thresholding. When using a threshold, any value equal to or below the threshold, including a negative value, was set to 0.
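The clean-up step described above amounts to something like the sketch below; the helper name is ours, and the behaviour matches the rule just stated (clip negatives when no threshold is used; otherwise send entries at or below the threshold to 0 and the rest to 1).

```python
import numpy as np

def prepare_for_louvain(A, threshold=None):
    """Prepare an interpolated adjacency matrix for Louvain community detection."""
    A = np.array(A, dtype=float)
    if threshold is None:
        A[A < 0] = 0.0                     # clip small negative interpolated weights
        return A
    return (A > threshold).astype(float)   # entries <= threshold become 0, others 1
```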
The spectral plots showed two communities merging and splitting with some outliers along the way. We found that the Louvain method split the merged community into four, and the outliers sometimes formed very small communities of their own (Fig. 6). The difference in results between the two methods suggests that in community detection, it may be worthwhile to be able to assign an 'unaffiliated' status to some nodes – nodes that are not really part of any community. This kind of behaviour is what gives us, for example, the brief existence of a small community (of size 3) in the LE geodesic interpolation between t=1 and t=2. When we use a threshold, these behaviours cease, as we now have a graph that is piecewise constant in time for all three interpolations.
Communities in interpolated synthetic graphs. When no threshold is applied (top), the Louvain method produces varying numbers of communities during the merger of the two original communities, and even the data snapshot of the merged communities shows not one but four communities; we also see some differences between the two geodesic interpolations. With a threshold (bottom), though, the piece-wise constant nature of the interpolation shows forth
Finally, we consider the average behaviour of this graph using the mean graphs produced by each interpolation method. The spectral plots of these graphs are shown in Fig. 7, and they clearly show two distinct communities. This indicates that the merging of the two communities was only a transient effect and that the same communities re-emerged after the temporary merger. The averaging process preserved the structure that we designed the dynamic graph to have. If the second pair of communities were significantly different from the first, then the spectrum of the average graph would not display two distinct communities so clearly.
Synthetic graph average spectral plots. The spectral plots for the AI geodesic (left), LE geodesic (center), and linear (right) means are all very similar: the geodesic interpolations produce indistinguishable means, and the linear interpolation's mean is only slightly different from the geodesics'
Table 1 illustrates the similarities while highlighting the small differences between the results: the geodesic interpolations consistently have slightly higher modularity and slightly lower connectivity than the linear interpolations, but thresholding the resulting graphs reduces those differences. This is not surprising given both the propensity that linear interpolations have for increasing connectivity and the similarity of the geodesic and linear interpolations in this case.
Table 1 Mean graph characteristics, synthetic graphs
Proteomics network results
In interpolating the proteomics network data, we again obtained negative adjacency matrix entries (around 5% of the total entries). The AI geodesics produced far fewer such entries than the LE geodesics (by an order of magnitude), and the AI entries were usually smaller. Of the negative entries, the largest was -0.16, but less than 1% of the negative entries had magnitudes greater than 0.01. As with the synthetic graphs, we simply set these negative entries to 0 when using the Louvain method.
Figure 8 shows how the number of graph communities varied over time and how different thresholding levels affected those results. With no thresholding, we found that the results were too connected (i.e., not enough communities) for all three interpolation methods: after leaving a supplied data point, the number of interpolated communities would immediately drop, remain relatively constant, and shoot up upon reaching the next data point. Thresholding produced better results. Generally speaking, the AI geodesic produced too many communities while the linear interpolation produced too few, and neither produced a steady deformation from one data point to the next. The LE geodesic showed an intermediate behaviour in this regard, and a threshold of 0.02 produced the best performance. The number of communities produced by the interpolation did not vary smoothly, but there was a general progression from data point to data point. Changing the threshold value had a small effect on the AI geodesic, but it did nothing to improve the linear interpolation, and using a threshold value of 0.5 actually produced an odd spike in the number of communities halfway between data points. We will return to this phenomenon later.
Number of communities in interpolated proteomics network. The number of communities in the interpolations using thresholds of 0.02 (top left), 0.1 (top right), and 0.5 (bottom left) show significant differences between the three interpolations, while using no threshold (bottom right) produces similar behaviour for all three
In looking at Fig. 8, though, we see that there are many communities relative to the size of the graph – most of these are communities of one or two nodes that are not connected to the rest of the graph. If we only consider communities of a certain size, we can get a more accurate picture of the true community dynamics. In Fig. 9, we look only at communities that contain at least five nodes and consider how the results are affected by different threshold values. When using a threshold value, the results are somewhat similar to those in Fig. 8. For the linear interpolation in Fig. 8, there were too few communities because the interpolated graphs were more connected, and we observe the effects of that increased connectivity here, too: there are fewer communities overall, but the communities that are present tend to be larger, and there are more large communities. The geodesic interpolations, on the other hand, were less connected. Therefore, they had many small communities and relatively few larger ones; the best results came from the LE geodesic with a small threshold.
Number of communities with ≥ 5 members in interpolated proteomics network. The changes in the number of communities with at least five members were perhaps most regular when no threshold was applied (bottom right). Applying thresholds of 0.02 (top left), 0.1 (top right), and 0.5 (bottom left) produced greater differences between the linear and geodesic interpolations
The case without thresholds was more interesting. There, the linear interpolation still often produced too many communities, but the geodesic results did not uniformly produce too few communities. The LE geodesic may have been slightly better than the AI geodesic, but they were both still producing results that looked much more reasonable than they had when we plotted the total number of communities. In fact, those results look even more regular and smooth than the thresholded results.
With some analysis, we can see why using a threshold value of 0.5 produced odd spikes in the number of communities for the linear interpolation. Let us assume that we are interpolating from adjacency matrix A_0 to adjacency matrix A_1. Let us denote the edges in A_1 that are not in A_0 with the adjacency matrix A_add and the edges in A_0 that are not in A_1 with the adjacency matrix A_sub. Our linear interpolation from A_0 at t=0 to A_1 at t=1 would then be
$$ A(t) = A_{0} + t \left(A_{add} - A_{sub}\right) $$
If we use a threshold τ such that matrix entries greater than τ are sent to 1 and entries less than or equal to τ are sent to 0, we get two possible interpolation patterns, each consisting of three constant phases. If τ<0.5, then
$$ A(t) = \left\{ \begin{array}{cc} A_{0} &0 \leq t \leq \tau \\ A_{0} + A_{add} & \tau < t < 1-\tau \\ A_{1} &1-\tau \leq t \leq 1 \end{array} \right. $$
If τ≥0.5, then
$$ A(t) = \left\{ \begin{array}{cc} A_{0} &0 \leq t < 1-\tau \\ A_{0} - A_{sub} &1 - \tau \leq t \leq \tau \\ A_{1} &\tau < t \leq 1 \end{array} \right. $$
A_0−A_sub will be less connected than either of the interpolation end points, and if τ=0.5, then A(t)=A_0−A_sub only at t=0.5. That is why we see that spike in the number of communities.
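The toy example below (our own construction) illustrates the τ≥0.5 pattern: with τ=0.5, the thresholded linear interpolation drops the edge being removed before it gains the edge being added, so the midpoint graph is strictly less connected than either end point.

```python
import numpy as np

A0 = np.array([[0, 1, 0],
               [1, 0, 0],
               [0, 0, 0]], dtype=float)   # contains the edge to be removed
A1 = np.array([[0, 0, 0],
               [0, 0, 1],
               [0, 1, 0]], dtype=float)   # contains the edge to be added

def linear_threshold(t, tau=0.5):
    A = A0 + t * (A1 - A0)
    return (A > tau).astype(int)

for t in (0.0, 0.5, 1.0):
    print(t, int(linear_threshold(t).sum()) // 2)  # edge counts: 1, 0, 1
```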
The community similarity results are shown in Fig. 10. With no thresholding, the linear interpolation performs best. Both of the geodesics tend to become even less similar to the previous snapshot than the snapshot they are progressing towards, resulting in a U-shape, whereas the linear interpolation has a more consistent decrease. All three interpolations, though, show a sharp decrease in similarity immediately after leaving a snapshot. Surprisingly, the LE geodesic also produces more extreme results than the AI geodesic. Thresholding produces the best result, and it does so with the LE geodesic and a threshold of 0.1. The linear interpolation once again shows its piecewise constant behaviour, but a threshold of 0.02 is no longer optimal for the LE geodesic, and the AI geodesic performs reasonably well at that threshold value.
Community similarity in interpolated proteomics network. When no threshold is applied (bottom right), the LE geodesic displays more extreme behaviour than the AI geodesic. A threshold of 0.1 (top right) gives the best performance for the geodesics, a threshold of 0.02 (top left) produces excessive variation in the LE geodesic, and a threshold of 0.5 (bottom left) produces almost piecewise constant behaviour in the geodesics. The linear interpolation produces reasonable results when no threshold is applied
Plots of the energy of the interpolated graphs are shown in Fig. 11. When no threshold is applied, the linear interpolation produces an almost linear progression, whereas both geodesic methods go through significant minima between data points. The geodesics are designed to interpolate Laplacian eigenvalue products linearly, whereas the linear interpolation produces a linear variation in the eigenvalue sum. Linearly changing the sum produces a concave change in the determinant, as we saw in Fig. 5, and we can now see that linearly changing the product produces a convex change in the sum. The interpolations in question are being performed on the Laplacian, not the adjacency matrix, but we can see a clear connection.
Graph energy in interpolated proteomics network. Applying thresholds of 0.5 (bottom left), 0.1 (top right), and 0.02 (top left) produced the same kind of trends in the interpolations' graph energy as was the case in considering the number of communities: low-energy (i.e., less connected) graphs with the AI geodesic, high energy (i.e., more connected) graphs with the linear interpolation, and graphs of varying energy with the LE geodesic. Interpolation without a threshold (bottom right) gave similar performance for the geodesics
When we look at the thresholded results, we see that the linear interpolation consistently produces graphs with high energy values, the AI geodesic produces graphs with low energy values, and the LE geodesic is somewhere in the middle. For the LE geodesic, the best threshold value is around 0.02, where the interpolation produces a relatively steady change in graph energy from data point to data point (unlike the linear and AI geodesic interpolations, which basically plateau between points). This is consistent with what we saw in Fig. 8 and what we know about sparsity and the different interpolations.
Finally, we can look at the average graphs calculated using the three different methods. Table 2 shows the number of communities for each of the averaged graphs, and Table 3 shows the number of communities with at least five nodes in those graphs. The average graph without thresholding showed a much higher level of connectivity than any of the data snapshots, and this was the case for all of the averaging methods. This would make sense if the community structure changed significantly from snapshot to snapshot. Thresholding the average graph produced more reasonable results, though the AI average was highly disconnected, and the linear average showed a very large change in behaviour when the threshold dropped below 0.5.
Table 2 Number of communities in average graph
Table 3 Number of communities with ≥ 5 members in average graph
Table 3 records results congruent with those in Table 2. With the linear mean graph, we see more communities with at least five members than any of the individual graph snapshots have – again, the linear interpolation produces results with increased connectivity. The Riemannian mean graphs without thresholds produce more reasonable numbers of communities, but applying a threshold to the geodesic means severely reduces those numbers. The most reasonable result with a threshold seems to be the LE mean with a threshold of 0.02 or the linear mean with a threshold of 0.5.
Next, we can look at the average similarity in community assignment between the mean graphs and the data snapshots in Table 4. The linear mean performs better than the others when no threshold is used, but with a threshold, the best results come from the Riemannian means (which are almost identical). These values are quite high – both here and in the interpolation results shown in Fig. 10 – and this is likely due to the large number of unconnected nodes.
Table 4 Average similarity in community assignment
The basic trends in the numbers of communities are reflected in the graph energies recorded in Table 5: the linear averages have very high energy and the geodesic averages have very low energies, with the LE averages' energies slightly higher than the AI averages'. What is somewhat surprising, though, is the difference in graph energies between the non-thresholded means – the numbers of communities in each are similar, but the linear average has an energy roughly an order of magnitude higher than the Riemannian averages. The energy of the linear mean without thresholding or with a threshold of 0.5 seem to be the most reasonable values.
Table 5 Graph energy in average graph
In concluding our observations about these averages, we note that the edge weights on the linear average graph will all be multiples of 1/7 (because there are seven data points provided), and thus there will be no difference in results for any two thresholds that lie between \(\frac {n}{7}\) and \(\frac {n+1}{7}\). This explains why the results for threshold values of 0.02 and 0.1 are the same for the linear average, for example. The geodesic interpolations provide no such structure, and our results here would suggest that low thresholds are generally required to get good results out of the geodesic interpolations.
Discussion and future work
Interpolation error
In the Appendix, we have provided error bounds for each geodesic interpolation in terms of distance on their respective manifolds. The actual error incurred will depend on the problem in question. For our purposes here, however, that kind of error, or even entry-wise error, may not be the most important kind to consider; rather, we may care most about the community structure.
Based on our community-related metrics (connectivity and similarity), the LE geodesic, with a threshold for the proteomics data, performed the best. The AI geodesic was too sparse and disconnected, while the linear interpolation was too connected (as expected). The optimal choice of threshold value depended on the metric being considered: 0.1 was by far the best when considering community similarity, but 0.02 was better for the other metrics under consideration. In general, the optimal threshold value will likely depend on the problem in question and the quantities of interest, but we found that the LE geodesic responded to changes in the threshold value more readily than the AI geodesic did.
In this paper, we used the same bias value for all of the proteomics interpolations (\(10^{-6}\)), but as mentioned in the "Disconnected graphs" section, increasing the bias value caused the geodesic interpolation to approach the linear one. Figure 12 shows an example of this where increasing the bias term causes the LE geodesic to behave more and more like the linear interpolation (compare with Fig. 11). Future work may involve experimenting with different bias terms to find a happy medium between the linear and pure geodesic interpolations.
Graph energy in interpolated proteomics network, LE geodesic with varying bias values. Using a bias value of 1.0 essentially produces an average of the geodesic and linear interpolations; a bias value of 1000 produces results similar to the linear interpolation, and a bias value of \(10^{-3}\) produces results similar to those obtained with a bias value of \(10^{-6}\)
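As a concrete illustration of the bias approach, the sketch below adds a small multiple of the identity to each Laplacian before evaluating an LE geodesic point. The assumption that the bias enters as a diagonal shift, and the helper name, are ours; increasing the bias flattens the logarithms and pushes the result towards the entry-wise linear interpolation, consistent with the behaviour shown in Fig. 12.

```python
import numpy as np
from scipy.linalg import expm, logm

def le_geodesic_point(L0, L1, t, bias=1e-6):
    """LE geodesic between two (possibly singular) graph Laplacians,
    made positive definite by adding a small bias to the diagonal."""
    n = L0.shape[0]
    log_s0 = logm(L0 + bias * np.eye(n))
    log_s1 = logm(L1 + bias * np.eye(n))
    return expm((1 - t) * log_s0 + t * log_s1)
```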
One concern about the geodesic interpolations is the transient edges that they produce – edges that do not exist in either end point but emerge and disappear during the interpolation process. The weights on these edges were small, but they could be positive or negative, and they arose in both the synthetic and proteomics data, so they are not simply an artefact of using the bias addition approach to deal with disconnected graphs. Moreover, using a low threshold means that some of these edges may not disappear when that threshold is applied, and therefore they may affect the community structure of the graph. Using a larger bias value to more closely approximate a linear interpolation may ameliorate the problem, but it would be valuable to look in more detail at why these transients occur and how to interpret them from a graph theoretic perspective. For example, does it make sense to say that the 'shortest' or 'least energetic' path from one graph snapshot to another might involve some transient edges? From the perspective of the manifold geometry, it clearly does, as the shortest path between two points is a geodesic, but it is not clear if the same holds true purely from a network perspective.
In short, the geodesic interpolations are not perfect, and there are still unanswered questions, but it is nonetheless clear that linear interpolation is not well suited to graph interpolation if the ultimate goal is community detection. When using a threshold, linear interpolation will always produce a piecewise constant result consisting of three phases. Without thresholding, the linear interpolation inflates overall graph connectivity, and the greater the difference between the two graphs, the greater the inflation. As an extreme example, consider interpolating from a graph with adjacency matrix A to a graph with adjacency matrix 1−A. The result 'halfway' between them would be a fully connected graph with edge weights of 0.5. These issues are particularly prominent when calculating averages over multiple graphs.
Perhaps most saliently for our purposes here, the linear interpolation did not produce steady changes in the community structure between data points – the proteomics data showed that the linear interpolation almost always had markedly fewer communities than the data points it connected. The AI geodesic produced transient edges that were smaller in magnitude and fewer in number than the LE geodesic, but it was also more expensive and produced graphs that were too sparse (e.g., too few communities); the LE geodesic used a similar approach but produced better results when combined with a threshold. Similar trends held true, generally speaking, for the mean graphs as well.
Computational cost and supporting methods
Currently, the computational cost of geodesic interpolation is high because it requires calculating matrix functions like the exponential and logarithm. The LE geodesic is noticeably faster than the AI geodesic in calculating interpolated points, though, due to the fractional matrix powers used in the latter but not the former. Furthermore, the average graph is significantly easier to calculate for the LE geodesic because it has a closed-form expression, whereas the AI geodesic requires an iterated numerical solution. These computational costs are not prohibitive for graphs with hundreds of nodes, but for much larger graphs – say, on the order of \(10^{6}\) nodes – the computational cost could render our methods infeasible. One possible approach would be to project the graph Laplacians to a lower-dimensional space, perform the interpolation there, and then project back to the original space with some kind of low-rank or sparsity criterion; Riemannian optimization on matrix manifolds could be useful for determining an optimal low-rank projection (Vandereycken 2013).
Another option would be to use graph spectral sparsification (Batson et al. 2013) to produce sparse graphs that approximate the spectrum of the original graph. We would then perform the interpolation on those sparse graphs. Given the close relationship between geodesics and spectral properties, this approach may be better-suited to the geodesic interpolations than to the linear interpolation. Either way, it should be possible to come up with an error bound, in terms of the distance between the approximate and true solutions, that relates to the approximation used.
As an alternative to thresholding, it may also be possible to identify the Laplacians of unweighted graphs that are 'closest' to the geodesic trajectory and use them to define a kind of discrete trajectory of unweighted graphs that most closely approximates the geodesic between two unweighted graphs. This could potentially be more accurate than simply thresholding the adjacency matrix entries.
Additional interpolation and clustering methods
Our present interpolation methods match the supplied data points exactly, but the transitions from one interpolation to another are not smooth. It may be valuable to develop more sophisticated interpolation methods that will enforce smoothness, such as polynomial and spline interpolation, using the form of the geodesic interpolations. We may not want to match the supplied graph snapshots exactly, though. Instead, we may need to come up with an approximating curve for noisy data. It is possible to define a geodesic that minimizes the sum of squared distances between it and a set of time-indexed data (much like a linear least-squares regression). We could then solve for the regression coefficients in a manner similar to the calculation of the geodesic mean. Both higher-order interpolations and least-squares interpolations are possible for the AI and LE geodesics, but they may be easier to derive and computationally cheaper for the LE versions than the AI versions. Regardless of which is used, though, the geometries in which the interpolations are embedded would ensure that the Laplacians remain positive-semidefinite and thus representative of real graphs.
There is also the option of using other Laplacians (e.g., a normalized Laplacian). Some of these Laplacians have spectral properties, such as bounded eigenvalues, that may induce better interpolation behaviour. If these Laplacians also have non-constant nullspaces, though, that would add complexity to the interpolation procedure. This would not be a significant hurdle for piecewise geodesic interpolation, but it may be problematic for graph averaging and some of the interpolation expansions described in the paragraph above. We have not yet looked at this problem in detail, however.
Finally, as mentioned previously, the Riemannian framework does not require any one particular community detection method, though it may have some natural connections to spectral clustering. Future work with the framework could include comparing different static clustering methods (either analytically or computationally) to see if there are any that would be particularly well- or ill-suited to this kind of interpolation and averaging.
Conclusions
We described and implemented Riemannian methods for interpolating between and averaging dynamic graph snapshots. Following that, we demonstrated the use of these methods on a synthetically generated dynamic graph and an experimentally produced proteomics network and compared them with entry-wise linear interpolation. The linear interpolation increased graph connectivity between interpolation points, and we showed that when a threshold is used to produce unweighted graphs from the interpolation, the entry-wise linear approach will always produce a three-phase piecewise constant result.
The geodesic interpolations created using the Riemannian methods produced graphs with linearly varying connectivity when applied to connected graph snapshots and produced decreased connectivity between interpolation points when applied to disconnected graph snapshots. We found that using a low threshold on the edge weights improved our results on the disconnected graphs. However, these interpolations produced transient edges (with small positive and negative weights). One area of future work will be to investigate why this behaviour occurs and interpret it in graph theoretic terms. Choosing larger bias values when applying these methods to disconnected graphs may improve the quality of the interpolation, from the perspective of graph connectivity, and it may also reduce the presence of transient edges as well.
Other significant next steps for this work include developing techniques for applying our work to significantly larger graphs and expanding upon our current interpolation methods to produce the Riemannian analogues of polynomial interpolation, spline interpolation, and least-squares regression.
Error estimate calculations
For 1-D linear interpolation, there is a well-defined error bound: a linear interpolation of f(x) from x_0 to x_1 has an error bound of
$$ \frac{1}{8} \left(x_{1}-x_{0}\right)^{2} \max \left| f^{\prime\prime} (\xi) \right| $$
We can then consider the Euclidean distance between a trajectory x(t) and its approximation y(t), t∈[0,1], from which we can calculate an error z(t):
$$\begin{array}{*{20}l} \mathbf{z} (t) &= \mathbf{x}(t) - \mathbf{y}(t), \ \mathbf{z} (0) = \mathbf{z}(1) = \mathbf{0} \end{array} $$
$$\begin{array}{*{20}l} \left\| \mathbf{z} \right\|^{2} &\leq \frac{1}{8} \max \limits_{\tau \in \left[0,1\right]} \frac{d^{2}}{dt^{2}} \left(\left\| \mathbf{z} \right\|^{2}\right) \end{array} $$
$$\begin{array}{*{20}l} \frac{d^{2}}{dt^{2}} \left(\left\| \mathbf{z} \right\|^{2}\right) &= 2 \ddot{\mathbf{x}} \cdot \mathbf{z} - 2 \ddot{\mathbf{y}} \cdot \mathbf{z} + 2 \left\| \dot{\mathbf{z}} \right\|^{2} \end{array} $$
$$\begin{array}{*{20}l} \left\| \mathbf{z} \right\|^{2} &\leq \frac{1}{8} \max \limits_{\tau \in \left[0,1\right]} \left(2 \ddot{\mathbf{x}} \cdot \mathbf{z} - 2 \ddot{\mathbf{y}} \cdot \mathbf{z} + 2 \left\| \dot{\mathbf{x}} - \dot{\mathbf{y}} \right\|^{2} \right) \\ &\leq \frac{1}{4} \max \limits_{\tau \in \left[0,1\right]} \left(\left|\ddot{\mathbf{x}} \cdot \mathbf{z}\right| + \left|\ddot{\mathbf{y}} \cdot \mathbf{z}\right| + \left\| \dot{\mathbf{x}} - \dot{\mathbf{y}} \right\|^{2} \right) \end{array} $$
We cannot say that a linear interpolation will always have the smallest amount of error, but a linear interpolation would have \(\ddot {\mathbf {y}} = 0\), so we would expect it to have a smaller error bound than an arbitrary nonlinear interpolation (i.e., one not using higher-order derivative information).
AI Geodesic
We end up with a similar result in considering the distance between a dynamic graph trajectory X(t) through the positive-definite subspace of the Laplacian and an AI geodesic interpolation Y(t) between X(0)=R0 and X(1)=R1:
$$\begin{array}{*{20}l} d^{2} \left(X,Y\right) &= \left\| \ln X^{-\frac{1}{2}} Y X^{-\frac{1}{2}} \right\|^{2} = \text{tr} \left(\Omega^{2}\right) \end{array} $$
$$\begin{array}{*{20}l} \Omega &= \ln \Psi \Leftrightarrow \exp \Omega = \Psi = X^{-\frac{1}{2}} Y X^{-\frac{1}{2}} \end{array} $$
$$\begin{array}{*{20}l} \frac{d}{dt} \left(d^{2} \left(X,Y\right)\right) &= \frac{d}{dt} \left(\text{tr} \left(\Omega^{2}\right) \right) = 2 \text{tr}\left(\Omega \dot{\Omega}\right) \end{array} $$
$$\begin{array}{*{20}l} \frac{d^{2}}{dt^{2}} \left(d^{2} \left(X,Y\right)\right) &= 2\frac{d}{dt} \left(\text{tr}\left(\Omega \dot{\Omega}\right)\right) \end{array} $$
Ω and Ψ commute with each other and with powers of each other (including negative powers) because they have the same eigenvectors. Traces of matrix products are also invariant under cyclic permutations of those products. We will use this commutativity, together with Greene's results on traces of matrix products (Greene 2014), to derive an expression for \(\text{tr}\left(\Omega \dot{\Omega}\right)\):
$$\begin{array}{*{20}l} \Omega \frac{d}{dt} \left(\exp \Omega \right) \exp \left(- \Omega\right) &= \Omega \dot{\Psi} \Psi^{-1} \end{array} $$
$$\begin{array}{*{20}l} \text{tr} \left(\Omega \frac{d}{dt} \left(\exp \Omega \right) \exp \left(- \Omega\right) \right) &= \text{tr} \left(\Omega \sum \limits_{k=0} \frac{1}{k!} \sum \limits_{r=0}^{k-1} \Omega^{r} \dot{\Omega} \Omega^{k-r-1} \exp \left(-\Omega\right) \right) \\ &= \sum \limits_{k=0} \frac{1}{k!} \sum \limits_{r=0}^{k-1} \text{tr} \left(\Omega \Omega^{r} \dot{\Omega} \Omega^{k-r-1} \exp \left(-\Omega\right) \right) \\ &= \sum \limits_{k=0} \frac{1}{k!} \sum \limits_{r=0}^{k-1} \text{tr} \left(\Omega \dot{\Omega} \Omega^{k-r-1} \exp \left(-\Omega\right) \Omega^{r} \right) \\ &= \sum \limits_{k=0} \frac{1}{k!} \sum \limits_{r=0}^{k-1} \text{tr} \left(\Omega \dot{\Omega} \Omega^{k-r-1} \Omega^{r} \exp \left(-\Omega\right) \right) \\ &= \sum \limits_{k=0} \frac{1}{k!} \sum \limits_{r=0}^{k-1} \text{tr} \left(\Omega \dot{\Omega} \Omega^{k-1} \exp \left(-\Omega\right) \right) \\ &= \text{tr} \left(\Omega \dot{\Omega} \left(\sum \limits_{k=1} \frac{1}{k!} k \Omega^{k-1}\right) \exp \left(-\Omega\right) \right) \\ &= \text{tr} \left(\Omega \dot{\Omega} \left(\sum \limits_{k=1} \frac{1}{\left(k-1\right)!} \Omega^{k-1}\right) \exp \left(-\Omega\right) \right) \\ &= \text{tr} \left(\Omega \dot{\Omega} \exp \Omega \exp \left(-\Omega\right) \right) = \text{tr}\left(\Omega \dot{\Omega} \right) \end{array} $$
$$\begin{array}{*{20}l} \Rightarrow \text{tr}\left(\Omega \dot{\Omega}\right) &= \text{tr} \left(\Omega \dot{\Psi} \Psi^{-1}\right) \end{array} $$
$$\begin{array}{*{20}l} \Omega \dot{\Psi} \Psi^{-1}&= \Omega \left(\frac{d}{dt} \left(X^{-\frac{1}{2}}\right) Y X^{-\frac{1}{2}} + X^{-\frac{1}{2}} \dot{Y} X^{-\frac{1}{2}} + X^{-\frac{1}{2}} Y \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) \right) X^{\frac{1}{2}} Y^{-1} X^{\frac{1}{2}} \\ &= \Omega \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) X^{\frac{1}{2}} + \Omega X^{-\frac{1}{2}} \dot{Y} Y^{-1} X^{\frac{1}{2}} + \Omega X^{-\frac{1}{2}} Y \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) \Psi^{-1} \end{array} $$
$$\begin{array}{*{20}l} \text{tr} \left(\Omega \dot{\Psi} \Psi^{-1}\right) &= \text{tr} \left(\Omega \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) X^{\frac{1}{2}}\right) + \text{tr} \left(\Omega X^{-\frac{1}{2}} \dot{Y} Y^{-1} X^{\frac{1}{2}} \right) \\ & \quad + \text{tr} \left(\Omega X^{-\frac{1}{2}} Y \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) \Psi^{-1}\right) \end{array} $$
Since Ψ−1 and Ω commute,
$$\begin{array}{*{20}l} \text{tr} \left(\Omega X^{-\frac{1}{2}} Y \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) \Psi^{-1}\right) &= \text{tr} \left(\Psi^{-1} \Omega X^{-\frac{1}{2}} Y \frac{d}{dt} \left(X^{-\frac{1}{2}}\right)\right)\\ &= \text{tr} \left(\Omega \Psi^{-1} X^{-\frac{1}{2}} Y \frac{d}{dt} \left(X^{-\frac{1}{2}}\right)\right)\\ &= \text{tr} \left(\Omega X^{\frac{1}{2}} Y^{-1} X^{\frac{1}{2}} X^{-\frac{1}{2}} Y \frac{d}{dt} \left(X^{-\frac{1}{2}}\right)\right) \\ &= \text{tr} \left(\Omega X^{\frac{1}{2}}\frac{d}{dt} \left(X^{-\frac{1}{2}}\right)\right) \end{array} $$
Since \(X^{-\frac {1}{2}} X^{-\frac {1}{2}} = X^{-1}\),
$$\begin{array}{*{20}l} \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) X^{-\frac{1}{2}} + X^{-\frac{1}{2}} \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) = \frac{d}{dt} \left(X^{-1}\right) = - X^{-1} \dot{X} X^{-1} \end{array} $$
$$\begin{array}{*{20}l} \Rightarrow X^{\frac{1}{2}} \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) + \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) X^{\frac{1}{2}} = - X^{-\frac{1}{2}} \dot{X} X^{-\frac{1}{2}} \end{array} $$
$$\begin{array}{*{20}l} \text{tr} \left(\Omega \dot{\Psi} \Psi^{-1}\right) &= \text{tr} \left(\Omega \frac{d}{dt} \left(X^{-\frac{1}{2}}\right) X^{\frac{1}{2}}\right) + \text{tr} \left(\Omega X^{-\frac{1}{2}} \dot{Y} Y^{-1} X^{\frac{1}{2}} \right) + \text{tr} \left(\Omega X^{\frac{1}{2}}\frac{d}{dt} \left(X^{-\frac{1}{2}}\right)\right) \\ &= \text{tr} \left(\Omega X^{-\frac{1}{2}} \dot{Y} Y^{-1} X^{\frac{1}{2}} \right) - \text{tr} \left(\Omega X^{-\frac{1}{2}} \dot{X} X^{-\frac{1}{2}}\right) \\ &= \text{tr} \left(\Omega X^{-\frac{1}{2}} \dot{Y} Y^{-1} X X^{-\frac{1}{2}} \right) - \text{tr} \left(\Omega X^{-\frac{1}{2}} \dot{X} X^{-\frac{1}{2}}\right) \\ &= \text{tr} \left(X^{-\frac{1}{2}} \Omega X^{-\frac{1}{2}} \dot{Y} Y^{-1} X \right) - \text{tr} \left(X^{-\frac{1}{2}}\Omega X^{-\frac{1}{2}} \dot{X}\right) \end{array} $$
For the AI geodesic interpolation (Pennec et al. 2006),
$$\begin{array}{*{20}l} \dot{Y}_{g} (t) &= R_{0}^{\frac{1}{2}} C^{\frac{1}{2}} \exp (Ct) C^{\frac{1}{2}} R_{0}^{\frac{1}{2}} \end{array} $$
$$\begin{array}{*{20}l} \dot{Y}_{g} Y^{-1}_{g} &= R_{0}^{\frac{1}{2}} C R_{0}^{-\frac{1}{2}} \end{array} $$
$$\begin{array}{*{20}l} C &= \ln R_{0}^{-\frac{1}{2}} R_{1} R_{0}^{-\frac{1}{2}} \end{array} $$
since C (and its powers) commute with exp(Ct). Therefore, \(\dot {Y}_{g} Y^{-1}_{g}\) is constant in time. For the entry-wise linear interpolation, however, there is no closed-form expression for \(\dot {Y}_{l}\) or \(Y^{-1}_{l}\); \(Y_{l}\) is the positive-definite component of the interpolated Laplacian. In general, though, \(\dot {Y}_{l} Y^{-1}_{l}\) will not be constant in time. We can then consider the second derivative of the original distance function:
$$\begin{array}{*{20}l}{} \frac{d}{dt} \left(\text{tr} \left(\Omega \dot{\Psi} \Psi^{-1}\right) \right) &= \text{tr}\left(\frac{d}{dt} \left(X^{-\frac{1}{2}} \Omega X^{-\frac{1}{2}}\right) \dot{Y} Y^{-1} X\right) + \text{tr} \left(X^{-\frac{1}{2}} \Omega X^{-\frac{1}{2}} \frac{d}{dt} \left(\dot{Y} Y^{-1}\right) X \right) \\ & \quad + \text{tr} \left(X^{-\frac{1}{2}} \Omega X^{-\frac{1}{2}} \dot{Y} Y^{-1} \dot{X} \right) - \text{tr}\left(\frac{d}{dt} \left(X^{-\frac{1}{2}} \Omega X^{-\frac{1}{2}} \right) \dot{X}\right)\\ &\quad - \text{tr}\left(X^{-\frac{1}{2}} \Omega X^{-\frac{1}{2}} \ddot{X}\right) \end{array} $$
For the geodesic, \(\frac {d}{dt} \left (\dot {Y}_{g} Y^{-1}_{g}\right) = 0\), but this will not be the case for the entry-wise linear interpolation. Also note the recurrent \(X^{-\frac {1}{2}} \Omega X^{-\frac {1}{2}}\) term: \(X^{\frac {1}{2}} \Omega X^{\frac {1}{2}}\) is the vector from X to Y (Pennec et al. 2006), so \(X^{-\frac {1}{2}} \Omega X^{-\frac {1}{2}}\) is essentially a measure of trajectory discrepancy rescaled by X.
As with the vector trajectory previously, we cannot say that a given interpolation will always be the most accurate one. However, one of the error terms disappears for the AI geodesic interpolation; all other things being equal, it is reasonable to expect that the error on the geodesic interpolation will be, at the very least, less variable than the error on the entry-wise linear interpolation. For extrapolation, the error estimate is no longer relevant for the entry-wise linear method because such an extrapolation is not guaranteed to remain positive-semidefinite. However, the error on the AI geodesic extrapolation is well-defined by the remainder formula in Taylor's theorem. For example, extrapolating past R1 to t>1 using an AI geodesic built by interpolating from R0 to R1 would produce the following error bound:
$$ d^{2} \left(X(t),Y(t)\right) \leq \left(t-1\right) \left| \frac{d}{dt} \left(d^{2} \left(X,Y\right)\right) \right|_{t=1} + \frac{1}{2} \left(t-1\right)^{2} \max \limits_{\tau \in \left[1,t\right]} \left| \frac{d^{2}}{dt^{2}} \left(d^{2}\left(X,Y\right) \right) \right| $$
with the derivatives as previously calculated.
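For completeness, the AI geodesic appearing in these estimates can be evaluated numerically as in the sketch below, which follows the formulas above; the matrices in the sanity check are arbitrary examples.

```python
import numpy as np
from scipy.linalg import expm, logm, fractional_matrix_power as fmp

def ai_geodesic_point(R0, R1, t):
    """Affine-invariant geodesic between positive-definite matrices, evaluated at t in [0, 1]."""
    R0_half = fmp(R0, 0.5)
    R0_inv_half = fmp(R0, -0.5)
    C = logm(R0_inv_half @ R1 @ R0_inv_half)
    return R0_half @ expm(t * C) @ R0_half

# sanity check: the end points are recovered up to numerical error
R0 = np.array([[2.0, 0.3], [0.3, 1.0]])
R1 = np.array([[1.0, -0.2], [-0.2, 3.0]])
assert np.allclose(ai_geodesic_point(R0, R1, 0.0), R0, atol=1e-6)
assert np.allclose(ai_geodesic_point(R0, R1, 1.0), R1, atol=1e-6)
```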
LE geodesic
For the LE geodesic,
$$\begin{array}{*{20}l} d^{2}\left(X,Y\right) &= \left\| \ln X - \ln Y \right\|^{2} = \text{tr} \left(\Omega^{2}\right) \end{array} $$
$$\begin{array}{*{20}l} \Omega &= V - W \end{array} $$
$$\begin{array}{*{20}l} V &= \ln X \end{array} $$
$$\begin{array}{*{20}l} W &= \ln Y \end{array} $$
$$\begin{array}{*{20}l} \frac{d}{dt} \left(d^{2}\left(X,Y\right)\right) &= 2 \text{tr} \left(\Omega \dot{\Omega}\right) \end{array} $$
$$\begin{array}{*{20}l} \text{tr} \left(\Omega \dot{\Omega}\right) &= \text{tr} \left(\left(V - W\right) \left(\dot{V} - \dot{W}\right) \right) \\ &= \text{tr} \left(V \dot{V}\right) - \text{tr} \left(W \dot{V}\right) - \text{tr} \left(V \dot{W}\right) + \text{tr} \left(W \dot{W}\right) \end{array} $$
$$\begin{array}{*{20}l} \frac{d}{dt} \left(\text{tr} \left(\Omega \dot{\Omega}\right) \right) &= \text{tr} \left(\frac{d}{dt} \left(V \dot{V}\right)\right) - \text{tr} \left(\frac{d}{dt} \left(W \dot{V}\right)\right) - \text{tr} \left(\frac{d}{dt} \left(V \dot{W}\right)\right)\\ &\quad + \text{tr} \left(\frac{d}{dt} \left(W \dot{W}\right)\!\right) \end{array} $$
In general, all of these derivative terms will be non-zero. However, for the LE geodesic interpolation
$$\begin{array}{*{20}l} Y &= \exp \left(\left(1-t\right) \ln S_{2} + t \ln S_{1}\right) \end{array} $$
$$\begin{array}{*{20}l} W &= \ln Y = \left(1-t\right) \ln S_{2} + t \ln S_{1} \end{array} $$
$$\begin{array}{*{20}l} \dot{W} &= \frac{d}{dt} \left(\ln Y\right) = \ln S_{2} - \ln S_{1} \end{array} $$
$$\begin{array}{*{20}l} \ddot{W} &= 0 \end{array} $$
Several of the terms in \(\frac {d}{dt} \left (\text {tr} \left (\Omega \dot {\Omega }\right)\right)\) will therefore be zero for the LE geodesic. As such, we would expect the error from the LE geodesic to be less than the error from the entry-wise linear interpolation, for the same reasons that we would expect the AI geodesic error to be smaller than that of the entry-wise linear interpolation. We can then plug these results into Eq. 40 to get error bounds for the LE geodesic.
Absil, PA, Mahony R, Sepulchre R (2007) Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, New Jersey.
Arsigny, V, Fillard P, Pennec X, Ayache N (2007) Geometric means in a novel vector space structure on symmetric positive-definite matrices. SIAM J Matrix Anal Appl 29(1):328–347.
Batson, J, Spielman DA, Srivastava N, Teng SH (2013) Spectral sparsification of graphs: Theory and algorithms. Commun ACM 56:87–94.
Blondel, V, Guillaume JL, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech Theory Exp. P10008. http://iopscience.iop.org/article/10.1088/1742-5468/2008/10/P10008/pdf.
Boccaletti, S, Latora V, Moreno Y, Chavez M, Hwang DU (2006) Complex networks: structure and dynamics. Phys Rep 424:175–308.
Boguná, M, Papadopoulos F, Krioukov D (2010) Sustaining the internet with hyperbolic mapping. Nat Commun 1:62.
Bonnabel, S, Sepulchre R (2009) Riemannian metric and geometric mean for positive semidefinite matrices of fixed rank. SIAM J Matrix Anal Appl 31:1055–1070.
Boothby, WM (1986) An Introduction to Differentiable Manifolds and Riemannian Geometry. Academic Press, Orlando.
Brualdi, RA (2006) Energy of a graph In: Notes to AIM Workshop on Spectra of Families of Matrices Described by Graphs, Digraphs, and Sign Patterns.
Cazabet, R, Amblard F (2014) Dynamic Community Detection. In: Alhajj R Rokne J (eds)Encyclopedia of Social Network Analysis and Mining, 404–414.. Springer, New York.
Domokos, G, Sipos AR, Szabó T (2012) The mechanics of rocking stones:equilibria of separated scales. Math Geosci 44:71–89.
Fenn, DJ, Porter MA, Mucha PJ, McDonald M, Williams S, Johnson NF, Jones NS (2012) Dynamical clustering of exchange rates. Quant Finan 12(10):1493–1520.
Fortunato, S (2010) Community detection in graphs. Phys Rep 486(3–5):75–174.
Girvan, M, Newman ME (2002) Community structure in social and biological networks. Proc Natl Acad Sci 99(12):7821–7826.
Greene, J (2014) Traces of matrix products. Electron J Matrix Algebra. 27:716–734.
Harris, JM, Hirst JL, Mossinghoff M (2008) Combinatorics and Graph Theory. Springer, New York.
Keshava Prasad, T, Goel R, Kandasamy K, Keerthikumar S, Kumar S, Mathivanan S, Telikicherla D, Raju R, Shafreen B, Venugopal A, et al. (2008) Human protein reference database—2009 update. Nucleic Acids Res 37(suppl_1):767–772.
Krioukov, D, Papadopoulos F, Vahdat A, Boguñá M (2009) Curvature and temperature of complex networks. Phys Rev E 80:035101.
Krioukov, D, Papadopoulos F, Kitsak M, Vahdat A, Boguñá M (2010) Hyperbolic geometry of complex networks. Phys Rev E 82:036106.
Lambiotte, R, Delvenne JC, Barahona M (2014) Random walks, markov processes and the multiscale modular organization of complex networks. IEEE Trans Netw Sci Eng 1(2):76–90.
Mitchell, HD, Eisfeld AJ, Sims AC, McDermott JE, Matzke MM, Webb-Robertson BJM, Tilton SC, Tchitchek N, Josset L, Li C, et al. (2013) A network integration approach to predict conserved regulators related to pathogenicity of influenza and SARS-CoV respiratory viruses. PLoS ONE 8(7):69374.
Mucha, PJ, Richardson T, Macon K, Porter MA, Onnela JP (2010) Community structure in time-dependent, multiscale, and multiplex networks. Science 328(5980):876–878.
Newman, MEJ (2010) Networks: An Introduction. Oxford University Press, Oxford, United Kingdom.
Nguyen, NP, Dinh TN, Shen Y, Thai MT (2014) Dynamic social community detection and its applications. Plos ONE 9(4):1–18.
Pennec, X, Fillard P, Ayache N (2006) A riemannian framework for tensor computing. Int J Comput Vis 66:41–66.
Rand, WM (1971) Objective criteria for the evaluation of clustering methods. J Am Stat Assoc 66(336):846–850.
Tantipathananandh, C, Berger-Wolf TY (2011) Finding communities in dynamic social networks In: 2011 IEEE 11th International Conference on Data Mining, 1236–1241.. IEEE, Vancouver.
Traag, VA, Bruggeman J (2009) Community detection in networks with positive and negative links. Phys Rev E 80(3):036115.
Vandereycken, B (2013) Low-rank matrix completion by riemannian optimization. SIAM J Optim 23:1214–1236.
The authors would like to thank Jason McDermott for providing the proteomics data used in this study.
This work was funded by the Microbiomes in Transition (MinT) Initiative at the Pacific Northwest National Laboratory.
The datasets supporting the conclusions of this article are included in the article's additional files.
Pacific Northwest National Laboratory, 902 Battelle Boulevard, Richland, 99352, WA, United States
Craig Bakker, Mahantesh Halappanavar & Arun Visweswara Sathanur
Craig Bakker
Mahantesh Halappanavar
Arun Visweswara Sathanur
CB proposed the Riemannian framework, performed the mathematical derivations, implemented the methods, generated results, and wrote the main body of the paper. MH obtained the proteomics data, provided the community similarity metric, co-wrote the literature review, offered comments and corrections on the manuscript, and suggested reviewers. AS proposed the bias method, wrote code to assist in the community detection, co-wrote the literature review, generated the synthetic data, offered comments and corrections on the manuscript, and suggested reviewers. All authors have read and approved the manuscript.
Correspondence to Craig Bakker.
The synthetic data snapshot files begin with 'comm-fixed', and each snapshot file is suffixed with its time index. The proteomics snapshot data files begin with 'ppn-numeric' and they are also suffixed with their time indices. The .edges files may be read with a text editor; we recommend TextPad. (ZIP 46 kb)
The Python implementation of the methods described in the paper. (PY 24 kb)
Bakker, C., Halappanavar, M. & Visweswara Sathanur, A. Dynamic graphs, community detection, and Riemannian geometry. Appl Netw Sci 3, 3 (2018). https://doi.org/10.1007/s41109-018-0059-2
Keywords: Community detection, Riemannian geometry
Posted on May 25, 2022 by woit
Various things that may be of interest:
MSRI in Berkeley has announced a $70 million gift from Jim and Marilyn Simons, and Henry and Marsha Laufer. This gift will make up the bulk of a planned endowment increase of $100 million and is the largest endowment gift ever made to a US-based math institute. The success of the Renaissance Technologies hedge fund is what has made gifts on this scale possible. This summer MSRI will be renamed the "Simons Laufer Mathematical Sciences Institute", and the directorship will pass from David Eisenbud to Tatiana Toro.
The journal Inference has just published an article by Daniel Jassby, which gives a highly discouraging view of the prospects for magnetic confinement fusion devices. Jassby, who worked for many years at the Princeton Plasma Physics Lab, argues that performance of magnetic confinement fusion systems has not much advanced in a quarter century, making for very bleak prospects that such designs will lead to a workable power plant in the foreseeable future. He sees inertial confinement fusion systems like the National Ignition Facility at Livermore as making some progress, but ends with:
The technological hurdles for implementing an ICF-based power system are so numerous and formidable that many decades will be required to resolve them—if they can indeed be overcome.
I've been spending some time reading Grothendieck's Récoltes et Semailles, which is a simultaneously fascinating and frustrating experience. I've made it almost to the end of the first part, except that there will be another forty pages or so of notes to go. To get to the first part involved starting by reading through about two hundred pages of four layers of introduction. It seems that basically Grothendieck did no editing. Once he was done writing the first part, as he thought of more to say he'd add notes. He distributed copies to various other mathematicians, and then kept adding new introductions, with various references to how this fit in with more technical mathematical documents he was working on (La "Longue Marche" à Travers la Théorie de Galois, À la poursuite des champs).
After the first part, looking ahead there's the daunting prospect of 1500 pages with the theme of examining his deepest mathematical ideas and what he felt was the "burial" that he and his ideas had been subjected to after his leaving active involvement with the math research community in 1970. Quite a few years ago I did spend some time looking through this part to try and learn more about Grothendieck's mathematical ideas. I'll see if I can try again, with the advantage of now knowing somewhat more about the mathematical background.
Besides the frustrating aspects, what has struck me most about this is that there are many beautifully written sections, capturing Grothendieck's feeling for the beauty of the deepest ideas in mathematics. One gets to see what it looked like from the inside to a genius as he worked, often together with others, on a project that revolutionized how we think about mathematics. This material is really remarkable, although embedded in far too much that is extraneous and repetitive. The text desperately needs an editor.
There are various places online one can find parts of the book and other related material, sometimes translated. Two places to look are the Grothendieck Circle, and Mateo Carmona's site.
For an up-to-date project on reworking foundations of mathematics (with an eye to eliminating analysis…), Dustin Clausen and Peter Scholze are now teaching a course on Condensed Mathematics and Complex Geometry, lecture notes here.
I noticed that the Harvard math department website now has an article on Demystifying Math 55. The past couple years this course has been taught by Denis Auroux, and one can find detailed course materials including lecture notes at his website.
The current version of the course tries to cover pretty much a standard undergraduate pure math curriculum in two semesters, with the first semester linear algebra, group theory and finite group representations, the second real and complex analysis. The course has gone through various incarnations over a long history, and has its own Wikipedia page. For various articles written about the course over the years, see here, here (about a Pavel Etingof version) and here (about a Dennis Gaitsgory version).
I took the course in 1975-76, when the fall semester was taught by mathematical physicist Konrad Osterwalder, who covered some linear algebra and analysis rigorously, following the course textbook Advanced Calculus by Loomis and Sternberg. The spring semester was rather different, with John Hubbard sometimes following Hirsch and Smale, sometimes giving us research-level papers about dynamical systems to read, and then telling us to read and work through Spivak's Calculus on Manifolds over reading period.
My experience with the course was somewhat different than that described in the articles above, partly due to the particular instructors and their choices, partly due to the fact that I was more focused on learning as much advanced physics as possible. I don't remember spending excessive amounts of time on the course, nor do I remember anyone I knew or ran into being especially interested in or impressed by my taking this particular course. What was a new experience was that it was clear the first semester that I was a rather average student in the class, not like in my high school classes. The second semester about half the students had dropped and I guess I was probably distinctly less than average. The current iteration of the course looks quite good for the kind of ambitious math student it is aimed at, and it would be interesting if a new textbook ever gets written.
Update: One more related item. This week Chapman University is hosting a conference about Grothendieck. Kevin Buzzard has posted his slides here.
22 Responses to This and That
Analyst says:
What do Clausen and Scholze's notes have to do with "reworking the foundations of mathematics," much less about eliminating analysis therefrom? I'm not familiar with their work but their stated goal is just to develop some of the basic theory of the constructions from their research papers and to illustrate it by proving some old theorems.
Analyst,
My reading of what Clausen/Scholze are trying to do is revamp the foundations of complex analysis/complex geometry so the subject is not based on analysis (e.g. differentiable functions), but on algebra. They explicitly advertise that their way of proving basic theorems of the subject is "analysis-free". A motivation for this is arithmetic geometry, where Scholze would like to prove things like real local Langlands by the same p-adic methods used at finite primes:
"Part of our goal is to develop foundations for analytic geometry that treat archimedean and non-archimedean geometry on equal grounds; and we will proceed by making archimedean geometry more similar to non-archimedean geometry.
Joseph Healy says:
I think one thing missing from Daniel Jassby's commentary is a current trend in managing tokamak plasma in real-time with AI.
That's not an accurate characterization of the notes. They aren't revamping "the foundations of complex analysis/complex geometry," they're just proving some core theorems about compact complex manifolds. (Note that complex analysis and complex geometry encompass far more than compact manifolds…) Further, it's no surprise this can be done algebraically, because GAGA tells you that compact complex manifolds are equivalent to complex algebraic varieties. One could already prove all of these things from an algebraic perspective. The point of the notes seems to be to give new proofs along these lines that illustrate their new techniques.
This is an interesting project (especially as a way to make their techniques more accessible), but I feel it is inappropriate to claim it is "reworking foundations of mathematics (with an eye to eliminating analysis…)." There's nothing about foundations in the notes, and the fact that these theorems can be "algebra-ized" is known to any PhD student in complex geometry.
mls says:
@ anonymous
Dr. Woit has a history of misrepresenting anything in mathematics and related subjects which does not conform to his personal beliefs. This has been especially true with every mention of "foundations."
At least the last paragraph of his present post includes an honest history explaining his profound ignorance of mathematics.
Will Sawin says:
I'm going to continue the trend of every commenter having a slightly different interpretation. In particular, I think one should look at this in the context of Scholze's previous work in condensed mathematics.
I think it's important to begin with the observation that in the theory of complex manifolds one often wants to use analytic methods and one also often wants to use algebraic methods potentially involving derived categories and such. A basic concern is that, in some future arguments, one could run into trouble when these two strands don't weave together properly.
For example, if one wants to keep track of size of something using Banach spaces, and also use derived categories, one could be stymied by the fact that Banach spaces do not form an abelian category, and therefore can't be used to construct a derived category. Scholze's earlier work on condensed, liquid, and solid vector spaces provides a fix for this by defining an abelian category that includes Banach spaces.
I believe Scholze's work in this course is intended to provide further tools on the same lines that are potentially useful in future research. Specifically this is for research that already intersects analysis and algebra – I don't think there is any intention to replace analysis in the proof of existence of solutions of some PDE or the prime number theorem or something like that. But it's not specific to the point that there is some Weil-conjectures-like goal in mind.
@anonymous
> because GAGA tells you that compact complex manifolds are equivalent to complex algebraic varieties.
This is not true at all. It only tells you that, for a manifold that is already algebraic, studying it using analytic and algebraic tools will give you the same answers (the same compact submanifolds, the same cohomology, …)
> One could already prove all of these things from an algebraic perspective.
This seems like a silly point of view when one of the things being proven in the notes is GAGA, i.e. exactly the bridge that links the algebraic and analytic perspectives. One can't use GAGA to prove GAGA.
According to Laurent Fargues, one of the goals is the Hodge decomposition. Do you consider this something that can be proven entirely algebraically?
Johan says:
If I remember correctly, Columbia Library has one of the original copies of Récoltes et Semailles (sent by Grothendieck to Sammy Eilenberg), no? It may even be autographed, in any case it really belongs in the rare books collection.
Jonathan Chiche says:
I believe most, if not all, original copies of "Récoltes et semailles" are inscribed by Grothendieck. In 2008/2009 I read one of the two copies owned by Paris N university (with N=7 probably). One of them was inscribed by Grothendieck to Faltings, the other to Leray. If my memory serves me right, the latter had a note written by Leray pointing to an error in the text (regarding himself), and one of the two copies (I cannot recall which) was incomplete by roughly one half. It would be interesting to have a list of institutionally and privately owned original copies, together with the text of Grothendieck's dedication. I disagree that the text needs an editor and remember it as a great reading for an aspiring mathematician.
Dustin Clausen says:
For what it's worth, I agree with what Will Sawin says (minus the attribution of everything solely to Scholze, of course!). I never thought of the goal of condensed math, or this approach to analytic geometry, as being to eliminate analysis. Actually, a lot of analysis shows up in the foundations. Rather, the goal is to put parts of analysis and topology in a new framework, one which allows to mix more easily with algebra. Then we can make formal arguments of a certain algebraic style, which however lead to analytic and topological conclusions.
In the example of the theorems we aim to reprove in the course, this means that the analysis is black-boxed into some foundational material about liquid vector spaces, and then the rest of the argument is in a sense purely algebraic. But I don't really think of that as eliminating analysis. In fact, in my view it is neither desirable, nor even possible, to eliminate analysis from the study of complex geometry!
mls,
There was a small element in my post of trolling of analysts/foundations of math aficionados. I won't really apologize since it was kind of successful, and in the spirit of Clausen/Scholze's claim to be making the subject "analysis-free". I see that Clausen has written in here to clarify, which is great.
There is some connection of this to my math education experiences at Harvard. Doing about average in a Math 55 class that had a bunch of the top Math Olympiad performers in it even though I wasn't putting much time into it (much more of my time was going into the quantum mechanics class I was taking) wasn't such a bad performance. More relevant, the next math class I took was a graduate course in analysis, taught by Andrew Gleason. This course extensively covered set theory, point set topology and measure theory, and had us spending lots of time puzzling out questions like whether a space was $T_{2\frac{1}{2}}$. Around the same time I also took a set theory course from Quine (I did pretty well in both from what I recall).
In retrospect taking these courses (Gleason and Quine) was a big mistake and a waste of time, caused by my idea that, both in physics and in math, what I should be doing was focusing on learning the "foundations", from which understanding of everything else would flow. I should have been taking other courses which were not "foundational", but would have taught me about some of the great unifying concepts that bring together a wide range of beautiful mathematical structures (as well as physics!). Someone should have told me to take representation theory…
Łukasz says:
@Analyst @Peter Woit
In my opinion, the phrase "reworking the foundations of mathematics" connotes rather research in mathematical logic or set theory. For example, classical arithmetic is based on classical first-order logic (some people prefer, in this context, to speak of the classical functional calculus).
I did clarify this later by specifying "foundations of complex analysis/complex geometry" and note that Clausen/Scholze talk about "a new foundation for combining algebra and topology". I'm well aware that many people identify "foundations of mathematics" with mathematical logic/set theory. That identification led to the misguided educational experiences of my youth that I explained, so I rather intentionally used the term to refer to a broader (and, if you ask me, more interesting) set of issues.
@Dustin Clausen
Sorry about that! Maybe next time I will make up for it by crediting it entirely to you and not at all to Scholze…
What does "GAGA" stand for?
For most people, a well-known performer. For some mathematicians, more relevant is
https://en.wikipedia.org/wiki/Algebraic_geometry_and_analytic_geometry
Peter Woit,
For *me*, it was what I was trying to figure out: what it means!
Thanks for the reference! [For the link-lazy among us, "GAGA" refers to "Geometrie Algebrique et Geometrie Analytique", a foundational paper in algebraic geometry published by the redoubtable Jean-Pierre Serre in 1956 (Full disclosure: my personal knowledge of such things is a trifle on the nonexistent side.)]
David Brown says:
Jassby's 2018 article might be worth studying:
https://thebulletin.org/2018/02/iter-is-a-showcase-for-the-drawbacks-of-fusion-energy/
In any research, the yea-sayers who are overly optimistic might have a strong tendency to drive out the nay-sayers — because cash and career advancement are at stake.
And for those who do not speak French, let us not forget that "gaga" in popular language means "senile" (more or less, maybe Peter would have a more accurate translation), and may also be used for "to have a crush on someone"
Paul D. says:
The big problem with these DT fusion schemes is that the volumetric power density is just terrible. An existing PWR fission reactor's primary pressure vessel might have a power density of 20 MW/m^3; the volumetric gross fusion power density of ITER is 0.05 MW/m^3. It's very difficult to see how DT fusion can ever be cheaper than fission, given that the reactor itself will be at least an order of magnitude larger (and much more complex).
Note that this problem has nothing to do with plasma physics, but rather is due to limits on heat and radiation transfer at the wall of the reactor vessel and the square-cube law. Totally solve plasma confinement and the problem is still there, at least for reactors burning DT.
This issue has been known for approaching 40 years, if not longer. I find it incredible the press is still spouting glowing nonsense about DT fusion.
http://orcutt.net/weblog/wp-content/uploads/2015/08/The-Trouble-With-Fusion_MIT_Tech_Review_1983.pdf
Alan Post says:
I think Jassby's criticism of SPARC is a bit off: he points out that higher field strength will produce higher mechanical stresses, but doesn't make any argument that this is insurmountable. He also says that SPARC should focus on improving Q, not cost-effectiveness. But a big reason that progress has slowed is that the machines have become so expensive, so these are connected. Additionally, increasing field strength directly increases Q, for a fixed-size machine.
David Roberts says:
If you wanted to add another update about the Chapman conference, the videos are now available.
John Doe says:
I have math55a and math55b course notes on my computer. They can be found on the internet, easily. The full course notes are about ~100 pages for each part and they cover everything that I covered in about 4 years at my university, but in very brief detail. The algebra course even covers topics like category theory and differential geometry. A lot of Galois theory is covered too.
I think I'd do very very poorly in math55 just because you are given not much time at all to digest these concepts. Rather than spending several weeks on rings, they are covered in one lecture and then you move on to something more advanced. I guess Peter Woit is to me like Ed Witten is to Peter Woit.
I also have all of parts I, II, III course notes of the Cambridge math degree, which can also easily be found on the internet. They were latex'd by a student who did it in four years. The full course notes there are about ~4000 pages as compared to the 200 pages in math55. They do go a little more advanced and lots of extra topics like model theory, combinatorics are covered as well.
Mathematical Biosciences & Engineering
2007, Volume 4, Issue 2
Comparison between stochastic and deterministic selection-mutation models
Azmy S. Ackleh and Shuhua Hu
2007, 4(2): 133-157. doi: 10.3934/mbe.2007.4.133
We present a deterministic selection-mutation model with a discrete trait variable. We show that for an irreducible selection-mutation matrix in the birth term the deterministic model has a unique interior equilibrium which is globally stable. Thus all subpopulations coexist. In the pure selection case, the outcome is known to be that of competitive exclusion, where the subpopulation with the largest growth-to-mortality ratio will survive and the remaining subpopulations will go extinct. We show that if the selection-mutation matrix is reducible, then competitive exclusion or coexistence are possible outcomes. We then develop a stochastic population model based on the deterministic one. We show numerically that the mean behavior of the stochastic model in general agrees with the deterministic one. However, unlike the deterministic one, if the differences in the growth-to-mortality ratios are small in the pure selection case, it cannot be determined a priori which subpopulation will have the highest probability of surviving and winning the competition.
Azmy S. Ackleh, Shuhua Hu. Comparison between stochastic and deterministic selection-mutation models. Mathematical Biosciences & Engineering, 2007, 4(2): 133-157. doi: 10.3934/mbe.2007.4.133.
A final size relation for epidemic models
Julien Arino, Fred Brauer, P. van den Driessche, James Watmough and Jianhong Wu
A final size relation is derived for a general class of epidemic models, including models with multiple susceptible classes. The derivation depends on an explicit formula for the basic reproduction number of a general class of disease transmission models, which is extended to calculate the basic reproduction number in models with vertical transmission. Applications are given to specific models for influenza and SARS.
Julien Arino, Fred Brauer, P. van den Driessche, James Watmough, Jianhong Wu. A final size relation for epidemic models. Mathematical Biosciences & Engineering, 2007, 4(2): 159-175. doi: 10.3934/mbe.2007.4.159.
Theoretical models for chronotherapy: Periodic perturbations in funnel chaos type
Juvencio Alberto Betancourt-Mar and José Manuel Nieto-Villar
In this work, the Rössler system is used as a model for chronotherapy. We applied a periodic perturbation to the y variable to take the Rössler system from a chaotic behavior to a simple periodic one, varying the period and amplitude of forcing. Two types of chaos were considered, spiral and funnel chaos. As a result, the periodical windows reduced their areas as the funnel chaos character increased in the system. Funnel chaos, in this chronotherapy model, could be considered as a later state of a dynamical disease, more irregular and difficult to suppress.
Juvencio Alberto Betancourt-Mar, José Manuel Nieto-Villar. Theoretical models for chronotherapy: Periodic perturbations in funnel chaos type. Mathematical Biosciences & Engineering, 2007, 4(2): 177-186. doi: 10.3934/mbe.2007.4.177.
An optimal adaptive time-stepping scheme for solving reaction-diffusion-chemotaxis systems
Chichia Chiu and Jui-Ling Yu
Reaction-diffusion-chemotaxis systems have proven to be fairly accurate mathematical models for many pattern formation problems in chemistry and biology. These systems are important for computer simulations of patterns, parameter estimations as well as analysis of the biological systems. To solve reaction-diffusion-chemotaxis systems, efficient and reliable numerical algorithms are essential for pattern generations. In this paper, a general reaction-diffusion-chemotaxis system is considered for specific numerical issues of pattern simulations. We propose a fully explicit discretization combined with a variable optimal time step strategy for solving the reaction-diffusion-chemotaxis system. Theorems about stability and convergence of the algorithm are given to show that the algorithm is highly stable and efficient. Numerical experiment results on a model problem are given for comparison with other numerical methods. Simulations on two real biological experiments will also be shown.
Chichia Chiu, Jui-Ling Yu. An optimal adaptive time-stepping scheme for solving reaction-diffusion-chemotaxis systems. Mathematical Biosciences & Engineering, 2007, 4(2): 187-203. doi: 10.3934/mbe.2007.4.187.
Modeling diseases with latency and relapse
P. van den Driessche, Lin Wang and Xingfu Zou
A general mathematical model for a disease with an exposed (latent) period and relapse is proposed. Such a model is appropriate for tuberculosis, including bovine tuberculosis in cattle and wildlife, and for herpes. For this model with a general probability of remaining in the exposed class, the basic reproduction number $\mathcal{R}_0$ is identified and its threshold property is discussed. In particular, the disease-free equilibrium is proved to be globally asymptotically stable if $\mathcal{R}_0<1$. If the probability of remaining in the exposed class is assumed to be negatively exponentially distributed, then $\mathcal{R}_0=1$ is a sharp threshold between disease extinction and endemic disease. A delay differential equation system is obtained if the probability function is assumed to be a step-function. For this system, the endemic equilibrium is locally asymptotically stable if $\mathcal{R}_0>1$, and the disease is shown to be uniformly persistent with the infective population size either approaching or oscillating about the endemic level. Numerical simulations (for parameters appropriate for bovine tuberculosis in cattle) with $\mathcal{R}_0>1$ indicate that solutions tend to this endemic state.
P. van den Driessche, Lin Wang, Xingfu Zou. Modeling diseases with latency and relapse. Mathematical Biosciences & Engineering, 2007, 4(2): 205-219. doi: 10.3934/mbe.2007.4.205.
Evolutionary dynamics of prey-predator systems with Holling type II functional response
Jian Zu, Wendi Wang and Bo Zu
This paper considers the coevolution of phenotypes in a community comprising the populations of predators and prey. The evolutionary dynamics is constructed from a stochastic process of mutation and selection. We investigate the ecological and evolutionary conditions that allow for continuously stable strategy and evolutionary branching. It is shown that branching in the prey can induce secondary branching in the predators. Furthermore, it is shown that the evolutionary dynamics admits a stable limit cycle. The evolutionary cycle is a likely outcome of the process, which requires higher evolutionary speed of prey than of predators. It is also found that different evolutionary rates and conversion efficiencies can influence the lengths of evolutionary cycles.
Jian Zu, Wendi Wang, Bo Zu. Evolutionary dynamics of prey-predator systems with Holling type II functional response. Mathematical Biosciences & Engineering, 2007, 4(2): 221-237. doi: 10.3934/mbe.2007.4.221.
A mathematical model for M-phase specific chemotherapy including the $G_0$-phase and immunoresponse
Wenxiang Liu, Thomas Hillen and H. I. Freedman
In this paper we use a mathematical model to study the effect of an $M$-phase specific drug on the development of cancer, including the resting phase $G_0$ and the immune response. The cell cycle of cancer cells is split into the mitotic phase (M-phase), the quiescent phase ($G_0$-phase) and the interphase ($G_1,\ S,\ G_2$ phases). We include a time delay for the passage through the interphase, and we assume that the immune cells interact with all cancer cells. We study analytically and numerically the stability of the cancer-free equilibrium and its dependence on the model parameters. We find that quiescent cells can escape the $M$-phase drug. The dynamics of the $G_0$ phase dictates the dynamics of cancer as a whole. Moreover, we find oscillations through a Hopf bifurcation. Finally, we use the model to discuss the efficiency of cell synchronization before treatment (synchronization method).
Wenxiang Liu, Thomas Hillen, H. I. Freedman. A mathematical model for M-phase specific chemotherapy including the $G_0$-phase and immunoresponse. Mathematical Biosciences & Engineering, 2007, 4(2): 239-259. doi: 10.3934/mbe.2007.4.239.
The role of delays in innate and adaptive immunity to intracellular bacterial infection
Simeone Marino, Edoardo Beretta and Denise E. Kirschner
2007, 4(2): 261-286. doi: 10.3934/mbe.2007.4.261
The immune response in humans is complex and multi-fold. Initially an innate response attempts to clear any invasion by microbes. If it fails to clear or contain the pathogen, an adaptive response follows that is specific for the microbe and in most cases is successful at eliminating the pathogen. In previous work we developed a delay differential equations (DDEs) model of the innate and adaptive immune response to intracellular bacteria infection. We addressed the relevance of known delays in each of these responses by exploring different kernel and delay functions and tested how each affected infection outcome. Our results indicated how local stability properties for the two infection outcomes, namely a boundary equilibrium and an interior positive equilibrium, were completely dependent on the delays for innate immunity and independent of the delays for adaptive immunity. In the present work we have three goals. The first is to extend the previous model to account for direct bacterial killing by adaptive immunity. This reflects, for example, active killing by a class of cells known as macrophages, and will allow us to determine the relevance of delays for adaptive immunity. We present analytical results in this setting. Second, we implement a heuristic argument to investigate the existence of stability switches for the positive equilibrium in the manifold defined by the two delays. Third, we apply a novel analysis in the setting of DDEs known as uncertainty and sensitivity analysis. This allows us to evaluate completely the role of all parameters in the model. This includes identifying effects of stability switch parameters on infection outcome.
Simeone Marino, Edoardo Beretta, Denise E. Kirschner. The role of delays in innate and adaptive immunity to intracellular bacterial infection. Mathematical Biosciences & Engineering, 2007, 4(2): 261-286. doi: 10.3934/mbe.2007.4.261.
Subthreshold coexistence of strains: the impact of vaccination and mutation
Maia Martcheva, Mimmo Iannelli and Xue-Zhi Li
We consider a model for a disease with two competing strains and vaccination. The vaccine provides complete protection against one of the strains (strain 2) but only partial protection against the other (strain 1). The partial protection leads to existence of subthreshold equilibria of strain 1. If the first strain mutates into the second, there are subthreshold coexistence equilibria when both vaccine-dependent reproduction numbers are below one. Thus, a vaccine that is specific toward the second strain and that, in absence of other strains, should be able to eliminate the second strain by reducing its reproduction number below one, cannot do so because it provides only partial protection to another strain that mutates into the second strain.
Maia Martcheva, Mimmo Iannelli, Xue-Zhi Li. Subthreshold coexistence of strains: the impact of vaccination and mutation. Mathematical Biosciences & Engineering, 2007, 4(2): 287-317. doi: 10.3934/mbe.2007.4.287.
On the stability of periodic solutions in the perturbed chemostat
Frédéric Mazenc, Michael Malisoff and Patrick D. Leenheer
We study the chemostat model for one species competing for one nutrient using a Lyapunov-type analysis. We design the dilution rate function so that all solutions of the chemostat converge to a prescribed periodic solution. In terms of chemostat biology, this means that no matter what positive initial levels for the species concentration and nutrient are selected, the long-term species concentration and substrate levels closely approximate a prescribed oscillatory behavior. This is significant because it reproduces the realistic ecological situation where the species and substrate concentrations oscillate. We show that the stability is maintained when the model is augmented by additional species that are being driven to extinction. We also give an input-to-state stability result for the chemostat-tracking equations for cases where there are small perturbations acting on the dilution rate and initial concentration. This means that the long-term species concentration and substrate behavior enjoys a highly desirable robustness property, since it continues to approximate the prescribed oscillation up to a small error when there are small unexpected changes in the dilution rate function.
Frédéric Mazenc, Michael Malisoff, Patrick D. Leenheer. On the stability of periodic solutions in the perturbed chemostat. Mathematical Biosciences & Engineering, 2007, 4(2): 319-338. doi: 10.3934/mbe.2007.4.319.
A finite element method for growth in biological development
Cornel M. Murea and H. G. E. Hentschel
We describe finite element simulations of limb growth based on Stokes flow models with a nonzero divergence representing growth due to nutrients in the early stages of limb bud development. We introduce a "tissue pressure" whose spatial derivatives yield the growth velocity in the limb, and our explicit time advancing algorithm for such tissue flows is described in detail. The limb boundary is approximated by spline functions to compute the curvature and the unit outward normal vector. At each time step, a mixed-hybrid finite element problem is solved, where the condition that the velocity is strictly normal to the limb boundary is treated by a Lagrange multiplier technique. Numerical results are presented.
Cornel M. Murea, H. G. E. Hentschel. A finite element method for growth in biological development. Mathematical Biosciences & Engineering, 2007, 4(2): 339-353. doi: 10.3934/mbe.2007.4.339.
Delay differential equations via the matrix Lambert W function and bifurcation analysis: Application to machine tool chatter
Sun Yi, Patrick W. Nelson and A. Galip Ulsoy
In a turning process modeled using delay differential equations (DDEs), we investigate the stability of the regenerative machine tool chatter problem. An approach using the matrix Lambert W function for the analytical solution to systems of delay differential equations is applied to this problem and compared with the result obtained using a bifurcation analysis. The Lambert W function, known to be useful for solving scalar first-order DDEs, has recently been extended to a matrix Lambert W function approach to solve systems of DDEs. The essential advantages of the matrix Lambert W approach are not only the similarity to the concept of the state transition matrix in linear ordinary differential equations, enabling its use for general classes of linear delay differential equations, but also the observation that we need only the principal branch among an infinite number of roots to determine the stability of a system of DDEs. The bifurcation method combined with Sturm sequences provides an algorithm for determining the stability of DDEs without restrictive geometric analysis. With this approach, one can obtain the critical values of delay, which determine the stability of a system and hence the preferred operating spindle speed without chatter. We apply both the matrix Lambert W function and the bifurcation analysis approach to the problem of chatter stability in turning, and compare the results obtained to existing methods. The two new approaches show excellent accuracy and certain other advantages, when compared to traditional graphical, computational and approximate methods.
Sun Yi, Patrick W. Nelson, A. Galip Ulsoy. Delay differential equations via the matrix lambert w function and bifurcation analysis: application to machine tool chatter. Mathematical Biosciences & Engineering, 2007, 4(2): 355-368. doi: 10.3934/mbe.2007.4.355.
November 2016, 36(11): 5837-5879. doi: 10.3934/dcds.2016057
Asymptotic stability for standing waves of a NLS equation with subcritical concentrated nonlinearity in dimension three: Neutral modes
Riccardo Adami 1, Diego Noja 2 and Cecilia Ortoleva 3
Dipartimento di Scienze Matematiche, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129 Torino
Dipartimento di Matematica e Applicazioni, Università di Milano Bicocca, via R. Cozzi 53, 20125 Milano
PriceWaterhouseCoopers Italia, Via Monte Rosa 91, 21049, Milano, Italy
Received: September 2015. Revised: May 2016. Published: August 2016.
In this paper we study the asymptotic stability of standing waves for a model of the Schrödinger equation with spatially concentrated nonlinearity in dimension three. The nonlinearity studied is a power nonlinearity concentrated at the point $x=0$, obtained by considering a contact (or $\delta$) interaction with strength $\alpha$, which consists of a singular perturbation of the Laplacian described by a selfadjoint operator $H_{\alpha}$, and letting the strength $\alpha$ depend on the wavefunction in a prescribed way: $i\dot u= H_\alpha u$, $\alpha=\alpha(u)$. For power nonlinearities in the range $(\frac{1}{\sqrt 2},1)$ there exist orbitally stable standing waves $\Phi_\omega$, and the linearization around them admits two imaginary eigenvalues (neutral modes, absent in the range $(0,\frac{1}{\sqrt 2})$ previously treated by the same authors) which in principle could correspond to non-decaying states, so preventing asymptotic relaxation towards an equilibrium orbit. We prove that, in the range $(\frac{1}{\sqrt 2},\sigma^*)$ for a certain $\sigma^* \in (\frac{1}{\sqrt{2}}, \frac{\sqrt{3} +1}{2 \sqrt{2}}]$, the dynamics near the orbit of a standing wave asymptotically relaxes in the following sense: consider an initial datum $u(0)$, suitably near the standing wave $\Phi_{\omega_0}$; then the solution $u(t)$ can be asymptotically decomposed as $$u(t) = e^{i\omega_{\infty} t +i b_1 \log (1 +\epsilon k_{\infty} t) + i \gamma_\infty} \Phi_{\omega_{\infty}} +U_t*\psi_{\infty} +r_{\infty}, \quad \textrm{as} \;\; t \rightarrow +\infty,$$ where $\omega_{\infty}$, $k_{\infty}, \gamma_\infty > 0$, $b_1 \in \mathbb{R}$, $\psi_{\infty}$ and $r_{\infty} \in L^2(\mathbb{R}^3)$, $U(t)$ is the free Schrödinger group, and $$\| r_{\infty} \|_{L^2} = O(t^{-1/4}) \quad \textrm{as} \;\; t \rightarrow +\infty\ .$$ We stress the fact that in the present case, and contrarily to the main results in the field, the admitted nonlinearity is $L^2$-subcritical.
Keywords: Nonlinear equations of Schrödinger type, point interactions, standing waves, asymptotic stability.
Mathematics Subject Classification: Primary: 35Q55, 37Q51, 37K4.
Citation: Riccardo Adami, Diego Noja, Cecilia Ortoleva. Asymptotic stability for standing waves of a NLS equation with subcritical concentrated nonlinearity in dimension three: Neutral modes. Discrete & Continuous Dynamical Systems, 2016, 36 (11) : 5837-5879. doi: 10.3934/dcds.2016057
Fast and exact quantification of motif occurrences in biological sequences
Mattia Prosperi (ORCID: orcid.org/0000-0002-9021-5595)1,
Simone Marini1 &
Christina Boucher2
Identification of motifs and quantification of their occurrences are important for the study of genetic diseases, gene evolution, transcription sites, and other biological mechanisms. Exact formulae for estimating count distributions of motifs under Markovian assumptions have high computational complexity and are impractical to use on large motif sets. Approximated formulae, e.g. based on compound Poisson, are faster, but reliable p value calculation remains challenging. Here, we introduce 'motif_prob', a fast implementation of an exact formula for motif count distribution through progressive approximation with arbitrary precision. Our implementation speeds up the exact calculation, which is usually impractical, making it feasible and positioning it to replace currently employed heuristics.
We implement motif_prob in both Perl and C++ languages, using an efficient error-bound iterative process for the exact formula, and provide comparisons with state-of-the-art tools (e.g. MoSDi) in terms of precision and run time benchmarks, along with a real-world use case on bacterial motif characterization. Our software is able to process a million motifs (13–31 bases) over genome lengths of 5 million bases within a minute on a regular laptop, and the run times for both the Perl and C++ code are several orders of magnitude smaller (50–1000× faster) than MoSDi, even when using their fast compound Poisson approximation (60–120× faster). In the real-world use cases, we first show the consistency of motif_prob with MoSDi, and then how the p-value quantification is crucial for enrichment quantification when bacteria have different GC content, using motifs found in antimicrobial resistance genes. The software and the code sources are available under the MIT license at https://github.com/DataIntellSystLab/motif_prob.
The motif_prob software is a multi-platform and efficient open source solution for calculating exact frequency distributions of motifs. It can be integrated with motif discovery/characterization tools for quantifying enrichment and deviation from expected frequency ranges with exact p values, without loss in data processing efficiency.
Motif discovery and characterization are important for the study of gene evolution, duplication, transcription sites, and protein identification [1], as well as of genetic diseases caused by unstable repeat expansion [2, 3].
Several tools have been developed for de novo motif discovery [4,5,6]—including discriminative regular expression motif elicitation (DREME), hypergeometric optimization of motif enrichment (HOMER), multiple expectation maximizations for motif elicitation (MEME), the memetic framework for motif discovery (MFMD), peak-motifs, prosampler, regulatory sequence analysis tools (RSAT), Trawler Web, and Weeder—either generic or specialized, e.g. for ChIP-seq data [7,8,9,10,11,12,13,14,15].
Assessing the statistical significance of motif enrichment is a fundamental and challenging step of motif discovery, and can severely hamper downstream analytics. Kiesel et al. [16] pointed out that "small enrichment factors can occur frequently in practice simply due to an imperfect background model that slightly underestimates the expected frequency of occurrence". In addition, p values are crucial not only in the discovery phases, but also in motif comparison and motif-motif similarity studies [17]. The classical definition of the motif enrichment problem (in terms of differences among motif occurrences within background genome contents) has been proven to be NP-hard [18]. The p value calculation is not straightforward, and requires making assumptions on a background model of base frequencies and co-occurrence in order to derive a distribution of motif occurrences in reference genomes [19]. Several formulae—approximated and exact—and algorithms for estimating motif count distributions have been devised and implemented [20,21,22,23,24,25,26,27,28]. Exact formulae for estimating count distributions of motifs under Markovian assumptions have high computational complexity and are impractical to use on large data sets. Approximated formulae, e.g. based on compound Poisson, are faster, but reliable p value calculation remains challenging [19, 25]. Thus, methods for p value estimation can be a bottleneck in large-scale projects. HOMER, Weeder and Peak-motifs do not report motif statistical significance, MEME uses an approximation approach (very conservative), later improved by DREME and the new simple, thorough, rapid, enriched motif elicitation (STREME) [10, 15], and MFMD uses information content score and complexity scores [29].
A software package that provides comprehensive occurrence and probability estimation is the bioinformatics toolkit for Motif Statistics and Discovery (MoSDi) by Marschall [30], written in Java, featuring models based on the approximated compound Poisson and on nth-order Markov models, as well as (quasi-)exact combinatorial formulae to reduce computational complexity (https://bitbucket.org/tobiasmarschall/mosdi). Another tool is motifcounter [31], an R-Bioconductor library implementing existing methods [27, 32], as well as an improvement on the compound Poisson model. One limitation of these programs is that calculation of the occurrence distribution—even using the fast compound Poisson—becomes impractical with longer motifs (10+) and longer reference genomes (millions of bases), as well as with large motif datasets.
Prosperi et al. [28] provided an exact formula for the count distribution of strings that do not overlap with themselves (i.e. non-clumpable), coupled with a mathematical demonstration of its validity, under both Bernoullian and Markovian assumptions. The calculation of the formula was exponential in the ratio of the genome length to the motif length, but the authors demonstrated that it could be calculated efficiently within an arbitrary tolerance level.
This software article describes "motif_prob", a count distribution tool suitable for long motifs and long reference genomes, implementing the exact method by Prosperi et al. [28] with the efficient error-bound algorithm. In addition to the relevance of this software piece for large-scale processing, another motivation for our work is that the majority of probability distribution or p value calculators, even the most recent ones, use heuristics. To our knowledge, the formula by Prosperi et al. is still among the most efficient for exact calculation. The proposed motif_prob implementation thus makes exact quantification feasible for large-scale projects and positions it to replace currently employed heuristics. We compare motif_prob with other tools in terms of run time and precision, showing that its exact algorithm is several orders of magnitude faster even than the approximated methods, and finally we describe use cases for long motifs in bacteria.
Theoretical formulation
The exact formula by Prosperi et al. [28] for the count distribution, i.e. the probability of exactly j occurrences, of a string of length m within a text of length n (m < n) over an alphabet of size k, under the Markovian model, is
$$P(j,m,n) = P(S)^{j} \sum_{z = 1}^{|C_{n,m,j}|} \; \prod_{y = 1}^{j + 1} P\bigl(S_{0,d_{yz}}\bigr), \tag{1}$$
where $P(S) = P(a_1)\cdot P(a_2 \mid a_1)\cdots P(a_m \mid a_{m-1})$, $P(S_{0,n}) = P(S_{0,n-1}) - P(S)\cdot P(S_{0,n-m})$, $S_{0,n} = S_{0,n-1}\cdot k - P(S)\cdot k^{m}\cdot S_{0,n-m}$, $d_1,\ldots,d_{j+1}$ are the lengths of the $j+1$ segments into which the $j$ strings divide the text of length $n$ in exact configurations with $d_1 + \cdots + d_{j+1} = n - mj$, and
$$\left| C_{n,m,j} \right| = \binom{n + j(1 - m)}{n - mj}. \tag{2}$$
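To make the ingredients of (1)–(2) concrete, the sketch below (our illustration under a Bernoulli background model, not the released Perl/C++ sources; in the Markovian case $P(S)$ would instead be a product of transition probabilities) computes $P(S)$ from base frequencies and fills the $P(S_{0,n})$ recurrence for a non-clumpable motif:

```cpp
// Minimal sketch (not the released motif_prob sources): the two ingredients of
// formula (1) under a Bernoulli background model.
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// P(S): probability of the motif at a fixed text position, from base frequencies.
double motif_prob_bernoulli(const std::string& motif,
                            const std::map<char, double>& base_freq) {
    double p = 1.0;
    for (char c : motif) p *= base_freq.at(c);
    return p;
}

// P(S_{0,n}) for n = 0..n_max: probability that a text of length n contains no
// occurrence of a non-clumpable motif of length m, via the recurrence
// P(S_{0,n}) = P(S_{0,n-1}) - P(S) * P(S_{0,n-m}), with P(S_{0,n}) = 1 for n < m.
std::vector<double> no_occurrence_prob(double pS, std::size_t m, std::size_t n_max) {
    std::vector<double> q(n_max + 1, 1.0);
    for (std::size_t n = m; n <= n_max; ++n)
        q[n] = q[n - 1] - pS * q[n - m];
    return q;
}
```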
Formula (1) has a complexity of $O(n^{j})$, which becomes quickly intractable. However, by defining $R = P(S_{0,n+1})/P(S_{0,n})$ as a constant, Prosperi et al. show that for any positive (arbitrarily small) number $\varepsilon$ there is an index $n_{\varepsilon}$ such that for every $n > n_{\varepsilon}$
$$P(S_{0,n + x}) \sim P(S_{0,n}) \cdot R^{x}. \tag{3}$$
By using this approximation, the summation of the original formula can be reduced to a single step, and calculations can be stopped when the ratio $P(S_{0,n})/P(S_{0,n-1})$ reaches a desired level of tolerance $\varepsilon$. Specifically, after plugging the iterative approximation (3) into (1), we obtain the final formula
$$P(j,m,n) \sim P(S)^{j} \cdot R^{\,n - mj - n_{\varepsilon}(j + 1)} \cdot P\bigl(S_{0,n_{\varepsilon}}\bigr)^{j + 1} \cdot \binom{n + j(1 - m)}{n - mj}. \tag{4}$$
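A sketch of how (4) can be evaluated is given below (again illustrative; `log_binom`, `occurrence_prob` and the arguments `R`, `n_eps`, `q` are our names for the quantities defined above, not identifiers from the released code). Working in log space keeps the binomial coefficient manageable at genome scale:

```cpp
// Illustrative evaluation of the approximated formula (4) in log space.
#include <cmath>
#include <cstddef>
#include <vector>

double log_binom(double a, double b) {                 // log of C(a, b) via lgamma
    return std::lgamma(a + 1.0) - std::lgamma(b + 1.0) - std::lgamma(a - b + 1.0);
}

// P(j, m, n): probability of exactly j occurrences in a text of length n, given
// P(S) = pS, the stabilized ratio R, the index n_eps, and q[i] = P(S_{0,i}).
// Assumes pS, R and q[n_eps] are strictly positive.
double occurrence_prob(std::size_t j, std::size_t m, std::size_t n,
                       double pS, double R, std::size_t n_eps,
                       const std::vector<double>& q) {
    if (m * j > n) return 0.0;                          // too many copies to fit
    const double nd = double(n), md = double(m), jd = double(j);
    double log_p = jd * std::log(pS)
                 + (nd - md * jd - double(n_eps) * (jd + 1.0)) * std::log(R)
                 + (jd + 1.0) * std::log(q[n_eps])
                 + log_binom(nd + jd * (1.0 - md), nd - md * jd);
    return std::exp(log_p);
}
```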
We note that $P(j, m, n)$ is the same irrespective of the position of the nucleotides in a query string, e.g. AACCC and CCCAA have the same probability. This property makes it possible to extrapolate a probability for clumpable strings by permutation, e.g. ACCA into CCAA, although the value is not guaranteed given possible overlap. Another way is to replace the first or the last character with another one that has the same frequency. All details on the derivation of the exact formula and the proof of its progressive approximation, along with comparisons against other state-of-the-art algorithms, can be found in the original work by Prosperi et al. [28].
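Since the clumpable/non-clumpable distinction determines when the exact formula applies directly, a minimal self-overlap test (our illustrative helper, not part of the released sources) is sketched below; a motif is clumpable exactly when some proper prefix equals the suffix of the same length:

```cpp
// Sketch: a motif is "clumpable" if it can overlap itself, i.e. some proper
// prefix equals the suffix of the same length (e.g. ACCA has the border "A").
#include <cstddef>
#include <string>

bool is_clumpable(const std::string& s) {
    for (std::size_t len = 1; len < s.size(); ++len)
        if (s.compare(0, len, s, s.size() - len, len) == 0)
            return true;                                // self-overlap of length len
    return false;
}
```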
Two different implementations are produced: one in Perl and another in C++. Both programs take the same input and parameters, namely: (1) a query string or multiple strings to be analyzed; (2) the length of the reference genome; and (3) the nucleotide frequencies of the genome. As an alternative to the genome length and nucleotide frequencies, a FASTA file containing the genome string can be passed as input to the program. The output file reports—for each motif—the count distribution and other summary information including a flag for clumped strings, string probability, and statistics on the precision and tolerance levels.
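For the FASTA input path, a minimal sketch of how the genome length and nucleotide frequencies could be derived is shown below (this is only an illustration under our own naming, not the program's actual parser; the file name is a placeholder):

```cpp
// Sketch: derive genome length and A/C/G/T frequencies from a FASTA file.
#include <cctype>
#include <fstream>
#include <iostream>
#include <map>
#include <string>

int main() {
    std::ifstream fasta("genome.fasta");                // placeholder file name
    std::map<char, long long> counts{{'A', 0}, {'C', 0}, {'G', 0}, {'T', 0}};
    long long n = 0;
    std::string line;
    while (std::getline(fasta, line)) {
        if (line.empty() || line[0] == '>') continue;   // skip FASTA headers
        for (char c : line) {
            char u = std::toupper(static_cast<unsigned char>(c));
            if (counts.count(u)) { ++counts[u]; ++n; }  // ignore N/ambiguity codes
        }
    }
    std::cout << "genome length: " << n << '\n';
    for (const auto& kv : counts)
        std::cout << kv.first << " frequency: " << double(kv.second) / n << '\n';
    return 0;
}
```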
Since the computational complexity of the formula is exponential, motif occurrences are calculated at increasing counts until the occurrence probability becomes lower than a given tolerance level $\varepsilon$, or the upper limit of counts $j$ is reached. We also control estimates at each iteration in order to avoid issues with floating point operations when frequency/length ratios diverge, and to handle relatively ill-posed configurations. Given the motif length $m$ and genome length $n$, one can set a tolerance level $\varepsilon$ such that $P(0, m, n) > (1 - \varepsilon)$, and in general each case where $(1 - P(S))^{n - m + 1} > (1 - \varepsilon)$. This is equal to $(n - m + 1)\cdot\log(1 - P(S)) > \log(1 - \varepsilon)$, which implies $n > m - 1 + \log(1 - \varepsilon)/\log(1 - P(S))$. In the source code, we have set $\varepsilon$ to $10^{-7}$ and the upper limit of $j$ to 500. Further, we implement the calculation of the expected number of strings and the motif's (stationary) occurrence probability at any text position, according to Robin et al. [33].
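A minimal driver tying the pieces together might look as follows (illustrative only, reusing the helper functions sketched above; the ε = 1e-7 and j ≤ 500 defaults follow the text, while the example motif, genome length, base frequencies, and the stop-past-the-mode heuristic are our own simplifications):

```cpp
// Sketch of the tolerance-driven construction of the count distribution.
#include <cmath>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

int main() {
    const std::string motif = "ACGTTGCA";               // example non-clumpable motif
    const std::size_t n = 20000;                        // genome length
    const std::map<char, double> freq{{'A', 0.3}, {'C', 0.2}, {'G', 0.2}, {'T', 0.3}};
    const double eps = 1e-7;                            // tolerance
    const std::size_t j_max = 500;                      // upper limit of counts

    const std::size_t m = motif.size();
    const double pS = motif_prob_bernoulli(motif, freq);
    const std::vector<double> q = no_occurrence_prob(pS, m, n);

    // n_eps: first index at which the ratio q[i]/q[i-1] has stabilized within eps.
    std::size_t n_eps = m + 1;
    while (n_eps + 1 <= n &&
           std::fabs(q[n_eps + 1] / q[n_eps] - q[n_eps] / q[n_eps - 1]) > eps)
        ++n_eps;
    const double R = q[n_eps] / q[n_eps - 1];

    // Increase j until the probability has passed its mode and fallen below eps,
    // or the hard cap on counts is reached.
    double prev = -1.0;
    for (std::size_t j = 0; j <= j_max && m * j <= n; ++j) {
        const double p = occurrence_prob(j, m, n, pS, R, n_eps, q);
        std::printf("P(j=%zu) = %.6g\n", j, p);
        if (p < eps && p < prev) break;                 // past the mode, negligible
        prev = p;
    }
    return 0;
}
```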
Figure 1 provides a flowchart of the data processing pipeline, showing the required input specifications, the method's internal parameters, and the output fields.
Flowchart of the data processing pipeline for motif_prob, with input/output specifications and program parameters
The source code, documentation, sample datasets, and executable files are available under the MIT license at https://github.com/DataIntellSystLab/motif_prob.
An example of the occurrence distribution for motif query sequences of length 6, calculated on a randomly generated genome of 20,000 bases, varying the nucleotide frequencies, is illustrated in Fig. 2. The difference between the equiprobable-base case and the more general case is evident and demonstrates how the background distribution affects the p value calculation (see the real-world use case after the benchmarks).
Application output for motif sequences of length 6 over a genome of length 20,000, and different nucleotide frequencies
Table 1 shows run time benchmarks on different motif length and motif set size configurations, executed on a laptop machine with Intel(R) Core(TM) i9-10885H CPU @ 2.4 GHz, 32 GB RAM. Both the Perl and the C++ programs exhibit run times several orders of magnitude smaller than MoSDi, even when the latter is executed with the fast compound Poisson approximation. We set a maximum processing time of 30 min for datasets up to 400,000 motifs, and MoSDi can process them only with smaller values of k and the approximated model, while the exact model is not feasible for most of the datasets. The C++ implementation is the fastest, and the expected run time increase due to longer motifs is well compensated by the implementation setup.
Table 1 Run time (mm:ss) of the Perl and C++ programs compared to MoSDi (exact and approximated using compound Poisson) for calculating the occurrence distribution for s motif query sequences of length m (13–31) over a reference genome of 5 million bases
In terms of precision, we compare the exact probability values yielded by our program with both the compound Poisson and the exact estimates of MoSDi (allowing it to switch automatically to standard/doubling algorithms to improve run time). As previously described, usually the largest errors appear near the probability mass points [28]. For all possible 4-base motifs, over a 10,000-base reference genome, on average the peak probability values of MoSDi and motif_prob exact differ by an amount two orders of magnitude smaller than the values themselves, e.g. if the peak probability is in the range of $10^{-2}$ then the observed absolute difference is in the range of $10^{-4}$. The difference with the compound Poisson approximation is larger, on average double that of the exact, but the relative ratio is still one to two orders of magnitude smaller than the actual values. The difference becomes smaller as the sequence lengths increase.
We further test the concordance between MoSDi and motif_prob using a real motif dataset, the library of DNA-binding site matrices for Escherichia coli (https://arep.med.harvard.edu/ecoli_matrices/), which contains 802 motifs from 67 housekeeping genes for a median motif length of 26 (interquartile range, IQR 20–29). We consider motifs of length up to 20 bases to be able to estimate non-near-zero probabilities over the genome length of Escherichia coli. The final set includes 230 motifs with a median length of 16 (IQR 15–18). The median (IQR) difference between MoSDi and motif_prob exact overall is $2.6\cdot10^{-8}$ ($2.2\cdot10^{-8}$–$5.0\cdot10^{-8}$), while for all probabilities where the center of mass is not zero (median 0.18), it is $3.8\cdot10^{-8}$ ($3.3\cdot10^{-9}$–$2.6\cdot10^{-7}$). Once again, the differences with the approximated estimation are larger but of the same level of magnitude. Figure 3 illustrates the absolute difference in probability between motif_prob and MoSDi (exact/compound Poisson) as well as the relative magnitude difference, expressed as $\log_{10}(\mathrm{Prob}_{\mathrm{motif\_prob}}/|\mathrm{Prob}_{\mathrm{motif\_prob}} - \mathrm{Prob}_{\mathrm{MoSDi}}|)$, which well highlights how the difference between the two exact methods (and the compound Poisson too, although larger) is negligible with respect to the actual probability estimates.
Comparison between motif_prob and MoSDi (exact and approximated with compound Poisson) in terms of concordance of probability estimates. Panel A shows the absolute difference in peak probability values, stratified by motif length and probability mass value, while panel B shows the relative magnitude difference by motif length
As a final use case, we investigate the distribution of frequencies of antimicrobial resistance gene signatures found in bacteria under different GC content. Drug resistance mechanisms in bacteria involve acquisition of genes, often via mobile genetic elements, and in some cases changes within core housekeeping genes. A number of algorithms use k-mers, i.e. motifs of fixed k length, to classify antimicrobial resistance [34], as they can be handled efficiently through ad hoc data structures suitable to process high-throughput data. But assessing the importance of a k-mer with respect to their frequency in drug resistance genes is not straightforward; one issue is that bacteria and genes can have very different GC content [35]. When the GC content varies, the probability distributions of motif occurrence can change over a broad range (given also the underlying, individual A, C, G, and T content), and thus the p values of over- or under-representation. To show how the quantification can have large variance, we analyze k-mers from antimicrobial resistance genes collected in the MEGARes 2.0 database [36]. MEGARes contains 7868 genes, with an average gene length of 1030.29 nucleotide bases, 57 different antibiotic resistance classes, and 220 distinct resistance mechanisms.
From MEGARes, we select all the 3911 genes conferring resistance to beta-lactamase; we then identify all 13-mers, for a total of 453,308 motifs (50% GC content). In Table 2, we show how the count probability distribution of the 13-mers in MEGARes' beta-lactamase genes changes among bacterial species present in the human microbiome of the respiratory tract [37], where we select 18 species uniformly on the basis of their GC content. The median probability of finding the aforementioned 13-mers at least once varies between 93 and 99%, and even species with a similar GC content can show different medians and interquartile ranges, such as Stomatobaculum longum (55% GC content, median p = 97%) and Kluyvera intermedia (52% GC content, median p = 93%). This variability is due to: the individual nucleotide content, which can differ even when the GC content is the same and directly affects the distribution (see also Fig. 2); the genome length; and the nucleotide content of the query motifs.
Table 2 Median (interquartile range, IQR) probability of finding at least once the 13-mer motifs (top-frequent among beta-lactamase resistance genes in the MEGARes database) over different bacterial species characterized by heterogeneous GC content
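As a rough illustration of why these probabilities move with base composition, the snippet below computes a back-of-the-envelope approximation of the probability of observing a motif at least once under an i.i.d. background model, ignoring overlap/clump structure; it is not the exact calculation motif_prob performs. The 13-mer, genome length and base frequencies are made-up values for illustration.

```python
from math import prod

def word_probability(motif: str, base_freqs: dict) -> float:
    """Probability of seeing the motif at one fixed position under an i.i.d. background."""
    return prod(base_freqs[b] for b in motif)

def prob_at_least_one(motif: str, genome_len: int, base_freqs: dict) -> float:
    """Crude approximation of P(at least one occurrence), ignoring overlaps."""
    positions = genome_len - len(motif) + 1
    return 1.0 - (1.0 - word_probability(motif, base_freqs)) ** positions

motif = "GCTGGCGGCAAAG"          # hypothetical beta-lactamase-like 13-mer
genome_len = 4_600_000           # roughly an E. coli-sized genome
gc_rich = {"A": 0.20, "T": 0.20, "G": 0.30, "C": 0.30}
at_rich = {"A": 0.32, "T": 0.32, "G": 0.18, "C": 0.18}
print(prob_at_least_one(motif, genome_len, gc_rich))
print(prob_at_least_one(motif, genome_len, at_rich))
```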
The motif_prob software is a multi-platform, open source, efficient solution for calculating exact frequency distributions of (long) motif occurrences in reference genomes using high-throughput data. We showed how our code estimates are consistent with other, slower, exact calculations, and how the run times of our code (both Perl and C++) are competitive even with the non-exact compound Poisson approximation. Specifically, motif_prob is 50–1000× faster than MoSDi exact and 60–120× faster than MoSDi compound Poisson.
The current implementation is limited to non-clumpable strings, although it extrapolates a probability for clumpable strings by permutation. As future development of this work, we foresee developing an exact formula for clumpable strings and extending the approach to generalize over motifs that can include nucleotide changes, insertions or deletions.
In conclusion, our tool can be effectively used in conjunction with motif discovery suites that process high-throughput data, allowing them to compute exact count distributions and associated p values without loss of run time performance, instead of relying on approximations.
Project name: motif_prob
Project home page: https://github.com/DataIntellSystLab/motif_prob
Operating system(s): Multi-platform (UNIX/Linux/Mac, Windows)
Programming language: Perl, C++
Other requirements: None
Any restrictions to use by non-academics: Permissible under the terms of the MIT license.
The datasets generated and/or analysed during the current study are available in the motif_prob GitHub repository (https://github.com/DataIntellSystLab/motif_prob), the MEGARes database (https://megares.meglab.org/) and the website for DNA-binding site matrices for Escherichia coli (https://arep.med.harvard.edu/ecoli_matrices/).
DREME:
Discriminative regular expression motif elicitation
HOMER:
Hypergeometric optimization of motif enrichment
MEME:
Multiple expectation maximizations for motif elicitation
MFMD:
Memetic framework for motif discovery
RSAT:
Regulatory sequence analysis tools
STREME:
Simple, thorough, rapid, enriched motif elicitation
MoSDi:
Motif statistics and discovery
mm:ss:
Minutes:seconds
Luu P-L, Schöler HR, Araúzo-Bravo MJ. Disclosing the crosstalk among DNA methylation, transcription factors, and histone marks in human pluripotent cells through discovery of DNA methylation motifs. Genome Res. 2013;23(12):2013–29.
Gatchel JR, Zoghbi HY. Diseases of unstable repeat expansion: mechanisms and common principles. Nat Rev Genet. 2005;6:743–55.
Luu PL, Schöler HR, Araúzo-Bravo MJ. Disclosing the crosstalk among DNA methylation, transcription factors, and histone marks in human pluripotent cells through discovery of DNA methylation motifs. Genome Res. 2013;23:2013–29.
Tompa M, Li N, Bailey TL, Church GM, De Moor B, Eskin E, et al. Assessing computational tools for the discovery of transcription factor binding sites. Nat Biotechnol. 2005;23(1):137–44.
Lee NK, Li X, Wang D. A comprehensive survey on genetic algorithms for DNA motif prediction. Inf Sci. 2018;1(466):25–43.
Hashim FA, Mabrouk MS, Al-Atabany W. Review of different sequence motif finding algorithms. Avicenna J Med Biotechnol. 2019;11(2):130–48.
Pavesi G, Mereghetti P, Mauri G, Pesole G. Weeder Web: discovery of transcription factor binding sites in a set of sequences from co-regulated genes. Nucleic Acids Res. 2004;32(Web Server issue):W199–203.
Ettwiller L, Paten B, Ramialison M, Birney E, Wittbrodt J. Trawler: de novo regulatory motif discovery pipeline for chromatin immunoprecipitation. Nat Methods. 2007;4(7):563–5.
Bailey TL, Boden M, Buske FA, Frith M, Grant CE, Clementi L, et al. MEME SUITE: tools for motif discovery and searching. Nucleic Acids Res. 2009;37(Web Server issue):W202–8.
Bailey TL. DREME: motif discovery in transcription factor ChIP-seq data. Bioinformatics. 2011;27(12):1653–9.
Thomas-Chollier M, Herrmann C, Defrance M, Sand O, Thieffry D, van Helden J. RSAT peak-motifs: motif analysis in full-size ChIP-seq datasets. Nucleic Acids Res. 2012;40(4):e31–e31.
Dang LT, Tondl M, Chiu MHH, Revote J, Paten B, Tano V, et al. TrawlerWeb: an online de novo motif discovery tool for next-generation sequencing datasets. BMC Genomics. 2018;19(1):238.
Caldonazzo Garbelini JM, Kashiwabara AY, Sanches DS. Sequence motif finder using memetic algorithm. BMC Bioinform. 2018;19(1):4.
Li Y, Ni P, Zhang S, Li G, Su Z. ProSampler: an ultrafast and accurate motif finder in large ChIP-seq datasets for combinatory motif discovery. Berger B, editor. Bioinformatics. 2019;35(22):4632–9.
Bailey TL. STREME: accurate and versatile sequence motif discovery. bioRxiv. 2020;2020.11.23.394619.
Kiesel A, Roth C, Ge W, Wess M, Meier M, Söding J. The BaMM web server for de-novo motif discovery and regulatory sequence analysis. Nucleic Acids Res. 2018;46(W1):W215–20.
Gupta S, Stamatoyannopoulos JA, Bailey TL, Noble WS. Quantifying similarity between motifs. Genome Biol. 2007;8(2):R24.
Finding similar regions in many strings. In: Proceedings of the thirty-first annual ACM Symposium on Theory of Computing; 1999. https://doi.org/10.1145/301250.301376.
Zhang J, Jiang B, Li M, Tromp J, Zhang X, Zhang MQ. Computing exact p values for DNA motifs. Bioinformatics. 2007;23(5):531–7.
Gentleman JF, Mullin RC. The distribution of the frequency of occurrence of nucleotide subsequences, based on their overlap capability. Biometrics. 1989;45(1):35–52.
Régnier M. A unified approach to word occurrence probabilities. Discrete Appl Math. 2000;104(1):259–80.
Nicodème P, Salvy B, Flajolet P. Motif statistics. Theor Comput Sci. 2002;287(2):593–617.
Robin S, Daudin J-J, Richard H, Sagot M-F, Schbath S. Occurrence probability of structured motifs in random sequences. J Comput Biol J Comput Mol Cell Biol. 2002;9(6):761–73.
Rivals E, Rahmann S. Combinatorics of periods in strings. J Comb Theory Ser A. 2003;104(1):95–113.
Bejerano G, Friedman N, Tishby N. Efficient exact p-value computation for small sample, sparse, and surprising categorical data. J Comput Biol J Comput Mol Cell Biol. 2004;11(5):867–86.
Lladser ME, Betterton MD, Knight R. Multiple pattern matching: a Markov chain approach. J Math Biol. 2008;56(1):51–92.
Marschall T, Rahmann S. Efficient exact motif discovery. Bioinformatics. 2009;25(12):i356–64.
Prosperi MCF, Prosperi L, Gray RR, Salemi M. On counting the frequency distribution of string motifs in molecular sequences. Int J Biomath. 2012;5:1250055.
Fogel GB, Weekes DG, Varga G, Dow ER, Harlow HB, Onyia JE, et al. Discovery of sequence motifs related to coexpression of genes using evolutionary computation. Nucleic Acids Res. 2004;32(13):3826–35.
Marschall T, Rahmann S. Speeding up exact motif discovery by bounding the expected clump size. In: Moulton V, Singh M, editors. Algorithms in bioinformatics. Lecture notes in computer science. Berlin: Springer; 2010. p. 337–49.
Kopp W. motifcounter: R package for analysing TFBSs in DNA sequences [Internet]. Bioconductor version: Release (3.12); 2021 [cited 2021 Mar 17]. https://bioconductor.org/packages/motifcounter/.
Pape UJ, Rahmann S, Sun F, Vingron M. Compound poisson approximation of the number of occurrences of a position frequency matrix (PFM) on both strands. J Comput Biol J Comput Mol Cell Biol. 2008;15(6):547–64.
Robin S, Rodolphe F, Schbath S. DNA, Words and Models: Statistics of Exceptional Words. Cambridge University Press. ISBN 9780521847292.
Clausen PTLC, Zankari E, Aarestrup FM, Lund O. Benchmarking of methods for identification of antimicrobial resistance genes in bacterial whole genome data. J Antimicrob Chemother. 2016;71:2484–8.
Hildebrand F, Meyer A, Eyre-Walker A. Evidence of selection upon genomic GC-content in bacteria. PLoS Genet. 2010;6:e1001107.
Doster E, Lakin SM, Dean CJ, Wolfe C, Young JG, Boucher C, et al. MEGARes 2.0: a database for classification of antimicrobial drug, biocide and metal resistance determinants in metagenomic sequence data. Nucleic Acids Res. 2020;48:D561–9.
Ibironke O, McGuinness LR, Lu S-E, Wang Y, Hussain S, Weisel CP, et al. Species-level evaluation of the human respiratory microbiome. GigaScience. 2020;9:giaa038. https://doi.org/10.1093/gigascience/giaa038.
We thank Luciano Prosperi, MSc, and Roberto Di Castro, MSc, for aiding with the implementation and code maintenance.
This work has been supported by the National Institutes of Health (NIH)—National Institute of Allergy and Infectious Diseases (NIAID) Grants No. R01AI141810 and R01AI145552, and by the National Science Foundation (NSF) Grant No. 2013998. The funding body did not have roles in the design of the study and collection, analysis, interpretation of data, and in writing the manuscript.
Data Intelligence Systems Lab, Department of Epidemiology, College of Public Health and Health Professions and College of Medicine, University of Florida, Gainesville, FL, USA
Mattia Prosperi & Simone Marini
Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, USA
Christina Boucher
Mattia Prosperi
MP conceived idea, wrote paper and code; SM wrote code and set up web repository; CB reviewed algorithm, wrote the paper. All authors have read and approved the final manuscript.
Correspondence to Mattia Prosperi.
Dr. Christina Boucher is Associate Editor of BMC Bioinformatics. All other authors declare that they have no competing interests.
Prosperi, M., Marini, S. & Boucher, C. Fast and exact quantification of motif occurrences in biological sequences. BMC Bioinformatics 22, 445 (2021). https://doi.org/10.1186/s12859-021-04355-6
Probability distribution
Markov model
Relationship between the cardinality of a group and the cardinality of the collection of subgroups
For a group $G$, let $F(G)$ denote the collection of all subgroups of $G$. Which of the following situations can occur?
G is finite but F(G) is infinite.
G is infinite but F(G) is finite.
G is countable but F(G) is uncountable.
G is uncountable but F(G) is countable.
Attempt. If $G$ is finite then its power set $P(G)$ is finite, hence $F(G)$ is finite and (1) is not possible. Let $G=I$ (the set of integers) be a group under addition and consider $S=\{\text{groups of integers modulo } n \text{ for } n\in\mathbb{N}\}\subset F(G)$, which is infinite, therefore (2) is incorrect. (4) is also incorrect: consider $\mathbb{R}$ (the set of real numbers) under addition and let $S=\{ma \mid a\in\mathbb{R}-\{\text{irrational numbers}\},\ m\in I\}\subset F(G)$, where $S$ is uncountable, therefore $F(G)$ is uncountable. Hence (3) is the only choice left. Why is (3) possible?
Mathematics13
$\begingroup$ To do this, you can't just think of a single situation where it does or doesn't occur. You have to prove that either it can occur (a specific example works for this), or that in EVERY situation it can't occur $\endgroup$ – Mark Aug 31 '16 at 6:10
$\begingroup$ As for why (3) is possible, here is an example of a countable group with uncountably many subgroups. $\endgroup$ – Mark Aug 31 '16 at 6:12
$\begingroup$ The question doesn't say that exactly one is possible and the others aren't. Multiple could be possible or none. $\endgroup$ – fleablood Aug 31 '16 at 7:18
You are right that (1) is impossible.
For (2), your reasoning is incorrect, for two reasons: first, the integers modulo $n$ are not a subgroup of the integers, but rather a quotient group. Second, in order to prove that (2) is impossible you'd have to show that it's false for ALL groups, not just for the integers.
You are right that (2) is impossible. To prove it, let $A$ be an infinite group and consider the subgroup generated by each element $a \in A$. There are two cases: first, one of these subgroups is infinite and isomorphic to $\mathbb{Z}$; second, all of these subgroups are finite. In the first case, you should be able to show that $\mathbb{Z}$ has infinitely many subgroups. In the second, $F(G)$ contains a finite subgroup containing each element, and finitely many finite subgroups could not possibly cover every element.
For (3), Mark has already linked to why it is possible in this question.
For (4), again you cannot prove it is impossible just by giving a single example, you have to show it is false for ALL groups. Try to employ a similar technique to how we proved (2): consider $A$ to be an uncountable group, and for each $a \in A$ there is a subgroup generated by $a$ which is at most countable. If there were countably many subgroups total, then the countable union of countable sets is countable, so...
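To spell out the last step of that argument (a sketch, using the axiom of choice as noted in the comment below): every element $a \in A$ lies in the cyclic subgroup $\langle a \rangle$, which is at most countable, so $A = \bigcup_{a \in A} \langle a \rangle$. If $F(A)$ were countable, this would express the uncountable group $A$ as a countable union of countable sets, hence countable, which is a contradiction. So $F(A)$ is uncountable and (4) cannot occur.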
So the only one that is possible is (3).
The arguments in (2) and (4) generalize as follows: if $G$ is infinite, then $|F(G)| \ge |G|$. But this is not true if $G$ is finite, e.g. if $G$ is cyclic of prime order then it only has two subgroups.
$\begingroup$ As a side remark (which I find interesting), which is beyond the scope of the question in some sense, to show that a countable union of countable sets is countable one needs some sort of axiom of choice, so 4 could be true in some settings. Of course most people work with axiom of choice, so you are completely right. $\endgroup$ – Paul Plummer Sep 1 '16 at 1:19
Universal Style Transfer via Feature Transforms
=Introduction= When viewing an image, whether it is a photograph or a painting, two types of mutually exclusive data are present. First, there is the content of the image, such as a person in a portrait. However, the content does not uniquely define the image. Consider a case where multiple artists paint a portrait of an identical subject: the results would vary despite the content being invariant. The cause of the variance is rooted in the style of each particular artist. Therefore, style transfer between two images results in the content being unaffected but the style being copied. Style transfer is an important image editing task which enables the creation of new artistic works. Typically one image is termed the content/reference image, whose style is discarded. The other image is called the style image, whose style, but not the content, is copied to the content image. Deep learning techniques have been shown to be effective methods for implementing style transfer. Previous methods have been successful but with several key limitations, and often trade off between generalization, quality, and efficiency. Either they are fast but have very few styles that can be transferred, or they can handle arbitrary styles but are no longer efficient. The presented paper establishes a compromise between these two extremes by using only whitening and coloring transforms (WCT) to transfer a style within a feedforward image reconstruction architecture. No training of the underlying deep network is required per style. ==Style Transfer== The original paper about neural style transfer suggests a novel application of convolutional filters: transfer the art style to another image. The process is described in the following figure. [[File:style_transfer.jpg|600px|center|thumb|Figure: the process of neural style transfer]] In the original architecture, the authors used VGG as the "local feature extractor"; by minimizing a loss function that measures the difference between the style of the input image and the style of the target image, the network can generate an image with similar features. The key factor in the original paper is that the style similarity between the input image and the target image can be measured by the Gram matrix. The authors defined the style loss in terms of the Gram matrices of the activations in different layers. Despite the impressive results, the principle of neural style transfer, especially why Gram matrices can represent style, remains unclear. In [15], the authors theoretically showed that matching the Gram matrices of feature maps is equivalent to minimizing the Maximum Mean Discrepancy (MMD) with the second order polynomial kernel. Thus, the authors argue that the essence of neural style transfer is to match the feature distributions between the style images and the generated images. =Related Work= Gatys et al. developed a new method for generating textures from sample images in 2015 [1] and extended their approach to style transfer by 2016 [2]. They proposed the use of a pre-trained convolutional neural network (CNN) to separate content and style of input images. As this approach proved successful, a number of improvements quickly developed, reducing computational time, increasing the diversity of transferrable styles, and improving the quality of the results. Central to these approaches, and to the present paper, is the use of a CNN. The disadvantage is the inefficiency of the optimization process.
Even though there have been improvements from reformulating the stylization procedure, these methods require training one network per style due to the lack of generalization in the network design. In 2017, Mechrez et al. [12] proposed an approach that takes as input a stylized image and makes it more photorealistic. Their approach relied on the Screened Poisson Equation, maintaining the fidelity of the stylized image while constraining the gradients to those of the original input image. The method they proposed was fast, simple, fully automatic and showed positive progress in making a stylized image photorealistic. Alternative attempts to use a single network to transfer multiple styles include models conditioned on binary selection units [13], a network that learns a set of new filters for every new style [15], and a novel conditional normalization layer that learns normalization parameters for each style [3]. In comparing their methods with the existing techniques outlined above, the authors cite the close relationship between their work and [7]. In [7], content features in higher layers are adaptively instance normalized by the mean and variance of style features. The authors consider this step to be a sub-optimal operation compared to the WCT. ==How Content and Style are Extracted using CNNs== A CNN was chosen due to its ability to extract high level features from images. These features can be interpreted in two ways. Within layer <math> l </math> there are <math> N_l </math> feature maps of size <math> M_l </math>. With a particular input image, the feature maps are given by <math> F_{i,j}^l </math> where <math> i </math> and <math> j </math> locate the map within the layer. Starting with a white noise image and a reference (content) image, the features can be transferred by minimizing <center> <math> \mathcal{L}_{content} = \frac{1}{2} \sum_{i,j} \left( F_{i,j}^l - P_{i,j}^l \right)^2 </math> </center> where <math> P_{i,j}^l </math> denotes the feature map output caused by the white noise image. Therefore this loss function preserves the content of the reference image. The style is described using a Gram matrix given by <center> <math> G_{i,j}^l = \sum_k F_{i,k}^l F_{j,k}^l </math> </center> The Gram matrix $G$ of a set of vectors $v_1,\dots,v_n$ is the matrix of all possible inner products, whose entries are given by $G_{ij}=v_i^Tv_j$. The loss function that describes a difference in style between two images is equal to: <center> <math> \mathcal{L}_{style} = \frac{1}{4 N_l^2 M_l^2} \sum_{i,j} \left(G_{i,j}^l - A_{i,j}^l \right)^2 </math> </center> where <math> A_{i,j}^l </math> and <math> G_{i,j}^l </math> are the Gram matrices of the style image and the generated image respectively. Therefore three images are required: a style image, a content image, and an initial white noise image. Iterative optimization is then used to add content from one image to the white noise image, and style from the other. An additional parameter is used to balance the ratio of these loss functions. The 19-layer ImageNet-trained VGG network was chosen by Gatys et al. VGG-19 is still commonly used in more recent works, as will be shown in the presented paper, although training datasets vary. Such CNNs are typically used in classification problems by finalizing their output through a series of fully connected layers. For content and style extraction it is the convolutional layers that are required. The method of Gatys et al. is style independent, since the CNN does not need to be trained for each style image.
However, the process of iterative optimization to generate the output image is computationally expensive. ==Other Methods== Other methods avoid the inefficiency of iterative optimization by training a network/networks on a set of styles. The network then directly transfers the style from the style image to the content image without solving the iterative optimization problem. V. Dumoulin et al. trained a single network on $N$ styles [3]. This improved upon previous work where a network was required per style [4]. The stylized output image was generated by simply running a feedforward pass of the network on the content image. While efficiency is high, the method is no longer able to apply an arbitrary style without retraining. =Methodology= Li et al. have proposed a novel method for generating the stylized image. A CNN is still used as in Gatys et al. to extract content and style. However, the stylized image is not generated through iterative optimization or a feed-forward pass as required by previous methods. Instead, whitening and colour transforms are used. ==Image Reconstruction== [[File:image_resconstruction.png|thumb|150px|right|alt=Training a single decoder.|Training a single decoder. X denotes the layer of the VGG encoder that the decoder receives as input.]] An auto-encoder network is used to first encode an input image into a set of feature maps, and then decode it back to an image, as shown in the adjacent figure. The encoder network used is VGG-19. This network is responsible for obtaining feature maps (similar to Gatys et al.). The output of each of the first five layers is then fed into a corresponding decoder network, which is a mirrored version of VGG-19. Each decoder network then decodes the feature maps of the $l$th layer, producing an output image. A mechanism for transferring style will be implemented by manipulating the feature maps between the encoder and decoder networks. First, the auto-encoder network needs to be trained. The following loss function is used <center> <math> \mathcal{L} = || I_{output} - I_{input} ||_2^2 + \lambda || \Phi(I_{output}) - \Phi(I_{input})||_2^2 </math> </center> where $I_{input}$ and $I_{output}$ are the input and output images of the auto-encoder, and $\Phi$ is the VGG encoder. The first term of the loss is the pixel reconstruction loss, while the second term is the feature loss. Recall from "Related Work" that the feature maps correspond to the content of the image. Therefore the second term can also be seen as penalising content differences that arise due to the auto-encoder network. The network was trained using the Microsoft COCO dataset. They use whitening and coloring transforms to directly transform $f_c$ (the VGG feature map of the content image at a certain layer) into $f_{cs}$ such that the covariance matrix of $f_{cs}$ is the same as the covariance matrix of $f_s$ (the VGG feature map of the style image). This process consists of two steps, i.e., a whitening transform (making the covariance the identity) and a coloring transform (matching the covariance of $f_s$). Note that the decoder will reconstruct the original content image if $f_c$ is directly fed into it, but if $f_{cs}$ is fed, it outputs an image with the content of the content image and the style of the style image. ==Whitening Transform== Whitening first requires that the covariance of the data is a diagonal matrix. This is done by solving for the covariance matrix's eigenvalue and eigenvector matrices. Whitening then forces the diagonal elements of the eigenvalue matrix to be the same.
In other words, whitening transforms the known covariance matrix into an identity matrix: for a given feature map $f_c$, whitening transforms it into $\hat{f}_c$ such that $\hat{f}_c \times \hat{f}_c^T = I$. This is achieved for a feature map from VGG through the following steps. # The feature map $f_c$ is extracted from a layer of the encoder network after activation on the content image. This is the data to be whitened. # $f_c$ is centered by subtracting its mean vector $m_c$. # Then, the eigenvectors $E_c$ and eigenvalues $D_c$ are found for the covariance matrix of $f_c$. # The whitened feature map is then given by $\hat{f}_c = E_c D_c^{-1/2} E_c^T f_c$. Note that this is indeed finding the symmetric transformation matrix $A$ in $\hat{f}_c = A f_c$ such that the covariance matrix of $\hat{f}_c$ is an identity matrix. If interested, the derivation of the whitening equation can be seen in [5]. Li et al. found that whitening removed styles from the image. ==Colour Transform== The colour transform is the inverse of the whitening transform, i.e. it can transform a random variable to have a desired covariance matrix. Whitening alone does not transfer style from the style image; it only uses feature maps from the content image. The colour transform uses both $\hat{f}_c$ from above and $f_s$, the feature map from the style image. The colour transform, in this case, transforms $\hat{f}_c$ into $f_{cs}$ such that $\mathrm{cov}(f_{cs}) = \mathrm{cov}(f_s)$; recall that the covariance represents the ''style'' information of the image, so this step matches the style of the style image. # $f_s$ is centered by subtracting its mean vector $m_s$. # Then, the eigenvectors $E_s$ and eigenvalues $D_s$ are calculated for the covariance matrix of $f_s$. # The colour transform is given by $\hat{f}_{cs} = E_s D_s^{1/2} E_s^T \hat{f}_c$. # Recenter $\hat{f}_{cs}$ using $m_s$, i.e., $\hat{f}_{cs}$ = $\hat{f}_{cs}$ + $m_s$. Intuitively, colouring results in a correlation between the $\hat{f}_c$ and $f_s$ feature maps, or rather, $\hat{f}_{cs}$ is a linear transform of the original feature map $f_c$ which takes on the covariance of $f_s$. This is where the style transfer takes place. ==Content/Style Balance== Using just $\hat{f}_{cs}$ as the input to the decoder may create a result that is too extreme in style. To balance content and style, a new parameter $\alpha$ is defined to serve as the style weight that controls the transfer effect. <center> <math> \hat{f}_{cs} = \alpha \hat{f}_{cs} + (1 - \alpha) f_c </math> </center> The authors use $\alpha$ = 0.6 in the style transfer experiments. (A small NumPy sketch of the whitening, colouring and blending steps is given below.) ==Using Multiple Layers== It has been previously mentioned that multiple decoders were trained, one for each of the first five layers of the encoder network. Each layer of a CNN perceives features at different levels. Levels close to the input image will detect lower level local features such as edges. Those levels deeper into the network will detect more complex global features. The style transfer algorithm is applied at each of these levels, which yields the question as to which results, as shown below, to use. [[File:multilevel_features.png|thumb|700px|center|alt=Results of style transfer from each of the first five layers of the encoder network.|Results of style transfer from each of the first five layers of the encoder network.]] Ideally, the results of each layer should be used to build the final output image. This captures the entire range of features detected by the encoder network. First, one full pass of the network is performed.
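The whitening, colouring and blending steps above can be written compactly. The following is a minimal NumPy sketch, assuming the encoder features of one layer have already been flattened to a C-by-(H*W) matrix (channels by spatial positions); it is an illustration only, not the authors' Torch implementation, and the eps term and shapes are assumptions made here for numerical stability and clarity.

<syntaxhighlight lang="python">
import numpy as np

def whiten_color_transform(fc, fs, alpha=0.6, eps=1e-5):
    """Whitening and colouring transform on flattened feature maps.

    fc, fs: arrays of shape (C, H*W) holding the content and style features
    from one encoder layer. Returns the blended features for the decoder.
    """
    # Whitening: center the content features and remove their covariance.
    mc = fc.mean(axis=1, keepdims=True)
    fc_c = fc - mc
    Dc, Ec = np.linalg.eigh(fc_c @ fc_c.T / (fc_c.shape[1] - 1))
    Dc = np.clip(Dc, eps, None)
    fc_hat = Ec @ np.diag(Dc ** -0.5) @ Ec.T @ fc_c

    # Colouring: give the whitened features the covariance of the style
    # features, then re-center with the style mean.
    ms = fs.mean(axis=1, keepdims=True)
    fs_c = fs - ms
    Ds, Es = np.linalg.eigh(fs_c @ fs_c.T / (fs_c.shape[1] - 1))
    Ds = np.clip(Ds, eps, None)
    fcs = Es @ np.diag(Ds ** 0.5) @ Es.T @ fc_hat + ms

    # Blend stylized and original content features to control transfer strength.
    return alpha * fcs + (1 - alpha) * fc
</syntaxhighlight>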
Then the stylised image from the deepest layer (Relu_5_1 in this case) is taken and used as the content image for another iteration of the algorithm, where then the next layer (Relu_4_1) is used as the output. These steps are repeated until the final image is produced from the shallowest layer. This process is summarised in the figure below. [[File:process_summary.png|thumb|700px|center|alt=Process summary of the multi-level stylization algorithm.|The content (C) and style (S) are fed to the VGG encoding network. The output image (I) after a whitening and colour transform (WCT) is taken from the deepest level's decoder. The process is iteratively repeated until the most shallow layer is reached.]] The authors note that the transformations must be applied first at the highest level (most abstract) layers, which capture complicated local structures and pass this transformed image to lower layers, which improve on details. They observe that reversing this order (lowest to highest) leads to images with low visual quality, as low-level information cannot be preserved after manipulating high level features. [[File:Universal_Style_Transfer_Coarse_to_Fine.JPG|thumb|700px|center|alt=(a)-(c) Output from intermediate layers. (d) Reversed transformation order.|(a)-(c) Output from intermediate layers. (d) Reversed transformation order.]] =Evaluation= The success of style transfer might appear hard to quantify as it relies on qualitative judgement. However, the extremes of transferring no style, or transferring only the style can be considered as performing poorly. Consistent transfer of style throughout the entire image is another parameter of success. Ideally, the viewer can recognize the content of the image, while seeing it expressed in an alternative style. Quantitatively, the quality of the style transfer can be calculated by taking the covariance matrix difference $L_s$ between the resulting image and the original style. The results of the presented paper also need to be considered within the contexts of generality, efficiency and training requirements. The implementation for this paper can be found on Github at: * Torch (official) : https://github.com/Yijunmaverick/UniversalStyleTransfer * Keras : https://github.com/eridgd/WCT-TF * PyTorch : https://github.com/sunshineatnoon/PytorchWCT ==Style Transfer== A number of style transfer examples are presented relative to other works. [[File:transfer_results_label.jpg|thumb|700px|center|alt=Style transfer results of the presented paper.|A: See [6]. B: See [7]. C: See [8]. D: Gatys et al. iterative optimization, see [2]. E: This paper's results.]] Li et al. then obtained the average $L_s$ using 10 random content images across 40 style images. They had the lowest average $log(L_s)$ of all referenced works at 6.3. Next lowest was Gatys et al. [2] with $log(L_s) = 6.7$. It should be noted that while $L_s$ quantitatively calculates the success of the style transfer, results are still subject to the viewer's impression. Reviewing the transfer results, rows five and six for Gatys et al.'s method shows local minimization issues. However, their method still achieves a competitive $L_s$ score. Since the qualitative assessment is highly subjective, a user study was conducted to evaluate 5 methods shown in Figure 6. The percentage of the votes each method received is shown in Table 2 (2nd row). It shows that the method presented in this paper receives the most votes for better stylized results. 
<center> [[File:style_transfer_table_2.png]] </center> ==Transfer Efficiency== It was hypothesized by Li et al. that using WCT would enable faster run-times than [2] while still supporting arbitrary style transfer. For a 256x256 image, using a 12GB TITAN X, they achieved a transfer time of 1.5 seconds. Gatys et al.'s method [2] required 21.2 seconds. The pure feed-forward approaches [7], and [8] had times equal to or less than 0.2 seconds. [6] had a time comparable to the presented paper's method. However, [6,7,8] do not generalize well to multiple styles as training is required. Therefore this paper obtained a near 15x speed up for a style agnostic transfer algorithm when compared to leading previous work. The authors also note that WCT was done using the CPU. They intend to port WCT to the GPU and expect to see the computational time be further reduced. ==Other Applications== Li et al.'s method can also be used for texture synthesis. This was the original work of Gatys et. al. before they applied their algorithm to style transfer problems. Texture synthesis takes a reference texture/image and creates new textures from it. With proper boundary conditions enforced these synthesized textures can be tileable. Alternatively, higher resolution textures can be generated. Texture synthesis has applications in areas such as computer graphics, allowing for large surfaces to be texture mapped. The content image is set as white noise, similar to how [2] initializes their output image. Then the reference texture/image is set as the style image. Since the content image is initially random white noise, then the features generated by the encoder of this image are also random. Li et al. state that this increases the diversity of the resulting output textures. [[File:texture_synthesis_label.jpg|thumb|700px|center|alt=Texture synthesis results.|A: Reference image/texture. B: Result from [8]. C: Result of present paper.]] Reviewing the examples from the above figure, it can be observed that the method from this paper repeats fewer local features from the image than a competing feed forward network method [8]. While the analysis is qualitative, the authors claim that their method produces "more visually pleasing results". =Conclusion= Only a couple of years ago were CNNs first used to stylize images. Today, a host of improvements have been developed, optimizing the original work of Gatys et al. for a number of different situations. Using additional training per style image, computational efficiency and image quality can be increased. However, the trained network then depends on that specific style image, or in some cases such as in [3], a set of style images. Till now, limited work has taken place in improving Gatys et al.'s method for arbitrary style images. The authors of this paper developed and evaluated a novel method for arbitrary style transfer in which they present a multi-level stylization pipeline, which takes all level of information of a style into account, for improved results. In addition, the proposed approach is shown to be equally effective for texture synthesis. Their method and Gatys et al.'s method share the use of a VGG-19 CNN as the initial processing step. However, the authors replaced iterative optimization with whitening and colour transforms, which can be applied in a single step. This yields a decrease in computational time while maintaining generality with respect to the style image. After their CNN auto-encoder is initially trained no further training is required. 
This allows their method to be style agnostic. Their method also performs favourably, in terms of image quality, when compared to other current work. =Critique= In the paper, the authors only experimented with layers of VGG19. Given that architectures such as ResNet and Xception perform better on image recognition tasks, it would be interesting to see how residual layers and/or Inception modules may be applied to the task of disentangling style and content, and whether they would improve performance relative to the results presented in the current paper if the encoder were to utilize layers from these alternative convolutional architectures. Additionally, it is worth exploring whether one can invent a probabilistic and/or generative version of the encoder-decoder architecture used in the paper. More precisely, is it possible to come up with something in the spirit of variational autoencoders, wherein the bottleneck layer can be used to sample noise vectors, which can then be input into each of the decoder units to generate synthetic style and content images? Alternative attempts would also involve the study of generative adversarial networks with a perturbation threshold value. GANs can produce surreal images, where the underlying structure (content) is preserved (in CNNs the filters learn the edges, surfaces and shape of the image), provided the Discriminator is trained for style classification (the training set consists of images pertaining to the style that is to be transferred). =Additional Results and Figures= Given in this section are the additional figures of universal style transfer found in the supplementary file. They are typically for larger image sizes and a greater variety of styles. #[[File:style-1.PNG]] #[[File:style-2.PNG]] #[[File:style-3.PNG]] =References= [1] L. A. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In NIPS, 2015. [2] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016. [3] V. Dumoulin, J. Shlens, and M. Kudlur. A learned representation for artistic style. In ICLR, 2017. [4] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016. [5] R. Picard. MAS 622J/1.126J: Pattern Recognition and Analysis, Lecture 4. http://courses.media.mit.edu/2010fall/mas622j/whiten.pdf [6] T. Q. Chen and M. Schmidt. Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337, 2016. [7] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. arXiv preprint arXiv:1703.06868, 2017. [8] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016. [9] Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, A Neural Algorithm of Artistic Style, https://arxiv.org/abs/1508.06576 [10] Karen Simonyan et al. Very Deep Convolutional Networks for Large-Scale Image Recognition. [11] VGG Architectures - [http://www.robots.ox.ac.uk/~vgg/research/very_deep/| More Details] [12] Mechrez, R., Shechtman, E., & Zelnik-Manor, L. (2017). Photorealistic Style Transfer with Screened Poisson Equation. arXiv preprint arXiv:1709.09828. [13] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang. Diversified texture synthesis with feed-forward networks. In CVPR, 2017. [14] D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua. Stylebank: An explicit representation for neural image style transfer.
In CVPR, 2017. Implementation example: https://github.com/titu1994/Neural-Style-Transfer [15] Li, Yanghao, Naiyan Wang, Jiaying Liu and Xiaodi Hou. "Demystifying Neural Style Transfer." IJCAI (2017).
Sick leave absence and the relationship between intra-generational social mobility and mortality: health selection in Sweden
Sunnee Billingsley ORCID: orcid.org/0000-0001-5698-24191
BMC Public Health volume 20, Article number: 8 (2020) Cite this article
Poor health could influence how individuals are sorted into occupational classes. Health selection has therefore been considered a potential modifier to the mortality class gradient through differences in social mobility. Direct health selection in particular may operate in the short-term as poor health may lead to reduced work hours or achievement, downward social mobility, unemployment or restricted upward mobility, and death. In this study, the relationship between social mobility and mortality (all-cause, cancer-related, cardiovascular disease-related (CVD), and suicide) is explored when the relationship is adjusted for poor health.
Using Swedish register data (1996–2012) and discrete time event-history analysis, odds ratios and average marginal effects (AME) of social mobility and unemployment on mortality are observed before and after accounting for sickness absence in the previous year.
After adjusting for sickness absence, all-cause mortality remained lower for men after upward mobility in comparison to not being mobile (OR 0.82, AME -0.0003, CI − 0.0003 to − 0.0002). Similarly, upward mobility continued to be associated with lower cancer-related mortality for men (OR 0.85, AME -0.00008, CI − 0.00002 to − 0.0002), CVD-related mortality for men (OR 0.76, AME -0.0001, CI − 0.00006 to − 0.0002) and suicide for women (OR 0.67, AME -0.00002, CI − 0.000002 to − 0.00003). The relationship between unemployment and mortality also persisted across most causes of death for both men and women after controlling for previous sickness absence. In contrast, adjusting for sickness absence renders the relationship between downward mobility and cancer-related mortality not statistically different from the non-mobile.
Health selection plays a role in how downward mobility is linked to cancer related deaths. It additionally accounts for a portion of why upward mobility is associated with lower mortality. That health selection plays a role in how social mobility and mortality are related may be unexpected in a context with strong job protection. Job protection does not, however, equalize opportunities for upward mobility, which may be limited for those who have been ill. Because intra-generational upward mobility and mortality remained related after adjusting for sickness absence, other important mechanisms such as indirect selection or social causation should be explored.
Relative inequalities in mortality by level of social class remain and have even increased in Europe in recent decades, including in Sweden [1]; individuals in higher social classes tend to live longer than those further down in the social hierarchy. Health selection has been considered a potential modifier to this mortality gradient through differences in social mobility; the health selection hypothesis predicts that individuals are sorted into classes on the basis of their health [2, 3]. Upward intra-generational mobility—changing a job to one that is in a higher occupational class—may not be accessible to all if opportunities are restricted by factors such as poor health. Additionally, poor health may lead to downward social mobility—taking a job in a lower social class. This hypothesis is therefore predicated on health being a determining factor of social mobility. But findings are inconclusive on whether this is the case. Much research suggests that social mobility is not associated with previous health [4,5,6,7,8,9,10,11], whereas some findings indicate a modest effect of health on social mobility [12,13,14,15,16,17], on financial deprivation [18] and the attainment of supervisory/managerial positions and income change [19].
In studies on the association between social mobility and mortality, the usual research design may have minimized the role of health selection. In Swedish research in particular [20,21,22,23], mobility is measured by a difference in social class at two time points that are years apart (generally 5 to 15 years) with a mortality follow-up period beginning after the second measure of class. The longer the interval between measures, the more studies may suffer from a "healthy survivor effect"; individuals who survive until the second measure are healthier than others. Moreover, mobility events that occur during the mortality follow-up period are not observed in these studies. Such an analytical design captures long-term relationships between social mobility and mortality, which are suitable for causal or indirect health selection mechanisms that operate over a long window of time [2, 4, 7]. In contrast, direct health selection may operate more in the short-term as poor health may immediately lead to reduced work hours or achievement, downward social mobility or no upward mobility [18, 24], and death. Occupational class measures must therefore be observed continually and at close intervals. In a recent application of this approach, intra-generational upward mobility was particularly linked to lower mortality risk across a wide range of causes of death for men, net of origin and destination class associations [25]. The current study builds on this preliminary support for the idea that health selection plays a role in how mobility and mortality are linked in Sweden by examining the contribution of health selection to this relationship more directly.
The aim of this study is to examine whether accounting for health status in the previous year weakens the short-term relationship between intra-generational social mobility and premature mortality. Standard measures in the literature for measuring health have included self-reported measures of poor health (physical and psychiatric), cardiometabolic factors, or medical records such as hospital admission. Health is measured in the current study by whether an individual took sick leave from work, referred to as sickness absence. This measure is objective, covers a broad spectrum of severe chronic diseases and disorders, as well as illnesses that might not be included in hospitalization records because of treatment at out-patient clinics. Sickness absence in Sweden has been changing with an increasing trend of psychiatric disorders (stress, in particular), reaching around 35% of all sickness absence in 2014 and a decreasing trend in musculoskeletal disorders, falling to 27% by 2014 [26]. Taking sick leave has been linked to negative earnings and career trajectories [27], as well as to all-cause mortality, cancer and CVD-related mortality and suicide, including psychiatric-related sickness absence [28,29,30,31].
These specific causes of death—all-cause mortality, cancer-related mortality, CVD-related mortality and suicide— are analyzed separately in this study. Over the time period of this study (1997–2012), CVD-related causes of death were most prominent for both men and women, followed by cancer [32]. External causes of death were also an important contributor, but these causes of death have not been found to be related to social mobility in Sweden [25]. Health selection is likely to be more present in mortality due to chronic [33] rather than acute diseases or disorders. All three causes of death here can be the outcome of conditions labeled as chronic [32], and there is great variation within the categories of cancer, mental disorders and cardiovascular disease. However, it may be that both cancer-related deaths and suicide are more likely preceded by periods of physical or mental illness that greatly reduce work capacity and impact career trajectories than cardiovascular disease.
If poor health leads to sickness absence and restricts access to upward mobility or increases incidence of downward mobility, the relationships between social mobility and mortality should be weakened when accounting for sickness absence. This hypothesis should be particularly relevant for cancer, where sick leave would be taken on the basis of poor health related to these diseases, as well as to suicide, where sick leave would presumably be taken for psychiatric disorders such as depression. However, the relationship between sickness absence and cause of death is not necessarily straightforward; for example, higher suicide risk was established for individuals in Sweden who had also taken somatic sickness absence, such as for musculoskeletal and digestive diseases [28]. CVD-related mortality encompasses major sudden causes of death (such as massive stroke, fatal arrhythmias, acute myocardial infarction, massive pulmonary embolism and acute aortic catastrophe), which may lessen the role that health selection plays in social mobility because there is less time for illness to influence careers.
The pathway leading from poor health to downward mobility may entail either a preference for a different set of circumstances that are potentially less demanding or that an individual who has lost their position was not able to obtain a new job in the same occupational class. An alternative labor market consequence is that an individual is not able to find new employment and is unemployed. This study therefore additionally explores unemployment as a potential extension of downward mobility [34]. Whether the health selection hypothesis has merit may depend on the country or social policy context being examined [24, 35]. In countries with a well-developed social benefits system and job protection policies such as Sweden [36, 37], individuals should be protected from losing their job if a physical or psychiatric disorder leads to reduced work ability. The Swedish Law on employment states that a dismissal of employment cannot be made without objective reasons that do not include poor health. In addition, the employer is obliged to help the previously ill worker return to work, and if necessary also offer a new job at the workplace. Sick leave benefits have maintained a replacement rate of 75 to 80% of an individual's wage (up to a maximum income level) over the last decades, which supports those in ill health to take a leave of absence and maintain a similar standard of living. Sickness absence is also provided with flexibility, whereby individuals can take sickness benefits that amount to 25, 50, 75% or 100% of their work time, depending on the extent of reduced work ability. These provisions, including the anti-discrimination legislation related to sick workers, should minimize direct health selection in the Swedish context [24]. In relation to health selection into upward mobility, however, these provisions are not likely to moderate how promotion possibilities are limited following sickness absence, which has been demonstrated in the US [38].
Register data
This study is based on Swedish register data that provides basic demographic information, socio-economic variables, sickness absence and mortality timing from population-based registers collected by Statistics Sweden. All-cause mortality was considered as well as specific causes of death delineated according to ICD-10 coding: deaths due to all cancers (C00-C97, D00-D48), CVD-related mortality (I00-I99) and suicide, including undetermined causes of death (X60-X84, Y10-Y34). Annual occupational information was provided over the time period 1996–2012 (in October/November each year). Complete coverage of occupational information was provided for all individuals who work in the public sector, and all those working in private companies with 500 or more employees. Information on individuals working in smaller private firms was restricted to a random selection each year, in which the chance of being included in the register declined with firm size. When occupational information was missing for this reason, the previous occupation registered was used if the difference in income was less than 10%; otherwise these observations were coded as missing. The population consists of those who had an occupation registered in 1996 and who were 65 years or younger. Individuals were censored when they reached age 66.
Origin class was defined as the occupational class in 1996 and is a time-constant covariate. The social class measure is based on the Erikson-Goldthorpe-Portocarero (EGP) occupational schema and coded according to the European Socioeconomic Classification (EseC) [39]. Due to lack of data on self-employed and small employers the measure contained seven categories instead of nine, which were combined into four more stringently graded social class groups (from high [1] to low [4]): [1] EseC 1: large employers; professional, administrative and managerial occupations; higher grade technician and supervisory occupations, [2] Esec 2: intermediate occupations and lower supervisory or lower technician occupations, [3] EseC 3: lower services, sales and clerical occupations, [4] EseC 4: lower technical occupations and routine occupations. Destination class is categorized each year from 1997 to 2012 and includes the four occupational class categories, as well as four others that include [1] not employed (including retired or inactive), [2] studying, [3] missing information, or [4] unemployed, which was identified through not being classified in an occupational class and having received unemployment benefits that year. Social mobility is categorized as: [1] upward mobility: destination class is higher than origin class [2]; downward mobility: destination class is lower than origin class [3]; not mobile: destination and origin class are the same [4]; not applicable, which overlaps with the statuses in the destination class category of not employed, unemployed, studying or having missing information.
Each individual is categorized according to whether they had any sickness absence paid by the Swedish Social Insurance Agency. According to Swedish legislation sickness absence, or sick leave, begins the second or third week of an illness, depending on the year and changes in legislation. Shorter sick leaves such as those due to the flu are not accounted for in this measure. After a sickness absence of seven days, a doctor's note must be provided that verifies a disease or disorder that reduces the individuals' work ability in order to receive sickness benefit payment by the social insurance system. Individuals may have multiple short term sick leaves (e.g., for one week) and be considered as having no sickness absence as no days were paid for by the Swedish Social Insurance Agency. The measure also does not allow us to know how many sick leaves are included in the total number of days paid on sick leave. Based on the sex- and year-specific distribution of days taken, quartiles were created to remove the effect of changes in legislation. Men and women are categorized as taking 0 days of sickness absence or in the bottom 25% of days taken (Q1), 25–50% (Q2), 50–75% (Q3), top 25% (Q4). The quartile measure of sickness absence allowed a non-linear and nuanced measure of poor health that provided the best model fit: Akaike and Bayesian Information Criteria values were at their minimum when comparing the baseline model using the quartile measure of sickness absence rather than other measures (continuous number of days or a dummy variable for having taken sickness absence).
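For illustration, a sketch of how such a sex- and year-specific quartile variable could be constructed in Python/pandas is given below; the column names (sick_days, sex, year) are hypothetical and this is not the procedure used to build the original register extract.

```python
import pandas as pd

def add_sickness_quartiles(df: pd.DataFrame) -> pd.DataFrame:
    """Assign sex- and year-specific quartiles of paid sick-leave days.

    Person-years with zero paid days keep their own category; positive
    day counts are split into Q1-Q4 within each sex and calendar year.
    """
    df = df.copy()
    df["sick_quartile"] = "0 days"
    positive = df["sick_days"] > 0
    # Percentile rank of paid sick-leave days within sex and year.
    pct = (
        df.loc[positive]
        .groupby(["sex", "year"])["sick_days"]
        .rank(pct=True, method="average")
    )
    labels = pd.cut(
        pct,
        bins=[0, 0.25, 0.5, 0.75, 1.0],
        labels=["Q1", "Q2", "Q3", "Q4"],
        include_lowest=True,
    )
    df.loc[positive, "sick_quartile"] = labels.astype(str)
    return df
```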
Discrete time event-history analysis [40] is used to examine the relationship between social mobility and mortality. This is a generalized linear model that is an extension of the piece-wise constant proportional hazards model, in which the odds ratios for mortality are conditional upon having survived up to that point in time. Time is treated as a discrete factor with one parameter for each segment of time, here represented by dummies for five-year age groups. In other words, the outcome is the odds of death at a given age conditional on the event not having occurred yet and compared to having the event at a later time. The model is specified as follows
$$ \log \left(\frac{P_{it}}{1-P_{it}}\right)=\alpha D_{it}+\beta C_{it}+\gamma V_{it} $$
where Pit is the probability of death during interval t. Dit is a vector of dummies for age groups defined in five-year intervals, which serve as the baseline risk with coefficients α. Cit is a vector of covariates that are constant over time and Vit is a vector of covariates that vary over time, with coefficient vectors β and γ, respectively. All models were estimated with Stata version 15.
In the baseline model, the relationship between social mobility and mortality is adjusted for the time-constant variable of foreign-born status as well as the time-varying covariates of educational enrolment and attainment, marital status, and residence in a metropolitan area. These covariates aim to capture the confounding influence of human capital, resources, background and partnership characteristics that have well-established links to mortality and potentially to social mobility. In the second model, sickness absence in the previous year (entered into the model as a leading variable) is added to observe whether this potential antecedent explains the relationship between social mobility and mortality. If sickness absence leads to or prevents mobility experiences that are relevant to mortality, this alignment would allow sickness absence to absorb the relationship between social mobility and mortality. Individuals are censored in our data if they emigrate, die, turn age 66, or when the end of the study period is reached.
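For readers who prefer to see the estimation strategy in code, the sketch below expresses the two models as person-period logistic regressions in Python/statsmodels; the data frame pp and its column names (death, age_group, mobility, origin, destination, foreign_born, marital, metro, education, sa_lag) are assumptions for illustration, and the paper's own estimates were produced in Stata 15.

```python
import statsmodels.formula.api as smf

# pp: one row per person-year, with hypothetical columns
#   death      0/1 indicator of death in that year (the outcome P_it)
#   age_group  five-year age band (the baseline-hazard dummies D_it)
#   mobility   "upward" / "downward" / "not mobile" / "not applicable"
#   origin, destination, foreign_born, marital, metro, education: controls
#   sa_lag     sickness-absence quartile in the previous year (leading variable)

baseline = (
    "death ~ C(age_group) + C(mobility) + C(origin) + C(destination)"
    " + foreign_born + C(marital) + metro + C(education)"
)

model1 = smf.logit(baseline, data=pp).fit(disp=0)                    # baseline model
model2 = smf.logit(baseline + " + C(sa_lag)", data=pp).fit(disp=0)   # + sickness absence

print(model1.summary())
print(model2.summary())
```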
The relationship between social mobility and mortality is assessed net of accumulated advantages and disadvantages by controlling for origin and destination status. Due to concerns that the origin and destination status effects would be biased by the mobile individuals and this would not allow a clean estimation of the mobility "effect", the diagonal reference model (DRM) has been argued to be the most suitable estimator [41, 42]. In this study, the simpler strategy of controlling for origin and destination status is chosen, which better accommodates time-varying class indicators that include episodes of non-labor market participation and performed very similarly in a comparison with the DRM on the same data [43].
Table 1 shows descriptive statistics of our sample: the first column shows the total distribution over the covariates for men and women separately and the following columns show row percentages of person/years without sickness absence and with sickness absence recorded in quartiles. The p-value for a chi-square test of independence between sickness absence and each covariate is recorded in the final columns. As is evident, taking sick leave and the extent of sick leave are statistically related to each of the covariates included in this analysis. Overall, sickness absence is higher among women than men and highest among those who are older, have a prior marriage/registered partnership, are foreign born, have lower education (≤2 years of secondary education), or live in a rural area or small city. Additionally, for both the origin and destination social class variables, a social gradient appears in which sickness absence is higher in lower social classes. Those who were studying generally had a similar incidence of having been on sick leave the previous year to those who were working, except for a higher frequency of very long sick leaves. This indicates a potential pattern in which individuals who have had long-term illnesses may be more prone to going back to school for re-training. The category "no activity" also has a high share of sickness absence in the destination class variable, as well as the unemployed category and "missing", which includes employment in a smaller private firm and self-employment. These three categories largely make up the "not applicable" category in the social mobility variable, which correspondingly shows high sickness absence. Whereas we see that short and medium bouts of sick leave in the previous year are most frequent among the unemployed individuals, the longest sick leaves are most frequent for those who subsequently become inactive. In general, these patterns reflect what we would expect to see if those who stay in employment are healthier than those who exit. Relative to those who are not mobile, the share of sickness absence in the previous year is greatest for those who are downwardly mobile, and lowest among upwardly mobile individuals. Experiencing prolonged sickness absence was rarest among very young individuals and for men and women who were upwardly mobile; only 0.92% of men and 2% of women who were upwardly mobile were in the highest quartile of sickness absence the year before.
Table 1 Descriptive statistics among men and women with and without registered sickness absence (SA, leading variable): Row percent (%) of person/years in each background variable category. SA measured in quartiles (Q1-Q4)
Table 2 displays descriptive statistics for deaths in the sample and how deaths are distributed across levels of sickness absence in the previous year. For all-cause mortality, 0.22 and 0.16% of person/years ended in death for men and women, respectively, whereas these shares were 0.09 and 0.10% for cancer-related mortality, 0.07 and 0.02% for CVD-related mortality and 0.02 and 0.01% for suicide. When men and women had not had sickness absence in the previous year, the shares of all-cause mortality dropped to 0.18 and 0.11%, respectively, and increased to 1.21 and 0.80% when sickness absence had been among the highest quartile in the preceding year. A gradient across quartiles of sickness absence appears for both men and women for each of the causes of death in this study. The only exception is CVD-related mortality and suicide for women, in which little or no change in mortality is observed until the top two quartiles of sickness absence. The p-value for a chi-square test of independence once again shows a bivariate relationship; sickness absence is statistically linked to all-cause mortality as well as the three specific causes of death. These patterns support the use of sickness absence as a measure of health that is linked to mortality.
Table 2 Men's and women's mortality by sickness absence (SA, leading variable measured in quartiles): Percent of all person/years
Figure 1 shows two patterns in how social mobility is linked to sickness absence descriptively. First, upward mobility is negatively related to sick leave length. The difference between no sick leave and any sick leave is greatest for men who were upwardly mobile, suggesting that they have the greatest returns to good health and that upward mobility for women may involve less selection. Second, downward mobility has an inverted U-shaped relationship to previous sick leave length, and this occurs similarly for men and women. Downward mobility is more frequent for those who have been on short or moderate length sick leave than no leave at all, but downward mobility is even less frequent for those who have been on long sick leaves, likely because of the higher likelihood that they subsequently became inactive.
Social mobility and sickness absence in prior year: share of person/years
Table 3 shows selected results from discrete time hazard regressions on the relationship between intra-generational social mobility and mortality. The first column provides the baseline estimates that are adjusted for a set of control variables, whereas the second column provides estimates when also adjusted for sickness absence in the previous year. The estimates for downward and upward mobility are in reference to being non-mobile, unemployment estimates are in reference to blue collar working class positions, and estimates for the quartile location in the distribution of sick leave length in the previous year are in reference to not having been on sick leave.
Table 3 Discrete time hazard model (selected) results for all-cause mortality, cancer-related mortality and suicide, with and without sickness absence
The indicator for poor health, sickness absence in the previous year, was consistently related to all-cause mortality and the three specific causes of mortality, net of other individual characteristics controlled for in the model. Previous poor health was most strongly linked to cancer-related mortality for both men and women, followed by suicide. It was the least linked to CVD-related mortality. The increase in the odds of mortality at each higher quartile of sick leave length was also steepest for cancer-related mortality. However, for women, the highest odds of mortality associated with taking only a short sickness absence (Q1) were found for suicide rather than cancer-related mortality.
All-cause mortality
The odds of death due to all causes were lower (OR 0.77, 95% CI 0.71–0.84) when men had been upwardly mobile, but upward mobility was not linked to all-cause mortality for women, and downward mobility was not statistically related to all-cause mortality for either men or women. Unemployment increased the odds of all-cause mortality by 1.67 (95% CI 1.56–1.78) for men and 1.66 (95% CI 1.51–1.82) for women, relative to being employed in a blue-collar working class occupation. When adjusting for sickness absence in the relationship between upward mobility and mortality for men, the odds of death declined from being 23% lower than the non-mobile to 18% lower. When accounting for sick leave, the excess odds of death associated with unemployment declined to 53% higher for men and 51% higher for women.
Cancer-related mortality
All-cause cancer mortality was associated with being downwardly mobile among both men and women, compared to not being mobile (Table 3: OR 1.23, 95% CI 1.07–1.42; OR 1.14, 95% CI 1.01–1.30, respectively). When adjusting for prior sickness absence, this association was reduced among men (OR 1.16, 95% CI 1.00–1.33) and women (OR 1.08, 95% CI 0.95–1.23): the difference between not being mobile and being downwardly mobile was no longer statistically significant among women. Among men, upward mobility was associated with lowered odds of death due to cancer (OR 0.78, 95% CI 0.68–0.89), but the lower odds lessened from 0.78 to 0.85 when controlling for sickness absence. No relationship between upward mobility and cancer-related mortality appeared for women in the baseline model, but upward mobility was linked to higher odds of cancer mortality when adjusting for sickness absence (OR 1.15, 95% CI 1.04–1.28). That mortality is higher for upwardly mobile women is likely linked to the predominance of breast cancer related deaths and its unusual positive class gradient. The difference related to accounting for having been on sick leave may be linked to early detection and how this intersects with class and mobility. Unemployment also remained positively associated with cancer-related mortality when accounting for sickness absence (OR 1.40 for men, 95% CI 1.23–1.58; OR 1.57 for women, 95% CI 1.38–1.78).
CVD-related mortality
Neither social mobility nor unemployment was linked to CVD-related mortality for women. For men, upward mobility was negatively associated with CVD-related mortality, both before and after adjusting for sickness absence (OR 0.76, 95% CI 0.66–0.88), and the difference between the two estimates appeared to be negligible. Similarly, the estimates related to unemployment remained mostly stable when including sickness absence in the model for men (OR 1.38, 95% CI 1.23–1.56).
Suicide
Table 3 shows that upward mobility was associated with lowered odds of suicide in men and women, respectively (OR 0.76, 95% CI 0.59–0.99; OR 0.65, 95% CI 0.43–0.96). These associations were no longer statistically significant when adjusting for sickness absence. Downward mobility was not associated with suicide for men or women. Odds ratios for suicide were elevated when men and women, respectively, had been unemployed (OR 1.85, 95% CI 1.49–2.29; OR 1.55, 95% CI 1.03–2.33), but this association disappeared for women when sickness absence in the previous year was controlled for (OR 1.40, 95% CI 0.93–2.11).
Average marginal effects
Stepwise comparison of odds ratios—i.e., comparing odds ratios from two models in which an additional variable is introduced in the second model—may be inaccurate due to changes in omitted variable bias created by including an additional variable [44]; therefore, more careful conclusions about health selection are drawn from a comparison of average marginal effects (AME) estimated from the baseline model and from the model that accounts for sickness absence in the previous year. AMEs are also useful because they give more substantive information on how a factor matters; they reflect the average influence of the situation that actually exists in the population. Figures 2, 3, 4 and 5 display how the AMEs (the difference in predictive margins between the non-mobile and the mobile) change when accounting for prior sickness absences. AMEs are calculated for all those who were observed in an occupational class. If health selection into mobility and mortality were playing a role in the social mobility and mortality relationship, we would expect to see the average marginal effect become either statistically insignificant or shift toward 0 when accounting for sickness absence, which would mean that the difference between the non-mobile and the mobile disappears or lessens.
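As an illustration of this predictive-margins logic, the short sketch below contrasts an all-upwardly-mobile counterfactual with an all-non-mobile counterfactual over the same person/years; the fitted model objects and the category labels are the hypothetical ones from the sketch above, not the paper's actual code.

```python
def ame_mobility(fit, pp_in_class, move="upward"):
    """AME of mobility versus non-mobility: the difference in mean predicted
    death probability over the person/years observed in an occupational class."""
    mobile = pp_in_class.assign(mobility=move)             # counterfactual: everyone mobile
    nonmobile = pp_in_class.assign(mobility="not mobile")  # counterfactual: everyone non-mobile
    return (fit.predict(mobile) - fit.predict(nonmobile)).mean()

# usage, with pp_in_class = pp[pp["mobility"] != "not applicable"]:
# ame_base = ame_mobility(model1, pp_in_class)   # before adjusting for sickness absence
# ame_adj  = ame_mobility(model2, pp_in_class)   # after adjusting for sickness absence
```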
Average marginal effects in relation to no mobility on all-cause mortality
Average marginal effects in relation to no mobility on cancer-related mortality
Average marginal effects in relation to no mobility on CVD-related mortality
Average marginal effects in relation to no mobility on suicide
The predicted relationship between social mobility and mortality is largely the same as the pattern observed for the previous odds ratios. For all-cause mortality (Fig. 2) and CVD-related mortality (Fig. 4), the only significant relationship appears for men's upward mobility and this relationship remains even after adjusting for previous sickness absence. Figure 3 displays the same finding for cancer-related mortality for men, where the lowered mortality related to upward mobility was weakened by controlling for previous sick leave, but remained statistically distinct from the predicted mortality of the non-mobile. The predicted mortality of the downwardly mobile, however, became statistically indistinguishable from that of the non-mobile in the average marginal effects for both men and women.
In relation to suicide, the influence of upward mobility remained distinct from non-mobility for women even when adjusting for sickness absence. In contrast to the mobility patterns for most other causes of death, both upward and downward mobility are related to lower mortality in men than non-mobility. Adjusting for sickness absence alters the predicted influence of upward mobility for men in the expected direction of weakening the relationship, but removing the potential influence of illness leading to downward mobility slightly strengthens the lower predicted mortality associated with downward mobility. Removing the influence of illness therefore appears to potentially increase the selectivity of those who are downwardly mobile toward being less at risk of suicide.
The changes due to controlling for sickness absence in the predicted mortality related to unemployment are shown in Figs. 6 and 7 for women and men, respectively. In these figures, the reference group is the professional class in order to show how both the lowest class (blue collar working class) and unemployment compare. In addition, the social mobility and origin status variables were not kept in these models due to the inability to estimate AMEs of unemployment with multiple overlapping categories; dropping these two variables did not change the odds ratios for unemployment. In relation to the professional class, unemployment was related to every cause of death for men and women, and adjusting for sickness absence did not absorb the relationship with any cause of death except for cancer for men. The predicted influence on mortality was weakened when controlling for previous sickness absence for women's cancer-related mortality and for all-cause mortality for both men and women. However, the comparison between unemployment and blue collar working class positions shows that unemployed men are still at a higher risk of cancer-related mortality, and the difference due to a change in reference groups likely reflects the higher selection out of the labor market for those with cancer who previously worked in blue collar working class jobs. The predicted mortality due to CVD and suicide did not differ between blue collar working class and unemployed women before or after including sickness absence.
Average marginal effects in relation to working in the professional class, women
Average marginal effects in relation to working in the professional class, men
The aim of this study was to examine how poor health intersects with social mobility and its relationship to mortality. If recent health in terms of sickness absence explains the short-term association between intragenerational social mobility and mortality, then we can conclude that health selection plays a role in how individuals are sorted into classes and in the mortality gradient. Descriptive statistics indicate health selection both out of the labor force and into mobility: among men and women, a higher share of prior sickness absence was associated with inactivity, downward mobility and unemployment, and lower levels of sickness absence were evident among the upwardly mobile compared to non-mobile states. For each cause of death in men and women, mortality is also highest when there has been heavy use of sickness absence.
The results from multivariate discrete time hazard analyses present similar patterns when looking at all-cause mortality and CVD-related mortality, which is not unexpected given that cardiovascular diseases are the leading cause of death in Sweden for men and women. When predicting mortality differences, the results for both all-cause and CVD-related mortality suggest only minor health selection effects in the relationship between upward mobility and mortality. Men appear to be less at risk of CVD-related mortality when they have been upwardly mobile and this lower risk is only moderately altered when accounting for sick leave. A similar pattern appears for other causes of death, where previous poor health weakens but does not explain the lower cancer-related mortality of men and lower suicide mortality of women who have been upwardly mobile. This implies that other causal or indirect selection mechanisms may play a role in this relationship, potentially including well-being benefits of having achieved a higher class status or characteristics of an individual that lead to less career advancement as well as to more cancer- and CVD-related mortality for men and more suicide mortality for women. No research to date offers evidence on how positive changes in well-being through upward mobility might influence mortality. Because estimates in this study are net of origin and destination effects, the advantages and disadvantages of class position such as income and social prestige as well as work environment characteristics should not play a role in the relationship. How upward mobility may offer protection from premature mortality is a question to answer in future research.
In contrast, previous poor health weakens the relationship between upward mobility and suicide for men. To the extent that the relationship is explained by prior health and selection, it may be that individuals choose not to search for higher job positions; alternatively, they may not be selected for promotion by an employer due to stigma related to the disorder, or not hired for a new position due to lower productivity that may result from having been on sick leave. These pathways do not seem to be the same for women, as upward mobility was rarely linked to lower mortality for women. This confirms the descriptive finding that upward mobility and sickness absence were less related for women than men; we know that women face different obstacles in reaching higher social class positions than men [45] and health appears potentially more important to men's upward mobility prospects than women's.
Having previously been in poor health and taken sick leave from work does largely explain the relationship between downward mobility and cancer mortality for both women and men. Chronic diseases such as cancer were expected to be the most likely to create health selection into social mobility and mortality, since cancer mortality should be preceded by a period of time in poor health long enough to affect a career. Although suicide may be thought to be preceded by psychiatric disorders that potentially influence social mobility chances [33] as well, previous studies have found little support for selection effects within the labor market [4, 9, 10, 14, 17, 18]. Psychiatric health selection effects may be more evident in the pathway out of the labor market, particularly for those individuals with very severe psychiatric disorders such as schizophrenia [9, 17].
No earlier study has examined health selection of cancer disease into social mobility or by using cancer-related death as an outcome. The scenario that is indicated by the findings in this study is that individuals with cancer often continue to work but change jobs to a lower occupational class, perhaps to lessen work duties and responsibilities because of limited capacity. Due to the Swedish context, where regulation surrounding sickness absence protects workers' positions, we might suspect this is a process solely resulting from individuals' preferences. Unemployment, however, is less likely to be the result of an individual's choice, healthy or in poor health. That previous sickness absence weakens the relationship between unemployment and all-cause and cancer-related mortality indicates that Swedish legislation may not be offering enough job protection for those in poor health, who are not able to leave the labor market completely but also cannot find a new job and begin receiving unemployment benefits instead. More research is needed to explore whether employers are discriminating against those who have been on sick leave when they return to work.
Methodological considerations
Since the occupational class data only include a random sample of those persons working in smaller private firms and no individuals who become self-employed, the observations not covered by those data are coded as missing and do not contribute to the social mobility estimates; the results cannot be generalized to individuals in very small firms or those who are self-employed. The sickness absence measure also comes with a few limitations. First, we do not know how many episodes of illness a person had during a year. For those who had multiple episodes, the total number of days spent ill is underestimated because the employer-paid days are not counted for each sick leave spell in our data. Second, sickness absence, poor health and diagnosed illnesses do not perfectly overlap [46], and we can expect that sickness absence can sometimes be for common conditions that are not fatal, such as back pain. Nevertheless, our results show a strong mortality gradient associated with the measure of sickness absence, confirming that the measure conveys either the extent of illness or an overall health potential that is related to mortality.
Across the causes of death studied here, upward mobility appears to be the main pathway through which social mobility is linked to mortality in Sweden. Better longevity for upwardly mobile men largely persists after adjusting for previous poor health. However, the overall conclusion of this study is that health selection accounts for a small part of why upward mobility is associated with lower mortality and plays a more definitive role in how downward mobility is linked to cancer-related deaths. Under an analytical strategy that does not preclude the role of direct health selection, individuals in Sweden therefore appear to be sorted into occupational trajectories according to health. That health selection plays any role in how social mobility and mortality are related in Sweden may be unexpected in a context with strong job protection. Job protection does not, however, equalize opportunities for upward mobility, which may be reduced for those who have been ill. But because intragenerational upward mobility and mortality remained related after adjusting for an objective and broad measure of poor health, the findings of this study point to other important pathways to explore through which social mobility and mortality are linked, such as indirect selection or social causation.
All relevant data is owned by Statistics Sweden and is available upon request by interested researchers who apply and are granted access to the data. Data are available from Statistics Sweden https://www.scb.se/en/ for researchers who meet the criteria for access to confidential data. The authors confirm they had no special access or privileges and that other researchers may access the data in the same way.
AME:
Average marginal effect
CVD:
Cardiovascular disease
DRM:
Diagonal reference model
EGP:
Erikson-Goldthorpe-Portocarero
EseC:
European Socioeconomic Classification
ICD-10:
International classification of diseases and related health problems, 10th revision
Mackenbach JP, Kulhánová I, Artnik B, Bopp M, Borrell C, Clemens T, et al. Changes in mortality inequalities over two decades: register based study of countries. BMJ [Internet]. 2016;353:i1732 Available from: http://www.ncbi.nlm.nih.gov/pubmed/27067249, [cited 2018 Sep 24].
Dahl E, Kjærsgaard P. Social mobility and inequality in mortality: an assessment of the health selection hypothesis. Eur J Pub Health. 1993;3:124–32.
Claussen B, Smits J, Naess O, Smith GD. Intragenerational mobility and mortality in Oslo: social selection versus social causation. Soc Sci Med. 2005;61(12):2513–20.
Novak M, Ahlgren C, Hammarstrom A. Social and health-related correlates of intergenerational and intragenerational social mobility among Swedish men and women. Public Health [Internet]. 2012;126(4):349–57 Available from: https://doi.org/10.1016/j.puhe.2012.01.012.
Elovainio M, Ferrie JE, Singh-Manoux A, Shipley M, Batty GD, Head J, et al. Socioeconomic differences in cardiometabolic factors: social causation or health-related selection? Evidence from the Whitehall II cohort study, 1991-2004. Am J Epidemiol. 2011;174(7):779–89.
Ki M, Sacker A, Kelly Y, Nazroo J. Health selection operating between classes and across employment statuses. J Epidemiol Community Health [Internet]. 2011; [cited 2018 Sep 17];65(12):1132–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/20817930.
Lundberg O. Childhood living conditions, health status, and social mobility: a contribution to the health selection debate. 1991;7(2).
Van De Mheen H, Stronks K, Schrijvers CTM, MacKenbach JP. The influence of adult ill health on occupational class mobility and mobility out of and into employment in the Netherlands. Soc Sci Med. 1999;49(4):509–18.
Tiikkaja S, Sandin S, Hultman CM, Modin B, Malki N, Sparén P. Psychiatric disorder and work life: a longitudinal study of intra-generational social mobility. Int J Soc Psychiatry. 2016;62(2):156–66. Available from: http://journals.sagepub.com/doi/10.1177/0020764015614594, [cited 2018 Sep 17].
Eaton WW, Muntaner C, Bovasso G, Smith C. Socioeconomic status and depressive syndrome: the role of inter- and intra-generational mobility, government assistance, and work environment. J Health Soc Behav [Internet]. 2001;42(3):277–94 Available from: http://www.ncbi.nlm.nih.gov/pubmed/11668774, [cited 2018 Sep 17].
Elstad JI, Krokstad S. Social causation, health-selective mobility, and the reproduction of socioeconomic health inequalities over time: panel study of adult men. Soc Sci Med. 2003;57(8):1475–89.
Cardano M, Costa G, Demaria M. Social mobility and health in the Turin longitudinal study. Soc Sci Med. 2004;58(8):1563–74.
Power C, Matthews S, Manor O. Inequalities in self rated health in the 1958 birth cohort: lifetime social circumstances or social mobility? BMJ [Internet]. 1996;313(7055):449–53 Available from: http://www.ncbi.nlm.nih.gov/pubmed/87763101, [cited 2018 Sep 17].
Power C, Stansfeld S, Matthews S, Manor O, Hope S. Childhood and adulthood risk factors for socio-economic differentials in psychological distress: evidence from the 1958 British birth cohort. Soc Sci Med [Internet]. 2002;55(11):1989–2004 Available from: https://www.sciencedirect.com/science/article/pii/S0277953601003252, [cited 2018 Sep 17].
Manor O, Matthews S, Power C. Health selection: the role of inter- and intra-generational mobility on social inequalities in health. Soc Sci Med. 2003;57(11):2217–27.
Halleröd B, Gustafsson J-E. A longitudinal analysis of the relationship between changes in socio-economic status and changes in health. Soc Sci Med [Internet]. 2011 Jan ;72(1):116–123. Available from: https://www.sciencedirect.com/science/article/pii/S027795361000701X, [cited 2018 Sep 17].
Wiersma D, Giel R, De Jong A, Slooff CJ. Social class and schizophrenia in a Dutch cohort. Psychol Med [Internet]. 1983;13(01):141 Available from: http://www.journals.cambridge.org/abstract_S0033291700050145, [cited 2018 Sep 17].
Chandola T, Bartley M, Sacker A, Jenkinson C, Marmot M. Health selection in the Whitehall II study, UK. Soc Sci Med [Internet]. 2003;56(10):2059–72 Available from: http://www.sciencedirect.com/science/article/pii/S0277953602002010?via%3Dihub, [cited 2017 Oct 10].
Elstad JI. Health and Status Attainment. Acta Sociol [Internet]. 2004;47(2):127–40 Available from: http://journals.sagepub.com/doi/10.1177/0001699304043824, [cited 2018 Sep 17].
Rosvall M, Chaix B, Lynch J, Lindström M, Merlo J. Similar support for three different life course socioeconomic models on predicting premature cardiovascular mortality and all-cause mortality. BMC Public Health [Internet]. 2006;6(1):203 Available from: http://bmcpublichealth.biomedcentral.com/articles/10.1186/1471-2458-6-203.
Mishra GD, Chiesa F, Goodman A, De Stavola B, Koupil I. Socio-economic position over the life course and all-cause, and circulatory diseases mortality at age 50-87 years: results from a Swedish birth cohort. Eur J Epidemiol. 2013;28(2):139–47.
Nilsson PM, Nilsson J-Å, Östergren P-O, Berglund G. Social mobility, marital status, and mortality risk in an adult life course perspective: the Malmö preventive project. Scand J Soc Med [Internet]. 2005;33(6):412–23 Available from: http://journals.sagepub.com/doi/10.1080/14034940510005905.
Hallqvist J, Lynch J, Bartley M, Lang T, Blane D. Can we disentangle life course processes of accumulation, critical period and social mobility? An analysis of disadvantaged socio-economic positions and myocardial infarction in the Stockholm heart epidemiology program. Soc Sci Med. 2004;58(8):1555–62.
Campos-Matos I, Kawachi I. Social mobility and health in European countries: Does welfare regime type matter? Soc Sci Med [Internet]. 2015;142:241–8 Available from: https://www.sciencedirect.com/science/article/pii/S0277953615300873, [cited 2018 Sep 17].
Billingsley S. Intragenerational social mobility and cause-specific premature mortality. Böckerman P, editor. PLoS One [Internet]. 2019;14(2):e0211977. Available from: http://dx.plos.org/10.1371/journal.pone.0211977, [cited 2019 Nov 19].
Försäkringskassan. Sickness absence 60 days or longer [Internet]. 2015. Available from: https://www.forsakringskassan.se/wps/wcm/connect/d7d4b78e-39fa-4c2f-bed9-ade979b5ff23/socialforsakringsrapport_2015_1.pdf?MOD=AJPERES.
Sieurin L, Josephson M, Vingård E. Positive and negative consequences of sick leave for the individual, with special focus on part-time sick leave. Scand J Public Health [Internet]. 2009;37(1):50–6 Available from: http://journals.sagepub.com/doi/10.1177/1403494808097171, [cited 2018 Oct 31].
Wang M, Alexanderson K, Runeson B, Head J, Melchior M, Perski A, et al. Are all-cause and diagnosis-specific sickness absence, and sick-leave duration risk indicators for suicidal behaviour? A nationwide register-based cohort study of 4.9 million inhabitants of Sweden. Occup Environ Med [Internet]. 2014 Jan 1 ;71(1):12–20. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24142975, [cited 2018 Oct 31].
Mittendorfer-Rutz E, Kjeldgård L, Runeson B, Perski A, Melchior M, Head J, et al. Sickness Absence Due to Specific Mental Diagnoses and All-Cause and Cause-Specific Mortality: A Cohort Study of 4.9 Million Inhabitants of Sweden. PLoS One. 2012;7(9).
Ishtiak-Ahmed K, Perski A, Mittendorfer-Rutz E. Predictors of suicidal behaviour in 36,304 individuals sickness absent due to stress-related mental disorders - a Swedish register linkage cohort study. BMC Public Health [Internet]. 2013;13(1):492 Available from: http://bmcpublichealth.biomedcentral.com/articles/10.1186/1471-2458-13-492, [cited 2018 Oct 31].
Bryngelson A, Åsberg M, Nygren Å, Jensen I, Mittendorfer-Rutz E. All-Cause and Cause-Specific Mortality after Long-Term Sickness Absence for Psychiatric Disorders: A Prospective Cohort Study. Baradaran HR, editor. PLoS One [Internet]. 2013 ;8(6):e67887. Available from: https://dx.plos.org/10.1371/journal.pone.0067887, [cited 2018 Oct 31].
Bernell S, Howard SW. Use Your Words Carefully: What Is a Chronic Disease? Front Public Health [Internet]. 2016;4:159. Available from: http://www.ncbi.nlm.nih.gov/pubmed/27532034, [cited 2019 Nov 20].
Blane D, Harding S, Rosato M. Does social mobility affect the size of the socioeconomic mortality differential?: evidence from the Office for National Statistics longitudinal study. J R Stat Soc. 1999;1:59–70.
Uggla C, Billingsley S. Unemployment, intragenerational social mobility and mortality in Finland: heterogeneity by age and economic context. J Epidemiol Community Health [Internet]. 2018 ;jech-2018-ec210457. Available from: http://www.ncbi.nlm.nih.gov/pubmed/30061098, [cited 2018 Aug 17].
West P. Rethinking the health selection explanation for health inequalities. Soc Sci Med [Internet]. 1991;32(4):373–84 Available from:https://www.sciencedirect.com/science/article/pii/027795369190338D?via%3Dihub, [cited 2018 Sep 24].
Eikemo TA, Bambra C, Judge K, Ringdal K. Welfare state regimes and differences in self-perceived health in Europe: a multilevel analysis. Soc Sci Med. 2008;66:2281–95.
Esping-Andersen G. The three worlds of welfare capitalism. Cambridge: Polity Press; 1990.
Judiesch MK, Lyness KS. Left Behind? The Impact of Leaves of Absence on Managers' Career Success. Acad Manag J [Internet]. 1999;42(6):641–51 Available from: http://journals.aom.org/doi/10.5465/256985, [cited 2019 Nov 19].
Harrison E, Rose D. The European Socio-economic Classification (ESeC). Institute for Social and Economic Research, University of Essex, Colchester, UK. 2006;(September):1–22.
Allison PD. Discrete-Time Methods for the Analysis of Event Histories. Source Sociol Methodol [Internet]. 1982;13:61–98 Available from: http://www.jstor.org, [cited 2018 Feb 15].
Sobel ME. Diagonal mobility models: a substantively motivated class of designs for the analysis of mobility effects. Am Sociol Rev. 1981;46(6):893–906.
Sobel ME. Social mobility and fertility revisited: some new models for the analysis of the mobility effects hypothesis. Am Sociol Rev. 1985;50(5):699–712.
Billingsley S, Drefahl S, Ghilagaber G. An application of diagonal reference models and time-varying covariates in social mobility research on mortality and fertility. Soc Sci Res. 2018;75:73–82.
Mood C. Logistic Regression: Why We Cannot Do What We Think We Can Do, and What We Can Do About It. Eur Sociol Rev [Internet]. 2010;26(1):67–82 Available from: https://academic.oup.com/esr/article-lookup/doi/10.1093/esr/jcp006, [cited 2018 Feb 15].
Hultin M. Consider her adversity: four essays on gender inequality in the labor market. Swedish Institute for Social Research. Stockholm: Akademitryck; 2001.
Wikman A, Marklund S, Alexanderson K. Illness, disease, and sickness absence: an empirical test of differences between concepts of ill health. J Epidemiol Community Heal [Internet]. 2005;59(6):450–4 Available from: https://jech.bmj.com/content/59/6/450.long, [cited 2019 Nov 20].
Anna Bryngelson is gratefully acknowledged for contributing to earlier versions of this paper.
Support is acknowledged from the Swedish Research Council for health, working life and welfare (grant 2012–00437). Open access funding provided by Stockholm University.
Department of Sociology and Demography Unit, Stockholm University, S-106 91, Stockholm, Sweden
Sunnee Billingsley
Correspondence to Sunnee Billingsley.
The author declares that there are no competing interests.
Billingsley, S. Sick leave absence and the relationship between intra-generational social mobility and mortality: health selection in Sweden. BMC Public Health 20, 8 (2020). https://doi.org/10.1186/s12889-019-8103-4 | CommonCrawl |
High spatio-temporal-resolution detection of chlorophyll fluorescence dynamics from a single chloroplast with confocal imaging fluorometer
Yi-Chin Tseng1 &
Shi-Wei Chu ORCID: orcid.org/0000-0001-7728-4329 1,2
Chlorophyll fluorescence (CF) is a key indicator used to study plant physiology and photosynthesis efficiency. Conventionally, CF is characterized by fluorometers, which only allow ensemble measurement through wide-field detection. For imaging fluorometers, the typical spatial and temporal resolutions are on the order of millimeters and seconds, far from enough to study cellular/sub-cellular CF dynamics. In addition, due to the lack of optical sectioning capability, conventional imaging fluorometers cannot identify CF from a single cell or even a single chloroplast.
Here we demonstrate a fluorometer based on confocal imaging that not only provides high-contrast images, but also allows CF measurement with spatiotemporal resolution as high as micrometers and milliseconds. The CF transient (the Kautsky curve) from a single chloroplast is successfully obtained, with both the temporal dynamics and the intensity dependences corresponding well to the ensemble measurements of conventional studies. The significance of the confocal imaging fluorometer is its ability to identify variation among individual chloroplasts, e.g. in the temporal position of the P–S–M phases and the half-life period of the P–T decay in the Kautsky curve, which is not possible to analyze with wide-field techniques. A linear relationship is found between excitation intensity and the temporal positions of the P–S–M peaks/valleys in the Kautsky curve. Based on the CF transients, the photosynthetic quantum efficiency is derived with spatial resolution down to a single chloroplast. In addition, an interesting six-order-of-magnitude increase in excitation intensity is found when moving from wide-field to confocal fluorometers; differences in pixel integration time and optical sectioning may account for this substantial difference.
Confocal imaging fluorometers provide micrometer and millisecond CF characterization, opening up unprecedented possibilities toward detailed spatiotemporal analysis of CF transients and its propagation dynamics, as well as photosynthesis efficiency analysis, on the scale of organelles, in a living plant.
Chlorophyll fluorescence (CF) has been proven to be one of the most powerful and widely used techniques for plant physiologists [1,2,3,4,5,6,7]. Despite its low quantum efficiency (2–10% of absorbed light [8]), CF detection is meaningful due to its intricate connection with numerous internal processes during photosynthesis, such as reduction of photosystem reaction centers, non-photochemical quenching, etc. [9, 10]. It is well known that the efficiency of photosynthesis can be derived from CF dynamics, thus providing noninvasive, fast and accurate characterization of photosynthesis. CF characterization has been widely adopted to study plant physiology, including stress tolerance, nitrogen balance, carbon fixation efficiency, etc. [11]. It is not too exaggerated to say that nowadays, no investigation of the photosynthetic process would be complete without CF analysis.
Conventionally, the tool of choice to study CF is a fluorometer. There are many different fluorometry techniques, such as the plant efficiency analyzer (PEA) [12], pulse amplitude modulation (PAM) [13], pump and probe (P&P) [14, 15] and fast repetition rate (FRR) [16] methods. It is interesting to note that these various detection approaches are all based on the same principle, i.e. the Kautsky effect [7], or equivalently, the CF transient observed when moving photosynthetic material from dark adaptation to a light environment.
Conventional imaging fluorometers (e.g. PAM and P&P fluorometers) are based on wide-field detection, and are routinely adopted to study ensembles of CF transients from a large area of a leaf, which significantly limits their spatiotemporal resolution. For example, to study stress propagation in a plant leaf [17], current imaging fluorometers only provide spatial resolution on the order of millimeters, with temporal resolution on the order of seconds. To unravel more detailed propagation dynamics, the required spatial resolution should be at least at the single-cell or sub-cellular level, while the temporal resolution should be enhanced to the millisecond scale.
The concept of introducing a fluorescence microscope to study high-resolution CF dynamics was realized two decades ago [18], but the drawback of that early microscopic fluorometer was its lack of optical sectioning capability, owing to its wide-field nature, which prevented the study of CF transients on a truly single-cell or even single-chloroplast level. Confocal microscopy, which is known to provide optical sectioning with exceptionally high axial contrast, has been extensively used for CF imaging with sub-micrometer resolution [19,20,21,22,23]. However, its high-speed time-lapse imaging capability was less explored in earlier works.
Here we introduce the concept of a confocal imaging fluorometer, which is the combination of confocal microscopy and CF transient detection. The technique not only detects CF signals with millisecond temporal resolution, but also attains micrometer spatial resolution in all three dimensions. The CF transient (Kautsky curve) within a single chloroplast is successfully retrieved. With statistical comparison, the CF transients of a group of palisade cells and of the ensemble of single chloroplasts are found to be similar to each other, and both correspond well to the results of conventional imaging fluorometers, showing the reliability of our result. Nevertheless, the CF transients of individual chloroplasts can be substantially different, manifesting the value of this unusual capability to study plant cell organelles. Furthermore, we found that the shape of the transients is highly intensity-dependent, as also shown in an earlier study [24]. We also found that the short pixel integration time and the optical sectioning characteristic of the confocal imaging fluorometer lead to a significant difference in illumination intensity compared with conventional fluorometers. Given the CF transient from a single chloroplast, it becomes possible to investigate the degree of influence of external or internal plant stress at the organelle scale, and the confocal imaging fluorometer has paved the way for such high spatiotemporal-resolution CF detection.
Basic concept of confocal imaging fluorometer
The optical principle of the confocal imaging fluorometer is basically the same as that of confocal laser-scanning microscopy [25], which is an optical imaging technique for increasing contrast and resolution. The essential components of a confocal imaging fluorometer are shown in Fig. 1, including a laser system, a dichroic mirror, a scanning mirror system, an objective lens, a pinhole and a photomultiplier tube (PMT).
Principle and basic components of a confocal imaging fluorometer. The laser beam is reflected by a dichroic mirror and goes through a set of scanning mirrors, then is focused by an objective lens onto the specimen. The fluorescence signal is epi-collected along the same path and separated from the excitation light by the dichroic mirror. A confocal pinhole is used to allow only fluorescence emitted from the focal plane to be detected by the PMT
The laser system in a confocal imaging fluorometer provides strong and monochromatic illumination, whose wavelength can be selected to meet sample requirements. The laser beam is sent to the objective after the scanning mirror system to achieve two-dimensional raster scanning at the focal plane. The backward fluorescence signal is collected by the same objective, de-scanned through the scanning mirrors, and separated from the residual laser by the dichroic mirror. The fluorescence signal is then focused onto the pinhole, which is placed at the conjugate plane of the objective focus, to achieve optical sectioning by excluding out-of-focus signals. One or more PMTs are placed behind the pinhole to collect the in-focus fluorescence signals, which are reconstructed into images by synchronization with the scanning mirrors [25].
In general, a confocal imaging system is capable of collecting signal with a well-defined optical section on the order of 1 µm [26]. This high axial resolution makes the confocal system an invaluable tool to observe single cells or sub-cellular organelles [27,28,29].
The objective lens is characterized by magnification and numerical aperture (NA). To enable large field-of-view observation, low-magnification objectives are typically required. However, please note that resolution is determined by NA, which can be independent of magnification. NA describes the light acceptance cone of an objective lens and hence its light-gathering ability and resolution. The definition of NA is:
$$NA \equiv n \times \sin \theta ,$$
where n is the index of refraction of the immersion medium, and θ is the half-angle of the maximum light acceptance cone. Both lateral (xy-direction) and axial (z-direction) resolutions for fluorescence imaging mode are defined by NA and the wavelength (λ) [30].
$$r_{lateral} = \frac{0.43 \times \lambda }{NA}$$
$$r_{axial} = \frac{0.67 \times \lambda }{{n - \sqrt {n^{2} - NA^{2} } }}$$
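As a quick worked example of Eqs. (2) and (3), the snippet below evaluates both expressions for a dry (n = 1) 0.4-NA objective and a ~680-nm CF emission wavelength, i.e. the configuration described later in the Methods; the exact values quoted there may differ slightly depending on the wavelength assumed.

```python
import math

def lateral_resolution(na, wavelength_um):
    # Eq. (2): r_lateral = 0.43 * lambda / NA
    return 0.43 * wavelength_um / na

def axial_resolution(na, wavelength_um, n=1.0):
    # Eq. (3): r_axial = 0.67 * lambda / (n - sqrt(n^2 - NA^2))
    return 0.67 * wavelength_um / (n - math.sqrt(n ** 2 - na ** 2))

# 10x/0.4 dry objective, CF emission band around 0.68 um
print(lateral_resolution(0.4, 0.68))  # ~0.7 um lateral resolution
print(axial_resolution(0.4, 0.68))    # ~5.5 um axial resolution (optical section)
```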
To compare the actinic light illumination in a conventional fluorometer, e.g. PAM, and in a laser-scanning confocal fluorometer, there are several aspects to consider. First, in PAM the actinic light is provided by a lamp or an LED, which is an incoherent light source, while in a confocal system, the laser excitation is coherent. Second, the spectral bandwidth of a laser is in general much narrower than that of a lamp, which is typically tens of nanometers even after adding bandpass filters. Third, wide-field illumination is adopted in PAM, while point scanning is used in the confocal system.
Although there are many differences between the illumination methods of the conventional fluorometer and the confocal one, an early work [31] showed that the actinic effect of using a Xe lamp or a laser is equivalent. A more recent work [23] showed that the frequency of scanning (~300 s−1) does not seem to affect the response, even when compared to wide-field illumination. In our current work, the scanning frequency on each chloroplast is about 10,000 s−1. However, as we will show in the results, clear OPSMT transitions and similar intensity-dependent CF dynamics are all observable. Therefore, it seems that the high-frequency laser beam movement does not cause significant effects on CF dynamics.
Kautsky effect
Kautsky effect, discovered in 1931, describes the dynamics of CF when dark-adapted photosynthetic chlorophyll suddenly exposes to continuous light illumination [32]. After initial light absorption, chlorophyll becomes excited and soon releases its energy into one of the three internal decay pathways, including photosynthesis (photochemical quenching, qP), heat (non-photochemical quenching, NPQ) and light emission (CF). Owing to energy conservation, the sum of quantum efficiencies for these three pathways should be unity. Therefore, the yield of CF is strongly related to the efficiency of both qP and NPQ [33].
To be more specific, when transferring a photosynthetic material from dark adaptation into light illumination, the CF yield typically exhibits a fast rising phase (within 1 s) and a slow decay phase (a few minutes in duration), as shown by the green curve in Fig. 2. The fast rising phase is labeled as O–P, where O is for origin, and P is the peak [24]. It is mainly caused by the reduction of qP; that is, depletion of electron acceptors, quinone (Qa), in the electron transport chain [34]. The slow decay phase is labeled as P–S–M–T, where S stands for semi-steady state, M for a local maximum, and T for a terminal steady state level [24]. One very interesting phenomenon is that the shape of this decay phase depends strongly on illumination intensity. At low intensity (32 μmol/m2/s), the Kautsky curve is the green one. When the intensity grows one order of magnitude larger, the amplitude of the S–M rise in the transient is smaller, as shown by the red curve. At an intensity one further order of magnitude higher, the blue curve shows that the S–M section disappears completely, leaving an exponential decay in the P–T section. This is known as the saturation state, which is critical to derive the quantum efficiency of photosynthesis. Such intensity-dependent curve transition is the result of photosynthetic state transition, and more detailed discussion can be found in the references [1, 10, 13, 35,36,37].
(Modified figure from [1], with copyright permission)
The Kautsky effect, showing the CF transient as well as its intensity dependence. Wavelength of excitation: 650 nm. Excitation light intensity for curves labeled 1, 2 and 3 was 32, 320 and 3200 μmol/m2/s, respectively. For definition of OPSMT, O is the origin, P is the peak, S stands for semi-steady state, M for a local maximum, and T for a terminal steady state level
Plant sample
Brugmansia suaveolens (Solanaceae), also known as Angel's Trumpet, is a woody plant usually 3–4 m in height with pendulous flowers and furry leaves, distributed widely in Taiwan, especially in wet areas. Being interested in the spatiotemporal dynamics of CF, we selected B. suaveolens as our target material since the CF of its cousin Datura wrightii, also known as Devil's Trumpet, had been studied in depth [17]. B. suaveolens leaves were collected from the Botanical Garden of National Taiwan University, Taipei, Taiwan (25°1′N, 121°31′E, 9 m a.s.l.). All sample leaves were picked as fully expanded leaves that had experienced neither detectable physical damage nor herbivory. In order to minimize sampling error, three leaves were chosen from plants that grew in a similar micro-climate. Furthermore, all the measurements were completed no more than two hours after the leaves were detached. Fresh leaves were sealed in slide glass (76 × 26 mm), and the slide samples were dark-adapted in a constant-temperature, constant-humidity dark environment (20 °C, 70% RH) for 20 min.
A confocal microscope (Leica TCS SP5) in the Molecular Imaging Center of National Taiwan University was adopted. CF was excited by a HeNe laser, whose wavelength (633 nm) is the same as that used in popular conventional fluorometers, such as the LI-6400 from LI-COR. A relatively low-NA objective (HC PL Apo 10×/0.4 CS) was selected to allow not only a large field of view over a few millimeters, but also spatial resolution better than the size of a single chloroplast. From Eqs. (2) and (3), the lateral and axial resolutions were 1 and 5 µm, respectively. Although this is not particularly high compared to common confocal imaging systems, due to the low-NA objective used here, the three-dimensional spatial resolution is still much better than that of conventional imaging fluorometers.
To operate the confocal fluorometer, the initial step was to bring the sample into focus with weak excitation (~1 kW/cm2, or equivalently 5.56 × 10^7 μmol/m2/s; for the intensity conversion, please see "Discussion"), after which the leaf was left in the dark again for 5 min. To observe the Kautsky effect, the 633-nm laser was focused on the sample, and the fluorescence emission was recorded in the spectral range of 670–690 nm. The intensity-dependent CF transient curves were obtained by taking time-lapsed images while varying the 633-nm excitation intensity from 1 to 55 kW/cm2, at different sample regions. For experimental details, the scanning speed was 1400 Hz (1400 rows per second), the pinhole size was 52 μm (one Airy diameter), the built-in PMT voltage was set at 600 V, and a dichroic filter TD 488/543/633 was included in the optical path. Depending on the total number of pixels, the temporal resolution of the CF transient varied from 10 ms (16 × 16 pixels) to about 200 ms (256 × 256 pixels). No significant photobleaching of CF was expected in this intensity range [38].
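The irradiance-to-photon-flux conversion quoted above can be reproduced as follows; this is a simple sketch assuming monochromatic 633-nm light, and the small deviation from the 5.56 × 10^7 μmol/m2/s value in the text presumably reflects the constants or wavelength used by the authors.

```python
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def irradiance_to_ppfd(kw_per_cm2, wavelength_m=633e-9):
    """Convert laser irradiance (kW/cm^2) to photon flux density (umol/m^2/s)."""
    watts_per_m2 = kw_per_cm2 * 1e3 / 1e-4       # kW/cm^2 -> W/m^2
    photon_energy = H * C / wavelength_m         # J per photon
    photons = watts_per_m2 / photon_energy       # photons per m^2 per s
    return photons / N_A * 1e6                   # umol per m^2 per s

print(irradiance_to_ppfd(1.0))   # ~5.3e7 umol/m2/s for 1 kW/cm2 at 633 nm
```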
Fluorescence dynamics from a single chloroplast
Conventional fluorometers observe CF dynamics over a large area on a leaf, and here we demonstrate that the confocal imaging fluorometer allows us to obtain CF transients from a precisely chosen cell or even a single chloroplast. Figure 3a shows the confocal images of a leaf sample. (a1) is the large-area view, showing the distribution of vascular bundles, while (a2) gives a zoom-in view of a group of palisade cells, showing clear distribution of chloroplasts in each cell. By further zooming in, the field of view is focused onto a single chloroplast, as given in (a3), showing the distribution of chlorophyll density inside the organelle [39].
Confocal images and CF transients on different spatial scales inside a living leaf. A 633-nm laser, with 3 kW/cm2 intensity, is adopted for 50 s of continuous confocal imaging. The sample leaf was kept in darkness for ~20 min before imaging. a1 The image over a large area of the leaf, a2 zoomed in to show a group of palisade cells, and a3 further zoomed in to focus on a single chloroplast. b1, b2 CF transients from a group of palisade cells and a single chloroplast, respectively. b2 is noisier since fewer pixels are involved
Figure 3b presents the CF transients at low intensity illumination (3 kW/cm2) from a group of palisade cells (b1) and a single chloroplast (b2). The latter is noisier due to fewer pixels being involved. The characteristic P–S decay and S–M rise of the Kautsky curve are obvious in both (b1) and (b2). In Fig. 3b1, based on the statistics of 30 chloroplasts, the averaged timing points for the P–S–M states are 1.8, 5.9 and 10.4 s, respectively, corresponding well with the reported values in the literature (Fig. 2). On the other hand, box plots are embedded in Fig. 3b1 to show the variations of time and intensity in the P–S–M states among the 30 individual chloroplasts. The bottom and top of the box are the first and third quartiles, while the ends of the whiskers represent the maximum and minimum values. This result not only confirms that the averaged Kautsky curves acquired by the confocal fluorometer are similar to the curves taken with conventional fluorometers, but also shows that the variations between individual chloroplasts are indeed significant.
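One possible way to extract the P, S and M timing points automatically from each single-chloroplast trace is simple peak detection, as sketched below; the smoothing window is an arbitrary choice and the routine is an illustration, not the authors' actual analysis pipeline (note that at saturating intensity S and M vanish, so no peaks after P would be found).

```python
import numpy as np
from scipy.signal import find_peaks

def psm_times(t, f, smooth=5):
    """Locate the P peak, S valley and M peak in a Kautsky transient.
    t, f: time base (s) and CF intensity trace from one chloroplast."""
    f_s = np.convolve(f, np.ones(smooth) / smooth, mode="same")  # moving-average denoising
    peaks, _ = find_peaks(f_s)
    valleys, _ = find_peaks(-f_s)
    p = peaks[0]                      # first maximum after light-on -> P
    s = valleys[valleys > p][0]       # first minimum after P        -> S
    m = peaks[peaks > s][0]           # next maximum after S         -> M
    return t[p], t[s], t[m]
```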
Intensity dependent fluorescence transient
As we have mentioned for Fig. 2, it is well known that the Kautsky curve changes with intensity. Figure 4 shows the intensity-dependent Kautsky curves from ~780 chloroplasts (colored lines) along with their standard error (gray lines), obtained by the confocal fluorometer. Note that to make the standard error visible on the same scale, it is multiplied by 16. Figure 4a is acquired with low laser intensity (3 kW/cm2), and a temporal variation similar to curve 1 of Fig. 2 is found, i.e. a complete O–P–S–M–T curve. The CF intensity rises to its first peak within 1 s (O–P rise), quickly decreases to a local minimum (P–S fall), rises again to a second peak (S–M rise) and then slowly falls in an exponential decay (M–T decay). At slightly higher intensity (10 kW/cm2), a temporal variation similar to curve 2 of Fig. 2 is observed. The P–S fall and S–M rise still exist, but become much smaller, while the positions of P, S, and M appear earlier in the curve. At high intensity (55 kW/cm2), the S–M part disappears completely, leaving a single exponential P–T decay, similar to curve 3 of Fig. 2, i.e. the saturation state. This result matches the results of conventional wide-field fluorometers very well [1, 13, 36], but with much higher spatiotemporal resolution, manifesting again the reliability and usefulness of the confocal technique.
Averaged CF transients from ~780 chloroplasts (colored) with standard error (×16, gray) under excitation intensity at a 3, b 10, and c 55 kW/cm2, respectively, showing clearly the intensity-dependent Kautsky curves
From Fig. 4, not only is the curve shape intensity-dependent, but the positions of the local maxima and minimum (P, S, M points) also depend strongly on the excitation intensity. Figure 5a shows the detailed curve variation relative to intensity, in the range of 3–55 kW/cm2, and the corresponding temporal position of the local maximum of the induced transients, i.e. point M, is given in Fig. 5b. Surprisingly, an almost perfect linear trend is observed. Similar linear results are found for the semi-steady state point S in Fig. 5c, and for the peak point P in Fig. 5d. Due to the limitation of temporal resolution (200 ms for 256 × 256 pixels), the S and P points are analyzed with intensity ranges of 3–40 kW/cm2 and 3–20 kW/cm2, respectively. The linear trends indicate that the state transition rate increases with higher excitation intensity. The underlying mechanism requires more investigation in the future.
a Detailed Kautsky curve variation in the intensity range of 3–55 kW/cm2. The temporal positions of b the local maximum (M), c the semi-steady state (S), and d the peak (P) all change linearly with excitation intensity. The grey area represents the 95% confidence region
Fluorescence dynamics under saturation intensity
In the last section, we showed that at high excitation intensity the CF is driven into saturation, which is very important for the quantum efficiency calculation. Thus, here we provide further characterization of the saturation states across individual chloroplasts. In Fig. 6a, the green color provides the spatial distribution of CF intensity over many living cells, and the red color shows the distribution of the P–T phase decay time constant. For better identification, the two colors are shown separately in Fig. 6b, c. The statistical analysis for the P–T decay time constant of transients from individual chloroplasts is derived from Fig. 6c. The averaged decay time constant over a large area of leaf is 34.6 s, again matching well the reported values in Fig. 2. Nevertheless, the standard deviation of the decay time constant is 10.6 s, which reaches one-third of the average value, so a significant divergence exists between chloroplasts. This decay-time divergence is illustrated by the four Kautsky curves from individual chloroplasts shown in Fig. 6c1–c4.
High-resolution spatial distribution of CF intensity (green in a, b) and of the PT-phase time constant (red in a, c). The fluorescence transients of four selected chloroplasts within a living leaf are shown in the bottom panels, manifesting the significant difference in the time constants. The dataset is acquired at 40 kW/cm2 with a HeNe laser (633 nm)
Please note that the error values in Fig. 6c1–c4 are the least-squares errors when fitting the curves with an exponential decay, different from the statistical standard deviation above. When analyzing data from a single chloroplast, the signal-to-noise ratio is relatively low, resulting in about 10% error in the time constant determination. Fortunately, the variation of time constants among chloroplasts is much larger than 10%, so this error is still tolerable. In cases where a reduced error is necessary, the confocal system provides the flexibility to increase the integration time (reducing temporal resolution), so that a higher signal-to-noise ratio can be achieved.
Since the laser intensity is relatively strong, it is necessary to confirm the reproducibility of the Kautsky curve in the same region of chloroplasts. Figure 7a1 shows the confocal CF image of a group of cells, and the corresponding averaged Kautsky curve is given in Fig. 7b1. The excitation intensity is 55 kW/cm2, which is adequate to saturate the photosystem, so a curve similar to curve 3 in Fig. 2 is observed. The sample was then kept in the dark for 5 min, before the same intensity was applied again. The results of the second excitation are given in Fig. 7a2, b2. Evidently, the Kautsky curve is fully recoverable, even under relatively high illumination intensity.
The reproducibility of the Kautsky curve under strong illumination intensity (55 kW/cm2). a1, b1 are the confocal CF image and Kautsky curve for the first excitation. a2, b2 are the corresponding results for the second excitation after 5 min in the dark
Deriving quantum efficiency of photosystem II
We have shown that the intensity-dependent CF transient can be observed on the scale of cells and chloroplasts; it is then straightforward to derive physiologically important factors, such as the maximal quantum efficiency of photosystem II (ΦPSII). To derive maximal ΦPSII, the first step is to quantify the fluorescence yield, which is the ratio between the CF intensity and the excitation intensity. In our work, the relative quantum yield values are obtained by normalizing the CF intensities to the fluorescence intensity of a commercial fluorescent slide (92001, Chroma Tech., VT) under the same excitation intensity. The values of relative quantum yield at low excitation intensity (ΦF0, at 3 kW/cm2) and at saturation intensity (ΦFm, at 55 kW/cm2) are given in Table 1. Then the spatial distribution of ΦPSII is obtained with a pixel-by-pixel calculation of (ΦFm − ΦF0)/ΦFm, as shown in Fig. 8. The effect of the fluorescent slide is removed when calculating the quantum efficiency with the above equation. Numerical values of quantum efficiencies on different scales are also listed in Table 1. Similar to the results of the Kautsky curves, the mean values of quantum efficiency are similar from a large area of leaf down to a single chloroplast. On the other hand, from Table 1 and Fig. 8, the value of ΦPSII can be very different among individual chloroplasts, once again demonstrating the significance of high-resolution mapping of the CF dynamics inside a living plant.
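For readers who wish to reproduce this step, the pixel-by-pixel calculation of ΦPSII is straightforward to script. Below is a minimal Python/NumPy sketch, not the original analysis pipeline; the array names, the toy input values, and the background threshold are illustrative assumptions.

```python
import numpy as np

def phi_psii_map(f0_img, fm_img, eps=1e-6):
    """Pixel-by-pixel quantum efficiency of photosystem II.

    f0_img : 2D array of relative CF yield at low excitation intensity
    fm_img : 2D array of relative CF yield at saturating intensity
    Both images are assumed to be normalized to the reference slide
    and co-registered pixel by pixel.
    """
    f0 = np.asarray(f0_img, dtype=float)
    fm = np.asarray(fm_img, dtype=float)
    phi = np.zeros_like(fm)
    mask = fm > eps                       # avoid division by zero in background pixels
    phi[mask] = (fm[mask] - f0[mask]) / fm[mask]
    return phi

# illustrative 2x2 example: two chloroplast pixels and two background pixels
f0 = np.array([[0.12, 0.10], [0.0, 0.0]])
fm = np.array([[0.45, 0.40], [0.0, 0.0]])
print(phi_psii_map(f0, fm))   # ~0.73 and 0.75 in the chloroplast pixels, 0 elsewhere
```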
Table 1 Relative quantum yields and quantum efficiencies at different spatial scales
High-resolution spatial distribution of quantum efficiency of photosystem II inside a living leaf
We have successfully obtained the Kautsky curve, as well as its intensity dependence, with the confocal imaging fluorometer. Compared to conventional wide-field imaging fluorometers, the confocal technique allows much better spatial confinement due to its optical sectioning capability, and thus observation from a single chloroplast becomes possible. From the statistical analyses for the P, S, M, T states of the Kautsky curves, at low and high intensities in Figs. 3 and 6 respectively, it can be concluded that the behavior of individual chloroplasts under our confocal imaging fluorometer is indeed similar to that of a large area of leaf under a conventional wide-field fluorometer. However, the value of the confocal technique lies in the capability to unravel the significant differences between individual chloroplasts, as highlighted by the box plot in Fig. 3 and the clear variation of P–T decay time constants in Fig. 6c.
In terms of temporal resolution, the confocal and wide-field fluorometers should be similar for single-pixel detection, which takes about 1–10 μs in both cases. As mentioned in [17], the wide-field fluorometer takes about 1 s to record one image. Nevertheless, the advantage of the confocal scheme is the freedom to select the number of pixels, as well as the position of these pixels, significantly enhancing the temporal response. By using more advanced scanning approaches, such as random-access microscopy [40], high-speed CF detection among distant chloroplasts is possible. In addition, by adopting a multi-focus scanning approach, such as that demonstrated by spinning disk confocal microscopy in 2009 [23], the frame rate of the confocal fluorometer can be significantly improved.
Although the spinning disk technique may potentially provide a higher frame rate, there are several limitations that prevent it from being an ideal choice for fluorometry applications [41]. First of all, due to the size limitation of the camera, spinning disk confocal microscopy typically exhibits a small field of view, often only the size of a few cells, which is problematic when studying tissues. A good comparison is given in Fig. 8 of [41], where the laser scanning confocal microscope provides a much larger field of view.
Second, due to the existence of multiple pinholes on a pinhole array, the optical sectioning capability of spinning disk microscopy is in general less ideal than that of laser scanning confocal microscopy, especially when observing thick and scattering tissues. In addition, when using a low-magnification lens for large-area study, the spinning disk technique can significantly lose its optical sectioning ability. The reason is that most spinning disk systems have a pinhole array comprising pinholes of a fixed size, which is designed for high-magnification, high-NA immersion objective lenses, such as a 100×/NA 1.4 objective. However, for plant tissues, a low-magnification lens with moderate NA is preferred for large-area observation. In our case, a 10×/NA 0.4 objective is employed, providing a 1.5 mm × 1.5 mm field of view. If the 10× objective is used in a spinning disk system, whose pinhole diameter cannot be adjusted, both the axial sectioning capability and the lateral resolution will be far less optimal than in a confocal system. On the other hand, in a point-scanning confocal system, the pinhole size is easily adjustable, allowing observation at both high and low magnifications. Even with the low-NA objective, as we mentioned in the main text, our confocal fluorometer still provides 1-micrometer lateral resolution and 5-micrometer axial resolution, adequate for single chloroplast imaging.
The third concern is image uniformity. In a spinning disk system, when using a Gaussian laser beam, the excitation intensity in the center region is larger than at the edge, making it difficult to quantify the response from individual chloroplasts. On the other hand, the image uniformity of a laser scanning confocal fluorometer is much better than that of a typical spinning disk system.
Last but not least, when comparing spinning disk and laser scanning confocal techniques, it is commonly accepted that the laser intensity at each focus is less for the former, so photobleaching is reduced. However, we would like to point out that the overall accumulated power/energy on the plant tissue is in fact higher, since the laser power is spread over hundreds of foci across the entire field of view. Therefore, more powerful lasers are required for the spinning disk system, and the issue of potential photothermal damage in the tissues has to be considered.
Another important aspect to notice is that the illumination intensity of the confocal fluorometer is much higher than that of the wide-field fluorometers. As shown in Figs. 5 and 6, to eliminate the semi-steady state S in the CF transient, about 55 kW/cm2 is required for the confocal fluorometer. However, in the case of the wide-field fluorometer, as shown in the example of Fig. 2 [1], to eliminate S, 3200 μmol/m2/s is required. Considering the wavelength to be 650 nm in [1], the photon energy is 1240/650 = 1.9 eV = 3 × 10^−19 J. Therefore, the intensity unit (μmol/m2/s) is equivalent to [10^−6 × 6 × 10^23 (# of photons)] × [3 × 10^−19 (J/photon)]/10^4 cm2/s = 18 × 10^−9 kW/cm2. As a result, in the wide-field fluorometer, the required illumination intensity is 3200 × 18 × 10^−9 kW/cm2 = 5.76 × 10^−5 kW/cm2, six orders of magnitude smaller than that in the confocal one.
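The same arithmetic can be checked in a few lines; the sketch below uses exactly the numbers quoted above, together with the standard values of the Avogadro constant and the elementary charge.

```python
# reproduce the unit conversion quoted above (values from the text)
wavelength_nm = 650
photon_energy_eV = 1240 / wavelength_nm            # ~1.9 eV
photon_energy_J = photon_energy_eV * 1.6e-19       # ~3e-19 J

# 1 umol photons / m^2 / s expressed in kW/cm^2
avogadro = 6.022e23
one_unit_W_per_m2 = 1e-6 * avogadro * photon_energy_J   # W/m^2
one_unit_kW_per_cm2 = one_unit_W_per_m2 / 1e4 / 1e3     # per cm^2, then W -> kW

wide_field = 3200 * one_unit_kW_per_cm2             # ~5.8e-5 kW/cm^2
print(one_unit_kW_per_cm2, wide_field, 55 / wide_field)  # intensity ratio ~1e6
```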
To explain this six-order intensity difference, the optical sectioning and the illumination time of the confocal imaging fluorometer have to be considered. In a conventional fluorometer (wide-field detection), CF signals are emitted throughout the whole leaf in the axial direction, so the depth of field (i.e. signal collection depth) is equivalent to the thickness of a leaf, which is usually 100–1000 µm. On the other hand, for a confocal fluorometer, a pinhole is inserted before the detector to reject most out-of-focus fluorescence, and thus the total signal strength is significantly reduced. The typical depth of field in a confocal fluorometer is about 1–10 µm, which is two orders of magnitude less than that of the wide-field one. Hence, the signal strength of the confocal fluorometer is expected to be two orders of magnitude weaker than that of the wide-field counterpart.
In terms of the illumination time, in a conventional wide-field imaging fluorometer, the whole leaf sample is illuminated continuously, so the illumination time for each pixel is the same as the frame acquisition time. On the other hand, a small laser focus scans across the sample in the confocal scheme, making the illumination time for each pixel much shorter than the frame time. For example, in the case of Fig. 4a2, one frame takes about 1 s and is composed of 256 × 256 pixels, so the illumination time for each pixel (1 pixel is roughly 1 µm2 in this case) of the confocal imaging fluorometer is about four orders of magnitude shorter than that of a conventional wide-field imaging fluorometer.
Combining the above two factors, it is reasonable that the illumination intensity in the confocal imaging fluorometer needs to be much higher than that in the wide-field fluorometer to achieve a similar CF signal strength, as well as similar Kautsky curves. The latter is somewhat surprising since it indicates that the physiological response of the chlorophyll remains the same with such high-intensity, yet short-period, illumination. One possible reason is that there is a slow reaction during photosynthesis and CF generation, so the chlorophyll only responds to the average intensity, not the instantaneous intensity. Looking into the electron transport chains of the photosystem, the bottleneck reaction might be the reduction of plastoquinone (PQ), which has a relatively slow reaction rate (100 mol Chl mmol−1 s−1) [42]. Further studies are necessary to identify the underlying photochemical mechanism.
In this work, we demonstrated a confocal imaging fluorometer that can provide high spatiotemporal characterization of CF inside a living leaf. The three-dimensional spatial resolution is on the order of a micrometer, and the temporal resolution reaches tens of milliseconds, allowing us to study the CF transient, i.e. the Kautsky effect, from even a single chloroplast. Although the ensemble behavior of the CF transient, as well as the intensity-dependent Kautsky curves, agrees well with the results of conventional wide-field fluorometers, the confocal imaging fluorometer provides valuable information on the differences in CF dynamics among individual chloroplasts. The features of optical sectioning and laser focus scanning in the confocal fluorometer result in a much higher illumination intensity compared to conventional techniques, while maintaining normal cellular physiological responses. Our work not only opens up new possibilities to study CF dynamics at the level of organelles, but is also promising for unraveling more spatial/temporal details of the associated photosynthetic processes.
Strasser RJ, Srivastava A, Govindjee. Polyphasic chlorophyll a fluorescence transient in plants and cyanobacteria. Photochem Photobiol. 1995;61:32–42.
Zarco-Tejada PJ, Miller JR, Mohammed GH, Noland TL. Chlorophyll fluorescence effects on vegetation apparent reflectance: I. Leaf-level measurements and model simulation. Remote Sens Environ. 2000;74:582–95.
Krause GH, Weis E. Chlorophyll fluorescence and photosynthesis: the basics. Annu Rev Plant Biol. 1991;42:313–49.
Horton P, Ruban AV, Walters RG. Regulation of light harvesting in green plants (indication by nonphotochemical quenching of chlorophyll fluorescence). Plant Physiol. 1994;106:415.
Dobrowski SZ, Pushnik JC, Zarco-Tejada PJ, Ustin SL. Simple reflectance indices track heat and water stress-induced changes in steady-state chlorophyll fluorescence at the canopy scale. Remote Sens Environ. 2005;97:403–14.
Csintalan Z, Proctor MCF, Tuba Z. Chlorophyll fluorescence during drying and rehydration in the mosses Rhytidiadelphus loreus (hedw.) warnst., Anomodon viticulosus (hedw.) hook. & tayl. and Grimmia pulvinata (hedw.) sm. Ann Bot. 1999;84:235–44.
Bolhar-Nordenkampf HR, Long SP, Baker NR, Oquist G, Schreiber U, Lechner EG. Chlorophyll fluorescence as a probe of the photosynthetic competence of leaves in the field: A review of current instrumentation. Funct Ecol. 1989;3:497–514.
Trissl HW, Gao Y, Wulf K. Theoretical fluorescence induction curves derived from coupled differential equations describing the primary photochemistry of photosystem ii by an exciton-radical pair equilibrium. Biophys J. 1993;64:974.
Schansker G, Tóth SZ, Holzwarth AR, Garab G. Chlorophyll a fluorescence: beyond the limits of the QA model. Photosynth Res. 2014;120:43–58.
Papageorgiou GC. Photosystem ii fluorescence: slow changes–scaling from the past. J Photochem Photobiol B Biol. 2011;104:258–70.
DeEll JR, Toivonen PMA. Practical applications of chlorophyll fluorescence in plant biology. Berlin: Springer; 2012.
Lazár D, Ilik P. High-temperature induced chlorophyll fluorescence changes in barley leaves comparison of the critical temperatures determined from fluorescence induction and from fluorescence temperature curve. Plant Sci. 1997;124:159–64.
Schreiber U, Schliwa U, Bilger W. Continuous recording of photochemical and non-photochemical chlorophyll fluorescence quenching with a new type of modulation fluorometer. Photosynth Res. 1986;10:51–62.
Mauzerall D. Light-induced fluorescence changes in chlorella, and the primary photoreactions for the production of oxygen. Proc Nat Acad Sci. 1972;69:1358–62.
Falkowski PG, Wyman K, Ley AC, Mauzerall DC. Relationship of steady-state photosynthesis to fluorescence in eucaryotic algae. BBA-Bioenergetics. 1986;849:183–92.
Kolber ZS, Prášil O, Falkowski PG. Measurements of variable chlorophyll fluorescence using fast repetition rate techniques: defining methodology and experimental protocols. BBA-Bioenergetics. 1998;1367:88–106.
Barron-Gafford GA, Rascher U, Bronstein JL, Davidowitz G, Chaszar B, Huxman TE. Herbivory of wild manduca sexta causes fast down-regulation of photosynthetic efficiency in datura wrightii: an early signaling cascade visualized by chlorophyll fluorescence. Photosynth Res. 2012;113:249–60.
Oxborough K, Baker N. An instrument capable of imaging chlorophyll a fluorescence from intact leaves at very low irradiance and at cellular and subcellular levels of organization. Plant, Cell Environ. 1997;20:1473–83.
Scholes JD, Rolfe SA. Photosynthesis in localised regions of oat leaves infected with crown rust (Puccinia coronata): quantitative imaging of chlorophyll fluorescence. Planta. 1996;199:573–82.
Osmond B, Schwartz O, Gunning B. Photoinhibitory printing on leaves, visualised by chlorophyll fluorescence imaging and confocal microscopy, is due to diminished fluorescence from grana. Funct Plant Biol. 1999;26:717–24.
Hibberd JM, Quick WP. Characteristics of c4 photosynthesis in stems and petioles of c3 flowering plants. Nature. 2002;415:451–4.
Kim S-J, Hahn E-J, Heo J-W, Paek K-Y. Effects of leds on net photosynthetic rate, growth and leaf stomata of chrysanthemum plantlets in vitro. Sci Hortic. 2004;101:143–51.
Omasa K, Konishi A, Tamura H, Hosoi F. 3d confocal laser scanning microscopy for the analysis of chlorophyll fluorescence parameters of chloroplasts in intact leaf tissues. Plant Cell Physiol. 2009;50:90–105.
Papageorgiou G. Light-induced changes in the fluorescence yield of chlorophyll a in vivo: I. Anacystis nidulans. Biophys J. 1968;8:1299–315.
Paddock S, Fellers TJ, Davidson MW. Confocal microscopy. Berlin: Springer; 2001.
Shotton DM. Confocal scanning optical microscopy and its applications for biological specimens. J Cell Sci. 1989;94:175–206.
Roselli L, Paparella F, Stanca E, Basset A. New data-driven method from 3d confocal microscopy for calculating phytoplankton cell biovolume. J Microsc. 2015;258:200–11.
Matsumoto B. Cell biological applications of confocal microscopy. Cambridge: Academic Press; 2003.
Hibbs AR. Confocal microscopy for biologists. Berlin: Springer; 2004.
Pawley JB, Masters BR. Handbook of biological confocal microscopy. Opt Eng. 1996;35:2765–6.
Omasa K. Image instrumentation of chlorophyll a fluorescence. In SPIE Proc. 1998;3382:91–9.
Kautsky H, Appel W. Chlorophyllfluorescenz und kohlensaureassimilation. Biochemistry. 1960;322:279–92.
Bradbury M, Baker NR. Analysis of the slow phases of the in vivo chlorophyll fluorescence induction curve. Changes in the redox state of photosystem ii electron acceptors and fluorescence emission from photosystems i and ii. BBA-Bioenergetics. 1981;635:542–51.
Maxwell K, Johnson GN. Chlorophyll fluorescence—a practical guide. J Exp Bot. 2000;51:659–68.
Stirbet A, Govindjee. The slow phase of chlorophyll a fluorescence induction in silico: origin of the s–m fluorescence rise. Photosynth Res. 2016;130:193–213.
Omasa K, Shimazaki KI, Aiga I, Larcher W, Onoe M. Image analysis of chlorophyll fluorescence transients for diagnosing the photosynthetic system of attached leaves. Plant Physiol. 1987;84:748–52.
Kodru S, Malavath T, Devadasu E, Nellaepalli S, Stirbet A, Subramanyam R. The slow s to m rise of chlorophyll a fluorescence reflects transition from state 2 to state 1 in the green alga chlamydomonas reinhardtii. Photosynth Res. 2015;125:219–31.
Vermaas WF, Timlin JA, Jones HD, Sinclair MB, Nieman LT, Hamad SW, Melgaard DK, Haaland DM. In vivo hyperspectral confocal fluorescence imaging to determine pigment localization and distribution in cyanobacterial cells. Proc Nat Acad Sci USA. 2008;105:4050–5.
Chen M-Y, Zhuo G-Y, Chen K-C, Wu P-C, Hsieh T-Y, Liu T-M, Chu S-W. Multiphoton imaging to identify grana, stroma thylakoid, and starch inside an intact leaf. BMC Plant Biol. 2014;14:175.
Reddy GD, Kelleher K, Fink R, Saggau P. Three-dimensional random access multiphoton microscopy for functional imaging of neuronal activity. Nat Neurosci. 2008;11:713–20.
Jonkman J, Brown CM. Any way you slice it—a comparison of confocal microscopy techniques. J Biomol Tech. 2015;26:54–65.
de Wijn R, van Gorkom HJ. Kinetics of electron transfer from QA to QB in photosystem ii. Biochemistry. 2001;40:11912–22.
YCT designed the experiment, carried out signal analysis, and wrote most of the manuscript. SWC envisioned the idea, provided the experimental hardware, and helped to polish the manuscript. Both authors read and approved the final manuscript.
The authors appreciate the inspirational discussion with Prof. Govindjee from the University of Illinois at Urbana-Champaign. SWC acknowledges the generous support from the Foundation for the Advancement of Outstanding Scholarship.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
This work is supported by the Molecular Imaging Center of NTU (105R8916, 105R7732), and by the Ministry of Science and Technology, Taiwan, under grant MOST-105-2628-M-002-010-MY4 and MOST-106-2321-B-002-020.
Department of Physics, National Taiwan University, No. 1, Section 4, Roosevelt Rd, Da'an District, Taipei City, 10617, Taiwan
Yi-Chin Tseng & Shi-Wei Chu
Molecular Imaging Center, National Taiwan University, No. 81, Changxing Street, Da'an District, Taipei, 10672, Taiwan
Shi-Wei Chu
Yi-Chin Tseng
Correspondence to Shi-Wei Chu.
Tseng, YC., Chu, SW. High spatio-temporal-resolution detection of chlorophyll fluorescence dynamics from a single chloroplast with confocal imaging fluorometer. Plant Methods 13, 43 (2017). https://doi.org/10.1186/s13007-017-0194-2
Optical section
3D microscopy
Kautsky curve
Chlorophyll fluorescence transient | CommonCrawl |
Applications of differential calculus
1 Solving equations numerically: bisection and Newton's method
2 How to compare functions, L'Hopital's Rule
3 Linearization
4 The accuracy of the best linear approximation
5 Flows: a discrete model
6 Motion under forces
7 Differential equations
8 Functions of several variables
9 Optimization examples
Solving equations numerically: bisection and Newton's method
Given a function $y=f(x)$, find a number $d$ such that $f(d)=0$.
Algebraic methods only apply to a very narrow class of functions (such as polynomials of degree below $5$). What about a numerical solution? Solving the equation numerically means finding a sequence of numbers $d_n$ such that $d_n\to d$. Then $d_n$ are the approximations of the unknown solution $d$.
Let's recall how we interpreted the proof of the Intermediate Value Theorem as an iterated search for a solution of the equation $f(x)=0$. We constructed a sequence of nested intervals by cutting intervals in half. It is called bisection.
We have a function $f$ that is defined and continuous on the interval $[a,b]$ with $f(a)<0,\ f(b)>0$, and we want to approximate a $d$ in $[a,b]$ such that $f(d) = 0$. We start with the halves of $[a,b]$: $\left[ a,\frac{1}{2}(a+b) \right],\ \left[ \frac{1}{2}(a+b),b \right]$. For at least one of them, $f$ changes its sign; we call this interval $[a_1,b_1]$.
Next, we consider the halves of this new interval and so on. We continue with this process and the result is two sequences of numbers.
Exercise. Write a recursive formula for these intervals.
These two sequences of numbers form a sequence of intervals: $$[a,b] \supset [a_1,b_1] \supset [a_2,b_2] \supset ...,$$ on each of which $f$ changes its sign: $$f(a_n)<0,\ f(b_n)>0\quad \text{ or }\quad f(a_n)>0,\ f(b_n)<0.$$ We concluded that the sequences converge to the same value, $a_n\to d,\ b_n\to d$, and, furthermore, from the continuity of $f$, we concluded that $$f(a_n)\to f(d),\ f(b_n)\to f(d).$$ Thus $d$ is an (unknown) solution: $f(d)=0$.
Example. Let's review how the bisection method solves a specific equation: $$\sin x=0.$$ We started with the interval $[a_1,b_1]=[3,3.5]$ and used the spreadsheet formula for $a_n$ and $b_n$: $$\texttt{=IF(R[-1]C[3]*R[-1]C[4]<0,R[-1]C,R[-1]C[1])},$$ $$\texttt{=IF(R[-1]C[1]*R[-1]C[2]<0,R[-1]C[-1],R[-1]C)}.$$
The values of $a_n,\ b_n$ visibly converge to $\pi$ (and so does $d_n$, the mid-point of the interval) while the values of $f(a_n),\ f(b_n)$ converge to $0$. $\square$
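The same iteration can also be run outside a spreadsheet. Below is a minimal Python sketch of the bisection loop; the function, the starting interval, and the number of steps are those of the example.

```python
from math import sin, pi

def bisection(f, a, b, n=30):
    """Repeatedly halve [a, b], keeping the half on which f changes sign."""
    assert f(a) * f(b) < 0, "f must change sign on [a, b]"
    for _ in range(n):
        m = (a + b) / 2
        if f(a) * f(m) < 0:
            b = m                 # the sign change is in the left half
        else:
            a = m                 # the sign change is in the right half
    return (a + b) / 2

d = bisection(sin, 3, 3.5)
print(d, abs(d - pi))             # converges to pi
```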
There are other methods for solving $f(x)=0$. One is to use the linearization of $f$ as a substitute, one such approximation at a time.
Suppose a function $f$ is given as well as the initial estimate $x_0$ of a solution $d$ of the equation $f(x)=0$. We replace $f$ in this equation with its linear approximation $L$ at this initial point: $$L(x)=f(x_0)+f'(x_0)(x-x_0).$$ Then we solve the equation $L(x)=0$ for $x$: $$L(x)=f(x_0)+f'(x_0)(x-x_0)=0.$$ In other words, we find the intersection of the tangent line with the $x$-axis:
The equation, which is linear, is easy to solve. The point of intersection is $$x_1=x_0-\frac{f(x_0)}{f'(x_0)}.$$ Then we repeat the process for $x_1$. And so on...
This is called Newton's method. It is a sequence of numbers given recursively: $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}.$$
Warning: the method fails when it reaches a point where the derivative is equal to (or even close to) $0$.
Example. Let's use Newton's method to solve the same equation as above: $$\sin x=0.$$ We start with $x_0=3$ and use the spreadsheet formula for $x_n$: $$\texttt{=R[-1]C-SIN(R[-1]C)/COS(R[-1]C)}.$$
The sequence converges to $\pi$ very quickly (left). However, as our choice of $x_0$ approaches $\pi /2$, the value of $x_1$ becomes larger and larger. In fact, it might be so large that the sequence won't ultimately converge to $\pi$ but to $2\pi,\ 100\pi$, etc. If we choose $x_0=\pi /2$ exactly, the tangent line is horizontal, failure... $\square$
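The same recursive formula is just as easy to run in code. A minimal Python sketch, with the starting point $x_0=3$ from the example:

```python
from math import sin, cos, pi

def newton(f, fprime, x0, n=10):
    """Newton's method: repeatedly replace f by its tangent line and solve."""
    x = x0
    for _ in range(n):
        x = x - f(x) / fprime(x)   # fails if fprime(x) is zero or very small
    return x

print(newton(sin, cos, 3.0), pi)   # converges to pi in a few steps
```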
Exercise. Solve the equation $\sin x=.2$.
How to compare functions, L'Hopital's Rule
How does one compare the "power" of two functions?
When they are power functions, say $x$ and $x^2$, or $x^2$ and $x^3$, the answer is known: the higher the power the stronger the function.
The same rule applies to polynomials. What about functions in general? What we learn from the power functions is that we don't look at the difference of two functions, $$(x^2+x)-x^2\to +\infty \text{ as } x\to +\infty,$$ but at their ratio: $$\frac{x^2+x}{x^2}\to 1 \text{ as } x\to +\infty.$$
Example. We have encountered this problem indirectly many times when computing limits at infinity of rational functions. The method we used was to divide both numerator and denominator by the highest available power; for example: $$\begin{array}{lll} \frac{3x^3+x-7}{2x^3+100}&=\frac{3x^3+x-7}{2x^3+100}\frac{/x^3}{/x^3}\\ &=\frac{3+x^{-2}-7x^{-3}}{2+100x^{-3}}\\ &\to \frac{3}{2}& \text{ as } x\to \infty. \end{array}$$ We conclude that these two functions have the "same power" at infinity. We can see how they stay together below:
Even though the difference goes to infinity, the proportion remains visibly $3$ to $2$.
However, if we replace the power of the numerator with $4$, we see this: $$\begin{array}{lll} \frac{3x^4+x-7}{2x^3+100}&=\frac{3x^4+x-7}{2x^3+100}\frac{/x^4}{/x^4}\\ &=\frac{3+x^{-3}-7x^{-4}}{2x^{-1}+100x^{-4}}\\ &\to \infty& \text{ as } x\to \infty. \end{array}$$ We conclude that $3x^4+x-7$ is "stronger" at infinity than $2x^3+100$. We can see how quickly the former runs away below:
Thus to compare two functions $f$ and $g$, at infinity or at a point, we form a fraction from them and evaluate the limit of the ratio: $$\lim \frac{f(x)}{g(x)}=?$$
Definition. If the limit below is infinite, or its reciprocal is zero, $$\lim \left|\frac{f(x)}{g(x)}\right|=\infty,\ \lim \left|\frac{g(x)}{f(x)}\right|=0,$$ we say that $f$ is of higher order than $g$, and use the following notation: $$f>>g\ \text{ and }\ g=o(f).$$ The latter reads "little o".
The most important use of the latter notation is in the definition of the derivative: $$f'(a)=\lim_{\Delta x\to 0}\frac{\Delta y}{\Delta x}.$$ It can be rewritten as: $$\begin{array}{|c|}\hline \quad \Delta y=f'(a)\Delta x+o(\Delta x). \quad \\ \hline\end{array}$$
The method presented above allows us to justify the following hierarchy: $$...>>x^n>>...>>x^2>>x>>\sqrt{x}>>\sqrt[3]{x}>>...>>1.$$ Below, we see powers below $1$ on right and above $1$ on left:
Among polynomials, the degree plays the role of the order.
Theorem. For any two polynomials $P$ and $Q$, $\deg P<\deg Q$ if and only if $P=o(Q)$.
Next, where does $e^x$ fit in this hierarchy? Below, we compare $e^x$ and $x^{100}$:
The former seems stronger but, unfortunately, the method of dividing by the highest power doesn't apply here. We will be looking for another method.
The idea of using limits to determine the relative "order" of functions also applies to their behavior at a point.
Example. Consider these functions at $0$: $$\frac{3+x^{-2}-7x^{-3}}{2+10x^{-3}}\to -\frac{7}{10} \text{ as } x\to 0.$$ We conclude that these two functions have the "same order" at $0$. However, below the numerator is of higher order at $0$ than the denominator: $$\frac{3+x^{-2}-7x^{-4}}{2+10x^{-3}}\to \infty \text{ as } x\to 0.$$ $\square$
We are thus able to justify the following hierarchy at $0$: $$...>>\frac{1}{x^n}>>...>>\frac{1}{x^2}>>\frac{1}{x}>>\frac{1}{\sqrt{x}}>>\frac{1}{\sqrt[3]{x}}>>...>>1.$$ Below, we compare $1/x$ and $1/x^2$:
However, the method fails for other types of functions: where does $\ln x$ fit in this hierarchy? We need a new approach...
Let's observe that the order of a function is determined by its rate of growth, i.e., its derivative. So, to compare two functions, we can try to compare their derivatives instead. After all, the derivative is a limit, by definition. We have computed so many derivatives by now that we can use the results to evaluate some other limits.
We can justify this idea for the following simplified case:
$f(c)=g(c)=0$;
$g'(c)\neq 0$.
Then $$\frac{f(x)}{g(x)} = \frac{f(x)-0}{g(x)-0}= \frac{f(x)-f(c)}{g(x)-g(c)} =\frac{\frac{f(x)-f(c)}{x-c}}{\frac{g(x)-g(c)}{x-c} }\to \frac{f'(c)}{g'(c)} \text{ by QR.}$$
Theorem (L'Hopital's Rule). Suppose functions $f$ and $g$ are continuously differentiable. Then, for limits of the types: $$x\to \infty \text{ and } x\to a^\pm,$$ we have $$\lim \frac{f(x)}{g(x)} = \lim\frac{f'(x)}{g'(x)}$$ if the latter exists, and provided the left-hand side is an indeterminate expression: $$\lim f(x) = \lim g(x) = 0, $$ or $$\lim f(x) = \lim g(x) = \infty.$$
An informal explanation why this works is that the denominators cancel below: $$\frac{\frac{\Delta f}{\Delta x}}{\frac{\Delta g}{\Delta x}}=\frac{\Delta f}{\Delta g}.$$
Warning: L'Hopital's Rule is not the Quotient Rule (of limits or of derivatives)!
Example. Compute $$\lim_{x \to \infty} \frac{x^{2} - 3}{2x^{2} - x + 1}.$$ The old method was to divide by the highest power. Instead, we apply L'Hopital's Rule: $$=\lim_{x \to \infty} \frac{2x}{4x-1}.$$ This is still indeterminate! So, apply LR again: $$=\lim_{x \to \infty} \frac{2}{4} = \frac{1}{2}.$$ $\square$
Warning: Before you apply L'Hopital's Rule verify that this is indeed an indeterminate expression.
Example. Compute $$\begin{array}{lll} \lim_{x\to 1} \frac{\ln x}{x-1} &&\leadsto\frac{0}{0}!\\ &=\lim_{x \to 1} \frac{\frac{1}{x}}{1} \\ &= \lim_{x \to 1} \frac{1}{x} \\ &= 1. \end{array}$$ $\square$
Example. Compute $$\begin{array}{lll} \lim_{x\to \infty}\frac{e^{x}}{x^{2}} && \leadsto\frac{\infty}{\infty}& \text{ ...apply LR}\\ &=\lim_{x\to\infty} \frac{e^{x}}{2x} & \leadsto\frac{\infty}{\infty} & \text{ ...apply LR}\\ &=\lim_{x \to \infty} \frac{e^{x}}{2} \\ &= \infty. \end{array}$$ $\square$
In fact, no matter how high the power is, it comes to zero after a sufficient number of differentiations. Meanwhile, nothing happens to the exponent: $$\lim_{x \to \infty} \frac{e^{x}}{x^{n}} = \infty. $$ Therefore, we have our hierarchy appended: $$e^x>>...>>x^n>>...>>x^2>>x>>\sqrt{x}>>\sqrt[3]{x}>>...>>1.$$
What about other indeterminate expressions? Below are the possible types of limits that correspond to algebraic operations:
products:
$$0\cdot\infty;$$
differences:
$$\infty - \infty;$$
powers:
$$0^{0},\ \infty^{0},\ 1^{\infty}.$$ Idea: convert them to fractions.
Example. Evaluate: $$\lim_{x\to 0^{+}} x \ln x=?$$ How do we apply LR here? We convert to a fraction by dividing by the reciprocal: $$\begin{array}{lll} = \lim_{x\to 0^{+}} \frac{\ln x}{\frac{1}{x}} & \leadsto \frac{\infty}{\infty} \\ = \lim_{x \to 0^{+}} \frac{\frac{1}{x}}{-\frac{1}{x^{2}}}\\ = \lim_{x\to 0^{+}} -x = 0. \end{array}$$ $\square$
Example. Evaluate $$\lim_{x \to \infty}x^{\frac{1}{x}}=?$$ How do we convert to a fraction? Use log! $$\ln x^{\frac{1}{x}} = \frac{1}{x}\ln x. $$ Then, we use the fact that $\ln$ is continuous (the "magic words"): $$\begin{aligned} \ln\left( \lim_{x \to \infty} x^{\frac{1}{x}} \right) & = \lim_{x \to \infty}\left(\ln x^{\frac{1}{x}}\right) \\ &= \lim_{x \to \infty} \left(\frac{1}{x}\ln x\right) \\ &= \lim_{x\to\infty}\frac{\ln x}{x} \qquad \leadsto \frac{\infty}{\infty} \\ &\ \overset{\text{LR}}{=\! =\! =}\ \lim_{x\to\infty}\frac{\frac{1}{x}}{1} = 0. \end{aligned}$$ Therefore, $$\lim_{x\to\infty} x^{\frac{1}{x}} = e^{0} = 1. $$ $\square$
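These answers are easy to sanity-check numerically by plugging in values approaching the limit. A short Python sketch for the last two examples (whose exact limits are $0$ and $1$); the sample values are arbitrary:

```python
from math import log

# x ln x  as x -> 0+   (limit 0)
for x in (1e-2, 1e-4, 1e-6):
    print(x, x * log(x))

# x**(1/x)  as x -> infinity   (limit 1)
for x in (1e2, 1e4, 1e6):
    print(x, x ** (1 / x))
```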
Exercise. What property of limits we used at the end?
Exercise. Prove $x>>\ln x>>\sqrt{x}$ at $\infty$.
Exercise. Compare $\ln x$ and $\frac{1}{x^n}$ at $0$.
Example. The condition of the theorem that requires the ratio to be an indeterminate expression must be verified. What can happen otherwise is illustrated below. If we apply L'Hopital's Rule, we get the following: $$\lim_{x\to +\infty}\frac{1+\frac{1}{x}}{1+\frac{1}{x^2}}\ \overset{\text{LR?}}{=\! =\! =}\ \lim_{x\to +\infty}\frac{\left( 1+\frac{1}{x}\right)'}{\left( 1+\frac{1}{x^2}\right)'}=\lim_{x\to +\infty}\frac{-\frac{1}{x^2}}{-\frac{2}{x^3}}=\frac{1}{2}\lim_{x\to +\infty}x=+\infty.$$ But the Quotient Rule for limits is applicable here: $$\lim_{x\to +\infty}\frac{1+\frac{1}{x}}{1+\frac{1}{x^2}}\ \overset{\text{QR}}{=\! =\! =}\ \frac{\lim_{x\to +\infty}\left(1+\frac{1}{x}\right)}{\lim_{x\to +\infty}\left(1+\frac{1}{x^2}\right)}=\frac{1}{1}=1.$$ A mismatch! L'Hopital's Rule is inapplicable because the limits of the numerator and the denominator of the original fraction exist! Here is another example: $$\lim_{x\to 1}\frac{x^{2}}{x} \ \overset{\text{LR?}}{=\! =\! =}\ \lim_{x \to 1}\frac{2x}{1} = 2...$$ $\square$
Exercise. Describe this class of functions: $o(1)$.
Linearization
It is a reasonable strategy to answer a question that you don't know the answer to by answering instead another one -- reasonably close -- with a known answer.
What is the square root of $4.1$? I know that $\sqrt{4}=2$, so I'll say that it's about $2$.
What is the square root of $4.3$? It's about $2$.
And so on. These are all reasonable estimates, but they are all the same! Our interpretation of this observation is that we approximate the function $f(x)=\sqrt{x}$ by a constant function $C(x)=2$.
It is a crude approximation!
The error of an approximation is the difference between the two functions, i.e., the function that gives us the lengths of these segments: $$E(x) = | f(x) - C | . $$ But what $C$ do we choose? Even though the choice is obvious, let's run through the argument anyway. We want to make sure that the error diminishes as $x$ is getting closer to $a$. In other words, we would like the following to be satisfied: $$E(x)=| f(x) - C |\to 0 \text{ as } x\to a,\ \text{ or } f(x) \to C \text{ as } x\to a.$$ It suffices to require that $f$ is continuous at $a$ if we choose $C=f(a)$!
As long as the number is "close" to $4$ we use it or we look for another number with a known square root. For example, $\sqrt{.99}\approx \sqrt{1}=1$, $\sqrt{10}\approx \sqrt{9}=3$, and so on.
Can we do better than the horizontal line? Definitely!
We already know that the tangent line "approximates" the graph of a function: when you zoom in on the point, the tangent line will merge with the graph:
There are many straight lines that can be used to approximate the graph, including the horizontal line, but the tangent looks better!
What is so special about the tangent line?
We are about to start using more precise language. Recall from Chapter 2 that the point-slope form of a line through $(x_0,y_0)$ with slope $m$ is given by (as a relation): $$ y - y_{0} = m(x - x_{0}). $$ Specifically, we have $$x_0=a,\ y_0=f(a).$$ Therefore, we have $$y-f(a)=m(x - a) .$$ At this point, we make an important step and stop looking at this as an equation of a line and start looking at it as a formula for a new function: $$ L(x) = f(a) + m(x - a) .$$ It is a linear polynomial (because the power of $x$ is $1$ while the rest of the parameters are constant). That's why it is called a linear approximation of $f$ at $x = a$.
Among these, however, when you zoom in on the point, the tangent line -- but no other line -- will merge with the graph. Indeed, the angle between the lines is preserved under magnification:
This is the geometric meaning of "best" approximation.
Now, algebra. Once again, let's look at the error, i.e., the difference between the two functions: $$E(x) = | f(x) -L(x) | . $$
For all of these approximations, we have, just as above, $$E(x)=| f(x) - L(x) |\to 0 \text{ as } x\to a.$$
Exercise. Prove this statement. Hint: use the "magic" words.
Then, how do we choose the "best" approximation? We will choose a function -- from among these -- with the error converging to $0$ as fast as possible. Specifically, we compare this convergence with that of the linear function: $$E(x)=| f(x) - L(x) |\to 0\ \text{ vs. }\ |x-a|\to 0.$$ We know how to compare the orders of convergence from our analysis in the last section: look at the convergence of their ratio! Then, $E(x)$ converges faster if the limit of this ratio is zero; i.e., $$\frac{ f(x) -L(x) }{x-a}\to 0.$$ In other words, $$f(x) -L(x) =o(x-a).$$
Theorem (Best linear approximation). Suppose $f$ is differentiable at $x=a$ and $L(x)=f(a)+m(x-a)$ is any of its linear approximations. Then, $$\lim_{x\to a} \frac{ f(x) -L(x) }{x-a}=0 \Longleftrightarrow m=f'(a).$$
Proof. As $x\to a$, we have: $$\begin{array}{llcc} \frac{ f(x) -(f(a) + m(x-a)) }{x-a}&=&\frac{ f(x) -f(a)}{x-a} &-m \\ &\to& f'(a) &-m . \end{array}$$ $\blacksquare$
Example (root). Let's approximate $\sqrt{4.1}$.
We can't compute $\sqrt{x}$ by hand. In fact, the only meaning of $x=\sqrt{4.1}$ is that it is such a number that $x^2=4.1$. In that sense, the function $f(x)=\sqrt{x}$ is unknown.
With just a few exceptions such as $f(4)= \sqrt{4}=2$, etc. We will use these points as "anchors". Then, we can use the constant approximation and declare that $\sqrt{4.1}\approx 2$.
The best linear approximation of $f$ is also known. The reason is that, as a linear function, it can be computed by hand. Let's find this function. $$\begin{aligned} f'(x) &= \frac{1}{2\sqrt{x}} \ \Longrightarrow \\ f'(4) &= \frac{1}{2\sqrt{4}} = \frac{1}{4}. \end{aligned}$$ The best linear approximation is: $$\begin{array}{lll} L(x) &= f(a) &+ f'(a) (x - a) \\ & = 2 &+ \frac{1}{4} (x - 4). \end{array} $$ This function is a replacement for $f(x)=\sqrt{x}$ in the vicinity of the "anchor point" $x=4$.
Next, $$\begin{aligned} L(4.1) &= 2 + \frac{1}{4}(4.1 - 4) \\ & = 2 + \frac{1}{4} \cdot 0.1 \\ & = 2 + 0.025 \\ & = 2.025. \end{aligned} $$ Thus, $2.025$ is an approximation of $\sqrt{4.1}$. $\square$
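The same computation, together with a comparison against the (normally unknown) true value, takes only a few lines of Python. The code below is a sketch of this particular example, not a general-purpose routine:

```python
from math import sqrt

a = 4                              # anchor point with a known square root
f_a = 2
fprime_a = 1 / (2 * sqrt(a))       # 1/4

def L(x):
    """Best linear approximation of sqrt(x) at a = 4."""
    return f_a + fprime_a * (x - a)

x = 4.1
print(L(x), sqrt(x), abs(L(x) - sqrt(x)))   # 2.025 vs 2.02485..., error ~1.5e-4
```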
Exercise. Find the best linear approximation of $f(x)=x^{1/3}$ at $a=1$. Use it to estimate $1.1^{1/3}.$
Exercise. Use the best linear approximation of $f(x)=\sqrt{\sin x}$ to estimate $\sqrt{\sin \pi/2}$.
Replacing a function with its linear approximation is called linearization. Linearizations make a lot of things much simpler.
Example (integration). Linearization helps with integration. If we imagine that we don't know the antiderivative of $\sin x$, we can still make progress: $$\int \sin x\, dx\approx \int x\, dx=\frac{x^2}{2}+C.$$
In fact, the result matches the quadratic approximation of the true antiderivative, $-\cos x$, up to the constant! $\square$
Example (limits). Not only are all linear functions, $L(x)=mx+b,\ m\ne 0$, continuous, but the relation between $\varepsilon$ and $\delta$ in the definition of continuity is also very simple: to ensure $$|x-a|<\delta \Longrightarrow |L(x)-L(a)|<\varepsilon,$$ we simply choose: $$\delta=\tfrac{1}{|m|}\varepsilon.$$
We now interpret these two, as before:
$\delta$ is the accuracy of the measurement of $x$ and
$\varepsilon$ is the accuracy of the indirect evaluation of $y=L(x)$.
Now, if $L$ is a linearization of function $y=f(x)$ around $a$, we can have a similar, approximate, analysis for $f$: we can make $\Delta f=f(a+\Delta x)-f(a)$ as small as we like by choosing $\Delta x$ small enough.
How do we find this $\Delta x$? We linearize: $$f'(a)=\frac{dy}{dx}\Bigg|_{x=a}\approx \frac{\Delta y}{\Delta x}.$$ Then, the change of the output variable is approximately proportional to the change of the input variable: $$\begin{array}{|c|}\hline \quad \Delta y\approx f'(a) \Delta x. \quad \\ \hline\end{array}$$ How well this approximation works is discussed in the next section. This idea can also be expressed via the differentials: $$\begin{array}{|c|}\hline \quad dy= f'(a) dx. \quad \\ \hline\end{array}$$ The equation is, in fact, their definition.
Example (error estimation). Let's revisit the example of evaluating the area $A=f(x)=x^2$ of square tiles when their dimensions are close to $10\times 10$.
Suppose the desired accuracy of $A$ is $\Delta A =5$; what should be the accuracy $\Delta x$ of the measurement of $x$? By trial-and-error, we discovered that $\Delta x=.2$ is appropriate: $$\begin{array}{lll} A&=(10 \pm .2)^2\\ &=10^2 \pm 2 \cdot 10 \cdot .2 +.2^2 \\ &\text{or }100.04 \pm 4. \end{array}$$ Instead, to get a quick "ballpark" figure, we find the derivative, $2x$, of $f(x)=x^2$ at $x=10$: $$f'(10)=2\cdot 10=20,$$ and then apply the above formula: $$\Delta A\approx f'(10) \Delta x=20\Delta x \ \Longrightarrow\ \Delta x \approx 5/20=.25.$$ So, in order to achieve the accuracy of $5$ square inches in the computation of the area of a $10\times 10$ tile, one will need a $1/4$-inch accuracy of the measurement of the side of the tile. $\square$
Exercise. What is the accuracy of the computation of the surface area of the Earth if the radius is found to be $6,360\pm 30$ kilometers? What about the volume?
Example. With enough "anchor" points, we can produce an approximation of the whole, unknown, function. The result for $f(x)=\sqrt{x}$ is shown below:
As a summary, below we illustrate how we attempt to approximate a function around the point $(1,1)$ with constant functions first; from those we choose the horizontal line through the point. This line then becomes one of many linear approximations of the curve that pass through the point; from those we choose the tangent line.
In Chapter 15, we will see that these are just the two first steps in the sequence of approximations...
The accuracy of the best linear approximation
An "approximation" is meaningless if it comes as a single number.
We need to know more about what we have to make this number useful. How close is $3.14$ to $\pi$? Even more important is the question: how close is $\pi$ to $3.14$? Or, from the last section, how close is $\sqrt{4.1}$ to $2.025$? The answer to the question will give $\sqrt{4.1}$ an exclusive range of possible values.
First, the constant approximation. Even though the function $y=f(x)$ coincides with $y=C=f(a)$ at $x=a$, it may run away very fast and very far afterwards. How far? The only limit is the rate of growth of $f$, i.e., its derivative! So, we can predict the behavior of $f$ if we have a priori information about $|f'|$. Then we have a range of possible values for $f(x)$, for all $x$! The result is a funnel that contains the (unknown) graph of $y=f(x)$:
Indeed, we can conclude from the Mean Value Theorem that $$|f(x)-C(x)|=|f(x)-f(a)| =|f'(t)(x-a)|\le K|x-a|,$$ if we only know that $|f'(t)|\le K$ for all $t$ between $a$ and $x$.
Example. How close is $\sqrt{4.1}$ to $2$? We estimate the derivative of $f(x)=\sqrt{x}$: $$f'(t)=\frac{1}{2\sqrt{t}}\le\frac{1}{2\sqrt{4}}\le .25.$$ Then $$|\sqrt{4.1}-2|\le .25|4.1-4|=.025.$$ $\square$
Next, the linear approximation. Two graphs below have the same tangent line but one is better approximated. What causes the difference?
There are slopes close to our location that are very different from that of the tangent line. The quantity that makes the slopes, i.e., the derivatives, change is the second derivative. We can then predict the accuracy of the approximation if we have a priori information about the magnitude of the second derivative.
Theorem (Error bound). Suppose $f$ is twice differentiable at $x=a$ and $L(x)=f(a)+f'(a)(x-a)$ is its best linear approximation at $a$. Then, $$E(x)=|f(x)-L(x)| \le \tfrac{1}{2}K(x-a)^2,$$ where $K$ is a bound of the second derivative on the interval from $a$ to $x$: $$|f' '(t)|\le K \text{ for all } t \text{ in this interval}.$$
The theorem says that the error $E(x)$ is bounded by a constant multiple of $(x-a)^2$; in particular, $E=o(x-a)$.
When $K$ is fixed, we have a range of possible values of the function $y=f(x)$! The result is, again, a funnel that contains the (unknown) graph of $y=f(x)$:
This time, the two edges of the funnel are parabolas: $$y=L(x)\pm\frac{1}{2}K(x-a)^2.$$ Therefore, the value of $K$ makes the funnel wider or narrower.
The practical meaning of the theorem is the following. For each $x$, this error bound, $$\varepsilon (x)= \tfrac{1}{2}K(x-a)^2,$$ is the accuracy of the approximation in the sense that the interval located on the $y$-axis $$[L(x)-\varepsilon (x),L(x)+\varepsilon (x)]$$ contains the true number, $f(x)$.
This interval is a vertical cross-section of the funnel.
Example (root). We continue with the last example: $$\sqrt{4.1}\approx L(4.1)=2.025.$$ Once again, the answer is unsatisfactory because it doesn't really tell us anything about the true value of $\sqrt{4.1}$. In fact, $\sqrt{4.1}\ne 2.025$! Let's apply the theorem. First, we find the second derivative: $$\begin{aligned} f(x) &= \sqrt{x} \ \Longrightarrow f'(x) = \frac{1}{2\sqrt{x}} \ \Longrightarrow \\ f' '(x) &= \left( \frac{1}{2\sqrt{x}} \right)'=\left( \frac{1}{2} x^{-1/2} \right)' \quad\text{ by PF}\\ &=\frac{1}{2} \left( -\frac{1}{2} \right) x^{-1/2-1}\\ &=-\frac{1}{4} x^{-3/2}. \end{aligned}$$ So, we need a bound for this function: $$|f' '(x)|=\tfrac{1}{4}\left|x^{-3/2}\right|$$ over the interval $[4,4.1]$. This is a simple, decreasing function.
Therefore, $$|f' '(x)|\le |f' '(4)|=\tfrac{1}{4}\left|4^{-3/2}\right|=\tfrac{1}{32}=0.03125.$$ This is our best choice for $K$! According to the theorem, our conclusion is that the error of the approximation cannot be larger than the following: $$E(x)=|f(x)-L(x)| \le \tfrac{1}{2}0.03125\cdot (x-4)^2.$$ Specifically, $$E(4.1)=\left|\sqrt{4.1}-L(4.1)\right| \le \tfrac{1}{2}0.03125\cdot (4.1-4)^2,$$ or $$\left|\sqrt{4.1}-2.025\right| \le 0.00015625.$$ Therefore, we conclude:
$\sqrt{4.1}$ is within $\varepsilon = 0.00015625$ from $2.025$.
In other words, we have: $$2.025 - 0.00015625\le \sqrt{4.1}\le 2.025 +0.00015625,$$ or $$2.02484375 \le \sqrt{4.1} \le 2.02515625.$$ We have found an interval that is guaranteed to contain the number we are looking for! $\square$
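As a quick numerical check of this "funnel", the following Python sketch verifies the inequality $|f(x)-L(x)|\le\tfrac{1}{2}K(x-a)^2$ with $K=1/32$ at a few points of $[4,4.1]$; the test points are arbitrary.

```python
from math import sqrt

K = 1 / 32                                    # bound for |f''| on [4, 4.1]
L = lambda x: 2 + (x - 4) / 4                 # best linear approximation at a = 4

for x in (4.02, 4.05, 4.1):
    error = abs(sqrt(x) - L(x))
    bound = 0.5 * K * (x - 4) ** 2
    print(x, error <= bound, error, bound)    # True at every test point
```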
Example (sin). Approximate $\sin .01$. Note that the constant approximation is $\sin .01 \approx \sin 0 =0$. So, we have $a=0$. Now we compute: $$\begin{array}{lll} f(x)=\sin x& \\ f'(x)=\cos x& \Longrightarrow& L(x)=0+\cos x \Big|_{x=0}(x-0)&\Longrightarrow& L(x)=x\\ f' '(x)=-\sin x& \Longrightarrow & |f' '(x)|=|\sin x|& \Longrightarrow &|f' '(x)|\le 1=K. \end{array}$$ Thus, $$\sin .01 \approx .01,$$ and, furthermore, the accuracy is at worst $$\varepsilon = \tfrac{1}{2}K(x-a)^2=.5\cdot 1\cdot .01^2=.00005.$$ Therefore, $$.01-.00005=.00995\le \sin.01 \le .01005 =.01+.00005.$$
Note that the choice of $K$ in the last example was the best possible. This was, therefore, a worst-case scenario. In contrast, here $K=1$ isn't the best possible choice. By limiting our attention to the relevant part of the graph of $|f' '|$, i.e., the interval $[0,.01]$, we discover a better value for the bound: $K=.01$. This shrinks the error bound by a factor of $100$. $\square$
Knowing the concavity of the function that we approximate cuts the funnel in half.
Corollary (One-sided error bound). Suppose $f$ is twice differentiable at $x=a$ and $L(x)=f(a)+f'(a)(x-a)$ is its best linear approximation at $a$. Suppose also that $|f' '(x)|\le K$ for all $x$ in $[a,b]$. Then,
when $f$ is concave up (i.e., $f' '>0$), we have for each $x$ in $[a,b]$:
$$L(x)\le f(x) \le L(x)+\tfrac{1}{2}K(x-a)^2;$$
when $f$ is concave down (i.e., $f' '<0$), we have for each $x$ in $[a,b]$:
$$L(x)-\tfrac{1}{2}K(x-a)^2\le f(x) \le L(x).$$
Flows: a discrete model
Previously, we considered an example of how functions appear as direct representations of the dynamics given by an indirect description: motion's velocity is derived from the acceleration and then the location from the velocity. The acceleration (or the force) is a function of time and so is the velocity. A different, and just as important, example of such emergence is flows of liquids. However, here the velocity is a function of location.
We start with a discrete model.
Suppose there is a pipe with the velocity of the stream measured somehow at each location. Alternatively, this could be a canal with water that has the exact same velocity -- parallel to the canal -- at all locations across it. In other words, the velocity only varies along the length of the pipe or the canal. This makes the problem one-dimensional.
Problem: Trace a single particle of this stream.
We will simply apply the familiar formula to compute the location from the velocity.
A fixed time increment $\Delta t$ is supplied ahead of time even though it can also be variable.
We start with the following two quantities provided by the model we are to implement:
the initial time $t_0$, and
the initial location $p_0$.
They are placed in the first row of the spreadsheet; for example: $$\begin{array}{c|c|c|c} &\text{iteration } n&\text{time }t_n&\text{velocity }v_n&\text{location }p_n\\ \hline \text{initial:}&0&3.5&--&22\\ \end{array}$$ This is the starting point. We would like to know the values of these quantities at every moment of time, in these increments.
As we progress in time and space, new numbers are placed in the next row of our spreadsheet. This is how the second row, $n=1,\ t_1=t_0+\Delta t$, is finished.
The current velocity $v_1$ is given in the first cell of the row, and the initial location $p_0$ is in the last cell of the previous row. The following recursive formula is placed in the second cell of the new row of our spreadsheet:
next location $=$ initial location $+$ current velocity $\cdot$ time increment.
This dependence is shown below for $\Delta t=.1$: $$\begin{array}{c|c|ccccc} &\text{iteration } n&\text{time }t_n&&\text{velocity }v_n&&\text{location }p_n\\ \hline \text{initial:}&0&t_0=3.5&&--&&p_0=22\\ &&\downarrow&&&\swarrow&\downarrow\\ &1&t_1=3.5+.1&\to&v_1=33&\to&p_1=22+33\cdot .1\\ &&\downarrow&&&\swarrow&\downarrow\\ &2&t_2=?&\to&v_2=?&\to&p_2=?\\ \end{array}$$ Also, in a flow, the current velocity of a particle depends -- somehow -- on the current time or its (last) location, as indicated by the arrows. This dependence may be an explicit formula or it may come from the instruments readings.
We continue with the rest in the same manner. As we progress in time and space, numbers are supplied and placed in each of the columns of our spreadsheet one row at a time: $$t_n,\ v_n,\ p_n,\ n=1,2,3,...$$ We then plot the time and location on the plane with axes $t$ (horizontal) and $p$ (vertical). The result is a sequence of points, developing one row at a time, that might look like this:
The first quantity in each row we compute is the time:
next moment of time $=$ last moment of time $+$ time increment,
$t_{n+1}=t_n+\Delta t$.
The next is the velocity $v_{n+1}$ which may be constant or may explicitly depend on the values in the previous row.
The $n$th iteration of the location $p_n$ is computed:
next location $=$ last location $+$ current velocity $\cdot$ time increment,
$p_{n+1}=p_n+v_{n+1}\cdot \Delta t$.
The values of the location are placed in the third column of our spreadsheet.
The result is a growing table of values: $$\begin{array}{c|c|c|c|c|c} &\text{iteration } n&\text{time }t_n&&\text{velocity }v_n&\text{location }p_n\\ \hline \text{initial:}&0&3.5&&--&22\\ &1&3.6&&33&25.3\\ &...&...&&...&...\\ &1000&103.5&&4&336\\ &...&...&&...&...\\ \end{array}$$ The result may be seen as three sequences $t_n,\ v_n,\ p_n$ or as the table of values of two functions of $t$. The result might look like this:
Let's implement this algorithm with a spreadsheet.
Example. Suppose that the velocity of the stream is constant: $$v_{n+1}=.5.$$
Exercise. Find the recursive formula for the position.
Example. Suppose that there is a source of water in the middle of the pipe and the velocity of the stream is directly proportional to the distance from the source: $$v_{n+1}=.5\cdot p_n.$$
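The spreadsheet columns translate directly into a loop. Below is a minimal Python sketch for this second example, with the illustrative initial values $t_0=3.5$, $p_0=22$ from the table above.

```python
dt = 0.1                       # time increment
t, p = 3.5, 22.0               # initial time and location (illustrative values)

rows = [(0, t, None, p)]       # the initial row: no velocity yet
for n in range(1, 11):
    t = t + dt                 # next moment of time
    v = 0.5 * p                # current velocity depends on the last location
    p = p + v * dt             # next location = last location + velocity * increment
    rows.append((n, t, v, p))

for row in rows:
    print(row)
```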
Exercise. Modify the formulas for the case of a variable time increment: $\Delta t_{n+1}=t_{n+1}-t_n$.
What about a continuous flow?
We can imagine that the object moves from a given location to the next following a straight line. This does make the motion continuous. However, what if it is the relation between the position and the velocity that holds continuously?
More precisely, we pose the following:
Problem: suppose the velocity comes from an explicit formula as a function of the location, $z=f(y)$, defined on an interval $J$; is there an explicit formula for the location as a function of time, $y=y(t)$, defined on an interval $I$?
We assume that there is a version of our recursive relation, $$p_{n+1}=p_n+v_{n+1}\cdot \Delta t,$$ for every $\Delta t>0$ small enough. Then our two functions have to satisfy: $$v_n=f(p_n) \text{ and } p_n=y(t_n).$$
We substitute these two, as well as $t=t_n$, into the recursive formula for $p_{n+1}$: $$y(t+\Delta t)=y(t)+f(y(t+\Delta t))\cdot \Delta t.$$ Then, $$\frac{y(t+\Delta t)-y(t)}{\Delta t}=f(y(t+\Delta t)).$$ Taking the limit over $\Delta t\to 0$ gives us the following: $$y'(t)=f(y(t)),$$ provided $y=y(t)$ is differentiable at $t$ and $z=f(y)$ is continuous at $y(t)$. The above result is called a differential equation. Such equations are considered below and throughout the rest of the text.
Example. Let's again consider the case when the velocity is proportional to the location: $$v_{n+1}=.5\cdot p_n.$$ For the continuous case, we have a differential equation: $$y'=.5\cdot y.$$ We already know its solution: $$y(t)=Ce^{.5t},$$ for any $C$. $\square$
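We can compare the discrete recursion with this exact solution: taking $y(0)=1$ (so $C=1$), the recursive values approach $e^{.5t}$ as $\Delta t$ shrinks. A sketch:

```python
from math import exp

def recursive_solution(dt, T=2.0, y0=1.0):
    """Run y_{n+1} = y_n + 0.5*y_n*dt from t = 0 to t = T."""
    y = y0
    for _ in range(round(T / dt)):
        y = y + 0.5 * y * dt
    return y

exact = exp(0.5 * 2.0)                        # y(t) = Ce^{0.5t} with C = 1, at t = 2
for dt in (0.5, 0.1, 0.01):
    print(dt, recursive_solution(dt), exact)  # the discrete values approach the exact one
```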
Now a flow on the plane. In that case, we only show the two spatial coordinates and hide the time:
Instead of two (velocity -- location), there will be four main columns when the motion is two-dimensional and six when it is three-dimensional: $$\begin{array}{c|c|cc|cc|c} \text{}&\text{time}&\text{horiz.}&\text{horiz.}&\text{vert.}&\text{vert.}&...\\ \text{} n&\text{}t&\text{vel. }x'&\text{loc. }x&\text{vel. }y'&\text{loc. }y&...\\ \hline 0&3.5&--&22&--&3&...\\ 1&3.6&33&25.3&4&3.5&...\\ ...&...&...&...&...&...&...\\ 1000&103.5&4&336&66&4&...\\ ...&...&...&...&...&...&...\\ \end{array}$$
Example. Suppose the velocity of the flow depends on the location. In fact, let the velocity be proportional to the location: $$v_{n+1}=.2\cdot p_n,$$ for both horizontal and vertical. This is the result:
The particles are flying away from the center. For more complex patterns, the vertical and horizontal will have to be interdependent. For example, the horizontal velocity may be proportional to the vertical location and the vertical velocity proportional to the negative of the horizontal location:
Motion under forces
The concepts introduced above allow us to state some elementary facts about the Newtonian physics, in dimension $1$.
All functions are functions of time.
The main quantities are the following: $$\begin{array}{lllll} \text{quantity }&\text{incremental }&\text{nodes }&\text{continuous }\\ \hline \text{location: }&r&\text{ primary }&r&\\ \text{displacement: }&D=\Delta r&\text{ secondary }&D=dr&\\ \text{velocity: }&v=\frac{\Delta r}{\Delta t}&\text{ secondary }&v=r'&\\ \text{momentum: }&&\text{ secondary }&p=mv&\\ \text{impulse: }&J=\Delta p&\text{ primary }&J=dp&\\ \text{acceleration: }&a=\frac{\Delta^2 r}{\Delta t^2}&\text{ primary }&a=r' '& \end{array}$$
For example, the upward concavity of the function below is obvious, which indicates a positive acceleration:
Newton's First Law: If the net force is zero, then the velocity $v$ of the object is constant: $$F=0 \Longrightarrow v=\text{ constant }.$$
The law can be restated without invoking the geometry of time:
If the net force is zero, then the displacement $dr$ of the object is constant:
$$F=0 \Longrightarrow dr=\text{ constant }.$$ The law shows that the only possible type of motion in this force-less and distance-less space-time is uniform; i.e., it is a repeated addition: $$r(t+1)=r(t)+c.$$
Newton's Second Law: The net force on an object is equal to the derivative of its momentum $p$: $$F=p'.$$
Newton's Third Law: If one object exerts a force $F_1$ on another object, the latter simultaneously exerts a force $F_2$ on the former, and the two forces are exactly opposite: $$F_1 = -F_2.$$
Law of Conservation of Momentum: In a system of objects that is "closed"; i.e.,
there is no exchange of matter with its surroundings, and
there are no external forces; $\\$
the total momentum is constant: $$p=\text{ constant }.$$
In other words, $$J=dp=0.$$ To prove, consider two particles interacting with each other. By the third law, the forces between them are exactly opposite: $$F_1=-F_2.$$ Due to the second law, we conclude that $$p_1'=-p_2',$$ or $$(p_1+p_2)'=0.$$
Exercise. State the equation of motion for a variable-mass system (such as a rocket). Hint: apply the second law to the entire, constant-mass system.
Exercise. Create a spreadsheet for all of these quantities.
To study this physics in dimension $2$ or $3$, one simply combines the three functions for each of the above quantities into one.
Suppose we know the forces affecting a moving object. How can we predict its dynamics?
Assuming a fixed mass, the total force gives us our acceleration. Then, we apply the same formula (i.e., anti-differentiation) to compute:
the velocity from the acceleration, and then
the location from the velocity.
We will examine a discrete model. As an approximation, it allows us to avoid anti-differentiation and rely entirely on algebra.
We start with the following three quantities that come from the setup of the motion:
the initial time $t_0$,
the initial velocity $v_0$, and
the initial location $p_0$.
They are placed in the four consecutive cells of the first row of the spreadsheet: $$\begin{array}{c|c|c|c|c} &\text{iteration } n&\text{time }t_n&\text{acceleration }a_n&\text{velocity }v_n&\text{location }p_n\\ \hline \text{initial:}&0&3.5&--&33&22\\ \end{array}$$ Another quantity that comes from the setup is
the current acceleration $a_1$.
It will be placed in the next row.
This is the starting point. We would like to know the values of all of these quantities at every moment of time, in these increments.
As we progress in time and space, new numbers are placed in the next row of our spreadsheet. This is how the second row, $n=1,\ t_1=t_0+\Delta t$, is completed.
The current acceleration $a_1$ is given in the first cell of the second row. The current velocity $v_1$ is found and placed in the second cell of the second row of our spreadsheet:
current velocity $=$ initial velocity $+$ current acceleration $\cdot$ time increment.
The second quantity we use is the initial location $p_0$. The following is placed in the third cell of the second row:
current location $=$ initial location $+$ current velocity $\cdot$ time increment.
This dependence is shown below: $$\begin{array}{c|c|c|cccc} &\text{iteration } n&\text{time }t_n&\text{acceleration }a_n&&\text{velocity }v_n&&\text{location }p_n\\ \hline \text{initial:}&0&3.5&--&&33&&22\\ &&&& &\downarrow& &\downarrow\\ \text{current:}&1&t_1&66&\to&v_1&\to&p_1\\ \end{array}$$
These are recursive formulas just as in the last section.
We continue with the rest in the same manner. As we progress in time and space, numbers are supplied and placed in each of the four columns of our spreadsheet one row at a time: $$t_n,\ a_n,\ v_n,\ p_n,\ n=1,2,3,...$$
The first quantity in each row we compute is the time: $$t_{n+1}=t_n+\Delta t.$$ The next is the acceleration $a_{n+1}$ which may be constant (such as in the case of a free-falling object) or may explicitly depend on the values in the previous row.
Where does the current acceleration come from? It may come as pure data: the column is filled with numbers ahead of time or it is being filled as we progress in time and space. Alternatively, there is an explicit, functional dependence of the acceleration (or the force) on the rest of the quantities. The acceleration may depend on the following:
1. the current time, e.g., $a_{n+1}=\sin t_{n+1}$ such as when we speed up the car, or
2. the last location, e.g., $a_{n+1}=1/p_n^2$ such as when the gravity depends on the distance to the planet, or
3. the last velocity, e.g., $a_{n+1}=-v_n$ such as when the air resistance works in the opposite direction of the velocity,
or all three.
Exercise. Draw arrows in the above table to illustrate these dependencies.
Simple examples of case 1 and case 2 are addressed below and in the next section, respectively. More examples of case 1 are discussed in Chapter 11 and in the multidimensional setting in Chapter 17. Case 2 and case 3 are considered in Chapter 22.
The $n$th iteration of the velocity $v_n$ is computed:
current velocity $=$ last velocity $+$ current acceleration $\cdot$ time increment,
$v_{n+1}=v_n+a_{n+1}\cdot \Delta t$.
The values of the velocity are placed in the third column of our spreadsheet.
current location $=$ last location $+$ current velocity $\cdot$ time increment,
$p_{n+1}=p_n+v_{n+1}\cdot \Delta t$.
The result is a growing table of values: $$\begin{array}{c|c|c|c|c|c} &\text{iteration } n&\text{time }t_n&&\text{acceleration }a_n&\text{velocity }v_n&\text{location }p_n\\ \hline \text{initial:}&0&3.5&&--&33&22\\ &1&3.6&&66&38.5&25.3\\ &...&...&&...&...&...\\ &1000&103.5&&666&4&336\\ &...&...&&...&...&...\\ \end{array}$$ The result may be seen as four sequences $t_n,\ a_n,\ v_n,\ p_n$ or as the table of values of three functions of $t$.
Exercise. Implement a variable time increment: $\Delta t_{n+1}=t_{n+1}-t_n$.
Example (rolling ball). A rolling ball is unaffected by horizontal forces. Therefore, $a_n=0$ for all $n$. The recursive formulas for the horizontal motion simplify as follows:
the velocity $v_{n+1}=v_n+a_n\cdot \Delta t=v_n=v_0$ is constant;
the position $p_{n+1}=p_n+v_n\cdot \Delta t=p_n+v_0\cdot \Delta t$ grows at equal increments.
In other words, the position depends linearly on the time. $\square$
Example (falling ball). A falling ball is unaffected by horizontal forces and the vertical force is constant: $a_n=a$ for all $n$. The two recursive formulas for the vertical motion simplify as follows:
the velocity $v_{n+1}=v_n+a_n\cdot \Delta t=v_n+a\cdot \Delta t$ grows at equal increments;
the position $p_{n+1}=p_n+v_n\cdot \Delta t$ grows at linearly increasing increments.
In other words, the position depends quadratically on the time. $\square$
More interesting applications involve both vertical and horizontal components.
Instead of three (acceleration -- velocity -- location), there will be six main columns when the motion is two-dimensional, such as in the case of angled flight, and nine when it is three-dimensional, such as in the case of side wind: $$\begin{array}{c|c|ccc|ccc|c} \text{}&\text{time}&\text{horiz.}&\text{horiz.}&\text{horiz.}&\text{vert.}&\text{vert.}&\text{vert.}&...\\ \text{} &\text{}&\text{acceleration}&\text{velocity}&\text{position}&\text{acceleration}&\text{velocity}&\text{position}&...\\ \text{} n&\text{}t&a_n&v_n&x_n&b_n&u_n&y_n&...\\ \hline 0&3.5&--&33&22&-10&5&3&...\\ 1&3.6&66&38.5&25.3&-15&4&3.5&...\\ ...&...&...&...&...&...&...&...&...\\ 1000&103.5&666&4&336&14&66&4&...\\ ...&...&...&...&...&...&...&...&...\\ \end{array}$$
The shape of the trajectory is then unpredictable.
Example (cannon). A falling ball is unaffected by horizontal forces and the vertical force is constant: $$x:\ a_{n+1}=0;\quad y: b_{n+1}=-g.$$ Now recall the setup considered previously: from a $200$ feet elevation, a cannon is fired horizontally at $200$ feet per second.
The initial conditions are:
the initial location, $x:\ x_0=0$ and $y:\ y_0=200$;
the initial velocity, $x:\ v_0=200$ and $y:\ u_0=0$.
Then we have two pairs of recursive equations independent of each other: $$\begin{array}{lll} x:& v_{n+1}&=v_0, & &x_{n+1}&=x_n&+v_n\Delta t;\\ y:& u_{n+1}&=u_n&-g\Delta t, &y_{n+1}&=y_n&+u_n\Delta t. \end{array}$$
Implemented with a spreadsheet, the formulas produce the same results as the explicit formulas did before:
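A sketch of the same recursion in Python (the value $g=32$ ft/sec$^2$ and the time increment are assumed choices):

```python
# Cannon fired horizontally: no horizontal force, constant vertical force -g.
g, dt = 32.0, 0.01            # ft/sec^2 (assumed value) and time increment
x, y = 0.0, 200.0             # initial location
v, u = 200.0, 0.0             # initial horizontal and vertical velocities
t = 0.0
while y > 0:                  # iterate until the ball reaches the ground
    t += dt
    x = x + v * dt            # x_{n+1} = x_n + v_n * dt  (v stays v_0)
    y = y + u * dt            # y_{n+1} = y_n + u_n * dt
    u = u - g * dt            # u_{n+1} = u_n - g * dt
print("flight time ~", round(t, 2), "sec,  range ~", round(x, 1), "ft")
```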
Example ("solar" system). The gravity is constant (independent of location) in the free-fall model.
If we now look at the solar system, maybe the gravity is also constant?
The direction will, of course, matter!
A differential equation is an equation that relates values of the derivative of the function to the function's values. To solve the equation is to find all possible functions that satisfy it.
Example. The simplest example of a differential equation is: $$f'(x)=0 \text{ for all } x.$$ We already know the solution:
constant functions are solutions, and, conversely,
only constant functions are solutions.
Example. We can replace $0$ with any function: $$f'(x)=g(x) \text{ for all } x.$$ We already know the solution:
a solution $f$ has to be an antiderivative of $g$, and, conversely,
only antiderivatives of $g$ are solutions.
However, let's take a look at the problem with a fresh eye.
For the first kind of differential equation, we will be looking for functions $y=y(x)$ that satisfy the equation, $$y'(x)=g(x) \text{ for all } x.$$ What we know from the equation is the value of the derivative $y'(x)$ of $y$ at every point $x$ but we don't know the value $y(x)$ of the function itself. Then, for every $x$, we know the slope of the tangent line at $(x,y(x))$. As $y(x)$ is unknown, in order to visualize the data, we plot the same slope for every point $(x,y)$ on a given vertical line:
Thus, for each $x=c$, we indicate the angle $\alpha$, with $g(c)=\tan \alpha$, of the intersection of the graph of the unknown function $y=y(x)$ and the vertical line $x=c$.
Metaphorically, we are to create a fabric from two types of threads. The vertical ones have already been placed and the way the threads of the second type are to be weaved in has also been set.
The challenge is to devise a function that would cross these lines at these exact angles.
Is it always possible to have these functions? Yes, at least when $g$ is continuous. How do we find them? Anti-differentiation.
Example. A familiar interpretation is that of an object the velocity $v(x)=y'(x)$ of which is known at any moment of time $x$ and its location $y=y(x)$ is to be found.
These solutions fill the plane without intersections. They are just the vertically shifted versions of one of them. $\square$
For another kind of a differential equation, what if the derivative depends on the values of the function only (and not the values of the variable)?
We are looking for functions $y=y(x)$ that satisfy the equation, $$y'(x)=h(y(x)) \text{ for all } x.$$ What we know from the equation is the value of the derivative $y'(x)$ of $y$ at every point $(x,y)$ even though we don't know the value $y(x)$ of the function itself. Then, for every $y$, we know the slope of the tangent line at $(x,y)$. As $y(x)$ is unknown, in order to visualize the data, we plot the same slope for every point $(x,y)$ on a given horizontal line:
Thus, for each $y=d$, we indicate the angle $\alpha$, with $h(d)=\tan \alpha$, of the intersection of the graph of the unknown function $y=y(x)$ and the vertical line $x=c$ with $y(c)=d$.
Metaphorically, we are to create a fabric from two types of threads. The horizontal ones have already been placed and the way the threads of the second type are to be weaved in has also been set.
Is it always possible to have these functions? Yes, at least when $h$ is differentiable. The challenge is to devise a function that would cross these lines at these exact angles. How this can be done numerically is discussed later.
Example. An interpretation is a stream of liquid with its velocity known at every location $y$ and we need to trace the path of a particle initially located at a specific place $y_0$.
Example. This is what happens when the velocity is equal to the location: $$y'=y.$$ To solve, what is the function equal to its derivative? It's the exponent $y=e^x$, of course. However, all of its multiples $y=Ce^x$ are also solutions:
These multiples of the exponent are shown on the left, not to be confused with the exponents of various bases (middle). Just as the differential equations of the first kind, these solutions fill the plane. $\square$
Exercise. What is the transformation of the plane that creates all these curves from one?
Exercise. Can the velocity be really "equal" to the location?
Exercise. What happens when the velocity is proportional to the location; i.e., $$y'=ky?$$
Compare and contrast: $$\begin{array}{lll} \text{DE: }&y'(x)=g(x)&y'(x)=h(y(x))\\ \hline \text{the slopes are the same along any }&\text{ vertical line }&\text{ horizontal line }\\ \text{the velocity is known at any }&\text{ time }&\text{ location }\\ \end{array}$$
Of course, differential equations are also common that have neither of these patterns:
Even though we can't solve those equations yet, once we have a family of curves, it is easy to find a differential equation it came from.
Example. What differential equation does the family of all concentric circles around $0$ satisfy?
This family is given by: $$x^2+y^2=r^2,\ \text{ real } r.$$ We simply differentiate the equation implicitly: $$x^2+y^2=r^2\ \Longrightarrow\ 2x+2yy'=0.$$ That's the equation. $\square$
Example. What about this family of hyperbolas?
This family is given by: $$xy=C,\ \text{ real } C.$$ Again, we differentiate the equation implicitly. We have: $$y+xy'=0.$$ $\square$
Functions of several variables
Optimization examples
Example. In Chapter 7, we confirmed -- numerically -- that it is really true that $45$ degrees is the best angle to shoot for the longest distance:
Let's apply the methods of calculus to prove this fact. Recall the dynamics of this motion. The horizontal coordinate and the vertical coordinate are given by: $$\begin{cases} x&=x_0&+v_0t,\\ y&=y_0&+u_0t&-\tfrac{1}{2}gt^2, \end{cases}$$ with the following initial conditions:
$x_0$ is the initial depth,
$v_0$ is the initial horizontal component of velocity,
$y_0$ is the initial height, and
$u_0$ is the initial vertical component of velocity.
We shoot from the ground at zero depth: $x_0=y_0=0$. We assume that the initial speed is $1$ and that we shoot under angle $\alpha$. Then, $$\begin{cases} v_0&=&\cos \alpha,\\ u_0&=&\sin \alpha. \end{cases}$$ We substitute: $$\begin{cases} x&=&\cos \alpha\cdot t,\\ y&=&\sin \alpha\cdot t&-\tfrac{1}{2}gt^2. \end{cases}$$ The ground is reached at the moment $t$ when $y=0$, or $$\sin \alpha\cdot t-\tfrac{1}{2}gt^2=0.$$ We discard the starting moment, $t=0$, and end up with: $$\sin \alpha-\tfrac{1}{2}gt=0.$$ We solve for $t$, $$t=\frac{2}{g}\sin\alpha,$$ and substitute into the equation for $x$: $$x=\cos \alpha\cdot t=\cos \alpha\cdot \frac{2}{g}\sin\alpha.$$ This is the final distance, the depth of the shot: $$D(\alpha)=\frac{2}{g}\sin\alpha\cos \alpha.$$ We need to find the value of $\alpha$ that maximizes $D$. Since $D(0)=D(\pi/2)=0$, the answer lies within this interval. We differentiate: $$D'(\alpha)=\frac{2}{g}\big(\cos\alpha\cos \alpha+\sin\alpha(-\sin\alpha)\big)=\frac{2}{g}(\cos^2\alpha-\sin^2\alpha).$$ We set it equal to $0$ and conclude: $$D'(\alpha)=0\ \Longrightarrow\ \cos^2\alpha=\sin^2\alpha\ \Longrightarrow\ \cos\alpha=\sin\alpha\ \Longrightarrow\ \alpha=\frac{\pi}{4}.$$ $\square$
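A quick numerical check of this conclusion (a Python sketch; the value of $g$ is assumed and does not affect the location of the maximum):

```python
import math

g = 9.8                                       # assumed value; the optimal angle does not depend on it
def distance(alpha):
    return (2 / g) * math.sin(alpha) * math.cos(alpha)

angles = [k * math.pi / 1000 for k in range(1, 500)]
best = max(angles, key=distance)
print(round(math.degrees(best), 1))           # prints 45.0
```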
Exercise. Use a trigonometric formula to finish the solution without differentiation.
Exercise. What if we shoot from a hill?
Example (laying a pipe). Suppose we are to lay a pipe from point $A$ to point $B$ located one kilometer east and one kilometer north of $A$. The cost of laying a pipe is normally $\$50$ per meter, except for a rocky strip of land $200$ meters wide that goes east-west; here the price is $\$100$ per meter. What is the lowest cost to lay this pipe?
First we discover that it doesn't matter where the patch is located and assume that we have to cross it first. We proceed from $A$ to point $P$ on the other side of the patch and then to $B$. Then the cost $C$ is $$C=|AP|\cdot 100+|PB|\cdot 50.$$ We next denote by $x$ the distance from $P$ to the point directly across from $A$. Then $$|AP|=\sqrt{200^2+x^2} \text{ and } |PB|=\sqrt{(1000-x)^2+800^2}.$$ Therefore, we are to minimize the cost function: $$C(x)=\sqrt{200^2+x^2}\cdot 100+\sqrt{(1000-x)^2+800^2}\cdot 50.$$ Now, we convert this formula into a spreadsheet formula: $$\texttt{=sqrt(200^2+RC[-1]^2)*100+sqrt((1000-RC[-1])^2+800^2)*50.}$$ Plotting the curve, and then zooming in, suggests that the optimal choice for $x$ is about $81.5$ meters with the cost about $\$82,498.50$:
Let's confirm the result with calculus. Differentiate: $$\begin{array}{lll} C'(x)&=\left( \sqrt{200^2+x^2}\cdot 100+\sqrt{(1000-x)^2+800^2}\cdot 50\right)'\\ &=\frac{2x}{2\sqrt{200^2+x^2}}\cdot 100+\frac{-2(1000-x)}{2\sqrt{(1000-x)^2+800^2}}\cdot 50. \end{array}$$ The equation $C'(x)=0$ proves itself too complex to be solved exactly. Instead, we look back at the picture to recognize the terms in this expression as fractions of sides of these two triangles: $$C'(x)=\cos\widehat{APC}\cdot 100-\cos\widehat{BPD}\cdot 50.$$ Then, $C'=0$ if and only if $$\frac{\cos\widehat{APC}}{\cos\widehat{BPD}} =\frac{50}{100}.$$ In other words, the optimal price is reached when the ratio of the cosines of the two angles at $P$ is equal to the ratio of the two prices. In particular, making the price of laying the pipe across the patch more expensive will make the path cross this part more directly. Since cosine is a decreasing function on $(0,\pi/2)$, we conclude that $\widehat{BPD}$ is expressed uniquely in terms of $\widehat{APC}$. $\square$
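The same minimization can be sketched in Python, mirroring the spreadsheet formula above:

```python
import math

def cost(x):
    # $100 per meter across the 200 m strip, $50 per meter for the rest of the way to B
    return 100 * math.sqrt(200**2 + x**2) + 50 * math.sqrt((1000 - x)**2 + 800**2)

xs = [i / 10 for i in range(0, 10001)]        # candidate crossing points x in [0, 1000] meters
best_x = min(xs, key=cost)
print(best_x, round(cost(best_x), 2))         # roughly 81.5 m, in agreement with the zoomed-in plot
```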
Exercise. Show that, indeed, the location of the strip doesn't matter. Hint: geometry.
Example (refraction). Suppose light is passing from one medium to another. It is known that the speed of light through the first is $v_1$ and through the second $v_2$. We rely on the principle that light follows the path of least time and find the angle of refraction. A similar but simpler set-up:
Let $x$ be the parameter; then the time it takes to get from $A$ to $B$ is: $$\begin{array}{lll} T(x)&= \frac{|AP|}{v_1}+\frac{|PB|}{v_2}\\ &= \frac{\sqrt{1+x^2}}{v_1}+\frac{\sqrt{1+(1-x)^2}}{v_2}. \end{array}$$ We differentiate just as in the last example: $$\begin{array}{lll} T'(x)&=\frac{1}{v_1}\frac{x}{\sqrt{1+x^2}}+\frac{1}{v_2}\frac{-(1-x)}{\sqrt{1+(1-x)^2}}\\ &=\frac{1}{v_1}\cos\widehat{APC}-\frac{1}{v_2}\cos\widehat{BPD}. \end{array}$$ And $T'=0$ if and only if $$\frac{\cos\widehat{APC}}{\cos\widehat{BPD}} =\frac{v_1}{v_2}.$$ Therefore, light follows the path with the ratio of the cosines of the two angles at $P$ equal to the ratio of the two propagation speeds. $\square$
Exercise. Find the distance from the point $(1,1)$ to the parabola $y=-x^2$ by two methods: (a) find the minimal distance between the point and the curve, and (b) find the line perpendicular to the curve and its length.
Example (numerical optimization). The differentiation might produce a function so complex that solving the equation $f'=0$ analytically is impossible. In that case we can apply one of the iterative processes for solving equations discussed earlier in this chapter. Alternatively, we design a process that follows the direction of the derivative, which always points toward the nearest maximum.
In other words, we move along the $x$-axis and
we step right when $f'>0$, and
we step left when $f'<0$.
We reverse the direction when we look for a minimum. We make a step proportional to the derivative so that its magnitude also matters; we move faster when it's higher.
As you can see, the motion will slow down as we get closer to the destination, which is convenient. So, we build a sequence of approximations recursively: $$x_{n+1}=x_n+h\cdot f'(x_n),$$ where $h$ is the coefficient of proportionality. For example, for $f(x)=\sin x$, we have $$x_{n+1}=x_n+h\cdot \cos(x_n).$$ The spreadsheet formula is: $$\texttt{=R[-1]C+R2C*COS(R[-1]C)}.$$
Especially for functions of several variables (Chapter 18), the method is called the gradient descent or ascent. $\square$
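A sketch of this gradient-ascent recursion for $f(x)=\sin x$ in Python (the step coefficient $h=0.1$ and the starting point are assumed choices):

```python
import math

h = 0.1                       # assumed coefficient of proportionality
x = 1.0                       # assumed starting point
for _ in range(100):
    x = x + h * math.cos(x)   # x_{n+1} = x_n + h * f'(x_n) for f(x) = sin x
print(x)                      # approaches pi/2, where sin x has a maximum
```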
Example (line fitting). What is the best linear approximation of a sequence of points? The idea is to minimize the sum of the distances -- vertical distances -- from the line to the points.
In other words, if we have a sequence of $n$ points on the plane: $$(x_1,y_1),(x_2,y_2),...,(x_n,y_n),$$ we are to find such a number $m$ (the slope) that the following function reaches its minimum: $$R(m)=\sum_{k=1}^n|mx_k-y_k|.$$ We suddenly realize that this function is non-differentiable and, therefore, none of the methods that we have available applies!
Gamma Distribution Calculator with examples
Use this calculator to find the probability density and cumulative probabilities for the Gamma distribution with parameters $\alpha$ and $\beta$.
Gamma Distribution Calculator
Shape Parameter $\alpha$:
Scale Parameter $\beta$
Value of x
Probability density : f(x)
Probability X less than x: P(X < x)
Probability X greater than x: P(X > x)
How to use Gamma Distribution Calculator?
Step 1 – Enter the shape parameter $\alpha$
Step 2 – Enter the scale parameter $\beta$
Step 3 – Enter the value of $x$
Step 4 – Click on "Calculate" button to get gamma distribution probabilities
Step 5 – Gives the output probability density at $x$ for gamma distribution
Step 6 – Gives the output probability $X < x$ for gamma distribution
Step 7 – Gives the output probability $X > x$ for gamma distribution.
Definition of Gamma Distribution
A continuous random variable $X$ is said to have a gamma distribution with parameters $\alpha$ and $\beta$ if its p.d.f. is given by
$$ \begin{align*} f(x)&= \begin{cases} \frac{1}{\beta^\alpha\Gamma(\alpha)}x^{\alpha -1}e^{-x/\beta}, & x>0;\alpha, \beta >0; \\ 0, & Otherwise. \end{cases} \end{align*} $$
In notation, it can be written as $X\sim G(\alpha, \beta)$.
Another form of gamma distribution is
$$ \begin{align*} f(x)&= \begin{cases} \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha -1}e^{-\beta x}, & x>0;\alpha, \beta >0 \\ 0, & Otherwise. \end{cases} \end{align*} $$
Mean and Variance of Gamma Distribution
The mean and variance of gamma distribution $G(\alpha,\beta)$ are
$\mu_1^\prime =\alpha\beta$ and $\mu_2 =\alpha\beta^2$ respectively.
The probabilities can be computed using MS Excel or the R function pgamma().
The percentiles or quantiles can be computed using MS Excel or the R function qgamma().
The probabilities can also be computed using incomplete gamma functions.
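As a sketch, the same quantities the calculator returns can also be computed with Python's scipy.stats; the functions gamma.cdf() and gamma.ppf() play the roles of the R functions pgamma() and qgamma():

```python
from scipy.stats import gamma

alpha, beta, x = 10, 2, 8                 # shape, scale, and the point of evaluation
dist = gamma(a=alpha, scale=beta)

print("f(x)     =", dist.pdf(x))          # probability density at x
print("P(X < x) =", dist.cdf(x))          # cumulative probability
print("P(X > x) =", dist.sf(x))           # upper tail probability
print("90th pct =", dist.ppf(0.90))       # percentile (quantile)
```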
Gamma Distribution Example 1
Suppose that $X$ has the gamma distribution with parameters $\alpha$ (shape) $=10$ and $\beta$ (scale) $=2$. Use R to compute the
a. probability that $X$ is between 2 and 8,
b. $90^{th}$ percentile of the gamma distribution.
Given that $X\sim G(10,2)$ distribution. That is $\alpha= 10$ and $\beta=2$.
The probability density function of $X$ is
$$ \begin{aligned} f(x;\alpha,\beta)&= \frac{1}{\beta^\alpha \Gamma(\alpha)} x^{\alpha -1}e^{-\frac{x}{\beta}}, x>0;\alpha, \beta >0 \\ &= \frac{1}{2^{10} \Gamma(10)} x^{10 -1}e^{-\frac{x}{2}}, x>0 \end{aligned} $$
a. The probability that $2 < X < 8$ is
$$ \begin{aligned} P(2 < X < 8) &= P(X < 8) - P(X < 2)\\ &=\int_0^{8}f(x)\; dx - \int_0^{2}f(x)\; dx\\ &= 0.0081 -0\\ &=0.0081 \end{aligned} $$
b. Let the $90^{th}$ percentile be $Q$.
$$ \begin{aligned} & P(X < Q) = 0.9\\ \Rightarrow &\int_0^{Q}f(x)\; dx=0.9\\ \Rightarrow &Q= 28.412 \end{aligned} $$
Thus $90^{th}$ percentile of the given gamma distribution is 28.412.
Gamma Distribution Example 2
If a random variable $X$ has a gamma distribution with $\alpha=4.0$ and $\beta=3.0$, find $P(5.3 < X < 10.2)$.
Given that $X\sim G(4,3)$ distribution. That is $\alpha= 4$ and $\beta=3$.
$$ \begin{aligned} f(x;\alpha,\beta)&= \frac{1}{\beta^\alpha \Gamma(\alpha)} x^{\alpha -1}e^{-\frac{x}{\beta}}, x>0;\alpha, \beta >0 \\ &= \frac{1}{3^{4} \Gamma(4)} x^{4 -1}e^{-\frac{x}{3}}, x>0 \end{aligned} $$
The probability that $5.3 < X < 10.2$ is
$$ \begin{aligned} P(5.3 < X < 10.2) &= P(X < 10.2) - P(X < 5.3)\\ &=\int_0^{10.2}f(x)\; dx - \int_0^{5.3}f(x)\; dx\\ &= 0.4416 -0.1034\\ &=0.3382 \end{aligned} $$
Gamma Distribution Example 3
Let $X$ have a standard gamma distribution with $\alpha=3$. Find
a. $P(2\leq X \leq 6)$
b. $P(X>8)$
c. $P(X\leq 6)$
Given that $X\sim G(3,1)$ distribution, which is a standard gamma distribution. That is $\alpha= 3$ and $\beta=1$.
a. The probability that $2\leq X \leq 6$ is
$$ \begin{aligned} P(2 < X < 6) &= P(X < 6) - P(X < 2)\\ &=\int_0^{6}f(x)\; dx-\int_0^{2}f(x)\; dx\\ &= 0.938 -0.3233\\ &=0.6147 \end{aligned} $$
b. The probability that $X > 8$ is
$$ \begin{aligned} P(X > 8) &= 1- P(X \leq 8)\\ &=1- \int_0^{8}f(x)\; dx\\ &= 1-0.9862\\ &=0.0138 \end{aligned} $$
c. The probability that $X \leq 6$ is
$$ \begin{aligned} P(X \leq 6)&= \int_{0}^{6} f(x)\; dx\\ &=0.938 \end{aligned} $$
Gamma Distribution Example 4
Time spent on the internet follows a gamma distribution with mean 24 $min$ and variance 78 $min^2$. Find the
a. parameters of the gamma distribution,
b. probability that time spent on the internet is between 22 and 38 minutes,
c. probability that time spent on the internet is less than 28 minutes.
Let $X$ be the time spent on the internet. Given that $X\sim G(\alpha, \beta)$. The mean of the $G(\alpha,\beta)$ distribution is $\alpha\beta$ and the variance is $\alpha\beta^2$.
Given that $mean =\alpha\beta=24$ and $V(X)=\alpha\beta^2=78$.
a. Thus $\beta=\frac{78}{24}=3.25$ and $\alpha = 24/3.25= 7.38$ (rounded to two decimal)
$$ \begin{aligned} f(x;\alpha,\beta)&= \frac{1}{\beta^\alpha \Gamma(\alpha)} x^{\alpha -1}e^{-\frac{x}{\beta}}, x>0;\alpha, \beta >0 \\ &= \frac{1}{3.25^{7.38} \Gamma(7.38)} x^{7.38 -1}e^{-\frac{x}{3.25}}, x>0 \end{aligned} $$
b. The probability that $22 < X < 38$ is
$$ \begin{aligned} P(22 < X < 38) &= P(X < 38) - P(X < 22)\\ &=\int_0^{38}f(x)\; dx-\int_0^{22}f(x)\; dx\\ &= 0.9295 -0.4572\\ &=0.4722 \end{aligned} $$
c. The probability that $X < 28$ is
$$ \begin{aligned} P(X < 28) &=\int_0^{28}f(x)\; dx\\ &= 0.7099 \end{aligned} $$
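A sketch of these computations in Python (scipy.stats), following the same method-of-moments step for the parameters:

```python
from scipy.stats import gamma

mean, var = 24, 78
beta = var / mean                     # beta = 78/24 = 3.25
alpha = mean / beta                   # alpha = 24/3.25, about 7.38
dist = gamma(a=alpha, scale=beta)

print(round(dist.cdf(38) - dist.cdf(22), 4))   # P(22 < X < 38), about 0.47
print(round(dist.cdf(28), 4))                  # P(X < 28), about 0.71
```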
In this tutorial, you learned how to calculate probabilities of the Gamma distribution. You also learned how to solve numerical problems based on the Gamma distribution.
To read more about the step-by-step tutorial on the Gamma distribution, refer to the link Gamma Distribution. This tutorial will help you to understand the Gamma distribution, and you will learn how to derive the mean, variance, and moment generating function of the Gamma distribution and other properties of the Gamma distribution.
To learn more about other probability distributions, please refer to the following tutorial:
Let me know in the comments if you have any questions on Gamma Distribution Examples and your thought on this article.
Two large parallel conducting plates carrying opposite charges of equal magnitude are separated by 2.20cm.
Calculate the absolute magnitude of the Electric Field E in the region between the two conducting plates if the magnitude of the charge density at the surface of each plate is 47.0 nC/m^2.
Calculate the Potential Difference V that exists between the two conducting plates.
Calculate the impact on the magnitude of Electric Field E and Potential Difference V if the distance between the conducting plates is doubled while keeping the density of charge constant at the conducting surfaces.
The aim of this article is to find the Electric Field $\vec{E}$ and Potential Difference $V$ between the two conducting plates and the impact of change in the distance between them.
The main concept behind this article is Electric Field $\vec{E}$ and Potential Difference $V$.
The Electric Field $\vec{E}$ of a charged plate is defined as the electrostatic force per unit charge produced by the charge residing on the plate's surface. For a single charged plate it follows from Gauss's Law as:
\[\vec{E}=\frac{\sigma}{2\in_o}\]
$\vec{E}=$ Electric Field
$\sigma=$ Surface Charge Density of the Surface
$\in_o=$ Vacuum Permittivity $= 8.854\times{10}^{-12}\dfrac{F}{m}$
The Potential Difference $V$ between two plates is defined as the electrostatic potential energy per unit charge between those two plates separated by a certain distance. It is represented as follows:
\[V=\vec{E}.d\]
$V=$ Potential Difference
$d=$ Distance between two plates
Distance between two plates $d=2.2cm=2.2\times{10}^{-2}m$
Surface Charge Density of each plate $\sigma=47.0\dfrac{n.C}{m^2}=47\times{10}^{-9}\dfrac{C}{m^2}$
Vacuum Permittivity $\in_o=8.854\times{10}^{-12}\dfrac{F}{m}$
Magnitude of Electric Field $\vec{E}$ acting between given two parallel plates $1$, $2$ is:
\[\vec{E}={\vec{E}}_1+{\vec{E}}_2\]
\[\vec{E}=\frac{\sigma}{2\in_o}+\frac{\sigma}{2\in_o}\]
\[\vec{E}=\frac{2\sigma}{2\in_o}=\frac{\sigma}{\in_o}\]
Substituting the value of Surface Charge Density $\sigma$ and Vacuum Permittivity $\in_o$:
\[\vec{E}=\frac{47\times{10}^{-9}\dfrac{C}{m^2}}{8.854\times{10}^{-12}\dfrac{F}{m}}\]
\[\vec{E}=5.30834\times{10}^3\frac{N}{C}\]
\[Electric\ Field\ \vec{E}=5308.34\frac{N}{C}=5308.34\frac{V}{m}\]
Potential Difference $V$ between given two parallel plates $1$, $2$ is:
Substituting the value of Electric Field $\vec{E}$ and the distance $d$ between two plates, we get:
\[V=5.30834\times{10}^3\frac{V}{m}\times2.2\times{10}^{-2}m\]
\[Potential\ Difference\ V=116.78\ V\]
The distance between the two parallel plates is doubled.
As per the expression of Electric Field $\vec{E}$, it is not dependent on distance, hence any change in distance between the parallel plates will not have any impact on Electric Field $\vec{E}$.
\[\vec{E}=5308.34\frac{V}{m}\]
We know that the Potential Difference $V$ between given two parallel plates $1$, $2$ is:
If the distance is doubled, then:
\[V^\prime=\vec{E}.2d=2(\vec{E}.d)=2V\]
\[V^\prime=2(116.78\ V)=233.6V\]
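These calculations can be verified with a short script; a sketch in Python:

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m
sigma = 47e-9             # surface charge density, C/m^2
d = 2.2e-2                # plate separation, m

E = sigma / EPS0          # field between two oppositely charged plates: sigma/eps0
V = E * d                 # potential difference
print(E, V)               # about 5308 V/m and 116.8 V
print(E, 2 * V)           # doubling d leaves E unchanged and doubles V
```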
Part (a) – Magnitude of Total Electric Field $\vec{E}$ acting between given two parallel plates $1$, $2$ will be:
Part (b) – Potential Difference $V$ between given two parallel plates $1$, $2$ is:
\[V=116.78\ V\]
Part (c) – If the distance between the conducting plates is doubled, Electric Field $\vec{E}$ will not change whereas the Potential Difference $V$ will be doubled.
Calculate the magnitude of Electric Field $\vec{E}$ in the area between the two conducting plates if the surface charge density of each place is $50\dfrac{\mu C}{m^2}$.
Magnitude of Total Electric Field $\vec{E}$ acting between given two parallel plates $1$, $2$ will be:
\[\vec{E}=\frac{\sigma}{2\in_o}+\frac{\sigma}{2\in_o}=\frac{\sigma}{\in_o}\]
Substituting the values, we get:
\[\vec{E}=\frac{50\times{10}^{-6}\dfrac{C}{m^2}}{8.854\times{10}^{-12}\dfrac{F}{m}}\]
\[\vec{E}=5.647\times{10}^6\frac{N}{C}=5.647\times{10}^6\frac{V}{m}\]
Fractional-order 4D hyperchaotic memristive system and application in color image encryption
Peng Li1,
Ji Xu1,
Jun Mou1 &
Feifei Yang1
In this paper, some properties of the fractional-order four-dimensional (4D) hyperchaotic memristive system are analyzed by the phase diagram, Lyapunov exponent spectrum and bifurcation diagram according to the Adomian decomposition method. Based on the chaotic system, a color image encryption scheme is proposed by combining it with DNA sequence operations. The algorithm simulation results and security analysis show that the encryption scheme has a good encryption effect and high security performance, which provides an experimental basis and theoretical guidance for the safe transmission of image information.
Nowadays, the digital image is an important carrier of information. Inherent features of digital images, including bulk data capacity, high redundancy and extremely strong correlation between adjacent pixels, make digital image processing a research hotspot. For example, prediction error preprocessing for image compression [1], histogram equalization of images [2], image compression and reconstruction [3], and so on. To achieve the requirement of safe transmission of digital images, researchers are interested in encryption algorithms based on chaotic systems. Chaos is a random or uncertain movement in a particular system. It has the inherent properties of ergodicity, sensitivity to initial values and parameters, and complex dynamic characteristics [4, 5]; in particular, chaotic attractors can coexist [6, 7]. Therefore, a chaotic system can be used in the image encryption field.
Up to now, various image encryption algorithms based on chaotic systems have been proposed [1, 8,9,10,11,12,13]. For example, Hua et al. [8] proposed an image encryption scheme using a 2D Logistic-adjusted-Sine map. Yang et al. [9] presented a novel quantum image encryption through 1D quantum cellular automata. Because low-dimensional chaotic maps have fewer system parameters, their structures are simple, and the system parameters and initial values may be predicted by using chaotic signal estimation technologies. On the contrary, high-dimensional chaotic maps, especially hyperchaotic maps, possess excellent chaotic performance and complex structure. Therefore, Natiq et al. [10] designed a new hyperchaotic map and its application for image encryption. Luo Y and his research team [11] proposed a parallel image encryption algorithm through two chaotic maps.
Recently, an encryption scheme using DNA addition in combination with a chaotic system was proposed by Zhang et al. [14]. Soon afterwards, some cryptosystems applying DNA sequence operations and chaotic systems appeared [2,3,4,5, 7, 15,16,17,18,19,20,21,22,23,24,25,26,27]. These schemes applied DNA encoding and DNA sequence operations to encrypt images. An image encryption scheme based on DNA subsequence operations, rather than complex biological operations, was introduced by Zhang et al. [25]. Liu and his research team [26] employed a chaotic map and the DNA complementary rule in an image encryption algorithm. SaberiKamarposhti et al. [27] proposed a hybrid image encryption algorithm through DNA sequences and a logistic map. However, compared with general chaotic systems, fractional-order systems have a nonlocal character and high nonlinearity, and encryption algorithms based on fractional-order chaos have higher security [20, 28]. Moreover, the dynamic features of a memristor chaotic system depend not only on the system parameters but also on the initial conditions of the memristor's internal state variables [29, 30]. However, memristor chaotic systems are not widely used in image and data encryption algorithms. Therefore, to improve the security of image encryption algorithms, in this paper, a color image encryption using a fractional-order 4D hyperchaotic memristive system and DNA sequence operations is proposed.
The following is the architecture of this paper. Preliminary materials are described in Section 2. The encryption and decryption scheme and the simulation results are presented in Section 3. In Section 4, security performance is analyzed. Finally, the conclusion is given in Section 5.
Preliminary materials
Adomian decomposition method
For a fractional-order differential equation ${}^{\ast}D_{t_0}^{q}x(t) = f(x(t))$, where $x(t) = [x_1(t),x_2(t),\dots,x_n(t)]^T$ are the state variables and ${}^{\ast}D_{t_0}^{q}$ is the Caputo derivative operator of order $q$ ($(m-1) < q \le m$, $m\in N$), the following initial value problem is obtained by separating $f(x(t))$ into three parts [31, 32]:
$$ \left\{\begin{array}{l}{}^{\ast }{D}_{to}^qx(t)= Lx(t)+ Nx(t)+g(t)\\ {}{x}^{(k)}\left({t}_0^{+}\right)={b}_k,k=0,1,\dots, m-1\end{array}\right. $$
Here, $L$ and $N$ are the linear and nonlinear parts of the system functions, $g(t) = [g_1(t),g_2(t),\dots,g_n(t)]^T$ are constants for autonomous systems, and $b_k$ is a specified constant. Applying the operator $J_{t_0}^{q}$ to both sides of Eq. (3), the following equation is obtained [33]:
$$ x={J}_{t0}^q Lx+{J}_{t0}^q Nx+{J}_{t0}^qg+\sum \limits_{k=0}^{m-1}{b}_k\frac{{\left(t-{t}_0\right)}^k}{k!} $$
$J_{t_0}^{q}$ is the Riemann-Liouville fractional integral operator of order $q$. For $t\in [t_0,t_1]$, $q \ge 0$, $r \ge 0$, $\gamma > -1$ and a real constant $C$, the fundamental properties of $J_{t_0}^{q}$ are described by [34]:
$$ {J}_{t0}^q{\left(t-{t}_0\right)}^{\gamma }=\frac{\Gamma \left(\gamma +1\right)}{\Gamma \left(\gamma +1+q\right)}{\left(t-{t}_0\right)}^{\gamma +q} $$
$$ {J}_{t0}^qC=\frac{C}{\Gamma \left(q+1\right)}{\left(t-{t}_0\right)}^q $$
$$ {J}_{t0}^q{J}_{t0}^rx(t)={J}_{t0}^{q+r}x(t) $$
Based on ADM, the nonlinear terms of Eq. (4) are decomposed according to
$$ \left\{\begin{array}{l}{A}_j^i=\frac{1}{i!}{\left[\frac{d^i}{d{\lambda}^i}N\left({\nu}_j^i\left(\lambda \right)\right)\right]}_{\lambda =0}\\ {}{\nu}_j^i\left(\lambda \right)={\sum}_{k=0}^i{\left(\lambda \right)}^k{x}_j^k\end{array}\right. $$
where i = 0,1,…,∞, j = 1,2,…n. Then the nonlinear terms are expressed as
$$ Nx=\sum \limits_{i=0}^{\infty }{A}^i\left({x}^0,{x}^1,\dots, {x}^i\right) $$
So the solution of Eq. (3), $x = \sum_{i=0}^{\infty} x^i$, is derived from
$$ \left\{\begin{array}{l}{x}^0={J}_{t0}^qg+{\sum}_{k=0}^{m-1}{b}_k\frac{{\left(t-{t}_0\right)}^k}{k!}\\ {}{x}^1={J}_{t0}^q{Lx}^0+{J}_{t0}^q{A}^0\left({x}^0\right)\\ {}{x}^2={J}_{t0}^q{Lx}^1+{J}_{t0}^q{A}^1\left({x}^0,{x}^1\right)\\ {}\dots \\ {}{x}^i={J}_{t0}^q{Lx}^{i-1}+{J}_{t0}^q{A}^{i-1}\left({x}^0,{x}^1,\dots, {x}^{i-1}\right)\\ {}\dots \end{array}\right. $$
Fractional-order 4D hyperchaotic memristive system
The fractional-order 4D hyperchaotic memristive system is described by [30]:
$$ \left\{\begin{array}{l}{D^{\ast}}_{t0}^qx=\alpha \left(y-x\right)\\ {}{D^{\ast}}_{t0}^qy=- xz+\beta y-\rho W(w)x\\ {}{D^{\ast}}_{t0}^qz= xy-\gamma z\\ {}{D^{\ast}}_{t0}^qw=x\end{array}\right. $$
where $x$, $y$, $z$ and $w$ are the state variables of the chaotic system, $q$ ($0 < q \le 1$) is the order of the fractional-order differential equation, $W(w)$ is defined as $W(w) = a + 3bw^2$, and $a$, $b$, $\alpha$, $\beta$, $\gamma$, and $\rho$ are the system parameters.
In order to evaluate the chaotic system for image encryption, the dynamic characteristics of the fractional-order 4D hyperchaotic memristive system by the phase diagram, Lyapunov exponent spectrum, and bifurcation diagram are analyzed according to Adomian decomposition method.
Let the parameters be a = 4, b = 0.01, α = 36, β = 20, γ = 3, q = 0.85 and ρ = 3, with the initial value of Eq. (9) set to (1, 0, 1, 0). The resulting phase diagram is shown in Fig. 1a. Then, with parameters a = 4, b = 0.01, α = 36, β = 20, γ = 3, ρ = 3 and q varied over [0.75, 1], the Lyapunov exponent spectrum and bifurcation diagram of the fractional-order 4D hyperchaotic memristive system are obtained as shown in Fig. 1b and c. Obviously, the phase diagram, Lyapunov exponent spectrum and bifurcation diagram of the fractional-order 4D hyperchaotic memristive system are distributed over a large region. This means that the system has good randomness and a large key space, and is suitable as a pseudorandom sequence generator.
Dynamic analysis of 4D hyperchaotic memristive system. a Phase diagram; b Lyapunov exponent spectrum with a = 4, b = 0.01, α = 36, β = 20, γ = 3, ρ = 3, versus q∈[0.75,1]; c bifurcation diagram with a = 4, b = 0.01, α = 36, β = 20, γ = 3, ρ = 3, versus q∈[0.75,1]
DNA encoding and decoding rules
A DNA sequence is composed of four nucleic acid bases ATCG (adenine, thymine, cytosine, guanine); here, A and T are complementary, and C and G are complementary. In an electronic computer, information is represented in binary, whereas in DNA coding theory all information is represented by the four nucleic acid bases A, T, C, G. According to the complementary rule for binary 0 and 1, 00 and 11 are complementary, and 01 and 10 are complementary. Therefore, the acid bases A, T, C and G are encoded as 00, 01, 10 and 11. Obviously, there are 4! = 24 possible coding rules, but only 8 of them satisfy the Watson-Crick complement rule [35], as shown in Table 1. DNA decoding is the opposite of DNA encoding. For instance, if the value of an image pixel is 152, we can get the corresponding binary sequence "10011000," and it can be encoded as "TCGA" based on Rule 1. If the encoded sequence is "TGCA," it can be decoded as "00100111" by Rule 3, and the final DNA decoding result is the decimal value 39.
Table 1 DNA encoding rules
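As an illustration, the encoding and decoding steps can be sketched as follows; since the contents of Table 1 are not reproduced here, the bit-to-base mapping in the code is only an assumed example rule, not necessarily Rule 1 of the paper:

```python
# Sketch of DNA encoding/decoding of one 8-bit pixel under an assumed example rule.
RULE = {"00": "A", "01": "T", "10": "C", "11": "G"}        # assumed mapping, for illustration only
INV = {base: bits for bits, base in RULE.items()}

def dna_encode(pixel):
    bits = format(pixel, "08b")                            # e.g. 152 -> "10011000"
    return "".join(RULE[bits[i:i + 2]] for i in range(0, 8, 2))

def dna_decode(bases):
    return int("".join(INV[b] for b in bases), 2)

code = dna_encode(152)
print(code, dna_decode(code) == 152)                       # decoding inverts encoding
```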
DNA addition and subtraction rules
On the basis of traditional binary addition and subtraction, the DNA addition and subtraction are obtained. Thus, according to the eight kinds of DNA encoding rules, we can get the corresponding eight kinds of DNA addition and subtraction rules. For example, on the basis of DNA encoding rule 1, DNA addition rule 1 and subtraction rule 1 are shown in Table 2.
Table 2 Addition rules and subtraction rules
DNA complementary rule
The DNA complementary rule [26] must satisfy Eq. (10) for each nucleotide xi.
$$ \left\{\begin{array}{l}{x}_i\ne L\left({x}_i\right)\ne L\left(L\left({x}_i\right)\right)\ne L\left(L\left(L\left({x}_i\right)\right)\right)\\ {}{x}_i=L\left(L\left(L\left(L\left({x}_i\right)\right)\right)\right)\end{array}\right. $$
where $x_i$ and $L(x_i)$ are base pairs and they are complementary; the base pairs satisfy an injective map.
On the basis of (12), there are six kinds of reasonable combination of complementary base pairs, as shown below:
L1(A) = T, L1 (T) = C, L1 (C) = G, L1 (G) = A;
L2(A) = T, L2 (T) = G, L2 (G) = C, L2 (C) = A;
L3 (A) = C, L3 (C) = T, L3 (T) = G, L3 (G) = A;
L4 (A) = C, L4 (C) = G, L4 (G) = T, L4 (T) = A;
L5 (A) = G, L5 (G) = T, L5 (T) = C, L5 (C) = A;
L6 (A) = G, L6 (G) = C, L6 (C) = T, L6 (T) = A,
where Li (i = 1,2,...,6) represents the ith complement rule.
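A small sketch verifying that each of the six combinations above satisfies condition (10), i.e., every base differs from its first three iterates and returns to itself after four applications:

```python
# Check the complementary-rule condition for the six combinations L1..L6.
RULES = [
    {"A": "T", "T": "C", "C": "G", "G": "A"},   # L1
    {"A": "T", "T": "G", "G": "C", "C": "A"},   # L2
    {"A": "C", "C": "T", "T": "G", "G": "A"},   # L3
    {"A": "C", "C": "G", "G": "T", "T": "A"},   # L4
    {"A": "G", "G": "T", "T": "C", "C": "A"},   # L5
    {"A": "G", "G": "C", "C": "T", "T": "A"},   # L6
]

def satisfies_condition(L):
    for x in "ATCG":
        orbit = [x, L[x], L[L[x]], L[L[L[x]]]]
        if len(set(orbit)) != 4 or L[orbit[-1]] != x:   # all distinct and period four
            return False
    return True

print(all(satisfies_condition(L) for L in RULES))       # True
```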
In the pixel diffusion stage, bases are replaced by their complements according to the DNA complementary rule; one of the six complementary combinations can be selected at random to perform the replacement, which achieves the goal of pixel diffusion.
Method - image encryption and decryption algorithm
The key design
The key design of the proposed color image encryption algorithm is shown in Fig. 2. It consists of five parts: the chaotic system initial values (x0, y0, z0, w0); the parameters a, b, α, β, γ, q, ρ; the cycle numbers m, n; the starting acid base c0 (c0∈{A, T, C, G}); and the DNA encoding rules of Table 1, α1, β1 (α1, β1∈[1, 8]).
Key format
Image encryption algorithm
Pixel position scrambling
The pixel locations are scrambled in order to destroy the correlation of the original image; the image is rearranged and becomes disordered. Random sequences generated by the fractional-order 4D hyperchaotic memristive system are used to permute the image. The detailed confusion process can be presented as the following steps.
Step 1. The input is the original color image I with the size of M × N × 3. Set the secret key values a, b, α, β, γ, q, ρ, x0, y0, z0, w0. New initial conditions of the fractional-order 4D hyperchaotic memristive system are generated by
$$ s=\frac{\left[\sum \limits_{i=1}^M\sum \limits_{j=1}^N\sum \limits_{k=1}^3I\left(i,j,k\right)\right]}{10^{10}} $$
$$ \left\{\begin{array}{l}{x}_0^{\hbox{'}}={x}_0+s\\ {}{y}_0^{\hbox{'}}={y}_0+s\\ {}{z}_0^{\hbox{'}}={z}_0+s\\ {}{w}_0^{\hbox{'}}={w}_0+s\end{array}\right. $$
Step 2. Set L = max(M, N). Let the chaotic system (9) iterate (m + L) times based on the new initial conditions, and then discard the former m values to improve initial value sensitivity. The four chaotic sequences $\{x_i\}_{i=1}^{L}$, $\{y_i\}_{i=1}^{L}$, $\{z_i\}_{i=1}^{L}$ and $\{w_i\}_{i=1}^{L}$ are obtained by Eq. (9). The following shift step numbers are used for scrambling:
$$ Bri=\operatorname{mod}\left(\left\lfloor \left|{x}_i\right|\times {10}^{16}\right\rfloor, \frac{N}{2}\right) $$
$$ Bcj=\operatorname{mod}\left(\left\lfloor \left|{y}_j\right|\times {10}^{16}\right\rfloor, \frac{M}{2}\right) $$
where Bri is the cyclic step number of row i, and Bcj is the cyclic step number of column j. Here, i = 1,2,...,M, j = 1,2,...,N.
Step 3. The color image I is decomposed into R, G, B components, and then the R, G, B components are converted into three matrices and the rows are shifted. The shifted results TR1, TG1 and TB1 are obtained by the following rules. If xi > 0, row i of R is moved to the left by Bri steps; otherwise, row i of R is moved to the right by Bri steps, where i = 1,2,...,M. The same rules are applied to the G and B channels.
Step 4. The column shift results TR, TG and TB are obtained as follows. If yj > 0, column j of TR1 is moved up by Bcj steps; otherwise, column j of TR1 is moved down by Bcj steps, where j = 1,2,...,N. The same rules are applied to TG1 and TB1.
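A sketch of this scrambling step for one color channel (Python/NumPy); the random placeholder arrays merely stand in for the chaotic sequences:

```python
import numpy as np

def scramble_channel(channel, x_seq, y_seq):
    """Circularly shift the rows and then the columns of one M x N channel."""
    M, N = channel.shape
    Br = (np.abs(x_seq[:M]) * 1e16).astype(np.int64) % (N // 2)   # row step numbers
    Bc = (np.abs(y_seq[:N]) * 1e16).astype(np.int64) % (M // 2)   # column step numbers
    out = channel.copy()
    for i in range(M):   # shift row i left when x_i > 0, right otherwise
        out[i, :] = np.roll(out[i, :], -Br[i] if x_seq[i] > 0 else Br[i])
    for j in range(N):   # shift column j up when y_j > 0, down otherwise
        out[:, j] = np.roll(out[:, j], -Bc[j] if y_seq[j] > 0 else Bc[j])
    return out

# placeholder arrays standing in for one image channel and the chaotic sequences
rng = np.random.default_rng(0)
channel = rng.integers(0, 256, (8, 8))
scrambled = scramble_channel(channel, rng.standard_normal(8), rng.standard_normal(8))
```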
DNA sequence operation
The pixel values are diffused according to DNA operations and include addition and complementary operations. Specific steps are as follows.
Step 1. The M × 8N binary matrices R, G and B are obtained from TR, TG and TB. Then the matrices R, G and B are encoded through the DNA encoding Rule α, and the M × 4N DNA matrices S1, S2 and S3 are obtained.
Step 2. Set the chaotic system initial values x0, y0, z0, w0 and obtain the chaotic sequences $\{x_i\}_{i=1}^{MN}$, $\{y_i\}_{i=1}^{MN}$, $\{z_i\}_{i=1}^{MN}$, $\{w_i\}_{i=1}^{MN}$ by iterating system (9) (n + M × N) times and discarding the former n values. Three sequences k1, k2 and k3 are obtained by
$$ \left\{\begin{array}{l}k1=\operatorname{mod}\left(\left\lfloor \left|{x}_i\right|\times {10}^{16}\right\rfloor, 256\right)\\ {}k2=\operatorname{mod}\left(\left\lfloor \left|{y}_i\right|\times {10}^{16}\right\rfloor, 256\right)\\ {}k3=\operatorname{mod}\left(\left\lfloor \left|{z}_i\right|\times {10}^{16}\right\rfloor, 256\right)\end{array}\right. $$
where i = 1,2,…,MN.
Step 3. The sequence k1, k2 and k3 are transformed into binary matrix, and the matrixes are encoded according to the same DNA rule α to get three M × 4 N matrixes K1, K2 and K3.
Step 4. According to the DNA complementary rule, the intermediate encryption result, the DNA matrix $C=\{c_i\}_{i=1}^{4MN}$, is obtained as follows:
$$ {\displaystyle \begin{array}{l}\mathrm{If}\ {\mathrm{c}}_{2i-2}=\mathrm{A},\mathrm{then}\ {\mathrm{c}}_{2i-1}={L}_{l1}\left({\mathrm{s}}_{2i-1}\right);\\ {}\mathrm{If}\ {\mathrm{c}}_{2i-2}=\mathrm{C},\mathrm{then}\ {\mathrm{c}}_{2i-1}={L}_{l2}\left({\mathrm{s}}_{2i-1}\right);\\ {}\mathrm{If}\ {\mathrm{c}}_{2i-2}=\mathrm{G},\mathrm{then}\ {\mathrm{c}}_{2i-1}={L}_{l3}\left({\mathrm{s}}_{2i-1}\right);\\ {}\mathrm{If}\ {\mathrm{c}}_{2i-2}=\mathrm{T},\mathrm{then}\ {\mathrm{c}}_{2i-1}={L}_{l4}\left({\mathrm{s}}_{2i-1}\right);\\ {}\mathrm{If}\ {\mathrm{c}}_{2i-1}=\mathrm{A},\mathrm{then}\ {\mathrm{c}}_{2i}={L}_{l5}\left({\mathrm{s}}_{2i}\right);\\ {}\mathrm{If}\ {\mathrm{c}}_{2i-1}=\mathrm{C},\mathrm{then}\ {\mathrm{c}}_{2i}={L}_{l6}\left({\mathrm{s}}_{2i}\right);\\ {}\mathrm{If}\ {\mathrm{c}}_{2i-1}=\mathrm{G},\mathrm{then}\ {\mathrm{c}}_{2i}={L}_{l7}\left({\mathrm{s}}_{2i}\right);\\ {}\mathrm{If}\ {\mathrm{c}}_{2i-1}=\mathrm{T},\mathrm{then}\ {c}_{2i}={L}_{l8}\left({\mathrm{s}}_{2i}\right).\end{array}} $$
Step 5. The encrypted DNA sequence of the image, $D=\{d_i\}_{i=1}^{4MN}$, is calculated by
$$ {d}_i={c}_i+K(i)+{d}_{i-1} $$
Here, i = 1,2,…,4MN, "+" denotes the DNA addition operation, and $d_0 = c_{4MN}$. Three DNA matrices D1, D2 and D3 are obtained by Eq. (18).
Step 6. The matrices D1, D2 and D3 are decoded by DNA Rule β, recovering three binary matrices C1, C2 and C3. Finally, the encrypted image C is obtained by combining C1, C2 and C3.
Decryption algorithm
The decryption algorithm is the process of restoring the original image. First, the encrypted image C is decomposed into C1, C2 and C3, and then C1, C2 and C3 are encoded as matrices D1, D2 and D3 through DNA Rule β, and the intermediate DNA matrix $C=\{c_i\}_{i=1}^{4MN}$ is recovered as
$$ {c}_i={d}_i-K(i)-{d}_{i-1} $$
where i = 1,2,….,MH. "–" is the DNA subtraction, and d0 = c4MH. The matrices K1, K2 and K3 are generated by doing Step 3 of the DNA sequence operation. Second, the image of DNA sequence matrices S1, S2 and S3 is recovered. The same iteration as Step 3 and Step 4 of pixel position scrambling is performed. Finally, the encrypted image is recovered.
Simulation result
The color Lena image with the size of 256 × 256 is used for the algorithm simulation test; the original Lena image is shown in Fig. 3a. The key is a = 4, b = 0.01, α = 36, β = 20, γ = 3, q = 0.855, ρ = 2.67, x0 = 1, y0 = 0, z0 = 1, w0 = 0, m = 1000, n = 5000, c0 = A, α = 1, β = 3. The encrypted Lena image is shown in Fig. 3b and the corresponding decrypted image in Fig. 3c.
Encryption and decryption results. a Original Lena image; b encrypted Lena image; c decrypted Lena image
Key space
A good image encryption algorithm should have a large enough key space to resist brute-force attack. In our encryption scheme, the keys are x0, y0, z0, w0, a, b, α, β, γ, q, ρ; if the calculation precision is $10^{-15}$, the corresponding key space is $2^{548}$. For the other part of the key, c0, α1, β1, b1, b2,...,b8, since DNA has four acid bases, eight kinds of encoding and decoding rules and six DNA complementary rules, the key space is $2^2 \times 2^6 \times 2^{20} = 2^{28}$. So the total key space is $2^{576}$, which shows that the algorithm key space is large enough to resist brute-force attack.
Key sensitivity analysis
The restored image will be completely different from its original image when the key has a tiny change, which means that a good encryption algorithm should be extremely sensitive to its key. In this paper, we used six slightly changed keys, (x0 + $10^{-16}$), (y0 + $10^{-16}$), (z0 + $10^{-16}$), (a + $10^{-15}$), (b + $10^{-15}$) and (c + $10^{-15}$), to decrypt the encrypted Lena image shown in Fig. 3b; the sensitivity test results are shown in Fig. 4. Obviously, these restored images are completely different from the correct decrypted image in Fig. 3c. Therefore, the proposed algorithm is very sensitive to its key.
Key sensitivity test results. a Decryption with (x0 + 10− 16); b decryption with (y0 + 10− 16); c decryption with (z0 + 10− 16); d decryption with (a + 10− 16); e decryption with (b + 10− 16); f decryption with (c + 10− 16)
Histogram analysis
The distribution of pixel values in an image is shown by its histogram. A flat histogram of the encrypted image indicates good resistance to statistical attacks. The histograms of the original color Lena image and its encrypted image are shown in Fig. 5. It can be seen that the cipher image histogram is very smooth, which indicates that the proposed encryption algorithm performs well. Therefore, the proposed encryption algorithm does not allow an attacker to obtain any statistical information about the image by analyzing the ciphertext; thus, it can prevent the attacker from carrying out a statistical attack.
Histogram of the color Lena image and cipher image. a Histogram of Lena in R channel; b histogram of Lena in G channel; c histogram of Lena in B channel; d histogram of encrypted in R channel; e histogram of encrypted in G channel; f histogram of encrypted in B channel
Correlation coefficient analysis
The original image has extremely strong correlation between adjacent pixels. A good image encryption algorithm should break the correlation between neighboring pixels. The correlation coefficient rxy of pixels x and y is calculated as:
$$ {r}_{xy}=\frac{\operatorname{cov}\left(x,y\right)}{\sqrt{D(x)D(y)}} $$
$$ \operatorname{cov}\left(x,y\right)=E\left\{\left[x-E(x)\right]\left[y-E(y)\right]\right\} $$
$$ E(x)=\frac{1}{N}\sum \limits_{i=1}^N{x}_i $$
$$ D(x)=\frac{1}{N}\sum \limits_{i=1}^N{\left[{x}_i-E(x)\right]}^2 $$
where x and y are the values of different image pixels, cov(x, y) represents the covariance, D(x) is the variance of x, E(x) is the average, and N is the number of all pixels. Correlation coefficients of the original Lena image and the encrypted image in the R, G, and B channels are listed in Table 3. The correlation coefficients of the encrypted image at identical positions in the R, G, B components are listed in Table 4. Table 5 lists the correlation coefficients of the encrypted image at adjacent positions in the R, G, B components. The tabular data indicate that the original images have significant correlation, whereas the correlations of the encrypted images are very small, which shows that the encryption effect is satisfactory.
Table 3 Correlation coefficients in R, G, B channels
Table 4 Identical position with R, G, B
Table 5 Adjacent position with R, G, B
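A sketch of this measurement for horizontally adjacent pixels (Python/NumPy); the two test images used here are only placeholders:

```python
import numpy as np

def horizontal_correlation(img, samples=2000, seed=0):
    """Correlation coefficient r_xy of horizontally adjacent pixel pairs."""
    rng = np.random.default_rng(seed)
    M, N = img.shape
    rows = rng.integers(0, M, samples)
    cols = rng.integers(0, N - 1, samples)
    x = img[rows, cols].astype(float)
    y = img[rows, cols + 1].astype(float)            # horizontal neighbors
    return np.corrcoef(x, y)[0, 1]

plain = np.tile(np.arange(256, dtype=np.uint8), (256, 1))       # smooth image: r close to 1
cipher = np.random.default_rng(1).integers(0, 256, (256, 256))  # random image: r close to 0
print(horizontal_correlation(plain), horizontal_correlation(cipher))
```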
To clearly see the correlation of the original and encrypted images, the correlation distributions of horizontally adjacent pixels for the Lena image are shown in Fig. 6. Obviously, the original image has extremely strong correlation between adjacent pixels as shown in Fig. 6a, b and c. In the figure, we can see that all the pixel dots of the original image are congregated along the diagonal. However, the encrypted image pixel dots are scattered over the entire plane as shown Fig. 6d, e and f. This indicates that the correlations of different pixels in the encrypted image are greatly reduced in the encrypted image. Therefore, the image encryption algorithm has the ability to resist statistical attack.
Correlation distributions between two horizontal different pixels. a Original image in R channel; b original image in G channel; c original image in B channel; d encrypted image in R channel; e encrypted image in G channel; f encrypted image in B channel
Information entropy
Information entropy is an important measure of the randomness of image gray values, and it is defined as
$$ H(m)=\sum \limits_{i=0}^{L-1}p\left({m}_i\right)\log \frac{1}{p\left({m}_i\right)} $$
where p(mi) is the probability of occurrence of symbol mi, and L is the total number of symbols mi. Because a 256 gray-level image has $2^8$ states, the theoretical value of the information entropy is 8. The information entropy values of the encrypted image in the R, G, B channels, and of the combination S of the R, G, B components, are listed in Table 6. It can be seen clearly from Table 6 that the values for the new algorithm are close to 8. Therefore, the randomness of the encrypted images is good.
Table 6 Information entropy of encryption image
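A sketch of this computation (Python/NumPy); the random test image is a placeholder:

```python
import numpy as np

def information_entropy(channel):
    """Shannon entropy of an 8-bit image channel, in bits (theoretical maximum 8)."""
    counts = np.bincount(channel.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]                          # ignore gray levels that never occur
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
print(information_entropy(rng.integers(0, 256, (256, 256))))   # close to 8 for a random image
```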
Differential attack
An attacker makes a subtle change to the original image, encrypts both the original and the modified images with the same encryption method, and then compares the two encrypted images to find the correlation between the original image and the encrypted image in order to attack the image information. Therefore, researchers usually use the number of pixels change rate (NPCR) and the unified average changing intensity (UACI) to evaluate whether an encryption algorithm can resist differential attack. NPCR and UACI are calculated as follows:
$$ \mathrm{NPCR}=\frac{\sum \limits_{i,j}D\left(i,j\right)}{L}\times 100\% $$
$$ \mathrm{UACI}=\frac{1}{L}\sum \limits_{i,j}\frac{\left|C\left(i,j\right)-{C}_1\Big(i,j\Big)\right|}{255}\times 100\% $$
where L is the total number of image pixels, and C and C1 are the encrypted pixel values at the same position before and after the change, respectively. D(i, j) is obtained through the following rule: if C(i, j) ≠ C1(i, j), then D(i, j) = 1; if C(i, j) = C1(i, j), then D(i, j) = 0.
In this experimental test, we change a single randomly chosen pixel of the original image and carry out the test ten times with one round of encryption; the resulting average NPCR and UACI values are listed in Table 7. The results illustrate that the mean NPCR and UACI values of the proposed algorithm exceed 99.6% and 33.3%, respectively, which is large enough to resist differential attack.
Table 7 Mean values number of pixels change rates and unified average changing intensities of encryption image
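The two metrics can be scripted as below. This is only a sketch: the two cipher images are assumed to be 8-bit arrays of equal size, produced by encrypting the plain image before and after the one-pixel change; independent noise-like arrays are used here merely to show the expected order of magnitude.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI (in %) between two cipher images of identical shape."""
    c1 = c1.astype(np.int16)
    c2 = c2.astype(np.int16)
    npcr = np.mean(c1 != c2) * 100.0                 # fraction of differing pixels D(i, j)
    uaci = np.mean(np.abs(c1 - c2) / 255.0) * 100.0  # average intensity change
    return npcr, uaci

# c1: cipher of the original image; c2: cipher obtained after changing one
# randomly chosen pixel of the plain image.
rng = np.random.default_rng(3)
c1 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
c2 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
print(npcr_uaci(c1, c2))   # ideal values are about 99.61 % and 33.46 %
```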
Robustness analysis
Noise attack
An encrypted image may be disturbed by noise during transmission, so we added Gaussian noise to the encrypted image to carry out an antinoise test. Gaussian noise with three different variances was added to the encrypted Lena image, and the corresponding recovery results are shown in Fig. 7a, b and c. The quality of the decrypted images becomes increasingly worse as the noise variance increases; however, the main image information can still be obtained. This proves that the proposed encryption scheme has strong antinoise capability.
Noise attack analysis. a Variance is 0.0000001; b variance is 0.0000003; c variance is 0.0000005
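A sketch of the noise-attack test is given below. It assumes, as is common practice (e.g. in MATLAB's imnoise), that the stated variances refer to pixel values scaled to [0, 1]; the decryption step itself, which depends on the scheme, is omitted.

```python
import numpy as np

def add_gaussian_noise(cipher, variance, seed=0):
    """Add zero-mean Gaussian noise of the given variance to an 8-bit cipher image."""
    rng = np.random.default_rng(seed)
    scaled = cipher.astype(np.float64) / 255.0                 # work on [0, 1]
    noisy = scaled + rng.normal(0.0, np.sqrt(variance), cipher.shape)
    return np.clip(noisy * 255.0, 0, 255).astype(np.uint8)

cipher = np.random.default_rng(4).integers(0, 256, (256, 256, 3), dtype=np.uint8)
for var in (1e-7, 3e-7, 5e-7):          # the three variances used in Fig. 7
    noisy_cipher = add_gaussian_noise(cipher, var)
    # decrypting noisy_cipher would then be compared with the original image
```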
Cropping attack
The cropping attack is an important criterion for evaluating a cryptosystem, so we tested the resistance of the proposed algorithm to data loss. The encrypted Lena image with three different amounts of data loss is shown in Fig. 8a, b and c, and the corresponding decrypted images are shown in Fig. 8d, e and f. From Fig. 8, we can see that even though the encrypted image is cropped, the main information of the image can still be recovered. Therefore, our algorithm can resist a cropping attack to a certain degree.
Cropping attack analysis results. a 1/16 data loss; b 1/32 data loss; c 1/64 data loss; d decrypted image of (a); e decrypted image of (b); f decrypted image of (c)
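The data-loss test can be sketched as follows. The position and shape of the discarded region are assumptions (a square block in the top-left corner); the text only specifies the lost fractions 1/16, 1/32 and 1/64.

```python
import numpy as np

def crop_cipher(cipher, fraction):
    """Simulate data loss by zeroing a square block holding `fraction` of the pixels."""
    damaged = cipher.copy()
    h, w = cipher.shape[:2]
    side = np.sqrt(fraction)                     # square block: scale both sides equally
    damaged[: int(round(h * side)), : int(round(w * side))] = 0
    return damaged

cipher = np.random.default_rng(5).integers(0, 256, (512, 512, 3), dtype=np.uint8)
for frac in (1 / 16, 1 / 32, 1 / 64):
    damaged = crop_cipher(cipher, frac)
    # decrypting `damaged` would then be inspected as in Fig. 8d-f
```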
In this paper, we have studied a color image encryption algorithm based on a fractional-order 4D hyperchaotic memristive system and DNA sequence operations. The dynamic analysis results show that the fractional-order 4D hyperchaotic memristive system has more complex dynamic characteristics and better randomness, and is therefore well suited for image encryption. The algorithm simulation tests and the security performance analysis indicate that our algorithm not only encrypts images effectively but also has excellent security features. Therefore, the image encryption algorithm based on the fractional-order 4D hyperchaotic memristive system can encrypt images effectively and efficiently, and it provides a theoretical basis and a practical foundation for applications in cryptography, secure communication, information security and other fields.
Abbreviations
4D: 4-Dimensional
ADM: Adomian decomposition method
ATCG: Adenine, thymine, cytosine, guanine
DNA: Deoxyribonucleic acid
NPCR: Number of pixels change rate
UACI: Unified average changing intensity
The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.
The research presented in this paper was supported by Provincial Natural Science Foundation of Liaoning (Grant No. 20170540060), Basic Scientific Research Projects of Colleges and Universities of Liaoning Province (Grant Nos. 2017 J045 and 2017 J046).
Please contact the author for data requests.
School of Information Science and Engineering, Dalian Polytechnic University, Dalian, China
Peng Li, Ji Xu, Jun Mou & Feifei Yang
PL made a theoretical guidance for this paper. JX designed and performed experiments. JM wrote this manuscript. FY analyzed data. All authors carefully read and approved the final manuscript.
Correspondence to Jun Mou.
Li, P., Xu, J., Mou, J. et al. Fractional-order 4D hyperchaotic memristive system and application in color image encryption. J Image Video Proc. 2019, 22 (2019). https://doi.org/10.1186/s13640-018-0402-7
Keywords: Color image encryption; Security analysis; DNA sequence operations
Spontaneous Appearance of the Spin-Triplet Fulde-Ferrell-Larkin-Ovchinnikov Phase in a Two-Band Model: Possible Application to LaFeAsO1−xFx
M. Zegrodnik1 & J. Spałek1,2
Journal of Superconductivity and Novel Magnetism volume 28, pages 1155–1160 (2015) Cite this article
The possibility of a spontaneous spin-triplet paired phase of the Fulde-Ferrell-Larkin-Ovchinnikov type is studied. As is shown, in a system with dominant interband pairing and two distinct Fermi surface sheets, the Fermi wave-vector mismatch can be compensated by a nonzero center-of-mass momentum of the Cooper pairs. This idea is examined with the use of a model which describes the two hole-like bands in the iron-based superconductor. It is shown that for a proper range of model parameters, free-energy minima appear which correspond to a nonzero Cooper pair momentum. Different superconducting gap symmetries are analyzed, and the corresponding phase diagrams are shown.
The so-called Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phase was proposed decades ago [1, 2] and has attracted much attention over the years. This unconventional superconducting phase can be induced by an external magnetic field in systems with a high Maki parameter [3] for the case of spin-singlet pairing. The Fermi wave-vector mismatch which appears in such conditions can be compensated by a nonzero center-of-mass momentum of the Cooper pairs. Experimental signs of the FFLO phase have been reported in the heavy fermion compound CeCoIn5 [4–6], as well as in organic superconductors [7–11]. Also, indirect evidence of a superfluid FFLO phase in a system of ultracold atomic gas trapped in an array of one-dimensional tubes has been reported [12].
It has been proposed by us recently (M. Zegrodnik and J. Spałek, 2014, A spontaneous paired state with nonzero Cooper-pair momentum: Possible application to iron pnictides, unpublished) that a paired phase with nonzero Cooper pair momentum can appear in the absence of an external magnetic field in systems with dominant interband pairing and two distinct Fermi surface sheets. A high value of the Maki parameter would not be required for the formation of such a phase. However, the electronic structure of the system at hand should exhibit certain features to create favorable conditions for nonzero momentum pairing. To study this idea, we use the interband spin-triplet pairing mechanism [14, 15] suggested for iron pnictides in [16]. It should be noted that with respect to the iron-based superconductors, both spin-singlet [17–20] and spin-triplet [16, 20, 21] gap symmetries have been considered. In this work, we use the tight binding model which reflects the two hole-like bands of the iron-based compound LaFeAsO1−xFx. The stability of the proposed phase against both the normal and the homogeneous paired phases is analyzed. Different symmetries of the superconducting gap are considered, and their influence on the properties of the nonzero momentum pairing is studied. One should note that our approach could be applied to other multiband systems with either spin-singlet or spin-triplet types of pairing.
Theoretical Model
Following Raghu et al. [22], we use the tight binding model which describes the electronic structure of the Fe-As layer of LaFeAsO1−xFx. However, we limit ourselves to the two hole-like bands only. Additionally, a spin-triplet pairing term is added (similarly as in [16]). The model Hamiltonian has the following form:
$$ \mathcal{\hat{H}}={\sum}_{\mathbf{k}l\sigma}(E_{\mathbf{k}l}-\mu)\hat{n}_{\mathbf{k}l\sigma}-\frac{2}{N} {\sum}_{\mathbf{k}\mathbf{k}^{\prime}\mathbf{Q}m}J_{\mathbf{k}-\mathbf{k}^{\prime}}\hat{A}^{\dagger}_{\mathbf{k}^{\prime}m\mathbf{Q}} \hat{A}_{\mathbf{k}m\mathbf{Q}}, $$
where l=1,2 labels the bands, μ is the chemical potential, N is the number of Fe atoms in the lattice, and E k l are the dispersion relations which are plotted in Fig. 1b. One should note that the summations in the Hamiltonian are over the folded Brillouin zone, which is marked by the solid line in Fig. 1a (for details of the folding procedure, see [22]). The second term in (1) is responsible for the interband spin-triplet pairing with the possibility of a nonzero total momentum of the Cooper pairs, Q. An analogous term was introduced in [16], but for the two electron-like bands of the iron-based superconductor and without the inclusion of the nonzero momentum pairing. The spin-triplet pairing operators are defined as follows:
$$ \hat{A}^{\dagger}_{\mathbf{k},m\mathbf{Q}}\equiv\left\{\begin{array}{cl} \hat{c}^{\dagger}_{\mathbf{k}1\uparrow}\hat{c}^{\dagger}_{-\mathbf{k}+\mathbf{Q}2\uparrow} & m=1,\\ \hat{c}^{\dagger}_{\mathbf{k}1\downarrow}\hat{c}^{\dagger}_{-\mathbf{k}+\mathbf{Q}2\downarrow} & m=-1,\\ \frac{1}{\sqrt{2}}(\hat{c}^{\dagger}_{\mathbf{k}1\uparrow}\hat{c}^{\dagger}_{-\mathbf{k}+\mathbf{Q}2\downarrow} +\hat{c}^{\dagger}_{\mathbf{k}1\downarrow}\hat{c}^{\dagger}_{-\mathbf{k}+\mathbf{Q}2\uparrow}) & m=0. \end{array}\right. $$
The second term in (1) is associated with the pairing mechanism induced by Hund's rule [14, 15]. As Hund's coupling operates on particles from different bands, the resultant pairing has an interband character. Such an approach results in a spin-triplet, band-singlet paired phase with a symmetric gap parameter (even parity, e.g., s-wave, extended s-wave, d-wave). In the absence of magnetic ordering, one can focus on the superconducting A phase (equal spin) in which the Cooper pairs are in the states corresponding to m=±1 only and Δ k,1Q =Δ k,−1Q ≡Δ k Q is fulfilled, where
$$ {\Delta}_{\mathbf{k},\pm 1\mathbf{Q}}=-\frac{2}{N}{\sum}_{\mathbf{k}^{\prime}}J_{\mathbf{k}-\mathbf{k}^{\prime}}\langle\hat{A}_{\mathbf{k}^{\prime},\pm 1\mathbf{Q}}\rangle, $$
is the gap parameter corresponding to spin-up and spin-down Cooper pairs. This phase is equivalent to the one corresponding to the following relations: Δ k,1Q =Δ k,−1Q ≡0,Δ k,0Q ≠0, where
$$ {\Delta}_{\mathbf{k},0\mathbf{Q}}=-\frac{2}{\sqrt{2}N}{\sum}_{\mathbf{k}^{\prime}}J_{\mathbf{k}-\mathbf{k}^{\prime}} \langle\hat{A}_{\mathbf{k}^{\prime},0\mathbf{Q}}\rangle\;, $$
is the gap parameter. For simplicity, we have assumed that in the superconducting state, all the pairs have the same total momentum Q (the Fulde-Ferrell phase) and we take J k in the following form:
$$ J_{\mathbf{k}}=J_{0}+J_{1}(\cos k_{x}+\cos k_{y}), $$
where J 0 and J 1 determine the pairing strength. Such form of J k has also been chosen in [16]. By using the mean field (Bardeen-Cooper-Schrieffer, BCS) approximation, one obtains the following form of the effective Hamiltonian:
$$ \mathcal{\hat{H}}_{HF}={\sum}_{\mathbf{k}l\sigma}(E_{\mathbf{k}l}-\mu)\hat{n}_{\mathbf{k}l\sigma}+{\sum}_{\mathbf{k},m=\pm 1}({\Delta}_{\mathbf{k}\mathbf{Q}}\hat{A}^{\dagger}_{\mathbf{k}m\mathbf{Q}}+H.C.)+\frac{N({\Delta}^{(0)}_{\mathbf{Q}})^{2}}{J_{0}}+\frac{2N({\Delta}^{(1)}_{\mathbf{Q}})^{2}}{J_{1}}\;, $$
where the gap is a mixture of s-wave and extended s-wave gap symmetries, i.e.,
$$ {\Delta}_{\mathbf{k}\mathbf{Q}}={\Delta}^{(0)}_{\mathbf{Q}}+{\Delta}^{(1)}_{\mathbf{Q}}(\cos k_{x} + \cos k_{y}). $$
However, other gap symmetries can also be analyzed, e.g., the d-wave:
$$ {\Delta}_{\mathbf{k}\mathbf{Q}}={\Delta}^{(1)}_{\mathbf{Q}}(\cos k_{x} - \cos k_{y}). $$
The amplitudes Δ(0), Δ(1) and the chemical potential are calculated by solving the set of self-consistent equations numerically, whereas the vector Q is determined by minimizing the free energy of the system.
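The structure of this numerical procedure can be illustrated by a strongly simplified sketch (not the authors' code). The two dispersions below are generic tight-binding placeholders, not the LaFeAsO bands of Ref. [22]; only one equal-spin pairing channel with a pure s-wave gap (J 1=0) is kept, the chemical potential is held fixed instead of being adjusted to the band filling, and the gap amplitude is obtained by minimizing the mean-field grand potential (which is equivalent to solving the gap equation), after which Q is scanned to locate the free-energy minimum. NumPy and SciPy are assumed available, and all parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Placeholder two-band model (NOT the band structure of Ref. [22]).
t1, t2, mu, T, J0 = 1.0, 0.6, -1.0, 0.01, 1.2
nk = 64
ks = 2.0 * np.pi * np.arange(nk) / nk
KX, KY = np.meshgrid(ks, ks, indexing="ij")

def xi(kx, ky, t):
    """Band energy measured from the (fixed) chemical potential."""
    return -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu

def grand_potential(delta, Q):
    """Mean-field grand potential per site for one equal-spin interband channel,
    with all Cooper pairs carrying the same total momentum Q = (Qx, Qy)."""
    a = xi(KX, KY, t1)                      # xi_1(k)
    b = xi(Q[0] - KX, Q[1] - KY, t2)        # xi_2(-k + Q)
    root = np.sqrt((0.5 * (a + b)) ** 2 + delta ** 2)
    lam_p = 0.5 * (a - b) + root            # Bogoliubov quasiparticle branches
    lam_m = 0.5 * (a - b) - root
    fermi = -T * (np.logaddexp(0.0, -lam_p / T) + np.logaddexp(0.0, -lam_m / T))
    return (b + fermi).mean() + delta ** 2 / (2.0 * J0)

def condensation_energy(Q):
    """Minimise over the gap amplitude at fixed Q; return (F_SC - F_N, optimal delta)."""
    res = minimize_scalar(lambda d: grand_potential(d, Q), bounds=(0.0, 2.0), method="bounded")
    return res.fun - grand_potential(0.0, Q), res.x

# Scan Q along the kx axis and pick the momentum that minimizes the free energy.
for qx in np.linspace(0.0, 0.6, 7):
    dF, delta = condensation_energy((qx, 0.0))
    print(f"Qx = {qx:4.2f}   F_SC - F_N = {dF:+.6f}   delta = {delta:.4f}")
```

Whether the minimum ends up at Q ≠ 0 depends entirely on the chosen dispersions, filling and coupling; the sketch only shows the structure of the calculation, not the results reported below.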
Two hole-like Fermi surface sheets in the folded Brillouin zone (a) and the electronic structure (b) for band filling n=1.78838. The energies are normalized to the bare bandwidth W. Note the Fermi wave-vector mismatch Δk between the states (k=k F ,l=1) and (k=−k F ,l=2)
It should be noted that, with respect to the considered tight binding model, the even-parity pairing can also be analyzed within the spin-singlet, band-triplet channel. The corresponding Hamiltonian has the following form:
$$ \mathcal{\hat{H}}^{s}={\sum}_{\mathbf{k}l\sigma}(E_{\mathbf{k}l}-\mu)\hat{n}_{\mathbf{k}l\sigma}-\frac{2}{N}{\sum}_{\mathbf{k}\mathbf{k}^{\prime}}J_{\mathbf{k}-\mathbf{k}^{\prime}}\hat{B}^{\dagger}_{\mathbf{k}^{\prime}\mathbf{Q}}\hat{B}_{\mathbf{k}\mathbf{Q}}\;, $$
where the spin-singlet pairing operator has been introduced:
$$ \hat{B}^{\dagger}_{\mathbf{k}\mathbf{Q}}=\frac{1}{\sqrt{2}}(\hat{c}^{\dagger}_{\mathbf{k}1\uparrow}\hat{c}^{\dagger}_{-\mathbf{k}+\mathbf{Q}2\downarrow} -\hat{c}^{ \dagger} _{\mathbf{k}1\downarrow}\hat{c}^{\dagger}_{-\mathbf{k}+\mathbf{Q}2\uparrow}). $$
In such case, the gap parameter is defined as follows:
$$ {\Delta}^{s}_{\mathbf{k}\mathbf{Q}}=-\frac{2}{\sqrt{2}N}{\sum}_{\mathbf{k}^{\prime}}J_{\mathbf{k}-\mathbf{k}^{\prime}}\langle\hat{B}_{\mathbf{k}^{\prime}m\mathbf{Q}}\rangle\;. $$
In the absence of magnetic ordering, the self-consistent equations corresponding to both the spin-triplet paired phase of type A and the spin-singlet paired phase (with nonzero \({\Delta }^{s}_{\mathbf {k}\mathbf {Q}}\)) have the same form. Moreover, the free energies of those two phases are equal. In effect, the results presented in the next section are valid for both situations. This is caused by the fact that, for the considered phases, the spin and band indices are treated on an equal footing, so the spin-triplet band-singlet situation is equivalent to the spin-singlet band-triplet one.
In our study, we consider the following phases: the normal phase (NS) with Δ Q =0, the homogeneous superconducting phase (SC-A) with Δ Q ≠0, Q≡0, and the inhomogeneous superconducting phase (FF-A) with Δ Q ≠0, Q≠0.
In the subsequent discussion, n denotes the number of electrons per Fe ion; the wave-vectors are given in units of 1/a, where a is the lattice parameter; and all the energies have been normalized to the bare bandwidth W, whereas T represents the reduced temperature T≡k B T/W.
One should note that our model with the interband pairing between the two Fermi surface sheets shown in Fig. 1a resembles the situation of a one-band model with spin-singlet pairing between the two spin subbands. The difference is that, here, the bottoms of the bands between which the pairing occurs coincide, but the shape of the dispersion relations leads to a Fermi wave-vector mismatch (cf. Fig. 1a); whereas in the original idea of the FFLO phase, under the influence of the Zeeman term, the spin subbands are shifted as a whole. In our model, the mismatch can be tuned by changing n, because by increasing the Fermi level one increases the distance between the Fermi sheets (cf. Fig. 1).
We analyze first the gap symmetry given by (8) with two pairing components (J 0≠0,J 1≠0). In Fig. 2, we show that for the proper values of the band filling n, the free-energy minima appear for nonzero values of the Cooper pair momentum Q, which correspond to the stability of the FF-A phase. As one can see in Fig. 2f, for n=1.77514, the free energy in the homogeneous paired phase (for Q ≡ 0) is already greater than the free energy in the normal phase (ΔF>0). However, by setting the proper value of Q, the stability of the superconducting phase can still be obtained. For the chosen gap symmetry, it is possible to connect the largest parts of the Fermi surfaces when Q is parallel to either the k x - or k y -axis. In the FF-A phase, the population imbalance between the two bands occurs. As a result, some of the particles from the second band (l=2) are not paired. The region in the reciprocal space which is occupied by the unpaired particles is shown in Fig. 3b. The corresponding quasiparticle dispersion relations in the FF-A phase are plotted in Fig. 3a.
Free energy in the paired state as a function of Q for the three values of the band filling: a n=1.75, b n=1.76709, and c n=1.77514. The pairing strength is set to J 0=0.371992 and J 1=J 0/5. In d–f, the difference between the free energy in the paired and the normal phases (ΔF) is shown as a function of Q x for Q y =0. The values of n chosen for d–f correspond to those from a–c, respectively
The quasiparticle dispersion relations in the FF-A phase: a for n=1.798,J 0=0.3913, and J 1=J 0/5 along the trajectory in the folded Brillouin zone marked in b by the dashed line. The so-called depairing region for the same model parameters is shown in b
In Fig. 4a, b, we show the stability regions of the considered phases in the (n,J 0) space together with the values of the gap amplitudes. As one can see, the behavior of Δ(0) and Δ(1) is very similar, except that the values of Δ(1) are one order of magnitude smaller than those of Δ(0). The critical temperature is the same for both, as it should be. The border between the stability regions of the SC-A phase and the NS phase for the case of no FF-A phase included is marked by the solid line. One can see that the nonzero values of the Cooper pair momentum allow the paired phase to adapt to the unfavorable conditions of a system with a large Fermi wave-vector mismatch (the larger the band filling, the larger the mismatch). In effect, the region of stability of the paired phase is broadened by the FF-A phase. The transition from the SC-A to the FF-A phase has a discontinuous nature, as a drop in both Δ(0) and Δ(1) occurs along the transition line. Additionally, in Fig. 4c, we show the free-energy difference between the inhomogeneous paired phase and the next lowest free-energy phase as a function of n and J 0. The values of Q x which correspond to the free-energy minimum (FF-A phase stability) are provided in Fig. 4d.
The phase diagram in the (n,J 0) space together with the values of Δ(0) (a) and Δ(1) (b) gap amplitudes. The solid line in a and b marks the stability border between the SC-A and NS phases for the case of no-FF-A phase included. c The free-energy difference between the FF-A phase and the next lowest free-energy phase (ΔF). d Values of the Q x component for Q y =0, which minimize the free energy and lead to FF-A phase stability. For all points of the diagram, we set J 1=J 0/5
According to our analysis, the FF-A phase can also be stable for the case of extended s-wave gap symmetry without the admixture of s-wave (J 0=0, J 1≠0). The corresponding phase diagram on the (T,n) plane is presented in Fig. 5. Here, the discontinuous nature of the SC-A → FF-A transition is also seen (the sudden drop of Δ(1) in Fig. 5a). For the sake of completeness, we have made calculations for the d-wave gap symmetry given by (9). However, the J 1 parameter has to be quite large to obtain a stable d-wave paired solution. In Fig. 6, we show that also in this case the free-energy minimum can appear for nonzero values of Q. As one can see, the choice of the pairing symmetry influences the direction of the Q vector for which the free-energy minimum appears. In this case, the minimum is obtained along the Q x =Q y (and Q x =−Q y ) direction, whereas for the case of s-wave symmetry (or the mixture of s-wave and extended s-wave symmetries) a similar minimum is located on the Q x - or Q y -axis (cf. Fig. 2).
The phase diagram in the (T,n) space a for J 1=0.4836 and J 0=0. b Values of the Q x component of the Cooper pair momentum (for Q y =0) which correspond to FF-A stability
Free energy of the paired phase as a function of the Cooper pair momentum for the case of d-wave gap symmetry with the pairing strength J 0=0 and J 1=1.2066 and n=1.5
We have analyzed the possibility of a new kind of superconducting phase with a spontaneous nonzero Cooper pair momentum. This phase can occur without an external magnetic field in systems with dominant interband pairing and two distinct Fermi surface sheets. The corresponding Fermi wave-vector mismatch which appears in such a situation can be compensated by a nonzero center-of-mass momentum of the Cooper pairs. In our study, we use as an example a tight binding model which describes the two hole-like bands of the iron-based superconductor LaFeAsO1−xFx. The calculations have been carried out for different even-parity gap symmetries (s-wave, extended s-wave, and d-wave). We have shown that for proper values of the band filling and of the pairing strength, free-energy minima appear which correspond to nonzero Cooper pair momentum. The direction of the Q vector depends on the selected gap symmetry (cf. Figs. 2 and 6). For the case of the d-wave symmetry, the values of the pairing strength J 1 have to be very large (J 1>1) to obtain a paired solution in the considered model.
In our approach, we use the mean field (BCS) approximation, which overestimates both the values of the order parameters and the critical temperature, so it would be interesting to analyze the considered problem with the inclusion of interelectronic correlations. The spin-triplet interband pairing induced by the combined effect of Hund's rule and the correlations has been analyzed by us recently within the Gutzwiller approximation, but without the possibility of a nonzero momentum pairing [23, 24]. Also, application of the proposed idea to other systems with interband pairing seems reasonable. Namely, pairing between two species of particles with different (effective) masses could lead to a Fermi wave-vector mismatch similar to that considered above. Such an unconventional phase could be realized in systems of ultracold atomic gases in optical lattices. Spin-singlet pairing between particles with different effective masses has been theoretically investigated in [25, 26]. However, in these considerations, the so-called spin-dependent masses are induced by interelectronic correlations and appear in an external magnetic field. As a result, the appearance of the nonzero momentum pairing is due both to the energy shift of the spin subbands and to the corresponding modification of the dispersion relations by spin-dependent renormalization factors.
As we have mentioned, the pairing induced by Hund's rule has an interband character. However, when it comes to other mechanisms, both inter- and intra-band components of the pairing can appear. The former can lead to the non-zero momentum of the Cooper pairs, whereas when the latter is strong, the homogeneous superconducting phase should be favored. It would be interesting to see to what extent the energy gain coming from the nonzero momentum pairing can survive in a model with both inter- and intra-band pairing. Another issue which would require further studies is the appearance of the degeneracy of the spin-triplet and spin-singlet pairings within our approach. This degeneracy should be broken by the spin-orbit coupling which has not been included by us at this stage of research. Moreover, the spin-orbit coupling would probably lead to a mixed ground state. These issues should be analyzed separately and are beyond the scope of this paper.
Fulde, P., Ferrell, R.A.: Phys. Rev. 135, A550 (1964)
Larkin, A.I., Ovchinnikov, Y.N.: Sov. Phys. JETP 20, 762 (1964)
Saint-James, D., Sarma, G., Thomas, E.J.: Type II Superconductivity. Pergamon, New York (1969)
Bianchi, A., Movshovich, R., Capan, C., Pagliuso, P.G., Sarrao, J.L.: Phys. Rev. Lett. 91, 187004 (2003)
Kumagai, K., Saitoh, M., Oyaizu, T., Furukawa, Y., Takashima, S., Nohara, M., Takagi, H., Matsuda, Y.: Phys. Rev. Lett. 97, 227002 (2006)
Correa, V.F., Murphy, T.P., Martin, C., Purcell, K.M., Palm, E.C., Schmiedenshoff, G.M., Cooley, J.C., Tozer, W.: Phys. Rev. Lett. 98, 087001 (2007)
Lee, I.J., Naughton, M.J., Danner, G.M., Chaikin, P.M.: Phys. Rev. Lett. 78, 3555 (1997)
Singleton, J., Symington, J.A., Nam, M.-S., Ardavan, A., Kurmoo, M., Day, P.: J. Phys. Condens. Matter. 12, L641 (2000)
Tanatar, M.A., Ishiguro, T., Tanaka, H., Kobayashi, H.: Phys. Rev. B 66, 134503 (2002)
Uji, S., Terashima, T., Nishimura, M., Takahide, Y., Konoike, T., Enomoto, K., Cui, H., Kobayashi, H., Kobayashi, A., Tanaka, H., Takumoto, M., Choi, E. S., Tokumoto, T., Graf, D., Brooks, J. S.: Phys. Rev. Lett. 97, 157001 (2006)
Shinagawa, J., Kurosaki, Y., Zhang, F., Parker, C., Brown, S.E., Jérome, D., Christensen, J.B., Bechgaard, K.: Phys. Rev. Lett. (2007)
Liao, Y., Rittner, A.S.C., Paprotta, T., Li, W., Partridge, G.B., Hulet, R.G., Baur, S.K., Mueller, E.J.: Nature 467, 567 (2010)
Spałek, J.: Phys. Rev. B 63, 104513 (2001)
Zegrodnik, M., Spałek, J.: Phys. Rev. B 86, 014505 (2012)
Dai, X., Fang, Z., Zhou, Y., Zhang, F.C.: Phys. Rev. Lett. 101, 057008 (2008)
Mazin, I.I., Singh, D.J., Johannes, M.D., Du, M.H.: Phys. Rev. Lett. 101, 57003 (2008)
Kuroki, K., Onari, S., Arita, R., Usui, H., Tanaka, Y., Kontani, H., Aoki, H.: Phys. Rev. Lett. 101, 087004 (2008)
Chen, W.Q., Zhou, K.Y., Zhang, F.C.: Phys. Rev. Lett. 102, 047006 (2009)
Maier, T.A., Scalapino, D.J.: Phys. Rev. B 78, 020514(R) (2008)
Lee, P.A., Wen, X.G.: Phys. Rev. B 78, 144517 (2008)
Raghu, S., Qi, X.L., Liu, C.X., Scalapino, D.J., Zhang, S.C.: Phys. Rev. B 77, 220503(R) (2008)
Zegrodnik, M., Spałek, J., Bünemann, J.: New J. Phys. 15, 073050 (2013)
Zegrodnik, M., Bünemann, J., Spałek, J.: New J. Phys. 16, 033001 (2014)
Kaczmarczyk, J., Spałek, J.: J. Phys. Condens. Matter 22, 355702 (2010)
Maśka, M.M., Mierzejewski, M., Kaczmarczyk, J., Spałek, J.: Phys. Rev. B 82, 054509 (2010)
The authors are grateful to the Foundation for Polish Science (FNP) for the support within the project TEAM, as well as to the National Science Center (NCN) through the Grant MAESTRO, No. DEC-2012/04/A/ST3/00342.
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Academic Centre for Materials and Nanotechnology, AGH University of Science and Technology, Al. Mickiewicza 30, 30-059, Krakow, Poland
M. Zegrodnik & J. Spałek
Marian Smoluchowski Institute of Physics, Jagiellonian University, ul. Reymonta 4, 30-059, Krakow, Poland
J. Spałek
Correspondence to M. Zegrodnik.
Zegrodnik, M., Spałek, J. Spontaneous Appearance of the Spin-Triplet Fulde-Ferrell-Larkin-Ovchinnikov Phase in a Two-Band Model: Possible Application to LaFeAsO 1−x F x . J Supercond Nov Magn 28, 1155–1160 (2015). https://doi.org/10.1007/s10948-014-2800-0
Issue Date: March 2015
Keywords: Unconventional superconductivity; FFLO phase; Iron pnictides