Unraveling pedestrian mobility on a road network using ICTs data during great tourist events
Chiara Mizzi1,
Alessandro Fabbri1,
Sandro Rambaldi1,
Flavio Bertini1,
Nico Curti1,
Stefano Sinigardi1,
Rachele Luzi1,
Giulia Venturi1,
Davide Micheli2,
Giuliano Muratore2,
Aldo Vannelli2 &
Armando Bazzani1 (ORCID: orcid.org/0000-0002-9633-0017)
EPJ Data Science volume 7, Article number: 44 (2018)
Tourist flows in historical cities are continuously growing in a globalized world, and adequate governance processes, policies and tools are necessary in order to reduce the impact on urban livability and to guarantee the preservation of cultural heritage. ICTs offer the possibility of collecting large amounts of data that can point out and quantify statistical and dynamic properties of human mobility emerging from individual behavior across a whole road network. In this paper we analyze a new dataset collected by the Italian mobile phone company TIM, which contains the GPS positions of a relevant sample of mobile devices recorded whenever they actively connected to the cell phone network. Our aim is to propose innovative tools for studying the properties of pedestrian mobility on the whole road network. Venice is a paradigmatic example of the impact of tourist flows on residents' quality of life and on the preservation of cultural heritage. The GPS data provide anonymized georeferenced information on the displacements of the devices. After a filtering procedure, we develop specific algorithms able to reconstruct the daily mobility paths on the whole Venice road network. The statistical analysis of the mobility paths suggests the existence of a travel time budget for the mobility and points out the role of rest times in the empirical relation between the mobility time and the corresponding path length. We extract two connected mobility subnetworks from the whole road network that are able to explain the majority of the observed mobility. Our approach shows the existence of characteristic mobility paths in Venice for tourists and for residents. Moreover, the data analysis highlights the different mobility features of the considered case studies and allows us to detect the mobility paths associated with different points of interest. Finally, we disaggregate the Italian and foreign categories to study their different mobility behaviors.
The fast development of Information and Communication Technologies (ICT) offers new opportunities for the realization of innovative analytical tools in the framework of big data analytics, which is one of the future challenges of Complexity Science [1, 2]. Various authors have considered large georeferenced datasets on individual mobility, studying the statistical laws at the base of human movements [3–8] and the dynamic properties of human mobility in urban contexts [9, 10]. The aim of this research activity, in the framework of Complex Systems Physics, is to provide stakeholders with new knowledge tools to improve the sustainability of the mobility demand in future cities [8, 11–13]. The use of ICT datasets to study human mobility is considered part of the road map toward the realization of smart cities [2]. Big data science has certainly provided new tools to cope with complex problems of modern cities [14]; however, it has some intrinsic criticalities, which have given rise to a debate on the possibility of realizing smart cities [15]. The main questions are how to control the lack of information in the data and the representativeness of the datasets, which are strictly related to the use of new technologies. On the one hand, data analytics is developing new statistical methods to extract the relevant information from large datasets [16]. On the other hand, the possibility of using different data sources and the fast spread of ICT among the population could reduce the bias in the data sample.
The governance of the mobility demand generated by big tourist flows is becoming a key issue for the quality of life in historical Italian cities, a problem that will worsen in the near future due to globalization processes. On the one hand the frailty of the cultural heritage is incompatible with the presence of big tourist flows; on the other hand the daily life of residents is heavily conditioned by the presence of tourists. However, the relevance of the tourism economy advises against restriction policies that would limit a priori the number of tourists. The collection of data on individual mobility is preliminary to the development of any dynamic model that simulates and, hopefully, forecasts the pedestrian flows. Moreover, during big tourist events, the critical crowding conditions require a specific analysis to point out the spatial and dynamic features of the observed mobility in relation to the structure of the road network and the attractiveness of the points of interest [17, 18]. Individual behavior is a key issue in understanding the emergent properties of crowd dynamics [19]. The historical centre of Venice is a paradigmatic case study, both for the predominantly pedestrian character of Venetian mobility and for the unique features of its monuments, which attract large crowds of visitors all year round. The historical city of Venice has a surface of 6.7 km² (see Additional file 1) and \({\simeq}55\text{,}000\) inhabitants, a value that can double during big tourist events. In this paper we cope with the problem of understanding how pedestrian flows moved on the road network of the Venice historical centre during the Carnival of Venice 2017 (from 23/2/2017 up to 02/03/2017) and during the Festa del Redentore (from 14/7/2017 up to 16/7/2017). Our approach is based on two datasets provided by the Italian mobile phone company TIM [20] containing GPS (Global Positioning System) data on a relevant sample of mobile devices, with an ID number that changes every 24 hours. The collection of GPS data is made possible by new technologies currently being developed by NOKIA (Geosynthesis system), and the data provide anonymized GPS positions of a device each time certain types of network activities are on. We have introduced restrictive conditions to identify an individual mobility path, in order to reduce the errors due to the lack of GPS data when the mobile device is in an idle condition, and we have considered the problem of the representativeness of our data sample by performing a direct measure of the pedestrian flow on a bridge. Even if we cannot give a final answer to this problem, we are confident that the expected diffusion of ICT will improve the quality of the GPS datasets collected using mobile devices. The choice of studying the mobility during big tourist events was made for two reasons: on the one hand we take advantage of the presence of many people to increase the penetration of the mobile device sample used to reconstruct pedestrian mobility; on the other hand there is a specific request to study Venetian mobility during such events, since the municipality has proposed to limit the tourist presence. Moreover, at the moment, the development of counting systems for measuring the tourist flows in Venice is under discussion and the actual numbers are estimated using average data from transportation means.
We are aware that an exhaustive understanding of pedestrian mobility in Venice certainly requires further studies based on a long period of data collection (not available at the moment), but in this paper we limit ourselves to the problem of how to extract relevant information on pedestrian mobility from the GPS datasets. Both chosen events attract big crowds of tourists, but they present different features besides the fact that the Carnival takes place in winter and the Festa del Redentore in summer, during the evening of 15 July: the Carnival is a typical tourist festival with several scheduled events distributed throughout the city (even if the main attractions are in San Marco square), whereas the Festa del Redentore is a religious festivity, very important to the Venetians, which attracts many people arriving from the Venice district to attend the fireworks along the Giudecca Canal. For these reasons we expect differences in the observed mobility in the two case studies, which have to be pointed out by our analysis. The distribution of devices detected by the phone cell network has been used to measure spatial activity patterns [21–24] or to estimate the evolution of crowding in different areas of a city [25–27]. The study of mobility through the reconstruction of device trajectories on a road network requires a precision of a few meters in the device location, which is characteristic of GPS data. In previous works [6, 28] we studied private vehicle mobility on urban road networks using a GPS dataset recorded for insurance reasons, which contains information on vehicle trajectories at a scale of 1 km or 30 sec for a sample of \({\simeq}3\%\) of the Italian vehicle population. In this work we apply the same methodologies to the GPS data recorded from mobile devices. After a preliminary data analysis to select the devices that provided a suitable amount of GPS data, our approach relies on algorithms able to associate a daily mobility path to each device. The main difficulties are the occasional character of mobile device activities, which prevents data collection at a fixed spatial scale, and the signal losses mainly due to the narrow roads in Venice. To avoid introducing biases in our analysis, we prefer to drastically reduce the size of the device sample, reconstructing only the mobility paths that satisfy well-defined reliability criteria. As a consequence our approach is not able to detect critical crowding situations localized on the road network, but it succeeds in highlighting the dynamic features of pedestrian mobility during the considered events. In particular, the presented results refer to 26/02/2017 (Carnival Sunday) and 15/07/2017 (Redentore day), during which the presence of tourists was particularly relevant. We have then checked the penetration of the sample by comparing the estimated pedestrian flows, aggregated at each hour, with a direct measure performed by volunteers on the Redentore bridge, which is crossed by a large number of people due to the firework show on the Giudecca Canal.
The main results of the paper are the emergence of a diffusion-like relation between the covered distance and the elapsed time, \(s\propto t^{\alpha}\) with \(\alpha\simeq1/2\), and the existence of preferred connected mobility subnetworks of the whole road network able to account for the majority of the observed mobility [29]. In the first case we suggest the existence of a travel time budget [30, 31] for pedestrian mobility in Venice and we introduce the concept of rest times during individual mobility, which could play an important role in the construction of dynamic models for tourist flows. In the second case our results highlight that the existence of mobility subnetworks can simplify the problem of monitoring and controlling tourist flows and can help the definition of models. Thanks to the information in the datasets we can also distinguish between Italian and foreign visitors and point out the existence of different mobility paths for the two categories.
The paper is organized as follows: in the second section we describe the main features of the datasets and give an estimate of the sample penetration; in the third section we describe the algorithms used to reconstruct the mobility paths and study the statistical properties of the observed mobility during the considered events; in the fourth section we highlight the connected mobility subnetworks that emerge from the aggregation of the mobility paths and discuss the differences in the mobility paths between Italians and foreigners and in the mobility driven by different attraction points; the concluding remarks are reported in the last section.
The datasets
The dataset used in this study has been provided by the Italian mobile phone company TIM and contains georeferenced positions of tens of thousands of anonymous devices (e.g. mobile phones, tablets, etc.), recorded whenever they performed an activity (e.g. a phone call or an internet access) during eight days from 23/2/2017 up to 02/03/2017 (Carnival of Venice dataset) and from 14/7/2017 up to 16/7/2017 (Festa del Redentore dataset). According to statistical data, 66% of the whole Italian population has a smartphone [32], and TIM is one of the largest mobile phone companies in Italy, whose users are \({\simeq}30\%\) of the whole smartphone population. The datasets refer to a geographical region that includes an area of the Venice province, so that it is possible to distinguish commuters from sedentary people and the different transportation means used to reach Venice. Each valid record gives information on the GPS localization of the device, the recording time, the signal quality and also the roaming status, which in turn allows us to distinguish between Italians and foreigners. More details on the dataset collection techniques are reported in Additional file 1. The devices are fully anonymized, and non-reversible identification numbers (IDs) are automatically provided by the system for mobile phones and calls within the scope of the trial; each ID is kept for a period of 24 hours. During each activity a sequence of GPS data is recorded with a 2 sec sampling rate, and the collection stops when the activity ends. As a matter of fact, during an activity most people reduce their mobility, unless they are on a transportation mean, so that the dataset contains a lot of short trajectories that have to be joined to reconstruct the daily mobility. After a filtering procedure (see next subsection) these data provide information on the mobility of a sample containing 3000–4000 devices per day. Since the presences during the considered events were of the order of \(10^{5}\) individuals per day, as reported by the local newspapers [33], we estimate an overall penetration of our sample of 3–4%. Figure 1 shows an example of the distribution of the GPS data recorded in the Venice historical centre. In the sequel we illustrate in detail the results of our approach for Sunday 26/2/2017 during the Carnival and Saturday 15/7/2017 during the Festa del Redentore, which were particularly crowded days.
Examples of the spatial distribution of the GPS data recorded in the Venice historical centre: the top picture refers to the Carnival dataset (26/02/2017 from 12:00 p.m. to 2:00 p.m.), the bottom picture to the Festa del Redentore dataset (15/07/2017 from 7:00 p.m. to 9:00 p.m.). The red circle points out the location of the Redentore bridge, a floating bridge installed during the Festa del Redentore
Data filtering procedure
We perform a filtering process on the datasets to select the devices that provide information suitable for studying the daily mobility on the Venice road network. We extracted the Venice road network from the OpenStreetMap database [34], using a filtering procedure to neglect small open arcs and a fusion procedure to join consecutive arcs. Moreover, we added the ferryboat lines in order to correctly georeference people on public transportation means. We compared the extracted road network with the official cartography of the Venice municipality (http://smu.insula.it/ Ramses project). The Carnival and the Festa del Redentore datasets contain respectively \({\simeq}10^{6}\) and \({\simeq}1.8 \times10^{6}\) georeferenced records in the Venice historic centre. We aggregate the GPS data of each device ID to downsample the data, starting from an initial position (pivot point) and computing the geodesic distance on the road network to the successive points associated with the same ID. When the distance exceeds a fixed threshold (we chose a threshold of 50 m) we keep the new point and restart the procedure using the new point as the pivot. In this way the number of valid positions is reduced to \({\simeq}60 \times10^{3}\) in the Carnival dataset and to \({\simeq}90 \times10^{3}\) in the Festa del Redentore dataset. Each selected GPS point is then located on the nearest arc of the road network within a maximal distance of 60 m, including the ferryboat lines; points that cannot be attributed to any arc according to this criterion are discarded. The positioning procedure further reduces the valid points down to \({\simeq}50 \times10^{3}\) in the Carnival dataset and down to \({\simeq}80 \times10^{3}\) in the Festa del Redentore dataset. These positions provide dynamic information on the most used paths on the road network, and the elapsed time between two successive positions gives a measure that can point out the main points of interest.
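As an illustration, the pivot-point downsampling can be sketched in a few lines. This is a minimal sketch, not the authors' code; the `distance` callable stands for the road-network geodesic distance used in the paper, and the demo substitutes a planar distance for simplicity:

```python
import math

def downsample(points, distance, threshold=50.0):
    """points: time-ordered GPS fixes of one device ID.
    distance(p, q): road-network geodesic distance in meters (any
    callable works here). A point is kept only when it lies more than
    `threshold` meters (50 m in the paper) from the current pivot."""
    kept = []
    pivot = None
    for p in points:
        if pivot is None or distance(pivot, p) > threshold:
            kept.append(p)
            pivot = p   # restart the procedure from the newly kept point
    return kept

# stand-in metric for a quick demo: planar distance on (x, y) in meters
def euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

fixes = [(0, 0), (10, 0), (60, 0), (65, 0), (140, 0)]
print(downsample(fixes, euclid))   # [(0, 0), (60, 0), (140, 0)]
```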
Sample penetration estimate
We performed a direct check of the representativeness of the considered sample at the spatial scale of a single road. In particular, we compare the pedestrian flows estimated using GPS data with the pedestrian flows directly measured by volunteers on the Redentore bridge. The measurement campaign was organized by CORILA [35] and the data were collected every 15 minutes using people-counting devices. The Redentore bridge is a floating bridge on the Giudecca Canal (see Fig. 1 and the map in Additional file 1). The bridge has a length of \({\simeq}300~\mbox{m}\) and it was open from 7:00 p.m. of 15/07/2017 for the whole night, except during the firework show between 23:00 p.m. and 12:30 a.m. To estimate the pedestrian flow across the bridge we counted the mobile devices that left two GPS signals at opposite sides of the bridge during the considered time slot, so that we can distinguish between the two crossing directions. The results of the direct measures are reported in Fig. 2 (top picture) together with the estimated pedestrian flows, scaled according to a penetration of \({\simeq}1.6\%\) for our sample. This result is obtained by means of a best fit of the direct measures with a 20% average error (excluding the flow measured at the reopening of the bridge after midnight). The reduced sample penetration with respect to the expected 5% is probably due to the small spatial scale of the bridge, which requires the coincidence of two GPS signals from the same device at the opposite sides of the bridge within a short time interval. Indeed, we expect that the variability of the device activity rate reduces the sample penetration, as shown in Fig. 2 (bottom picture), where we computed the probability that a device located in an area near the bridge leaves a GPS signal in a time interval of 10 minutes. We remark that the activity rate of the devices changes drastically from 23:30 p.m. to 12:30 a.m. The estimated flows reproduce with good accuracy the evolution of the empirical observations, except for a single point between midnight and 1:00 a.m., when the bridge was reopened after the firework show. A big pedestrian flow was recorded between 12:30 a.m. and 1:00 a.m. that is not detected by the GPS dataset. A possible explanation is that the mobile device activity in the area dropped after the fireworks (cf. Fig. 2 (bottom picture)): probably, after the firework show, most people were mainly interested in crossing the bridge towards the Venice centre. The direct people counting points out a net pedestrian flow towards the Giudecca island of 8000 people during the opening of the bridge up to 23:00 p.m. and a net flow of 14,000 people in the opposite direction after the bridge reopening (some people reached the island by ferryboat). The GPS dataset estimates the incoming flow correctly, but underestimates the outgoing flow with an error of approximately 8000 people. This estimate would be consistent if the device activity at the bridge were reduced by a factor of 3 in the time interval from 12:30 a.m. to 1:00 a.m. The comparison with the empirical observations suggests that the selected device sample recovers its representativeness during the night. In our opinion, the fact that the selected sample could fail to detect localized crowded situations can be the consequence of two causes. On the one hand, we selected the device sample by maximizing the possibility of reconstructing the daily mobility on the road network, not of detecting crowded situations. On the other hand, since the GPS data are only recorded when the device performs an activity, further studies are necessary to understand how people use ICT devices in crowded situations.
Top picture: comparison of the hourly pedestrian flows on the Redentore bridge estimated from the GPS dataset (continuous curves) with the empirical measures obtained by direct people counting (dots) performed by volunteers: the blue data refer to the pedestrian flow from the Giudecca island toward the Venice centre, whereas the red data refer to the pedestrian flow in the opposite direction. The scaling factor applied to the sample of the GPS dataset corresponds to a penetration of 1.6%. We recall that the bridge was closed between 23:00 p.m. and 12:30 a.m. Bottom picture: empirical relative frequency of getting a GPS record near the Redentore bridge in a time interval of 10 minutes from a device in the selected sample; the red line is a running average over one hour to smooth the fluctuation effects
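The crossing count described above can be sketched as follows. This is a minimal sketch under our own assumptions: records are time-sorted per device, and a hypothetical `side_of` helper classifies a fix as lying on one bank ('A'), on the opposite bank ('B'), or away from the bridge (None):

```python
from collections import defaultdict

def bridge_flows(records, side_of, slot=3600, max_gap=1800):
    """records: (device_id, t, lon, lat) tuples, time-sorted per device.
    A crossing is counted when the same device leaves two fixes on
    opposite sides of the bridge within `max_gap` seconds."""
    flows = defaultdict(int)   # (hour slot, direction) -> raw crossings
    last = {}                  # device -> (side, time) of last fix nearby
    for dev, t, lon, lat in records:
        side = side_of(lon, lat)
        if side is None:
            continue
        if dev in last and last[dev][0] != side and t - last[dev][1] <= max_gap:
            flows[(int(t // slot), last[dev][0] + '->' + side)] += 1
        last[dev] = (side, t)
    # rescale raw device counts by the estimated sample penetration (1.6%)
    return {k: v / 0.016 for k, v in flows.items()}
```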
Mobility paths reconstruction on the road network
The procedure of mobility path reconstruction treats the land mobility and the water mobility separately, since the two mobility networks have different features, so that it is necessary to check carefully the transitions from one network to the other. To create a mobility path, we connect two successive points left by the same device using a best-path algorithm on the road network, with a check on the estimated travel speed to avoid unphysical situations, discarding the paths whose velocity is clearly not consistent with the typical pedestrian velocity (or ferryboat velocity). To end a land path and start a water path, we require that at least two successive points of the same device are attributed to a ferryboat line by the localization algorithm. In the case of a single point on a ferryboat line, we force the localization of this point onto the nearest road on land. Examples of daily mobility paths are shown in Fig. 3 (bottom). This procedure allows us to reconstruct the daily mobility of ≃4000 devices for the Carnival dataset and ≃5000 devices for the Festa del Redentore dataset. However, some devices leave a very low number of points (fewer than 3), which is not enough to study their mobility, and other devices show anomalous mobility paths crossing a very high number of roads (more than 200). We consider these paths outliers, possibly associated with people performing particular activities in Venice that are not related to tourist or citizen mobility. Finally, we succeed in reconstructing the daily mobility of ≃2800 (resp. ≃3600) different devices per day for the Carnival dataset (resp. the Festa del Redentore dataset), so that the representativeness of the mobility sample is estimated between \(2.8\div3.6\%\). In Fig. 3 (top) we show the measured number of moving devices detected in the historical centre of Venice whose mobility paths have been correctly reconstructed by the algorithms during the Festa del Redentore: the figure refers both to the land and water mobility and clearly shows the circadian rhythm of the presences, with a peak during the evening of 15/7/2017 on the occasion of the firework show.
Picture (a): number of selected devices present in the Festa del Redentore dataset collected during the three days: we observe the anomalous increase of the presences during the night of 15/7/2017. Picture (b): some examples of mobility paths (continuous lines) reconstructed on the road network of the Venice historical centre using GPS data (red dots)
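The joining step can be sketched as follows. This is a simplified illustration, not the authors' full pipeline (which also handles land/water transitions and ferry legs); it assumes the fixes have already been snapped to nodes of a networkx road graph whose edges carry a `length` attribute in meters:

```python
import networkx as nx

MAX_SPEED = 2.5  # m/s: generous pedestrian upper bound (our assumption)

def join_fixes(graph, fixes):
    """fixes: list of (node, timestamp) pairs snapped to the road graph.
    Returns a node sequence approximating the daily mobility path."""
    path = []
    for (a, ta), (b, tb) in zip(fixes, fixes[1:]):
        try:
            leg = nx.shortest_path(graph, a, b, weight='length')
        except nx.NetworkXNoPath:
            continue                              # disconnected fixes
        length = sum(graph[u][v]['length'] for u, v in zip(leg, leg[1:]))
        # discard legs whose implied speed is physically inconsistent
        if tb <= ta or length / (tb - ta) > MAX_SPEED:
            continue
        path.extend(leg if not path else leg[1:])  # avoid repeating nodes
    return path
```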
Statistical properties of mobility paths
The mobility paths provide dynamic information on how people realize their mobility demand on the road network during the considered events. The elapsed time between two successive GPS data points is used to attribute a displacement velocity, which of course is affected by the rest times at any point of interest. We remark that we do not have the start and end points of each single trip, but only a sampling of the whole daily mobility of a device, since the GPS data are recorded only in conjunction with an activity: for example, the elapsed time between two successive points may be affected by a stop for shopping. A dynamic model simulating the pedestrian dynamics on the Venice road network based on individual dynamics has to include stretches covered at constant velocity and breaks due to the presence of points of interest, to crowded situations or to recovery from walking fatigue. We consider some statistical properties of the reconstructed mobility paths to check whether they are consistent with the statistical laws suggested by the analysis of other mobility datasets in urban contexts [4, 6, 9, 10]. In Fig. 4 we report the daily path length distribution for both datasets: the average mobility lengths are 3.1 km and 4.3 km for the Carnival and the Festa del Redentore datasets respectively. The differences between the two distributions may be explained both by the effect of weather conditions [36] (the Carnival takes place in winter whereas the Festa del Redentore is celebrated in summer) and by the different organization of the two events. The Venice Carnival is an ensemble of events spread over the historical centre, even if San Marco square is always the main attractive location, whereas the Festa del Redentore is celebrated in the area near the Giudecca Canal, between the Giudecca island and the Riva degli Schiavoni. Therefore one expects a mobility with a stronger origin-destination character during the Festa del Redentore than during the Carnival of Venice. The path distribution in Fig. 4 refers only to pedestrian mobility, since we have excluded all the mobility paths with a stretch on a ferry line. This criterion is satisfied by 2/3 of the devices in our sample, whereas the remaining 1/3 performs a mixed mobility. We propose an exponential interpolation of the path length distribution for both datasets (cf. dashed lines in Fig. 4), and we observe that the exponential interpolation overestimates the short paths in the Festa del Redentore dataset, consistently with the existence of a strong origin-destination component. Assuming the existence of an average characteristic pedestrian velocity, the path length distribution can be interpreted as a mobility energy distribution in agreement with a Maxwell-Boltzmann distribution [6], and it is consistent with the concept of travel time budget proposed in other studies of urban mobility [30, 31]. The exponential decay defines two different characteristic lengths, 3.0 km for the Carnival dataset and 3.8 km for the Festa del Redentore dataset, and it suggests a propensity of the individuals to perform greater mobility in the latter case. In both cases these distances are probably greater than the typical pedestrian mobility in a city, but they reflect the average walking distance in the historical centre of Venice, where pedestrian mobility is prevalent. Short mobility paths are overestimated by the exponential distribution, since one has to cover a minimal distance to satisfy the mobility demand.
The presence of short daily mobility paths could also be related to the use of the public transportation system.
Distribution of the mobility path lengths reconstructed during the Carnival (top picture) and the Festa del Redentore (bottom picture) in the Venice historical centre. The dashed line is an exponential interpolation of the distribution tail whose equation is reported in the pictures
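The tail interpolation can be reproduced along these lines (a sketch; the 1 km tail cutoff and the binning are our own assumptions, not values from the paper):

```python
import numpy as np

def fit_exponential_tail(lengths_km, l_min=1.0, bins=30):
    """Fit p(l) ~ exp(-l / L) to the tail of the path-length
    distribution via linear regression on the log-histogram; l_min
    cuts off the short paths that the exponential overestimates."""
    hist, edges = np.histogram(lengths_km, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = (centers > l_min) & (hist > 0)
    slope, _ = np.polyfit(centers[mask], np.log(hist[mask]), 1)
    return -1.0 / slope   # characteristic length L in km

# the paper reports L ~ 3.0 km (Carnival) and ~ 3.8 km (Redentore)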
To understand the statistical features of the observed mobility we also consider the mobility time distribution associated with the mobility paths, computed as the elapsed time between the first and the last recorded GPS position of a device in the area of interest (see Fig. 5). The mobility time is the sum of the travel times and the rest times. The distribution tail can be affected by device activity during the night that is not directly related to mobility. It is a reasonable assumption that if an individual has spent more than 8 h in Venice, then he has a significant probability of also spending the night in Venice: indeed, to spend more than 8 h in Venice while living outside, one has to add a commuting time between 1 and 2 h and to consider the possibility of having lunch and dinner in Venice, which could be quite expensive. The exponential interpolation is less justified in this case, due to the increased effect of the rest times with respect to the mobility times, and we derive a dynamic model for the relation between the mobility path lengths and the mobility time.
Distribution of the mobility time associated with the daily mobility paths reconstructed during the Carnival (left picture) and the Festa del Redentore (right picture) in the Venice historical centre
Dynamic properties of the mobility paths
Let us consider an ensemble of individuals moving on the road network; we define the average moving velocity \(v(t)\) by
$$\frac{ d \langle s\rangle }{dt}=v(t), $$
where \(\langle s\rangle \) is the average path length corresponding to a mobility time t. In Fig. 6 we show the result of an interpolation of the empirical relation between \(\langle s\rangle \) and t by means of a power law
$$ \langle s\rangle =c t^{\alpha}, $$
where c is a suitable constant. In normal conditions, pedestrian dynamics occurs at a constant velocity \(v_{0}\), with a stochastic variation among individuals, and a linear relation \(s=v_{0} t_{w}\) is expected, where \(t_{w}\) is the walking time. The statistical law \(\langle s\rangle \propto t^{\alpha}\) with \(\alpha<1\), where we average over the path lengths corresponding to a given mobility time t, implies that the rest times, defined by the difference \(t-t_{w}\), increase as a function of t. Therefore the relation (1) simulates a fatigue effect of individuals during pedestrian mobility. We remark that it is difficult to relate this effect to crowding conditions in the road network unless one could compute a fundamental diagram [37] for the pedestrian dynamics on the Venice road network. In our opinion this is possible, but it requires a dataset that includes a long period of observations. The interpolation of the empirical data gives an exponent \(\alpha=0.41\) in the case of the Carnival dataset and \(\alpha=0.58\) in the case of the Festa del Redentore dataset. This difference suggests a less effective mobility during the Carnival than during the Festa del Redentore, probably due to the weather conditions in winter, but also to the many activities that could attract the attention of people. To relate the empirical observations to a microscopic dynamic model, we propose a relation between the walking time \(t_{w}\) and the mobility time t of the form
$$ dt_{w}=\frac{\alpha\, dt}{(1+t/\tau)^{1-\alpha}}, $$
where τ is a fatigue time scale for pedestrian mobility and \(\alpha >0\) measures the efficiency of the mobility: \(\alpha\to1\) is the most efficient mobility, when space and time are proportional. The relation (2) implies that if \(t<\tau\) the mobility time practically coincides with the walking time, whereas the walking time reduces to a small fraction of the mobility time when \(t\gg\tau\), the more so the smaller α. For a typical visit of 6 h in the Venice historical centre, formula (2) implies a walking time \(t^{\alpha}\tau^{1-\alpha}\simeq2.5~\mbox{h}\) for a fatigue time scale \(\tau \simeq1~\mbox{h}\). A simple calculation gives
$$s= t^{\alpha}v_{0}\tau^{1-\alpha} \biggl[ \biggl(1+ \frac{\tau}{t} \biggr)^{\alpha}- \biggl(\frac{\tau}{t} \biggr)^{\alpha}\biggr]\simeq \bar{v}_{0}\tau ^{1-\alpha}{ \alpha} t^{\alpha},\quad t\gg\tau $$
so that one recovers Eq. (1)
$$ \langle s\rangle = \frac{\bar{v}_{0} }{\alpha}\tau^{1-\alpha} t^{\alpha}. $$
We remark that the relation (3) is singular when \(\alpha\to0\) (i.e. there is no mobility). Moreover, the validity of Eq. (2) for long times t is questionable, since those times can be affected by the device activities at home, in hotels or in restaurants. The numerical interpolation provides the value
$$\bar{v}_{0} \tau^{1-\alpha}\simeq1.7 $$
so that, estimating \(\bar{v}_{0}\simeq0.5~\mbox{m/sec}\) as a typical average pedestrian velocity, one obtains the fatigue time scale \(\tau\simeq1~\mbox{h}\). This approach provides an analytical formula for the mobility time distribution once the distribution of \(\langle s\rangle \) is known. Due to the great individual variability in the recorded mobility, the distribution of \(\langle s\rangle \) is no longer exponential, and the approximation with a constant distribution is reasonable at this stage (see Additional file 1). Then one obtains a mobility time distribution of the form
$$ p(t)\propto(1+t/\tau)^{-(1-\alpha)}. $$
We remark that this distribution is not normalizable, and we expect it to be valid only over a limited time interval. In Fig. 7 we show the comparison between the empirical mobility time distribution and the analytical distribution (4). The parameters used in the interpolation are consistent with the interpolation shown in Fig. 6 with \(\tau=1\). We remark that the analytical law provides a quite good interpolation of the mobility time distributions for \(t\in[0,6]~\mbox{h}\), whereas the distribution tail is still of an exponential nature.
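As a quick numerical sanity check of this derivation (a sketch using the paper's fitted parameters), integrating Eq. (2) from 0 gives the closed form \(t_{w}=\tau[(1+t/\tau)^{\alpha}-1]\), which can be verified directly:

```python
import numpy as np

alpha, tau = 0.5, 1.0                    # exponent and fatigue scale (h)
t = np.linspace(0.0, 8.0, 2001)          # mobility time (h)

# candidate closed form for the walking time, from integrating Eq. (2)
t_w = tau * ((1.0 + t / tau)**alpha - 1.0)

lhs = np.gradient(t_w, t)                        # numerical dt_w/dt
rhs = alpha / (1.0 + t / tau)**(1.0 - alpha)     # right-hand side of Eq. (2)
print(np.abs(lhs - rhs).max())   # small residual: the closed form solves (2)

# with s = v0 * t_w and v0 = 0.5 m/s = 1.8 km/h, s approaches the
# diffusion-like power law s ~ t^alpha of Eq. (1) for t >> tau
s = 1.8 * t_w
```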
Relation between the average path lengths \(\langle s\rangle \) and the mobility times: the left picture refers to the Carnival dataset and the right picture to the Festa del Redentore dataset. The plots are obtained by performing a running average of length 100 on the \((t,s)\) data. The continuous line is the result of a power-law interpolation (cf. Equation (1)) with exponent \(\alpha=0.41\) in the first case and \(\alpha=0.58\) in the second one, whereas the proportionality coefficient is ≃1.7 in both cases
Interpolation of the empirical elapsed time distributions using the analytical distribution (4): the left picture refers to the Carnival dataset, whereas the right picture refers to the Festa del Redentore dataset. The continuous line is the distribution (4) with parameters \(\alpha=0.42\), \(\tau=1\) in the first case and \(\alpha=0.58\), \(\tau=1\) in the second one
Pedestrian mobility network
The reconstruction of the mobility paths also allows us to study how people perform their mobility on the road network. We consider the problem of determining the most used subnetwork of the Venice road network. The existence of mobility subnetworks could be a consequence of the peculiarity of the Venice road network, where it is quite easy to get lost without a map. Therefore people with a limited knowledge of the road network move according to paths suggested by internet sites or by following the signs on the roads. To point out a mobility subnetwork, we rank the roads of Venice according to a weight proportional to the number of mobility paths passing through each road. Then we apply an algorithm to extract a connected subnetwork containing the top-ranked roads able to explain a fixed percentage of the observed mobility (see Additional file 1 for a brief description of the main steps of the algorithm). We are able to extract a subnetwork that explains 64% of the observed mobility using 13% of the total road network length in the case of the Carnival dataset and 15% of the total length in the case of the Festa del Redentore dataset. The selected road subnetworks are plotted in Fig. 8 for both datasets. As a matter of fact, many of the highlighted paths are also suggested by internet sites [38]. However, we remark some differences that can be related to the different nature of the considered events. During the Carnival of Venice the mobility seems to highlight three main directions connecting the railway station and Piazzale Roma (top left in the map), which are the main access points to the Venice historical centre, with the area around San Marco square, where many activities were planned during 26/02/2017. In the case of the Festa del Redentore the structure is more complex, due to the appearance of several paths connecting the station and Piazzale Roma with the Dorsoduro district in front of the Giudecca island (see map in Additional file 1). This geometrical structure could have a double explanation: on the one hand the Festa del Redentore introduces an attractive area near the Giudecca island, where the fireworks take place in the evening; on the other hand the Festa del Redentore is a festivity deeply felt by the local population, which knows the Venice road network and follows alternative paths.
Picture (a): selected subnetwork (highlighted in blue) from the road network of the Venice historical centre (in the background), which explains 64% of the recorded mobility during the Carnival on 26/02/2017 and corresponds to 13% of the total length of the Venice road network. Picture (b): the analogous subnetwork for the Festa del Redentore mobility during 15/07/2017, corresponding to 15% of the total length of the Venice road network
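Since the exact extraction algorithm is given in the paper's Additional file 1, the following is only a plausible greedy variant of it (our assumption, on a connected road graph): take the roads in decreasing usage order and, whenever a new road is disconnected from the growing component, splice it in through a shortest path of the full network:

```python
import networkx as nx

def mobility_subnetwork(graph, usage, target_share=0.64):
    """Grow a connected subnetwork from the most used roads until it
    explains `target_share` of all recorded path traversals.
    usage: {(u, v): number of reconstructed paths through that road}."""
    total = float(sum(usage.values()))
    ranked = sorted(usage, key=usage.get, reverse=True)
    sub = nx.Graph()
    covered = 0.0
    for u, v in ranked:
        if sub.number_of_nodes() and u not in sub and v not in sub:
            # splice the disconnected road in through a shortest path
            # from the nearest node already in the subnetwork
            near = min(sub.nodes,
                       key=lambda n: nx.shortest_path_length(
                           graph, n, u, weight='length'))
            nx.add_path(sub, nx.shortest_path(graph, near, u, weight='length'))
        sub.add_edge(u, v)
        covered += usage[(u, v)]
        if covered / total >= target_share:
            break
    return sub
```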
Foreigners versus Italians mobility
To study the possible effect on mobility of a greater familiarity with Venice, we divide the devices in the datasets into Italian and foreign devices according to the roaming protocol. The technical details that allow this disaggregation are reported in the Supplementary Material. Of course we have no guarantee that all the Italians are more accustomed to visiting Venice than the foreigners, but this is a reasonable assumption on average, since many commuting visitors come from the neighboring regions during the considered events. We then associate to each road two normalized weights \(w_{fo,it}\), proportional to the number of mobility paths of Italians and of foreigners on the road itself (the detected Italians are approximately 10 times the foreigners). In this way we select the roads that are preferred by the Italians and by the foreigners respectively, considering the distribution of the difference \(w_{fo}-w_{it}\) and introducing thresholds at \({\pm}1~\mbox{rms}\) and \({\pm}4.5~\mbox{rms}\). In Fig. 9 we plot the results for the two datasets. We remark that not all the highlighted roads are present in the subnetworks of Fig. 8, since it was not possible to connect them using the high-ranked roads in our list. It is noteworthy that the majority of highlighted roads show a well-defined preference by one of the two populations (i.e. their difference \(|w_{fo}-w_{it}|\) is greater than 4.5 rms). During the Carnival the foreigners follow a path passing through Strada Nuova to reach San Marco square and the Rialto bridge, whereas the Italians prefer to go through the central part of the Venice historical centre. Moreover, we have two clear attraction areas for the foreign people at the Old Ghetto (top left in the picture) and near Palazzo Grassi (in the centre of the picture). These preferences are also observed during the Festa del Redentore, with the exception of the Old Ghetto, which was not pointed out by the algorithm. However, the attractiveness of San Marco square is greater for the foreigners, whereas the Italians prefer to reach the area in front of the Giudecca island. This is consistent with the structure of the mobility subnetwork in this area, which seems to be used mainly by Italians (Fig. 8 (bottom)).
Preferred roads of foreigners and Italians in the historical centre of Venice during 26/02/2017 (a) and during 15/07/2017 (b). The disaggregation has been performed according to the roaming protocol in the dataset. The roads more favored by the foreigners are highlighted in yellow and green according to the thresholds 1 and \(4.5~\mbox{rms}\) in the weight difference \(w_{fo}-w_{it}\), whereas the roads more favored by the Italians are highlighted in red and blue according to the thresholds −1 and \(-4.5~\mbox{rms}\)
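The preference classification can be sketched as follows (assuming per-road path counts for the two populations are available; all names are ours):

```python
import numpy as np

def preferred_roads(paths_it, paths_fo, roads):
    """Normalized usage weights per road for the two populations and
    the +/-1 rms, +/-4.5 rms preference classes used in the paper.
    paths_*: {road: number of mobility paths through it}."""
    w_it = np.array([paths_it.get(r, 0) for r in roads], float)
    w_fo = np.array([paths_fo.get(r, 0) for r in roads], float)
    w_it /= w_it.sum()            # normalization removes the ~10:1
    w_fo /= w_fo.sum()            # imbalance between the two samples
    diff = w_fo - w_it
    rms = np.sqrt(np.mean(diff**2))
    labels = {}
    for r, d in zip(roads, diff):
        if   d >  4.5 * rms: labels[r] = 'strongly foreign'
        elif d >  1.0 * rms: labels[r] = 'foreign'
        elif d < -4.5 * rms: labels[r] = 'strongly Italian'
        elif d < -1.0 * rms: labels[r] = 'Italian'
    return labels
```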
Attractiveness of the main areas of interest
Finally, we analyze the mobility driven by the areas of greatest attractiveness, namely San Marco square during the Carnival and the Giudecca island during the Festa del Redentore. We select the mobility paths passing through San Marco square (or the Redentore bridge) and we reconstruct the mobility network defined by the incoming paths. The results are plotted in Fig. 10: for the Carnival dataset we select ≃1200 mobility paths, corresponding to 42% of the total mobility, whereas for the Festa del Redentore dataset we select ≃700 mobility paths, corresponding to 19% of the total mobility. The highlighted road networks explain 61% (resp. 54%) of the total pedestrian mobility towards San Marco square (resp. towards the Giudecca island) in the datasets. In the first case the analysis points out three main pedestrian mobility paths starting from the main entry points (the railway station and the Piazzale Roma parking area) that join near the Rialto bridge. From the Rialto bridge the observed mobility presents a more diffusive character and does not clearly define a path. Then we have an incoming path from the Riva degli Schiavoni, due to the ferryboat line contribution, and a well-defined path between San Marco and the Accademia bridge (see map in Additional file 1).
Picture (a): the mobility network driven by the attractiveness of San Marco square during 26/02/2017, which accounts for 61% of the total pedestrian mobility towards the square. Picture (b): the mobility network driven by the attractiveness of the Giudecca island during 15/07/2017, which accounts for 54% of the total pedestrian mobility towards the bridge
In the second case we observe a single pedestrian path from the main entry points towards the Giudecca island, whereas we have various incoming paths along the canal banks, indicating that people arrived by ferryboat. Notably, there is no clear connection between San Marco square and the Giudecca island, suggesting that most of the people interested in the Festa del Redentore in the evening had not visited San Marco before.
The possibility of recording accurate anonymous georeferenced positions of mobile ICT devices whenever they perform an activity provides dynamic information on people's mobility over a whole road network. Even if the requirements for reconstructing reliable daily mobility paths strongly reduce the penetration of the considered samples, we succeed in studying some statistical and dynamic properties of pedestrian mobility in Venice. We explicitly analyze the pedestrian mobility during two large tourist events, but our methodologies apply to any dynamic GPS dataset containing information on individual mobility on a road network. The historical centre of Venice is an ideal experimental field to study the features of pedestrian mobility, and the choice of two big tourist events (the Carnival of Venice 2017 and the Festa del Redentore) as case studies allows us, on the one hand, to increase the representativeness of the sample and, on the other hand, to provide quantitative information to the stakeholders in charge of the management of tourist flows. Our results are consistent with the existence of a 'mobility energy' and they point out the relevance of a 'fatigue effect' that reduces the average speed of a mobility path as the mobility time increases. Moreover, the distribution of the mobility paths on the Venice road network allows us both to reconstruct connected subnetworks able to explain the majority of the observed mobility and to obtain information on how people use the road network to reach the main areas of interest. These results can also be relevant for the realization of a monitoring system of the pedestrian flows in Venice, suggesting where to install the people-counting devices and how the local measures can be correlated with the mobility state of the road network.
The possibility of disaggregating Italians from foreigners through the roaming protocol shows some different behaviors that should be further analyzed to understand whether they can be related to a different knowledge of the Venice road network. The different features of the two events (the Venice Carnival takes place over two weeks in winter, whereas the Festa del Redentore is a religious holiday in summer) are reflected in the different dynamic properties of the observed mobility. Our results show the possibility of using the quality of the GPS data on a small sample of mobile devices to build useful tools for studying individual mobility at the spatial scale of the road and for tuning dynamic models [39] of pedestrian flows that perform nowcasting and forecasting of the mobility state of the whole road network to avoid critical states. We expect that in the near future the quality and the quantity of GPS datasets provided by ICT will continuously increase and that their study will contribute to the debate on the development of the Smart City paradigm.
Vespignani A (2012) Modelling dynamical processes in complex socio-technical systems. Nat Phys 8:32–39
Batty M, Axhausen KW, Giannotti F, Pozdnoukhov A, Bazzani A, Wachowicz M, Ouzounis G (2012) Smart cities of the future. Eur Phys J Spec Top 214(1):481–518
Brockmann D, Hufnagel L, Geisel T (2006) The scaling laws of human travel. Nature 439:462–465
Gonzalez MC, Hidalgo CA, Barabasi AL (2008) Understanding individual human mobility pattern. Nature 453:779–782
Song C, Koren T, Wang P, Barabasi AL (2010) Modelling the scaling properties of human mobility. Nat Phys 6(10):818–823
Gallotti R, Bazzani A, Rambaldi S (2012) Towards a statistical analysis of human mobility. Int J Mod Phys C 23:1250061
Yan XY, Han XP, Wang BH, Zhou T (2013) Diversity of individual mobility patterns and emergence of aggregated scaling laws. Sci Rep 3:2678
Gallotti R, Bazzani A, Degli Esposti M, Rambaldi R (2013) Entropic measures of individual mobility patterns. J Stat Mech Theory Exp 2013:P10022
Zhao K, Musolesi M, Hui P, Rao W, Tarkoma S (2015) Explaining the power-law distribution of human mobility through transportation modality decomposition. Sci Rep 5:9136
Gallotti R, Bazzani A, Rambaldi S, Barthelemy M (2016) A stochastic model of randomly accelerated walkers for human mobility. Nat Commun 7:12600
Song C, Qu Z, Blumm N, Barabasi AL (2010) Limits of predictability in human mobility. Science 327(5968):1018–1021
Lin M, Hsu WJ, Lee ZQ (2012) Predictability of individuals' mobility with high-resolution positioning data. In: Proceedings of the 2012 ACM conference on ubiquitous computing. ACM, New York, pp 381–390
Cuttone A, Lehmann S, Gonzalez MC (2018) Understanding predictability and exploration in human mobility. EPJ Data Sci 7:2
Batty M (2016) Big data and the city. Built Environ 42(3):321–337
Shelton T, Zook M, Wiig A (2015) The 'actually existing smart city'. Camb J Reg Econ Soc 8:13
Kitchin R (2014) The real-time city? Big data and smart urbanism. GeoJournal 79:1
Batty M, Desyllas J, Duxbury E (2003) Safety in numbers? Modelling crowds and designing control for the Notting Hill Carnival. Urban Stud 40(8):1573–1590
Omodei E, Bazzani A, Rambaldi S, Michieletto P, Giorgini B (2014) The physics of the city: pedestrians dynamics and crowding panic equation in Venezia. Qual Quant 48(1):347–373
Moussaïd M, Perozo N, Garnier S, Helbing D, Theraulaz G (2010) The walking behaviour of pedestrian social groups and its impact on crowd dynamics. PLoS ONE 5(4):e10047
https://www.tim.it/
Candia J, Gonzalez MC, Wang P, Schoenharl T, Madey G, Barabasi AL (2008) Uncovering individual and collective human dynamics from mobile phone records. J Phys A, Math Theor 41(22):224015
Becker R, Caceres R, Hanson K, Isaacman S, Loh JM, Martonosi M, Rowland J, Urbanek S, Varshavsky A, Volinsky C (2013) Human mobility characterization from cellular network data. Commun ACM 56(1):74–82
Csáji BC, Browet A, Traag VA, Delvenne JC, Huens E, Van Dooren P, Smoreda Z, Blondel VD (2013) Exploring the mobility of mobile phone users. Phys A, Stat Mech Appl 392(6):1459–1473
Xu Y, Shaw SL, Zhao Z, Yin L, Lu F, Chen J, Fang Z, Li Q (2016) Another tale of two cities: understanding human activity space using actively tracked cellphone location data. Ann Am Assoc Geogr 106(2):489–502
Ratti C, Frenchman D, Pulselli RM, Williams S (2006) Mobile landscapes: using location data from cell phones for urban analysis. Environ Plan B, Plan Des 33(5):727–748
Calabrese F, Di Lorenzo GD, Liu L, Ratti C (2011) Estimating origin-destination flows using mobile phone location data. IEEE Pervasive Comput 10(4):0036
Xu Y, Shaw SL, Zhao Z, Yin L, Fang Z, Li Q (2015) Understanding aggregate human mobility patterns using passive mobile phone location data: a home-based approach. Transportation 42(4):625–646
Bazzani A, Giorgini B, Rambaldi S, Gallotti R, Giovannini L (2010) Statistical laws in urban mobility from microscopic GPS data in the area of Florence. J Stat Mech Theory Exp 2010:P05001
Toole JL, Colak S, Sturt B, Alexander LP, Evsukoff A, Gonzalez MC (2015) The path most traveled: travel demand estimation using big data resources. Transp Res, Part C, Emerg Technol 58(Part B):162–177
Mokhtarian PL, Chen C (2004) TTB or not TTB, that is the question: a review and analysis of the empirical literature on travel time (and money) budgets. Transp Res, Part A, Policy Pract 38(9–10):643–675
Gallotti R, Bazzani A, Rambaldi S (2015) Understanding the variability of daily travel-time expenditures using GPS trajectory data. EPJ Data Sci 4:18
https://en.wikipedia.org/wiki/List_of_countries_by_smartphone_penetration
http://www.veneziatoday.it/eventi/carnevale-venezia-2017-numeri-record.html, https://www.ilgazzettino.it/nordest/venezia/redentore_venezia_2017_foto-2563968.html
https://www.openstreetmap.org/#map=14/45.4365/12.3546
http://www.corila.it/
Böcker L, Dijst M, Prillwitz J (2013) Impact of everyday weather on individual daily travel behaviours in perspective: a literature review. Transp Rev 33(1):71
Geroliminis N, Daganzo CF (2008) Existence of urban-scale macroscopic fundamental diagrams: some experimental findings. Transp Res, Part B, Methodol 42(9):759
http://www.camminandoavenezia.com/itinerari/
Barbosa-Filho H, Barthelemy M, Ghoshal G, James CR, Lenormand M, Louail T, Menezes R, Ramasco JJ, Simini F, Tomasini M. Human mobility: models and applications. https://arxiv.org/abs/1710.00004
We are indebted to A4SMART, CORILA and CISET for their help in organizing the data acquisition and for several helpful discussions. We also thank NOKIA for their fundamental support with the Geosynthesis system used to collect the GPS data from the mobile devices.
Due to the Italian law on privacy, the original data are not in the public domain. The dataset is the property of TIM and its availability requires a non-disclosure agreement with TIM (contact ). The data in aggregated form are available on request.
Physics and Astronomy Department, University of Bologna, Bologna, Italy
Chiara Mizzi, Alessandro Fabbri, Sandro Rambaldi, Flavio Bertini, Nico Curti, Stefano Sinigardi, Rachele Luzi, Giulia Venturi & Armando Bazzani
TIM S.p.A., Roma, Italy
Davide Micheli, Giuliano Muratore & Aldo Vannelli
SR, AF, CM, FB, SS performed the data analysis and elaborated the georeferencing algorithms to reconstruct individual paths; NC, RL, GV performed the mobility network analysis and participated in the data collection on the Ponte del Redentore; DM, GM, AV collected and prepared the GPS datasets; AB conceived and supervised the work. All authors read and approved the final manuscript.
Correspondence to Armando Bazzani.
The authors declare that they have no competing interests.
Below is the link to the electronic supplementary material.
Supplementary material (PDF 1.8 MB)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Mizzi, C., Fabbri, A., Rambaldi, S. et al. Unraveling pedestrian mobility on a road network using ICTs data during great tourist events. EPJ Data Sci. 7, 44 (2018). https://doi.org/10.1140/epjds/s13688-018-0168-2
PACS Codes
89.75.-k
89.65.-s
89.40.-a
Big tourist events
Pedestrian mobility
Statistical physics of human mobility
Individual and Collective Human Mobility: Description, Modelling, Prediction
Partial order, total order, and version order in transaction histories
In Section 3.1.2 "Transaction Histories" of the PhD thesis by Atul Adya [1]:
A history $H$ over a set of transactions consists of two parts: (1) a partial order of events $E$ that reflects the operations (e.g., read, write, abort, commit) of those transactions, and (2) a version order, $\ll$, that is a total order on committed object versions.
The author gives a comment on "the partial order of events" (Page 36):
"For convenience, we will present history events in our examples as a total order that is consistent with the partial order. Furthermore, wherever possible in our examples, we make this total order be consistent with the real-time ordering of events in a database system."
Question 1: Why can we present the partial order of events as a total order? A partial order may admit multiple total orders consistent with it, so which one should we choose? Does the choice matter for later definitions and theorems?
Some comments on "the total version order" (Page 36) are as follows:
[Added (01-10-2015)] "The version order in a history $H$ can be different from the order of write or commit events in $H$. This flexibility is needed to allow certain optimistic and multi-version implementations where it is possible that a version $x_i$ is placed before version $x_j$ in the version order even though $x_i$ is installed in the committed state after $x_j$ is installed."
"The system chooses the version order for each object."
Question 2: What does it mean for the database system to choose the version order? And how? Is this implementation-dependent?
[1] Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions by Atul Adya, 1999
terminology database-theory
hengxin
Concerning Question 1:
there is a theoretical result from 1930 [1] that states that
any partial order $S$ can be extended to a total order $S'$ which contains $S$.
This result is known as the "Order Extension Principle"; the proof of this result uses the Axiom of Choice (I am not aware whether there is an alternative proof that does not use it). The paper is written in French, but you can find a proof at https://www.proofwiki.org/wiki/Order-Extension_Principle.
When Adya states that he chooses the real-time order of events, the best guess is that he assumes an implementation of the database; in this case, every history $H$ corresponds to an execution of the database in which events are totally ordered; this order is the real-time order. More specifically, by requiring that the partial order $<$ in a history $H$ is extended by the real-time (total) order, Adya imposes that whenever $e_1 < e_2$ in $H$, the instant of time $t_1$ at which the event $e_1$ takes place (in the execution of the database that leads to $H$) is smaller than the instant of time $t_2$ at which $e_2$ takes place. Choosing the real-time order of events to extend the partial order of a history $H$ is needed if one wants to prove the correctness of an implementation with respect to a given abstract specification (i.e. a set of properties that a history has to satisfy).
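For the finite event sets that arise in transaction histories, by the way, no Axiom of Choice is needed: a total order extending the partial order can be computed constructively with a topological sort. A small sketch in Python (the event names are invented):

```python
from graphlib import TopologicalSorter  # standard library, Python >= 3.9

# a partial order on events: {event: set of events that must precede it}
preceded_by = {
    'w1(x1)': set(),
    'w2(x2)': set(),
    'c1': {'w1(x1)'},
    'c2': {'w2(x2)'},
}

total_order = list(TopologicalSorter(preceded_by).static_order())
print(total_order)   # one of several valid total orders extending it
```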
Let's turn to Question 2: here it seems to me that Adya wants to stress that the version order is chosen by the implementation, which is allowed to access a version of an object that precedes (in the version order) the latest version installed. In practice, ending up with an earlier version of an object could either be the result of the database not being able to reach the latest version (e.g., the lost update anomaly in causal consistency), or of the user explicitly requesting an older version of an object (e.g., accessing an older revision in an SVN repository).
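To make this concrete, here is a toy sketch (hypothetical code, not from Adya's thesis or any real system) of a multi-version store in which the system installs a committed version at a position of its choosing in the version order, so that $x_i \ll x_j$ can hold even though $x_i$ is installed after $x_j$:

```python
class MultiVersionStore:
    """Toy multi-version store: the 'system' decides where a newly
    committed version sits in the version order <<, independently
    of the order in which versions are installed."""

    def __init__(self):
        self.versions = {}  # object key -> list of versions, ordered by <<

    def install(self, key, version, position=None):
        chain = self.versions.setdefault(key, [])
        # Append by default, but the system may place the version
        # anywhere in << (e.g. before already-installed versions).
        chain.insert(len(chain) if position is None else position, version)

store = MultiVersionStore()
store.install("x", "x_j")              # x_j is installed first
store.install("x", "x_i", position=0)  # x_i installed later, yet x_i << x_j
print(store.versions["x"])             # ['x_i', 'x_j']
```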
Hope this helps, Andrea Cerone.
[1] Edward Szpilrajn, "Sur l'extension de l'ordre partiel", Fundamenta Mathematicae 16 (1930), 386–389.
Great thanks! For the total version order, Adya also comments: "The version order in a history $H$ can be different from the order of write or commit events in $H$. This flexibility is needed to allow certain optimistic and multi-version implementations where $\ldots$" (This has been added to the question.) Is this the thing of "allowing the implementation of a database to access a version of an object which precedes (in the version order) the latest version installed"? – hengxin Jan 10 '15 at 8:02
Find the vectors T, N, and B, at the given point.
\[ R(t) = < t^{2}, \frac{2}{3} t^{3}, t > \text{ and point } < 4, \frac{-16}{3}, -2 > \]
This question aims to determine the tangent vector, normal vector, and binormal vector of a given vector function at a particular point. The tangent vector T is tangent to the curve at the given point; the normal vector N is perpendicular to the curve at that point; and the binormal vector B is obtained by taking the cross product of the unit tangent vector and the unit normal vector.
These three vectors can be calculated for any given vector function by computing its derivatives and applying the standard formulas stated in the solution below.
Expert Solution
In the question, the vector whose T and N need to be determined is mentioned below:
\[ R(t) = < t^{2}, \frac{2}{3} t^{3}, t > \]
The point specified in the question is point \[ < 4, \frac{-16}{3}, -2 > \]
By comparing the vector R(t) with the point, it becomes evident that this point corresponds to t = -2. This value of t can be cross-checked by inserting it into the given vector R(t):
\[ < (-2)^{2}, \frac{2}{3} (-2)^{3}, -2 > \]
\[ < 4, \frac{-16}{3}, -2 > \]
Hence, it is proved that the point exists at t = -2.
The formula for determining the tangent vector T is:
\[ T = \frac{R'(t)}{|R'(t)|} \]
So the next thing to do is to calculate the derivative of the vector $R(t)$.
Calculating the derivative of the vector $R(t)$:
\[ R'(t) = \frac{d}{dt} < t^{2}, \frac{2}{3}t^{3}, t> \]
\[ R'(t) = < 2t, 2t^{2}, 1 > \]
Now, for the magnitude of the derivative:
\[ |R'(t)| = \sqrt{(2t)^{2} + (2t^{2})^{2}+ 1^{2}} \]
\[ |R'(t)| = \sqrt{4t^{2} + 4t^{4} + 1} \]
\[ |R'(t)| = \sqrt{(2t^{2} + 1)^{2}} \]
\[ |R'(t)| = 2t^{2} + 1 \]
Inserting these values into the tangent vector formula $T = \frac{R'(t)}{|R'(t)|}$ gives:
\[ T = \frac{1}{2t^{2} + 1} \cdot < 2t, 2t^{2}, 1 > \]
\[ T = < \frac{2t}{2t^{2} + 1}, \frac{2t^{2}}{2t^{2} + 1}, \frac{1}{2t^{2} + 1} > \]
Tangent vector $T$ at $t = -2$:
\[ T = < \frac{-4}{9}, \frac{8}{9}, \frac{1}{9} > \]
Now, let's determine the normal vector $N$. The formula for determining the vector $N$ is:
\[ N = \frac{T'(t)}{|T'(t)|} \]
The next thing to do is to calculate the derivative of the tangent vector $T$:
\[ T'(t) = \frac{d}{dt} < \frac{2t}{2t^{2} + 1}, \frac{2t^{2}}{2t^{2} + 1}, \frac{1}{2t^{2} + 1} > \]
\[ T'(t) = < \frac{(2t^{2} + 1) \times (2) - (2t) \times (4t)}{(2t^{2} + 1)^{2}}, \frac{(2t^{2} + 1) \times (4t) - (2t^{2}) \times (4t)}{(2t^{2} + 1)^{2}}, \frac{(2t^{2} + 1) \times (0) - (1) \times (4t)}{(2t^{2} + 1)^{2}} > \]
\[ T'(t) = \frac{1}{(2t^{2} + 1)^{2}} < 4t^{2} + 2 - 8t^{2}, 8t^{3} + 4t - 8t^{3}, -4t > \]
\[ T'(t) = \frac{1}{(2t^{2} + 1)^{2}} < 2 - 4t^{2}, 4t, -4t > \]
\[ T'(t) = < \frac{2 - 4t^{2}}{(2t^{2} + 1)^{2}}, \frac{4t}{(2t^{2} + 1)^{2}}, \frac{-4t}{(2t^{2} + 1)^{2}} > \]
Now, for the magnitude of the derivative of the tangent vector $T$:
\[ |T'(t)| = \frac{1}{(2t^{2} + 1)^{2}} \sqrt{(2 - 4t^{2})^{2} + (4t)^{2} + (-4t)^{2}} \]
\[ |T'(t)| = \frac{1}{(2t^{2} + 1)^{2}} \sqrt{4 - 16t^{2} + 16t^{4} + 16t^{2} + 16t^{2}} \]
\[ |T'(t)| = \frac{1}{(2t^{2} + 1)^{2}} \sqrt{4 +16t^{2} + 16t^{4}} \]
\[ |T'(t)| = \frac{1}{(2t^{2} + 1)^{2}} \sqrt{(2 + 4t^{2})^{2}} \]
\[ |T'(t)| = \frac{2 + 4t^{2}}{(2t^{2} + 1)^{2}} \]
\[ |T'(t)| = \frac {2( 2t^{2} + 1)}{(2t^{2} + 1)^{2}} \]
\[ |T'(t)| = \frac {2}{2t^{2} + 1} \]
Inserting $T'(t)$ and $|T'(t)|$ into the formula for the normal vector $N = \frac{T'(t)}{|T'(t)|}$:
\[ N = \frac{< 2 - 4t^{2}, 4t, -4t >}{(2t^{2} + 1)^{2}} \times \frac{(2t^{2} + 1)}{2} \]
\[ N = \frac{< 2 - 4t^{2}, 4t, -4t >}{2t^{2} + 1} \times \frac{1}{2} \]
\[ N = \frac{2 < 1 - 2t^{2}, 2t, -2t >}{2t^{2} + 1} \times \frac{1}{2} \]
\[ N = < \frac{1 - 2t^{2}}{2t^{2} + 1}, \frac{2t}{2t^{2} + 1}, \frac{-2t}{2t^{2} + 1} > \]
Normal vector $N$ at $t = -2$:
\[ N = < \frac{-7}{9}, \frac{-4}{9}, \frac{4}{9} > \]
Find the vector $B$ for the above question.
The binormal vector $B$ refers to the cross-product of vectors $T$ and $N$.
\[ B(-2) = T(-2) \times N(-2) \]
\[ B = \begin{vmatrix} i & j & k \\ \frac{-4}{9} & \frac{8}{9} & \frac{1}{9} \\ \frac{-7}{9} & \frac{-4}{9} & \frac{4}{9} \end{vmatrix} \]
\[ B = \left( \frac{32}{81} + \frac{4}{81} \right)i - \left( \frac{-16}{81} + \frac{7}{81} \right)j + \left( \frac{16}{81} + \frac{56}{81} \right)k \]
\[ B = < \frac{36}{81}, \frac{9}{81}, \frac{72}{81} >\]
\[ B = < \frac{4}{9}, \frac{1}{9}, \frac{8}{9} >\]
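The result can be cross-checked numerically (a sketch using NumPy; the component formulas are exactly those derived above, and the positive scalar prefactor of $T'(t)$ is dropped since normalization cancels it):

```python
import numpy as np

t = -2.0
rp = np.array([2*t, 2*t**2, 1.0])        # R'(t) = < 2t, 2t^2, 1 >
T = rp / np.linalg.norm(rp)              # unit tangent

tp = np.array([2 - 4*t**2, 4*t, -4*t])   # numerator of T'(t)
N = tp / np.linalg.norm(tp)              # unit normal

B = np.cross(T, N)                       # binormal B = T x N
print(T)  # [-4/9,  8/9, 1/9]
print(N)  # [-7/9, -4/9, 4/9]
print(B)  # [ 4/9,  1/9, 8/9]
```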
Latent class evaluation of three serological tests for the diagnosis of human brucellosis in Bangladesh
A. K. M. A. Rahman ORCID: orcid.org/0000-0001-9660-49491,2,3,
C. Saegerman2 &
D. Berkvens3
A Bayesian latent class evaluation was used to estimate the true prevalence of brucellosis in livestock farmers and patients with prolonged pyrexia (PP) and to validate three conditionally dependent serological tests: indirect ELISA (iELISA), Rose Bengal Test (RBT), and standard tube agglutination (STAT). A total of 335 sera from livestock farmers and 300 sera from PP patients were investigated.
The true prevalence of brucellosis in livestock farmers and PP patients was estimated to be 1.1 % (95 % credibility interval (CrI) 0.1–2.8) and 1.7 % (95 % CrI 0.2–4.1), respectively. Specificities of all tests investigated were higher than 97.8 % (95 % CrI 96.1–99.9). The sensitivities varied from 68.1 % (95 % CrI 54.5–80.7) to 80.6 % (95 % CrI 63.6–93.8). The negative predictive value of all three tests in both populations was very high, exceeding 99.5 % (95 % CrI 98.6–99.9). The positive predictive value (PPV) of the three tests varied from 27.9 % (95 % CrI 3.6–62.0) to 36.3 % (95 % CrI 5.6–70.5) in livestock farmers and from 39.8 % (95 % CrI 6.0–75.2) to 42.7 % (95 % CrI 6.4–83.2) in patients with PP. The highest PPVs were 36.3 % for iELISA in livestock farmers and 42.7 % for RBT in pyrexic patients.
In such a low-prevalence scenario, serology alone is of limited help for diagnosis and hence for therapeutic decision-making. Applying a second test with high specificity, and/or testing patients with a history of exposure to known risk factors, and/or testing patients with clinical signs and symptoms of brucellosis may increase the positive predictive value of the serologic tests.
Brucellosis is a bacterial zoonosis affecting both human and animal health [1]. It is an occupational hazard for livestock farmers, milkmen, butchers, hired animal caretakers, and veterinarians [2]. Fever, sweating, fatigue, headache, and joint pain are important non-specific symptoms of brucellosis in humans. Brucellosis in humans is often misdiagnosed because its unspecific clinical symptoms resemble those of other endemic pyrexic diseases such as tuberculosis, malaria, typhoid, or rheumatic fever. Several sero-prevalence studies from Bangladesh indicate that the apparent prevalence of brucellosis in risk groups varies from 4.4 to 12.8 % [3–5]. The Rose Bengal Test (RBT), standard tube agglutination (STAT), and ELISA, either alone or in combination, were used in those studies. None of these tests is perfect, and thus they cannot be used to estimate true prevalences. In the absence of a reasonable gold standard test, simultaneous estimation of true prevalence and test validation can be performed by applying multiple diagnostic tests to every individual within a Bayesian latent class analysis framework, which allows the combination of test results and external information [6–8]. When evaluating the results of multiple diagnostic tests, it is essential to consider whether or not the tests can be assumed conditionally independent of each other given the true disease status. Assuming conditional independence may lead to biased estimates of the test characteristics if the tests are in fact conditionally dependent [9, 10]. As indirect ELISA (iELISA), RBT, and STAT are based on the same biological phenomenon [11], i.e., detection of anti-Brucella smooth lipopolysaccharide (sLPS) antibodies, they should primarily be considered conditionally dependent [12]. To our knowledge, latent class analysis has not previously been used for the evaluation of multiple serological tests to diagnose human brucellosis.
The aims of this study were to estimate the true prevalence for brucellosis in two study groups and to evaluate three conditionally dependent serological tests using latent class analysis.
Study population, study area, and sampling strategy
Blood samples of livestock farmers were collected between September 2007 and August 2008 in Mymensingh district. Three hundred and thirty-five livestock owners or hired animal caretakers agreed to participate. The details of the livestock farmers included in this study were described in a previous paper [13]. In brief, out of the 146 unions (sub Upa-Zilla) of Mymensingh district, 28 were randomly selected. One geographical coordinate was randomly selected from each selected union and located with a hand-held GPS reader. Livestock farmers within a 0.5 km radius of the selected point were informed about the survey, and those who agreed were sampled.
Blood samples from prolonged pyrexia (PP) patients were taken randomly once a week at Mymensingh Medical College (MMC) hospital. These patients originated from Mymensingh and neighboring districts such as Netrakona, Jamalpur, Sherpur, and Tangail. Patients with PP were defined as having a body temperature higher than 38 °C for a 3-week period. Every day, approximately 100 patients visit the outpatient facility of the hospital. Patients who met the inclusion criterion were asked for a blood sample. In addition, hospitalized patients meeting the inclusion criterion on the same day were also asked for a sample. A total of 300 PP blood samples were collected from October 2007 to May 2008.
Collection and handling of blood samples
The collection and handling of blood samples were described in a previous paper by Rahman et al. [13]. In brief, about 4 mL of blood was collected with disposable needles and Venoject tubes, labeled, and transported to the laboratory on ice (after clotting) within 12 h of collection. The blood samples were kept in a refrigerator (2–8 °C), and 1 day later sera were separated by centrifuging at 6000g for 10 min.
Serological tests
All blood samples were tested in parallel by indirect IgG ELISA (iELISA), RBT, and STAT at the Medicine Department laboratory of Bangladesh Agricultural University, Mymensingh, Bangladesh. RBT was performed as described by Alton et al. [14]. The STAT was carried out on doubling dilutions of serum from 1:20 to 1:320 according to Alton et al. [14]. Brucella abortus and Brucella melitensis antigens (Cypress Diagnostics, Langdorpsesteenweg 160, B-3201, Belgium) were used according to the instructions of the manufacturer. Titres ≥1:160 were considered positive. The iELISA was used as described by Limet et al. [15] with B. abortus biotype 1 antigen (Strain Weybridge 99, A epitope). Six dilutions of positive control serum no. 1121 (1/270–1/8340, corresponding to 2–60 units) were used to generate a standard curve. The detailed procedure was described in a previous paper by Rahman et al. [13].
In order to determine the true prevalence, sensitivity, and specificity of the three tests for the two subpopulations, a Bayesian latent class analysis was performed using a multinomial model based on conditional probabilities [16]. The full model assuming conditional dependence is overparameterized; it thus requires external (prior) information on prevalence and test characteristics (sensitivity and specificity). Prior information on prevalence [3–5] and on the sensitivity and specificity of the iELISA [17] was extracted from published reports, and three other conditional probabilities adapted by experts of the Department of Infectious and Parasitic Diseases, Faculty of Veterinary Medicine, University of Liège, Belgium (Table 1) were included. Beta prior distributions were used in the Bayesian model. The analysis was conducted in WinBUGS 1.4 [18] and R 3.2.2 (R Foundation and Statistical Computing 2015). The model was run with a burn-in of 50,000 iterations, and estimates were based on a further 50,000 iterations and three chains. The posterior predictive P value, the Deviance Information Criterion (DIC), and the number of parameters effectively estimated by the model (pD) were used to assess the fit between the prior information and the test results [16]. The WinBUGS code for conditional dependence of three tests in a two-population Bayesian model is shown in Additional file 1: Appendix A.
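For intuition, the multinomial cell probabilities have a simple closed form in the conditionally independent case; the model actually fitted here adds dependence terms between iELISA, RBT, and STAT, so the following Python sketch (with illustrative parameter values) shows only the skeleton of the likelihood:

```python
from itertools import product

def cell_probs(prev, se, sp):
    """P(test pattern) for k conditionally independent tests, given
    the true prevalence and per-test sensitivity/specificity."""
    probs = {}
    for pattern in product([1, 0], repeat=len(se)):
        p_dis, p_free = prev, 1.0 - prev
        for pos, se_i, sp_i in zip(pattern, se, sp):
            p_dis *= se_i if pos else (1.0 - se_i)
            p_free *= (1.0 - sp_i) if pos else sp_i
        probs[pattern] = p_dis + p_free  # marginalize the latent status
    return probs

# Values in the neighborhood of the posterior estimates (illustrative only):
p = cell_probs(prev=0.017, se=[0.70, 0.79, 0.81], sp=[0.99, 0.98, 0.98])
print(sum(p.values()))  # 1.0 (sanity check)
print(p[(0, 0, 0)])     # expected fraction testing negative on all three
```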
Table 1 Prior information for the Bayesian latent class evaluation of three serological tests for the diagnosis of brucellosis in livestock farmers and prolonged pyrexia patients (Beta distribution)
The positive predictive value (PPV) and negative predictive value (NPV) of the tests were calculated within the Bayesian model using Eqs. 1 and 2, where Se, Sp, and Pr denote sensitivity, specificity, and prevalence:
$$ \mathrm{PPV} = \frac{\mathrm{Se} \times \mathrm{Pr}}{\mathrm{Se} \times \mathrm{Pr} + (1 - \mathrm{Sp}) \times (1 - \mathrm{Pr})} \quad (1) $$
$$ \mathrm{NPV} = \frac{\mathrm{Sp} \times (1 - \mathrm{Pr})}{(1 - \mathrm{Se}) \times \mathrm{Pr} + \mathrm{Sp} \times (1 - \mathrm{Pr})} \quad (2) $$
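Outside the MCMC machinery, Eqs. 1 and 2 are direct to evaluate; a minimal sketch (the numbers are illustrative, close to the RBT point estimates in PP patients):

```python
def ppv(se, sp, prev):
    """Positive predictive value, Eq. 1."""
    return se * prev / (se * prev + (1.0 - sp) * (1.0 - prev))

def npv(se, sp, prev):
    """Negative predictive value, Eq. 2."""
    return sp * (1.0 - prev) / ((1.0 - se) * prev + sp * (1.0 - prev))

print(round(ppv(0.792, 0.982, 0.017), 3))  # ~0.43: low, because prevalence is low
print(round(npv(0.792, 0.982, 0.017), 3))  # ~0.996: in line with the reported NPVs
```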
Sensitivity analyses
The influence of the prior information on the estimates of the characteristics of the diagnostic tests was verified using sensitivity analysis. This was done by using uniform priors and slight perturbations (in steps of 10 or 15 %) of the prior intervals [18].
Cross-classified results of the three serological tests are shown in Table 2. Of the 335 livestock farmers, only 0.6 % were positive and 97.3 % were negative on all three tests; based on a parallel interpretation (positive on at least one test), the apparent prevalence of brucellosis among livestock farmers was 2.7 %. Of the 300 PP patients, only 2.0 % were positive and 97.3 % were negative on all three tests; the apparent prevalence among PP patients, again based on a parallel interpretation, was also 2.7 %. Overall, 32.8 % (110/335) of the livestock farmers and 33.7 % (101/300) of the PP patients were female. All livestock farmers had contact with cattle (66.0 %), goats (17.3 %), or both (16.7 %). Among PP patients, only 27 % (81/300) had contact with cattle and/or goats.
Table 2 Cross-classified test results of three serological tests applied on livestock farmers and prolonged pyrexia patients in Bangladesh
Posterior estimates
The true prevalence of brucellosis among livestock farmers and PP patients is presented in Table 3. The true prevalence of brucellosis in livestock farmers and PP patients was estimated at 1.1 % (95 % CrI 0.1–2.8) and 1.7 % (95 % CrI 0.2–4.1), respectively. The performance of all three tests was similar in both populations. In both groups, the specificity of all tests was greater than 97.8 % (95 % CrI 96.2–99.9). The sensitivity of iELISA, RBT, and STAT varied from 68.1 % (95 % CrI 54.5–80.7) to 69.6 % (95 % CrI 56.0–81.6), from 79.4 % (95 % CrI 59.5–95.0) to 79.2 % (95 % CrI 60.3–94.8), and from 80.5 % (95 % CrI 63.1–93.8) to 80.6 % (95 % CrI 63.6–93.8) in livestock farmers and PP patients, respectively.
Table 3 Estimates of true prevalence, sensitivity, specificity of three serological tests used for the diagnosis of brucellosis in livestock farmers and PP patients in Bangladesh
Positive and negative predictive values
The PPV and NPV of the three serological tests are shown in Table 4. The PPV of the three tests varied from 27.9 % (95 % CrI 3.6–62.0) to 36.3 % (95 % CrI 5.6–70.5) in livestock farmers and from 39.8 % (95 % CrI 6.0–75.2) to 42.7 % (95 % CrI 6.4–83.2) in PP patients. The NPVs of all three tests were very high, exceeding 99.5 %.
Table 4 The positive and negative predictive values of three serological tests
Results of sensitivity analyses
The true prevalences and specificities of all three tests obtained from the different models of the sensitivity analyses were similar, whereas the estimated sensitivities varied in two models and yielded wider credibility intervals. However, as the 95 % credibility intervals overlap, the observed differences were not statistically important (data not shown).
Data on the true prevalence of brucellosis and on the characteristics of three serological tests in livestock farmers and PP patients from Bangladesh are provided.
A Bayesian latent class evaluation was used to estimate the true prevalence of brucellosis in livestock farmers and PP patients and, at the same time, to evaluate three conditionally dependent serological tests. Bangladesh should be considered endemic for brucellosis, but with a very low prevalence in animals and humans [19]. In areas of low endemicity, the risk of human infection originates either from consumption of non-pasteurized dairy products or from occupational exposure, threatening veterinarians, abattoir workers, farmers, and laboratory personnel. In this study, it was possible to estimate the true prevalence for livestock farmers. Sample sizes for other occupational groups were too small to do so, and the method of collection was also non-random; this is a limitation of this study. However, livestock farmers are a promising study group, as almost 85 % of rural households own animals and 75 % of the population rely to some extent on livestock for their livelihood [20, 21]. The true prevalence for this group was estimated to be 1.1 %. Brucellosis is a pyrexic disease. As such, it was also of interest to investigate PP patients, on the assumption that brucellosis may regularly be ignored or misdiagnosed. If so, the number of pyrexic patients infected with brucellosis is valuable information not only for family physicians but also for policy makers. We focused on PP patients because these patients take antipyretic drugs and antibiotics inappropriate for brucellosis, and see doctors only if recovery does not occur. Among PP patients, 1.7 % were found to be positive for brucellosis, which supports our assumption that brucellosis is ignored or misdiagnosed by physicians in Bangladesh.
Both in livestock farmers and in PP patients, the performance of the three serological tests was similar. RBT needs neither sophisticated infrastructure nor extensive training; it is remarkably cheap and fast. For the Bangladeshi setting, RBT is therefore the test of choice. For some endemic countries, authors have reported specificity problems with the RBT [22, 23]. To overcome this specificity problem, Diaz et al. [24] recommended a modified protocol, i.e., predilution of serum >1:4. Interestingly, we found almost the same performance for the RBT as described by Diaz et al. [24], but without any modification. If the prevalence of a disease is very low, as it is in Bangladesh, the tests will have lower positive and higher negative predictive values [25]. We also observed low positive predictive values for the serologic tests. The highest positive predictive value of RBT in PP patients was 42.7 %, indicating that only 42.7 % of test-positive patients truly have the disease, the remainder being falsely positive. The positive predictive value may be increased by applying a second test with high specificity and/or by testing patients with a history of exposure to known risk factors, such as contact with animals or consumption of raw milk, and/or with symptoms such as pyrexia, arthralgia, or backache.
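The effect of a confirmatory second test can be sketched by feeding the PPV of the first test back in as the pre-test probability (reusing the ppv helper above, with a hypothetical confirmatory test; note that this simple chaining assumes the two tests are conditionally independent, which, as discussed, does not strictly hold for sLPS-based assays):

```python
first = ppv(0.792, 0.982, 0.017)   # RBT alone in PP patients: ~0.43
second = ppv(0.70, 0.99, first)    # confirm RBT-positives with a more specific test
print(round(first, 2), round(second, 2))  # 0.43 -> ~0.98
```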
Anti-Brucella antibodies, especially IgG, can persist for a long period of time, i.e., several months, even after recovery from disease [26]. For that reason, the presence of anti-Brucella antibodies alone cannot reflect the true disease status, as described above. Thus, in a sero-positive patient the diagnosis should be confirmed by the presence of at least one of the clinical signs and symptoms suggestive of brucellosis, such as pyrexia, arthralgia, headache, backache, hepatomegaly, or splenomegaly [23, 27]. A more specific test, such as genus- or species-specific real-time PCR, may also be performed [28] to avoid unjustified costs, drug toxicity, and the masking of other potentially dangerous diseases like tuberculosis, which is also endemic in Bangladesh.
For a quantitative test, the sensitivity or specificity depends largely on the chosen cut-off value and on other factors such as endemicity, status and duration of infection, persistence of antibody titres after treatment, and presence of cross-reacting pathogens [25]. The cut-off value of the iELISA (≥20 U/ml) used in our study seems appropriate to avoid false positives, as its specificity was very high, ranging from 99.3 to 99.6 %. WHO and OIE provide guidelines for STAT and RBT standardization, but not for iELISAs. Consequently, the iELISA kits provided by different companies are not standardized, and it is difficult to compare the results of different studies due to the different cut-offs used. In general, a "new" cut-off should be determined under local conditions to avoid false positives.
Like many other authors, we considered a STAT titre of 1:160 as positive [17, 22]. As mentioned earlier, in regions where brucellosis is endemic, a large proportion of the population may have persistent Brucella-specific antibody titres. In this scenario, some authors recommend using STAT titres of 1:320 or higher to avoid false positives [28, 29]. However, in our study a STAT titre of 1:160 seems appropriate, as this titre resulted in a specificity ranging from 98.2 to 98.8 %, indicating a good fit for our setting.
The Bayesian latent class evaluation of diagnostic tests requires assessing, through a sensitivity analysis, how variations in the prior information affect the estimated parameters [30]. Our sensitivity analysis indicated that the use of diffuse priors had no relevant influence on the estimated prevalences, test sensitivities, and specificities.
Based on the performance of the three serological tests validated in a setting where the prevalence of brucellosis is low in humans and animals, no single test can be recommended for routine diagnosis of human brucellosis in Bangladesh. Applying a second test with high specificity, and/or testing patients with a history of exposure to known risk factors, and/or testing patients with clinical signs and symptoms of brucellosis may increase the positive predictive value of the serologic tests.
CrI: Credibility interval
DIC: Deviance Information Criterion
iELISA: Indirect ELISA
MMC: Mymensingh Medical College
PP: Prolonged pyrexia
RBT: Rose Bengal Test
sLPS: Anti-Brucella smooth lipopolysaccharide
STAT: Standard tube agglutination test
Ariza J, Bosilkovski M, Cascio A, Colmenero JD, Corbel MJ, Falagas ME, Memish ZA, Roushan MRH, Rubinstein E, Sipsas NV, Solera J. Perspectives for the treatment of brucellosis in the 21st century: the Ioannina recommendations. PLoS Med. 2007;4:e317.
World Health Organization. The control of neglected zoonotic diseases. Geneva: Report of a joint WHO/DFID-AHP, pp 54. 2005. http://www.who.int/zoonoses/Report_Sept06.pdf. Accessed 28 Sept 2015.
Rahman MM, Chowdhury TIMFR, Rahman A, Haque F. Seroprevalence of human and animal brucellosis in Bangladesh. Indian Vet J. 1983;60:165–8.
Rahman MM, Haque M, Rahman MA. Seroprevalence of caprine and human brucellosis in some selected areas of Bangladesh. Bangladesh Vet J. 1988;22:85–92.
Muhammad N, Hossain MA, Musa AK, Mahmud MC, Paul SK, Rahman MA, Haque N, Islam MT, Parvin US, Khan SI, Nasreen SA. Seroprevalence of human brucellosis among the population at risk in rural area. Mymensingh Med J. 2010;19:1–4.
Adel A, Saegerman C, Speybroeck N, Praet N, Victor B, De Deken R, Soukehal A, Berkvens D. Canine leishmaniasis in Algeria: true prevalence and diagnostic test characteristics in groups of dogs of different functional type. Vet Parasitol. 2010;172:204–13.
Muñoz PM, Blasco JM, Engel B, de Miguel MJ, Marín CM, Dieste L, Mainar-Jaime RC. Assessment of performance of selected serological tests for diagnosing brucellosis in pigs. Vet Immunol Immunopathol. 2012;146:150–8.
Praud A, Gimenez O, Zanella G, Dufour B, Pozzi N, Antras V, Meyer L, Garin-Bastuji B. Estimation of sensitivity and specificity of five serological tests for the diagnosis of porcine brucellosis. Prev Vet Med. 2012;104:94–100.
Vacek PM. The effect of conditional dependence on the evaluation of diagnostic tests. Biometrics. 1985;41:959–68.
Gardner IA, Stryhn H, Lind P, Collins MT. Conditional dependence between tests affects the diagnosis and surveillance of animal diseases. Prev Vet Med. 2000;45:107–22.
Nielsen K. Diagnosis of brucellosis by serology. Vet Microbiol. 2002;90:447–59.
Dendukuri N, Joseph L. Bayesian approaches to modeling the conditional dependence between multiple diagnostic tests. Biometrics. 2001;57:158–67.
Rahman AKMA, Saegerman C, Berkvens D, Fretin D, Gani MO, Ershaduzzaman M, Ahmed MU, Emmanuel A. Bayesian estimation of true prevalence, sensitivity and specificity of indirect ELISA, Rose Bengal test and slow agglutination test for the diagnosis of brucellosis in sheep and goats in Bangladesh. Prev Vet Med. 2013;110:242–52.
Alton GG, Jones LM, Angus RD, Verger JM. Techniques for the brucellosis laboratory. Paris, France: Institut National de la Recherche Agronomique (INRA); 1988. p. 112–89.
Limet JN, Kerkhofs P, Wijffels R, Dekeyser P. Le diagnostic serologique de la brucellose bovine par ELISA. Ann De Med Vet. 1988;132:565–75.
Berkvens D, Speybroeck N, Praet N, Adel A, Lesaffre E. Estimating disease prevalence in a Bayesian framework using probabilistic constraints. Epidemiology. 2006;17:145–53.
Gómez MC, Nieto JA, Rosa C, Geijo P, Escribano MA, Muñoz A, López C. Evaluation of seven tests for diagnosis of human brucellosis in an area where the disease is endemic. Clin Vaccine Immunol. 2008;15:1031–3.
Haley C, Wagner B, Puvanendiran S, Abrahante J, Murtaugh MP. Diagnostic performance measures of ELISA and quantitative PCR tests for porcine circovirus type 2 exposure using Bayesian latent class analysis. Prev Vet Med. 2011;101:79–88.
Islam MA, Khatun MM, Werre SR, Sriranganathan N, Boyle SM. A review of Brucella seroprevalence among humans and animals in Bangladesh with special emphasis on epidemiology, risk factors and control opportunities. Vet Microbiol. 2013;166(3):317–26.
Anon. Livestock sector brief of Bangladesh. Food and agriculture organization of the United Nations, 2005. (http://www.fao.org/ag/againfo/resources/en/publications/sector_briefs/lsb_BGD.pdf). Accessed 20 October 2015.
BBS. The Bangladesh Census of Agriculture (rural) 1996, Structure of agricultural holdings and livestock population, vol. 1. Dhaka: Bangladesh bureau of statistics; 2004.
Konstantinidis A, Minas A, Pournaras S, Kansouzidou A, Papastergiou P, Maniatis A, Stathakis N, Hadjichristodoulou C. Evaluation and comparison of fluorescence polarization assay with three of the currently used serological tests in diagnosis of human brucellosis. Eur J Clin Microbiol Infect Dis. 2007;26:715–21.
Ruiz‐Mesa JD, Sánchez‐Gonzalez J, Reguera JM, Martin L, Lopez‐Palmero S, Colmenero JD. Rose Bengal test: diagnostic yield and use for the rapid diagnosis of human brucellosis in emergency departments in endemic areas. Clin Microbiol Infect. 2005;11:221–5.
Díaz R, Casanova A, Ariza J, Moriyon I. The Rose Bengal test in human brucellosis: a neglected test for the diagnosis of a neglected disease. PLoS Negl Trop Dis. 2011;5:e950.
Greiner M, Gardner IA. Epidemiological issues in the validation of veterinary diagnostic tests. Prev Vet Med. 2000;45:3–22.
Godfroid J, Nielsen K, Saegerman C. Diagnosis of brucellosis in livestock and wildlife. Croat Med J. 2010;51:296–305.
Rahman AKMA, Dirk B, Fretin D, Saegerman C, Ahmed MU, Muhammad N, Hossain A, Abatih E. Seroprevalence and risk factors for brucellosis in a high-risk group of individuals in Bangladesh. Foodborne Pathog Dis. 2012;9:190–7.
Kiel FW, Khan MY. Analysis of 506 consecutive positive serologic tests for brucellosis in Saudi Arabia. J Clin Microbiol. 1987;25:1384–7.
Memish ZA, Almuneef M, Mah MW, Qassem LA, Osoba AO. Comparison of the Brucella standard agglutination test with the ELISA IgG and IgM in patients with Brucella bacteremia. Diagn Microbiol Infect Dis. 2002;44:129–32.
Navarro E, Segura JC, Castaño MJ, Solera J. Use of real-time quantitative polymerase chain reaction to monitor the evolution of Brucella melitensis DNA load during therapy and post-therapy follow-up in patients with brucellosis. Clin Infect Dis. 2006;42:1266–73.
The authors are grateful to Professor Dr. Akram Hossain and Dr. Noor Muhammad, Department of Microbiology, Mymensingh Medical College, Bangladesh, for their help in sampling and treatment of cases.
This study was supported by the Belgian Directorate General for Development Cooperation. The funding body had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
Data and codes for statistical analysis are available in Additional file 1: Appendix A. Raw data is also available on request.
AKMAR and CS conceived of the study and participated in its design. AKMAR carried out serological tests. AKMAR and DB performed statistical analysis. AKMAR, CS, and DB drafted the manuscript. All authors read and approved the final manuscript.
Informed written consent was also taken from all participants to report their demographic information in publications.
The study protocol was peer reviewed and cleared for ethics by the Ethical Review Committee of Mymensingh Medical College (MMC). Informed written consent was taken from all individuals prior to blood sample collection.
Department of Medicine, Bangladesh Agricultural University, Mymensingh, 2200, Bangladesh
A. K. M. A. Rahman
Research Unit of Epidemiology and Risk Analysis applied to the Veterinary Sciences (UREAR-ULg), Department of Infectious and Parasitic Diseases, Faculty of Veterinary Medicine, University of Liège, Liège, Belgium
A. K. M. A. Rahman & C. Saegerman
Department of Biomedical Sciences, Institute of Tropical Medicine, Antwerpen, Belgium
A. K. M. A. Rahman & D. Berkvens
C. Saegerman
D. Berkvens
Correspondence to A. K. M. A. Rahman.
Appendix A. (DOCX 23 kb)
Rahman, A.K.M.A., Saegerman, C. & Berkvens, D. Latent class evaluation of three serological tests for the diagnosis of human brucellosis in Bangladesh. Trop Med Health 44, 32 (2016). https://doi.org/10.1186/s41182-016-0031-8
Latent Class Analysis
Home page of
Gregory W. Moore
Directory information
High Energy Theory
[email protected]
Dept. Fax:
Serin E362
NHETC
136 Frelinghuysen Road
Piscataway, NJ 08854-8019 USA
Information for Physics 618, Spring 2019.
Information for Physics 695, Fall 2015.
Applied Group Theory, Physics 618, Spring 2013.
Advanced Topics in Mathematical Physics, Fall 2010. Lectures can be viewed online here
My work focuses on mathematical physics, with an emphasis on string theory, M-theory, and gauge theories more generally.
My work places particular emphasis on the underlying mathematical structures and applications to and from modern mathematics.
Specific research interests include:
The theory of branes and generalized abelian gauge theories in supergravity.
This involves interesting topological issues related to generalized differential cohomology theories, especially K-theory.
There are also interesting relations to the theory of self-dual fields, anomaly cancellation, and noncommutative geometry.
Effective low energy supergravity theories in string compactification and the computation of nonperturbative stringy effects in effective supergravities.
D-branes on Calabi-Yau manifolds and BPS state counting. Relations to Borcherds products, automorphic forms, black-hole entropy, and wall-crossing.
Applications of the theory of automorphic forms to conformal field theory, string compactification, black hole entropy counting, and the AdS/CFT correspondence.
Potential connections to number theory. For example, I pointed out in 1998 that the attractor mechanism of supersymmetric black holes singles out Calabi-Yau varieties with relations to complex multiplication.
Conformal field theories. Rational conformal field theories, especially applications to the theory of anyons and nonabelions.
Topological field theories, and applications to invariants of manifolds.
String field theory.
String cosmology and time-dependence in string theory
Does the alleged ``landscape of N=1 effective four-dimensional string vacua'' really exist?
Expository, Historical, and Philosophical
``What is a brane?'' . Published in: Notices of the American Mathematical Society, Vol. 52, No. 2, pp.214-215.
``How many black holes fit on the head of a pin?'', coauthored with Frederik Denef. Published in: Int.J.Mod.Phys. D17 (2008) 679-684, Gen.Rel.Grav. 39 (2007) 1539-1544 .
``The Impact of D-branes on Mathematics''. I wrote this trying to collect my thoughts before appearing on a panel at JOEFEST at the KITP, February 2014 .
``Response to the 2014 Eisenbud Prize''. I wrote this since the AMS requested a response.
``Some Comments on Physical Mathematics''. I wrote this to try to collect my thoughts, preparing a talk for the 2014 Heineman Prize. It is an expansion on the previous essay. The talk is here .
``Physical Mathematics and the Future''. This is the essay which is meant to accompany my ``vision talk'' in the summary section of Strings 2014 in Princeton. It is a further expansion of the previous essay (without the review of N=2 supersymmetric theories). The video of the accompanying talk is available from the conference website here and also from YouTube here .
(Almost all publications after August 1991 are available on the e-print arXiv .)
Anomalous Inequivalence of Phenomenological Theories, (with A. Manohar), Nucl. Phys. B243 (1984) 55
Anomalies and Odd Dimensions, (with L. Alvarez-Gaumé and S. Della Pietra), Ann. Phys. (1985) 288
Anomalies in Nonlinear Sigma Models, (with P. Nelson), Phys. Rev. Lett. 53 (1984) 1519
Constraints on a Two-Higgs Interpretation of the $\zeta(8.3)$, (with H. Georgi and A. Manohar), Phys. Lett. 149B (1984) 234
The Aetiology of Sigma Model Anomalies, (with P. Nelson), Comm. Math. Phys. 100 (1985) 83
A Comment on Sigma Model Anomalies, (with A. Manohar and P. Nelson), Phys. Lett. 152B (1985) 68
Measure for Moduli, (with P. Nelson), Nucl. Phys. B266 (1986) 58
An Off-Shell Propagator for String Theory, (with A. Cohen, P. Nelson, and J. Polchinski), Nucl. Phys. B267 (1986) 143
An Invariant String Propagator, (with A. Cohen, P. Nelson, and J. Polchinski), in Unified String Theories, D. Gross and M. Green, eds., World Scientific (1986)
Strings and supermoduli, (with P. Nelson and J. Polchinski), Phys. Lett. B169 (1986) 47; 201B (1988) 579(E)
Heterotic geometry, (with P. Nelson), Nucl. Phys. B274 (1986) 509
An $O(16)\times O(16)$ Heterotic String, (with L. Alvarez-Gaumé, P. Ginsparg, and C. Vafa), Phys. Lett. 171B (1986) 155
Theta Functions, Modular Invariance, and Strings, (with L. Alvarez-Gaumé and C. Vafa), Comm. Math. Phys. 106 (1986) 1
Semi-Off-Shell String Amplitudes, (with A. Cohen, P. Nelson, and J. Polchinski), Nucl. Phys. B281 (1986) 127
Modular forms and Two-Loop String Physics, Phys. Lett.
Bosonization in Arbitrary Genus, (with L. Alvarez-Gaumé, J. B. Bost, P. Nelson, and C. Vafa), Phys. Lett. 178B (1986) 41
Bosonization on higher genus Riemann surfaces, (with L. Alvarez-Gaumé, J. B. Bost, P. Nelson, and C. Vafa), Commun. Math. Phys. 112 (1987) 503
Modular forms and the cosmological constant, (with J. Harris, P. Nelson, and I. Singer), Phys. Lett. 178B (1986) 167; 201B (1988) 579(E)
Modular Forms and Multiloop String Physics, in Proceedings of the VIIIth International Congress on Mathematical Physics at Marseille, M. Mebkhout, ed., World Scientific, 1987, pp. 776-783
Atkin-Lehner Symmetry, Nucl. Phys. B293 (1987) 139
Vanishing Vacuum Energies for Nonsupersymmetric Strings, in Nonperturbative Quantum Field Theory (Cargese lectures), G. Mack, G. 't Hooft, A. Jaffe, P. Mitter, and R. Stora, eds., Plenum
Quasicrystalline Compactification, (with J. Harvey and C. Vafa), Nucl. Phys. B304 (1988) 269
Some Remarks on Two-Loop Superstring Calculations, (with A. Morozov), Nucl. Phys. B306 (1988) 387
Some global issues in string perturbation theory, (with J. J. Atick and A. Sen), Nucl. Phys. B308 (1988) 1-101
Rationality in Conformal Field Theory, (with G. Anderson), Comm. Math. Phys. 117 (1988) 441
Catoptric tadpoles, (with J. J. Atick and A. Sen), Nucl. Phys. B307 (1988) 221
Strings in the Operator Formalism, (with L. Alvarez-Gaumé, C. Gomez, and C. Vafa), Nucl. Phys. 303 (1988) 455
Polynomial Equations for Rational Conformal Field Theories, (with N. Seiberg), Phys. Lett. 212B (1988) 451
Naturality in Conformal Field Theory, (with N. Seiberg), Nucl. Phys. B313 (1989) 16
Classical and Quantum Conformal Field Theory, (with N. Seiberg), Commun. Math. Phys. 123 (1989) 177
Taming the Conformal Zoo, (with N. Seiberg), Phys. Lett. 220B (1989) 422
A Comment on Quantum Group Symmetry in Conformal Field Theory, (with N. Reshetikhin), Nucl. Phys. B328 (1989) 557
Remarks on the Canonical Quantization of the Chern-Simons-Witten Theory, (with S. Elitzur, A. Schwimmer, and N. Seiberg), Nucl. Phys. B326 (1989) 108
Rational Conformal Field Theory and Group Theory, (with N. Seiberg), in the proceedings of the Schloss Ringberg conference, April 1989
Lectures on Rational Conformal Field Theory, (with N. Seiberg), in Strings '89, Proceedings of the Trieste Spring School on Superstrings, 3-14 April 1989, M. Green et al., eds., World Scientific, 1990
The Ising model, the Yang-Lee edge singularity, and 2D quantum gravity, (with Č. Crnković and P. Ginsparg)
Nonabelions in the fractional quantum hall effect, (with N. Read), Nucl. Phys. 360B (1991) 362
Geometry of the String Equations, Commun. Math. Phys. 133 (1990) 261-304
Physical Solutions for Unitary-Matrix Models, (with M. Douglas and C. Crnković), Nucl. Phys. B360 (1991) 507
Simplex Equations and Their Solutions, (with I. Frenkel), Commun. Math. Phys. 138 (1991) 259
Matrix Models of 2D Gravity and Isomonodromic Deformation, in Common Trends in Mathematics and Quantum Field Theories, T. Eguchi et al., eds., Prog. Theor. Phys. Suppl. 102; also, in slightly improved form, in the Proceedings of the 1990 Cargèse Workshop on Random Surfaces, Quantum Gravity and Strings
Multicritical Multi-Cut Matrix Models, (with Č. Crnković), Phys. Lett. B257 (1991) 322
Double-Scaled Field Theory at c=1, Nucl. Phys. B368:557-590, 1992
Boundary Operators in 2D Gravity, (with E. Martinec and N. Seiberg), Phys. Lett. B263 (1991) 190
From Loops to States in 2D Quantum Gravity, (with N. Seiberg and M. Staudacher), Nucl. Phys. B362 (1991) 665
Loop Equations and the Topological Phase of Multi-Cut Matrix Models, (with Č. Crnković and M. Douglas), hep-th/9108014; Int. J. Mod. Phys. A7:7693-7711, 1992
From Loops to Fields in 2D Quantum Gravity, (with N. Seiberg), Int. Jour. Mod. Phys. 7A (1992) 2601
Exact S-Matrix for 2D String Theory, (with R. Plesser and S. Ramgoolam), hep-th/9111035; Nucl. Phys. B377:143-190, 1992
Gravitational Phase Transitions and the Sine-Gordon Model
Classical Scattering in 1+1 Dimensional String Theory, (with R. Plesser), hep-th/9203060; Phys. Rev. D46:1730-1736, 1992
Fractional quantum Hall effect and nonabelian statistics, (with N. Read), hep-th/9111035; to appear in Proceedings of 4th Yukawa International Symposium
The partition function of 2d string theory, (with R. Dijkgraaf and R. Plesser), hep-th/9208031; Nucl. Phys. B394:356-382, 1993
Lectures on 2D Gravity and 2D String Theory, (with P. Ginsparg), hep-th/9304011; in Recent Directions in Particle Theory, J. Harvey and J. Polchinski, eds., World Scientific, 1993
Finite in All Directions, hep-th/9305139
Symmetries and Symmetry-Breaking in String Theory, hep-th/9308052; in International Workshop on Supersymmetry and Unification of Fundamental Forces, P. Nath, ed., World Scientific, 1993
Symmetries of the bosonic string $S$-matrix, hep-th/9310026
Large N 2D Yang-Mills Theory and Topological String Theory, http://arxiv.org/abs/hep-th/9402107; Commun. Math. Phys. 185:543-619, 1997
Chaotic Coupling Constants, (with J. Horne), Nucl. Phys. B432:109-126, 1994
Addendum to: ``Symmetries of the Bosonic String S-Matrix,'' http://arxiv.org/abs/hep-th/9404025; Nucl. Phys. B432:109-126, 1994
Some Talks
A talk entitled ``Life After RCFT,'' delivered at the Soviet-American Workshop On String Theory, Princeton University, Oct. 30 - Nov. 2, 1989. This was perhaps the second-most disastrous talk of my career (so far). It included an announcement of my work with Nick Read, introducing the ``Pfaffian state'' of the fractional quantum Hall effect and suggesting that that state would support nonabelions. The slides can be viewed here
Lectures at the Les Houches Meeting on Number Theory, Geometry, and Physics, Les Houches, March 9-21, 2003.
Localized tachyon flows and Hirzebruch-Jung spaces, at the conference ``Avant Strings,'' Institut des Hautes Etudes Scientifiques, Paris, June 15, 2004.
On the quantization of Page charges, at the conference ``Avant Strings,'' Institut des Hautes Etudes Scientifiques, Paris, June 23, 2004.
Anomalies, Gauss Laws, and Page charges, at the conference ``Strings 2004,'' Paris, July 1, 2004.
Testing some Black Hole/Topological String conjectures, Institute for Advanced Study, April 11, 2005.
Lie Algebras, BPS States, and String Duality, Summer workshop on Groups and Algebras in M-theory, Rutgers Math Department, May 31, 2005.
Black holes and Arithmetic, talk delivered at the Mathematische Arbeitstagung, Max Planck Institute, Bonn, Germany, 11 June 2005.
Remarks on the Hamiltonian Formulation of Some Generalized Abelian Gauge Theories, talk delivered at the workshop Geometric Topology and Connections with Quantum Field Theory, Oberwolfach, Germany, June 13, 2005.
Noncommutativity of Electric and Magnetic Fluxes, Amsterdam Workshop on String Theory, June 22, 2005.
Electric and Magnetic Flux Do Not Commute, Aspen Workshop in Superstring Cosmology, Aspen, CO, August 9, 2005.
Mathematics Aspects of Fluxes, KITP Workshop on Mathematics of String Theory, Cosmology, Santa Barbara, October 11, 2005.
An uncertainty principle for fluxes, with applications to self-dual fields, CUNY, June 28, 2006 (26th International Colloquium on Group Theoretical Methods in Physics)
Split Polar Attractors , Harvard, September 29, 2006
Wall-crossing and an entropy enigma , Strings 2007, Madrid, June 28,2007. For question period see this link.
An Uncertainty Principle for Topological Sectors , Jürg Fröhlich Birthday Celebration, Zurich, July 2, 2007. If the powerpoint does not work, a pdf version can be obtained here
Wall-crossing formulae for BPS states and some applications , Clay Mathematics Institute workshop on K3's and modular forms, Cambridge, Mass, March 20, 2008
Four lectures on modular forms and black hole entropy , Trieste Spring School on Superstrings, Trieste, March 25-April 4, 2008. Some tex lecture notes are here . PDF's of the handwritten notes, actually used in the lectures are Lecture I , Lecture II , Lecture III , and Lecture IV .
Mathematical Foundations of Orientifolds , UCSB, July 18, 2008
Four-dimensional wall-crossing from three-dimensional field theory , KITP Miniprogram on Langlands Duality, July 31, 2008.
Wall-crossing formulae for BPS states and some applications , Zurich Prestrings, Zurich, August 11, 2008
Four-dimensional wall-crossing from three-dimensional field theory , Zurich Prestrings, Zurich, August 12, 2008
Mathematical Foundations of Orientifolds , Zurich Prestrings, Zurich, August 11, 2008
Update on Wall-Crossing , Strings 2008, CERN, August 22, 2008. Video of the talk available here.
Improved version of the Prestrings/KITP talk, Four-dimensional wall-crossing from three-dimensional field theory, 3rd New England String Meeting, Brown University, Oct. 24, 2008
Extremal N=2, 2D CFT and Constraints of Modularity Institute for Advanced Study, Nov. 5, 2008
Overview of the theory of Self-dual fields AMS Meeting, Washington DC, Jan. 8, 2009. A pdf version is available here
Four-dimensional wall-crossing from three-dimensional field theory Simons Center Workshop, StonyBrook, Jan. 13, 2009
Orientifolds, Twisted Cohomology, and Self-Duality A talk at the Conference on Perspectives in Mathematics and Physics in honor of I.M. Singer's 85th birthday, MIT, May 23, 2009. For further material see the talk by Dan Freed and the talk by Jacques Distler
BPS States, Hitchin Systems, and the WKB Approximation Workshop on Mirror Symmetry, Hausdorff Center for Mathematics, Bonn, June 3, 2009
Here is a minicourse (half of which) was given at the Galileo Galilei Institute and at Frascati, in June 2009. Lecture 1 covers basics of the BPS wall-crossing phenomenon. Lecture 2 reviews the work with D. Gaiotto and A. Neitzke on the relation of the KSWCF to hyperkahler geometry. Lecture 3 and Lecture 4 review the paper on the relation to Hitchin Systems and the WKB approximation.
The RR charge of an orientifold, Michigan Conference on Topology and Physics, Ann Arbor, Feb. 7, 2010 and Oberwolfach workshop, June 8,2010
Say ``halo!'' to new walls and new indices, Strings2010, Texas A&M, March 15, 2010. Video of the talk in Monday, Session 3 is available here
Line Operators in N=2 Gauge Theories, Simons Center Workshop on Quantum Teichmuller Theory, SUNY, Stonybrook, March 30, 2010. The paper is available here
PiTP Lectures on Wall-Crossing, PiTP School at the Institute for Advanced Study, July 27-29, 2010. These are pedagogical notes on Wall-Crossing in four-dimensional N=2 theories. Video available here .
Minicourse of three lectures on Generalized Abelian Gauge Theories, Self-Duality, and Differential Cohomology, at the Simons Center Workshop on Differential Cohomology, Simons Center for Geometry and Physics, Stonybrook, Jan. 11-14, 2011. Here are Lecture Notes . Video available here .
Surface defects in d=4, N=2 theories, BPS states, wall-crossing, and hyperkahler geometry, Simons Center Workshop on Branes and Bethe Ansatz, SCGP, March 25, 2011 Here are Lecture Notes . Video available here .
Modular Tensor Categories from Six Dimensional Field Theories, Symposium For Michael Freedman's 60th Birthday, KITP, UCSB, April 16, 2011. Video available here .
Surface defects and the BPS spectrum of 4d N=2 theories, IHES, Three Strings Generations, Paris, May 17, 2011. Powerpoint presentation here . A slightly different version presented at the Solvay Conference in Brussels is here .
On Recent Applications of Six Dimensional (0,2) Theory to Physical Mathematics, Review Talk at Strings 2011, Uppsala, Sweden, June 29, 2011 Powerpoint presentation here . A pdf version is here . Video available here .
Three Transverse Intersections Between Physical Mathematics and Condensed Matter Theory, at the Conference Strongly Interacting Electrons in Low Dimensions: New Orders, Symmetries, and Excitations, in honor of Duncan Haldane's 60th birthday. Princeton, September 14, 2011. Powerpoint presentation here .
A very long lecture on the physical relation of Donaldson to Seiberg-Witten invariants of four-manifolds. A 4.5 hour-long lecture, with 40pp. of texed handouts presented at the Simons Center for Geometry and Physics School on Supersymmetric Field Theories and Their Implications, March 8, 2012. Video available here and lecture notes are here .
Spectral Networks and Their Applications, Caltech, March 30, at the conference ``N=4 Super Yang-Mills Theory, 35 years after.'' Powerpoint presentation here .
Progress in D=4, N=2 Field Theory, Strings-Math 2012, Bonn, July 2012 Powerpoint presentation here .
Lectures at St. Ottilien: Quantum Symmetries and K-Theory. Lecture notes here .
Plenary talk at the ICMP in Aalborg, Denmark, August 8, 2012. Powerpoint version here and pdf here .
Felix Klein Lectures: ``Applications of the six-dimensional (2,0) theory to physical mathematics.'' October 1 - 11, 2012 at the Hausdorff Institute for Mathematics, Bonn. Lecture notes, which are still very much UNDER CONSTRUCTION, are available here . Constructive comments, criticisms and reference requests are welcome. The videos of the actual lectures are here .
The talk I would have given at the Simons Center for Geometry and Physics on Oct. 31, if hurricane Sandy hadn't gotten in the way, is here . It is called ``BPS Degeneracies and Hyperkahler Geometry.''
A report on work in progress with D. Gaiotto and E. Witten, given at the conference "Aspects of Topology" for G. Segal's 70th birthday is here . A pdf version is here .
Here is the talk I gave at the SCGP workshop ``Topological Phases of Matter.'' A pdf version is here . Video of the talk is here .
Here is the talk I gave at the Strings 2013 conference in Seoul, South Korea. A pdf version is here . Video of the talk is here .
An update on the work in progress with D. Gaiotto and E. Witten, "Algebra of the Infrared,'' given at the SCGP conference "Quiver Varieties," Oct. 15, 2013 is here . A pdf version is here . This talk is directed more at mathematicians than physicists.
A version of the previous talk, "Algebra of the Infrared: Massive d=2 N=(2,2) QFT'', or ``A short ride with a big machine,'' given at the KITP Workshop on Nonperturbative Methods in Field Theory, March 11, 2014 is here . A pdf version is here . This talk is directed more at physicists than mathematicians. (The actual talk at the KITP was an unmitigated disaster due to technical problems with the projector. The video is not recommended.)
This version emphasizes the formal webology apparatus, "Web formalism and the IR limit of d=2 N=(2,2) QFT'', or ``A short ride with a big machine,'' scheduled to be given at String-Math2014, June 12, 2014: here . A pdf version is here . This talk is directed more at mathematicians than physicists.
These are lectures delivered at the Erwin Schrodinger Institute in Vienna, August 18-21, 2014. They review the paper with D. Freed on ``Twisted Equivariant Matter.'' An extended set of notes with many proofs and examples is here . A truncated version of these notes, which omits many proofs and examples and is closer to the actual lectures can be found here .
These are lectures delivered at the Journees de Physiques Mathematiques Lyon on ``BPS States, Hitchin Systems, and Quivers,'' Sept. 3-5, 2014. They review the paper with D. Gaiotto and E. Witten ``Algebra of the Infrared: String Field Theory Structures in Massive N=(2,2) Field Theory in Two Dimensions.'' The first of the lecture series is here . A powerpoint version is here .
Talk delivered at the IAS, Oct. 13, 2014. ``Algebraic structure of the IR limit of massive d=2 N=(2,2) QFT.'' A pdf version is here . A powerpoint version is here .
Talk delivered at the SCGP, Nov. 17, 2014. ``Web formalism and the IR limit of massive 2D N=(2,2) QFT -or - A short ride with a big machine.'' A pdf version is here . A powerpoint version is here .
Three talks delivered at the Conference on Homological Mirror Symmetry, Jan. 26-30, 2015. Lecture notes are here .
A pair of talks on LG models and the web formalism, given at Harvard and Brandeis, March 5 and 6, 2015. The talk at Harvard is aimed mostly at physicists and is here . The talk at the FRG conference at Brandeis is aimed mostly at mathematicians and is here .
A talk ``Measuring the elliptic genus,'' given at the workshop ``(Mock) Modularity, Moonshine, and String Theory,'' at the Perimeter Institute, April 17, 2015 can be viewed here .
Two pedagogical lectures, ``Quantum Mechanics And The 10-Fold Way,'' at the PiTP School, ``New Insights Into Quantum Matter,'' Institute for Advanced Study, July 27 and 28, 2015. The lectures can be viewed here . There are typed lecture notes . In addition there are handwritten notes closer to the actual lectures. See the Munich lectures below for these. (The lectures are better and more complete in the Munich school.)
``Measuring The Elliptic Genus,'' given at ``AndyFest: A Celebration of the Science of Andrew Strominger,'' Harvard University, July 31, 2015. The powerpoint is here and a pdf version is here .
``Physics Predictions For L2 Kernels Of Dirac-Like Operators On Monopole Moduli Spaces,'' given at the workshop on metric and analytic aspects of moduli spaces, Isaac Newton Institute, Cambridge, UK, August 13, 2015. Video of the seminar is available here .
Three pedagogical lectures, ``Quantum Mechanics And 10-Fold Ways,'' at the school on topological phases of matter, Arnold Sommerfeld Center for Theoretical Physics, Munich, September 2015. The lectures can be viewed here . The handwritten notes for these are Lecture 1 , Lecture 2 , and Lecture 3 .
``Monopolia,'' delivered at the Seventh New England String Theory Meeting, Brown University, November 6, 2015. The powerpoint is here and a pdf version is here . A longer (and improved) version, delivered at the CUNY workshop, Dec. 4 is here and a pdf version is here .
An updated version of the Aalborg review from 2012, ``d=4, N=2 Field Theory and Physical Mathematics,'' given to the AMS chapter of graduate students at Rutgers, Dec. 17, 2015 is here .
A talk about monopoles and BPS states that I WOULD have given at the conference ``Geometry and Physics: Mirror symmetry, Hodge theory, and related topics,'' held at the University of Miami, January 25-30, 2016, in honor of Ron Donagi's 60th birthday, had it not been for the absolutely astonishing and total incompetence of United Airlines, can be found here .
A talk given in honor of Dave Morrison's 60th birthday, delivered at Caltech, Feb. 25, 2016 at the DaveDay conference is here and a pdf version is here .
An updated version of ``Monopolia,'' delivered at the Nambu Memorial Symposium, Sunday March 13, 2016 is here and a pdf version is here .
A colloquium-level talk reviewing four-dimensional N=2 field theories and some aspects of BPS states, at Johns Hopkins, Monday, April 11, 2016, is here . This is a pdf version . A better version, given at Yale, Monday, February 23, 2017 is here . This is a pdf version . A version for CERN, scheduled for July 12, 2017 is here .
A talk updating my DaveDay talk, delivered June 23 at the Retrospective CY workshop at Herstmonceaux Castle is here . This is a pdf version . And here is a further updated version given at the SCGP workshop on Automorphic Forms and String Theory, August 31, 2016. Finally this is a version given at Yale, and the pdf is here .
My StringMath2016 talk, delivered in Paris, June 27 is here . This is a pdf version . Here is the Aspen version .
Talk for Dirac Medal ceremony, delivered in Trieste, August 8, is here and this is a pdf version .
My talk at NatiFest at the Institute for Advanced Study, Princeton, September 15, 2016, is here and this is a pdf version . (To view the powerpoint you need to use the slide show and the custom show ``NatiReverseOrder1'' - it is the only way to get PowerPoint to number the slides in decreasing order. Of course, what matters to most members of the audience is how many slides remain in a talk, not how many have already been covered.)
Powerpoint for my talk at the PCTS conference, ``New Developments in Conformal Field Theory Above Two Dimensions (CFT),'' called simply ``Three Remarks on N=2 d=4 Field Theory,'' is here and this is a pdf version . Link to the conference is here . A version scheduled to be given on July 3, 2017 at the workshop ``String Theory and Quantum Gravity,'' in Ascona, Switzerland, is here and this is a pdf version .
Powerpoint for my comments at the PCTS panel discussion combining the previous conference with ``The Quantum Hall Effect: Past, Present & Future (QHE),'' is here and this is a pdf version . Link to the conference is here .
Lecture notes on ``The Physical Approach To Donaldson And Seiberg-Witten Invariants,'' - STILL UNDER CONSTRUCTION - for lectures delivered at the SCGP March 22,23,24, 2017 are here . Comments are welcome. Lectures can be viewed on the SCGP video portal here . A fourth lecture concluding the series was delivered at the SCGP, April 26, 2017 and the powerpoint is here . The talk is available on the SCGP Video portal. A pdf version is here . Here are handwritten lecture notes for Lecture 1 , Lecture 2 , and Lecture 3 .
Here are handwritten notes of my talk at the Aspen workshop ``New Moonshines and Quantum Gravity,'' entitled Uber T-Dualische Transformationen: Giving Orbifold Groups A Lyft .
A talk on the u-plane integral and partition functions of twisted supersymmetric field theories given at the Euro-Strings Conference at Kings College London, April 2018 is here in pdf and here in powerpoint
A talk on the cancellation of global anomalies in six-dimensional supergravity theories, given on May 16, 2018 at MIT is here in pdf and here in powerpoint
My StringMath-2018 talk, ``Partition Functions Of Twisted Supersymmetric Gauge Theories On Four-Manifolds via u-Plane Integrals,'' delivered on June 20, 2018 in Sendai is here in powerpoint and here in pdf
My Strings-2018 talk, ``Global Anomalies In Six-Dimensional Supergravity,'' delivered on June 29, 2018 in Okinawa is here in powerpoint and here in pdf
Here are three lectures on class S field theories delivered at the Hamburg School On Higgs Bundles, September 10, 11, and 13, 2018. The lecture notes for Lecture 1 have several sources for background material. They are all readily available except for my ITP lectures on D-branes. For light reading you can look at my article in the Notices of the AMS. In fact, Lecture 1 took all the time (in part due to a lot of good questions from the students). But, for the record, here are my notes for the intended Lecture 2 and Lecture 3 .
Slightly updated colloquium "Four-dimensional N=2 supersymmetric field theories and Physical Mathematics,'' delivered at Stanford, Nov. 27, 2018 is here in powerpoint and here in pdf
Seminar at Stanford "Finding The Golay Code In A K3 Sigma-Model'' delivered at Stanford, Dec. 3, 2018 is .....
"K3 Surfaces, Matheiu Moonshine, and (Quantum) Error Correcting Codes'' delivered Tuesday, Jan. 15, Austin TX, at Dan Freed's 60th birthday conference, ``Between Topology And Quantum Field Theory,'' is here in powerpoint and here in pdf
``Categorified Wall Crossing And Twisted Masses,'' Simons Center for Geometry and Physics workshop on holomorphic differentials, Feb. 6, 2019. Video of talk available at SCGP video portal.
``Categorified Wall Crossing And Twisted Masses - v2,'' Simons Center for Geometry and Physics workshop on Challenges at the Interface of Hitchin Systems and String Theory, March 21, 2019. Video of talk available at SCGP video portal. Lecture notes are here in pdf
"Moonshine Phenomena, Supersymmetry, and (Quantum) Error Correcting Codes'' delivered Friday, April 12, Yale, at Nick Read's 60th birthday conference, ``Field theory in condensed matter: a symposium in honor of Nick Read,'' is here in powerpoint and here in pdf
I am scheduled to give a talk for the general public on May 29, 2019 at this event. The talk is here and also online. A slightly revised version for the Park City Mathematics Institute, scheduled to be delivered on July 9, 2019 is here in power point and here in pdf .
I gave a series of pedagogical lectures on Chern-Simons theory during the week of June 3, 2019. A preliminary version of the lecture notes can be found here. These notes are still a mess and much work remains to be done. Any constructive criticism is welcome.
Clay Mathematical Institute Lectures
If you are looking for ``D-branes and K-theory in 2D topological field theory,'' hep-th/0609042 you can get a pdf file here Dbranes_Ktheory_Final.pdf,
Or a postscript file here (warning 23MB!) Dbranes_Ktheory_Final.ps, or you can get the tex file Dbranes_Ktheory_Final.tex, and the figures as ClayFigsFinal_EPS.zip or as ClayFigsFinal_GIF.zip
The chitosan/carboxymethyl cellulose/montmorillonite scaffolds incorporated with epigallocatechin-3-gallate-loaded chitosan microspheres for promoting osteogenesis of human umbilical cord-derived mesenchymal stem cell
Jin Wang1 na1,
Wubo He1 na1,
Wen-Song Tan1 &
Haibo Cai ORCID: orcid.org/0000-0001-6449-86431
Epigallocatechin-3-gallate (EGCG) is a plant-derived flavonoid compound with the ability to promote the differentiation of human bone marrow-derived mesenchymal stem cells (MSCs) into osteoblasts. However, the effect of EGCG on the osteogenic differentiation of human umbilical cord-derived mesenchymal stem cells (HUMSCs) has rarely been studied. Therefore, in this study, the osteogenic effects of EGCG on the HUMSCs were studied by measuring cell proliferation, alkaline phosphatase (ALP) activity, calcium deposition and the expression of relevant osteogenic markers. The results showed that EGCG promoted the proliferation and osteogenic differentiation of the HUMSCs in vitro at a concentration of 2.5–5.0 μM. Unfortunately, EGCG is easily metabolized by cells during culture, which reduces its bioavailability. Therefore, in this paper, EGCG-loaded microspheres (ECM) were prepared and embedded in chitosan/carboxymethyl cellulose/montmorillonite (CS/CMC/MMT) scaffolds to form CS/CMC/MMT-ECM scaffolds and thereby improve the bioavailability of EGCG. The HUMSCs were cultured on the CS/CMC/MMT-ECM scaffolds to induce osteogenic differentiation. The results showed that the CS/CMC/MMT-ECM scaffold continuously released EGCG for up to 22 days and promoted osteoblast differentiation. Taken together, the present study suggests that entrapment of ECM in CS/CMC/MMT scaffolds is a promising strategy for promoting osteogenic differentiation of the HUMSCs.
Mesenchymal stem cells (MSCs) are a type of adult stem cell originally reported to exist in the stroma of bone marrow (Ullah et al. 2015). In addition, MSCs have been isolated from many adult tissues, such as adipose tissue (Cabezas et al. 2018), synovial membrane (Neybecker et al. 2020), dental tissue (Hung et al. 2011), and umbilical cord (Zhang et al. 2020). MSCs exhibit the distinctive stem cell properties of self-renewal and multi-lineage differentiation: they can differentiate into mesodermal lineages such as osteocytes, adipocytes and chondrocytes, as well as into ectodermal neurocytes and endodermal hepatocytes (Cheng et al. 2019). MSCs are also widely used in stem cell therapy and regenerative medicine due to their low immunogenicity, lack of ethical concerns, and immunoregulatory function (Gu et al. 2015; Rostami et al. 2020; Shariati et al. 2020). Recently, MSCs have also demonstrated the ability to repair bone tissue (Kong et al. 2019; Lei et al. 2012; Liu et al. 2020; Zhang et al. 2019).
The human umbilical cord-derived mesenchymal stem cells (HUMSCs) can differentiate into osteoblasts and have been used to repair bone defects (Kosinski et al. 2020; Yang et al. 2020a, b). The HUMSCs have certain advantages for clinical application: they are easily isolated; because they are derived from the umbilical cord after birth, their source is less ethically controversial; they replicate rapidly in vitro; the cells are ontogenetically young; and they are less immunoreactive after transplantation (Nagamura-Inoue et al. 2014). Several protocols have been established to direct the differentiation of the HUMSCs into osteoblasts, including the use of β-glycerophosphate, dexamethasone, and ascorbic acid (Fabian and Langenbach 2013; Freeman et al. 2016). However, the efficiency of osteogenic differentiation of MSCs in vitro still needs to be improved (Pittenger et al. 2019). Several classes of bioactive molecules have been used to enhance the osteogenic differentiation of MSCs, such as growth factors (Safari et al. 2021; Su et al. 2014), cytokines (Hosogane et al. 2010), hormones (Lu et al. 2017), pharmaceuticals (Cui et al. 2017), and phytochemical compounds (Menon et al. 2018). Among them, phytochemical compounds have attracted much attention because of their wide availability and lack of toxic side effects. Flavonoids are phytochemical compounds that potentially promote the differentiation of MSCs into osteoblasts by activating signaling pathways associated with osteogenesis (Chen et al. 2018; Jin et al. 2014; Kulandaivelu et al. 2016; Zhou et al. 2015).
Epigallocatechin-3-gallate (EGCG) is a natural flavonoid found in green tea and has been reported to be involved in bone metabolism (Jin et al. 2014; Wang et al. 2016). Recent research has confirmed that EGCG promotes the proliferation and differentiation of human bone marrow-derived MSCs into osteoblasts (Wang et al. 2016). Further studies showed that EGCG promotes osteogenic differentiation through activation of the Wnt/β-catenin signaling pathway (Lee et al. 2013; Xi et al. 2018). However, the effect of EGCG on the osteogenic differentiation of HUMSCs has rarely been studied. In addition, EGCG is easily affected by factors such as oxidant levels, pH and temperature, and is rapidly metabolized by cells (Hou et al. 2005; Sato et al. 2017). Therefore, the sustained release of EGCG is of great significance for promoting the osteogenic differentiation of MSCs.
Drug-loaded microspheres are a preferred sustained-release system because they provide a large surface-area-to-volume ratio, control the release time and improve the release profile (Yang et al. 2016). However, some obstacles prevent microspheres from being used directly in the culture of HUMSCs, such as an insufficient release duration and a small particle size that is not conducive to cell adhesion (Yang et al. 2020a, b). Natural polymer scaffolds are often used for the three-dimensional (3D) culture of stem cells in vitro, which usually provides a more complete picture of cell-to-cell and cell–matrix interactions and better simulates the natural environment of the stem cells than traditional two-dimensional (2D) culture. In addition, many desirable cellular characteristics are maintained or even promoted in 3D culture (Vila-Parrondo et al. 2020; Wu et al. 2020; Ylostalo 2020). Therefore, the drug-loaded microspheres were embedded in the scaffold to achieve sustained release of EGCG and to provide an adhesion carrier for the cells.
Chitosan (CS) is a biocompatible, linear cationic polymer (Li et al. 2016a, b). It is the partially deacetylated form of chitin, consisting of glucosamine and N-acetylglucosamine units joined by glycosidic linkages (Komoto et al. 2019). In addition, CS is soluble under acidic conditions (Sun et al. 2019) and has been reported to have antibacterial activity (Geisberger et al. 2013). Owing to these advantageous properties, CS is widely used in the preparation of various carrier materials (Coimbra et al. 2011; Li et al. 2005). Carboxymethyl cellulose (CMC) is structurally very similar to CS, and cross-linking CMC with CS plays an important role in improving the hydrophilicity, swelling and protein adsorption properties of CS (Menon et al. 2018b; Sainitya et al. 2015; Sun et al. 2019). CS/CMC scaffolds have also been used in cell culture (Liu et al. 2009). Montmorillonite (MMT), the main component of bentonite, has been approved by the FDA as an additive in a variety of pharmaceutical products (Haroun et al. 2009; Katti et al. 2008). MMT has received significant attention in recent years due to favorable properties such as biocompatibility, availability and feasibility. In addition, extensive research has shown that introducing MMT into scaffolds prepared from natural biomaterials (including gelatin, collagen, and chitosan) improves cell–scaffold interactions, cell proliferation, and cell differentiation (Hsu et al. 2012; Kevadiya et al. 2014; Nistor et al. 2015; Thakur et al. 2015).
In this study, the influence of EGCG on the HUMSCs in in vitro culture was first explored. Then, EGCG was emulsified into chitosan microspheres. CS/CMC/MMT scaffolds were prepared by a conventional freeze-drying approach and used as the matrix for loading the EGCG-encapsulated chitosan microspheres (ECM). The scaffolds were characterized and then used to study their influence on the proliferation and osteoblast differentiation of the HUMSCs.
Chitosan (CS) and carboxymethyl cellulose (CMC) were obtained from Shanghai Macklin Biochemical Technology. Montmorillonite (MMT, K10) and the 3-(4,5-dimethylthiazol-2yl)-2,5-diphenyltetrazolium bromide (MTT) were purchased from Sigma-Aldrich (St. Louis, MO). Paraffin, span 80, glutaraldehyde and isopropanol were obtained from Aladdin Biochemical Technology, Shanghai, China. EGCG (purity ≥ 98%, high-performance liquid chromatography) was obtained from yuanye Bio-Technology, Shanghai, China. A Cell Counting Kit-8 (CCK8) was purchased from Dojindo China Co., Ltd. (Shanghai, China). BCIP/NBT alkaline phosphatase color development kit and enhanced BCA protein assay kit were obtained from Beyotime Biochemical Technology, Shanghai, China. The calcium colorimetric assay kit and alkaline phosphatase (ALP) kit were obtained from Jiancheng Biochemical Technology, Nanjing, China. All other cell culture products and reagents were purchased from GIBCO unless otherwise specified. All chemicals were of reagent grade and used without any further purification. Ultrapure water (18.2 MΩ, Millipore Co., USA) was used in all solutions and reagents throughout the experiment.
Preparation of microspheres and scaffolds
The preparation of chitosan microspheres was based on a reported method with slight modifications. In short, 400 mg CS was dissolved in 20 mL of 1% (v/v) glacial acetic acid solution. A 45.86 mg portion of EGCG was dissolved in 1 mL DMSO to prepare a 10 mM EGCG stock solution, and 100 μL of this solution was added drop-wise to the CS solution. Then, 2 mL Span 80 was added to 80 mL liquid paraffin at 50 ℃ under stirring at 800 rpm to obtain the continuous oil phase. The chitosan solution was slowly added to the oil phase and stirred for 1.5 h to form a stable water-in-oil system. After that, 2 mL of 25% (v/v) glutaraldehyde was slowly added to the system and stirred for 30 min to cross-link the chitosan. The microsphere suspension was centrifuged at 1500 rpm for 5 min to remove the supernatant, and the microspheres were washed with isopropanol, ethanol and deionized water at room temperature. Finally, the ECM were obtained by freeze-drying. CS microspheres (CM) were prepared in the same way without the addition of EGCG.
CS/CMC/MMT-ECM scaffolds were prepared by a conventional freeze-drying approach. Briefly, 200 mg CMC and 200 mg CS powder were added to deionized water and stirred for 10 min. For incorporation of the microspheres, 40 mg ECM was added to the mixed solution, and MMT was added subsequently. After 30 min of stirring, acetic acid (0.5% v/v) was added, and 250 μL of the solution was poured into a mold. The mold was a polytetrafluoroethylene plate in which each well had an area of 2 cm2. The mold was kept at − 20 °C overnight, followed by lyophilization. For the CS/CMC/MMT-CM and CS/CMC/MMT-EGCG scaffolds, a similar process was used, except that the 40 mg ECM was replaced by 40 mg CM or 1.2 mg EGCG, respectively.
Characterizations
FT-IR analysis
The FT-IR spectra of the microspheres and scaffolds were recorded with an FT-IR spectrometer (Jasco-4100, JASCO, Japan) at 25 ℃ over the spectral range of 400–4000 cm−1, with an accumulation of 16 scans and a resolution of 4.0 cm−1.
XRD analysis
The microspheres and scaffolds were characterized using an X'PERT PRO powder diffractometer (PANalytical) operating at a voltage of 40 kV (Cu Kα radiation) over a 2θ range of 5–75° at a scan speed of 2° min−1.
SEM analysis
Scanning electron microscopy (SEM; Hitachi S-3400 N, Hitachi Ltd.) was performed on the prepared microspheres and composite scaffolds to examine their morphology. Before observation, the samples were sputter-coated with gold for 50 s.
In vitro porosity studies
The porosity of the scaffolds was evaluated as described in a previous study (Yan et al. 2013). Briefly, the scaffolds were immersed for 5 min in a graduated cylinder containing a known volume of ethanol (V1). Evacuation was repeated until no air bubbles were discharged. The total volume was recorded as V2. The ethanol volume remaining after removal of the scaffolds was recorded as V3. The porosity of the scaffolds was calculated by the following equation:
$$\text{Porosity}\,(\%) = \frac{V_1 - V_3}{V_2 - V_3} \times 100.$$
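For clarity, the liquid-displacement arithmetic can be written out as a short script. This is a minimal sketch in Python; the function name and the example volumes are hypothetical illustrations, not values from the study.

```python
def porosity_percent(v1, v2, v3):
    """Liquid-displacement porosity.

    v1: initial ethanol volume (mL)
    v2: total volume with the scaffold immersed (mL)
    v3: residual ethanol volume after removing the scaffold (mL)
    """
    return (v1 - v3) / (v2 - v3) * 100


# Hypothetical volumes, for illustration only:
print(porosity_percent(v1=10.0, v2=10.8, v3=9.1))  # ~52.9 %
```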
In vitro swelling studies
The swelling ratio of the scaffolds was studied by measuring the change in scaffold weight. The dry scaffold weight was recorded as Wd. After immersion in distilled water for 24 h, the scaffold was blotted with filter paper and its weight was recorded as Ws1. The water retention ratio was evaluated by centrifuging the wet scaffolds (500 rpm, 3 min) and then recording their weight (Ws2). The swelling and water retention ratios were obtained from the following equations:
$$\text{Swelling ratio} = (W_{s1} - W_d)/W_d,$$
$$\text{Retention ratio} = (W_{s2} - W_d)/W_d.$$
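The same gravimetric bookkeeping can be sketched as below; the weights are hypothetical and serve only to show the two ratios side by side.

```python
def swelling_ratio(w_s1, w_d):
    """(Wet weight after 24 h immersion - dry weight) / dry weight."""
    return (w_s1 - w_d) / w_d


def retention_ratio(w_s2, w_d):
    """(Weight after centrifugation at 500 rpm - dry weight) / dry weight."""
    return (w_s2 - w_d) / w_d


# Hypothetical weights in mg, for illustration only:
print(swelling_ratio(w_s1=95.0, w_d=20.0))   # 3.75
print(retention_ratio(w_s2=52.0, w_d=20.0))  # 1.60
```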
In vitro protein adsorption studies
Scaffolds of equal weight were immersed in 100% ethanol for 1 h and then pre-wetted in 1 × phosphate-buffered saline (PBS) for 30 min. Each scaffold was then placed in 3 mL DMEM containing 10% fetal bovine serum and incubated at 37 °C for 1, 3 or 24 h. After incubation, the scaffolds were removed, blotted with filter paper, and washed with 1 × PBS to remove loosely bound protein. Once the scaffold was removed, Bradford analysis was used to quantify the non-adsorbed protein remaining in the incubation solution. The amount of protein adsorbed equals the total amount of protein minus the amount of non-adsorbed protein.
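The adsorbed amount follows by simple difference; the one-line helper below makes the bookkeeping explicit (the function name and masses are hypothetical, for illustration only).

```python
def adsorbed_protein_mg(total_mg, unadsorbed_mg):
    """Adsorbed protein = total protein offered in the medium
    minus the non-adsorbed protein quantified by Bradford assay."""
    return total_mg - unadsorbed_mg


# Hypothetical masses, for illustration only:
print(adsorbed_protein_mg(total_mg=1.20, unadsorbed_mg=0.66))  # 0.54 mg
```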
In vitro EGCG release studies
ECM and scaffolds of equal weight were incubated at 37 °C in 1 × PBS for 30 days. A 200 μL aliquot of the solution was withdrawn at predetermined time intervals and replaced with an equal volume of fresh 1 × PBS. The absorbance of the released EGCG was read at 273 nm, and the concentration was calculated from an EGCG standard curve (concentrations: 20, 40, 60, 80 and 100 μM). The cumulative release of EGCG was calculated by the following formula:
$$\text{Release}\,(\%) = \frac{\text{amount of EGCG released}}{\text{initial amount of EGCG loaded}} \times 100.$$
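A minimal sketch of this calculation is given below in Python. It inverts the standard curve reported in the Results section (Y = 0.0127X − 0.0027) and adds back the EGCG carried away by each 200 μL sampling before replacement with fresh PBS; that sampling correction is standard practice but is not spelled out in the protocol, so it should be read as our assumption, and all numeric inputs in the example are hypothetical.

```python
def egcg_conc_uM(absorbance):
    """Invert the reported standard curve Y = 0.0127*X - 0.0027,
    where Y is the absorbance at 273 nm and X the EGCG concentration (uM)."""
    return (absorbance + 0.0027) / 0.0127


def cumulative_release_percent(absorbances, v_total_mL, v_sample_mL, loaded_nmol):
    """Cumulative EGCG release over successive sampling points.

    Each withdrawal removes some EGCG, so the amount carried away by
    earlier samples is added back into the running total (our assumed
    correction). Note that 1 uM = 1 nmol/mL.
    """
    release, removed_nmol = [], 0.0
    for a in absorbances:
        c = egcg_conc_uM(a)                  # nmol/mL currently in the vessel
        cum = c * v_total_mL + removed_nmol  # nmol released so far
        release.append(100.0 * cum / loaded_nmol)
        removed_nmol += c * v_sample_mL      # EGCG lost to this sample
    return release


# Hypothetical absorbance readings, for illustration only:
print(cumulative_release_percent([0.20, 0.35, 0.45],
                                 v_total_mL=5.0, v_sample_mL=0.2,
                                 loaded_nmol=300.0))
```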
Viability and proliferation of HUMSCs in the scaffolds
The HUMSCs were obtained from the Ninth People's Hospital in Shanghai. To investigate the effect of the scaffolds on the activity of the HUMSCs, cells were seeded in the scaffolds at 2 × 104 cells cm−2 and incubated for 3 days, after which the MTT assay was performed to determine the activity of the cells in the scaffolds.
The proliferative capacity of HUMSCs in the scaffold was evaluated using the CCK-8 assay. HUMSCs were cultured in scaffolds for 1, 3, 5, and 7 days. The medium was removed, followed by the addition of 200 μL of α-MEM medium containing 10% (v/v) CCK8 solution. After incubation at 37 °C for 2 h, the OD450 value of the solution was determined.
To observe the cytoskeletal organization of the HUMSCs in the scaffold, cells were seeded at 1 × 104 cells cm−2. After 3 days of culture, the medium was removed. The scaffold was washed three times with 1 × PBS and then fixed in 4% (w/v) paraformaldehyde at room temperature for 15 min. The scaffold was washed again three times with PBS and then permeabilized with 0.1% (w/v) Triton X-100 for 10 min. After three further washes with PBS, the samples were stained with phalloidin for 30 min, washed three times with PBS, and observed with a confocal laser scanning microscope (CLSM; TCS SP8, Leica).
Cell viability was analyzed by live/dead staining. HUMSCs (1 × 104 cells cm−2) were seeded in the scaffolds and cultured in 24-well plates for 1, 4, and 7 days. Cells in the scaffolds were washed three times with PBS. Then, 10 μL calcein-AM solution (2 mM) and 15 μL PI solution (1.5 mM) were added to 5 mL of 1 × assay buffer and mixed thoroughly; the resulting working concentrations were 4.0 μM calcein-AM and 4.5 μM PI. The samples were incubated with the working solution at 37 ℃ for 30 min. After washing with PBS, the samples were imaged using a confocal laser scanning microscope (CLSM; TCS SP8, Leica).
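As a quick sanity check, the stated working concentrations follow from the standard C1·V1 = C2·V2 dilution; the sketch below reproduces the ~4.0 μM and ~4.5 μM values (the helper name is ours, not from the study).

```python
def working_conc_uM(stock_uL, stock_mM, total_uL):
    """C1*V1 = C2*V2 dilution; returns the working concentration in uM
    (the factor 1000 converts mM to uM)."""
    return stock_uL * stock_mM * 1000.0 / total_uL


total_uL = 5000 + 10 + 15  # 5 mL assay buffer plus both dye volumes
print(working_conc_uM(10, 2.0, total_uL))  # ~3.98 uM calcein-AM
print(working_conc_uM(15, 1.5, total_uL))  # ~4.48 uM PI
```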
In vitro osteogenic differentiation
The HUMSCs (2 × 104 cells cm−2) seeded in different scaffolds were cultured in osteogenic induction medium for 7 days. According to the manufacturer's instructions, the ALP activity of HUMSCs was determined qualitatively and quantitatively by BCIP/NBT alkaline phosphatase color development kit and ALP kit, respectively. Images of qualitative determination of ALP activity were captured with an Olympus MVX10 MacroView (Japan). For quantitative determination of ALP activity, HUMSCs were incubated in RIPA lysis buffer for 60 min, and the total protein concentration (mg/mL) and ALP activity (units/mL) were measured following the manufacturer's instructions with an enhanced BCA protein assay kit and ALP reagent kit, respectively.
The HUMSCs growing on different scaffolds were cultured in osteogenic induction medium for 14 days. Calcium deposition was determined qualitatively and quantitatively by Alizarin red stain solution and calcium colorimetric assay kit according to the manufacturer's instructions.
Reverse transcriptase real-time (quantitative) polymerase chain reaction (RT-qPCR)
The HUMSCs (1 × 105 cells cm−2) were seeded in the scaffolds. After 14 days of culture, 5–10 × 105 cells were collected in a 1.5-mL EP tube, 500 μL TRIzol reagent was added with gentle shaking to lyse the cells, and total RNA was extracted according to the manufacturer's instructions. The cDNA was synthesized using a kit from Bio-Rad according to the manufacturer's protocol, and a QuantStudio 3 real-time PCR system (Applied Biosystems, Foster City, CA, USA) with SYBR reagent was used for RT-qPCR amplification. The RT-qPCR program was set as follows: pre-denaturation at 95 ℃ for 4 min, followed by 45 cycles of denaturation at 95 ℃ for 10 s, annealing at 60 ℃ for 40 s, and extension at 72 ℃ for 30 s. GAPDH was used as the housekeeping gene for normalization. The 2−ΔΔCt method was used to calculate the relative expression of genes in the HUMSCs.
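The 2−ΔΔCt calculation itself is compact enough to show in full; the sketch below uses GAPDH as the reference gene as in the protocol, while the Ct values in the example are hypothetical.

```python
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative gene expression by the 2^-ddCt method, GAPDH-normalized."""
    d_ct = ct_target - ct_gapdh                 # normalise sample to GAPDH
    d_ct_ctrl = ct_target_ctrl - ct_gapdh_ctrl  # same for the control group
    dd_ct = d_ct - d_ct_ctrl                    # normalise to the control
    return 2.0 ** (-dd_ct)


# Hypothetical Ct values, for illustration only:
print(relative_expression(24.1, 18.0, 26.3, 18.1))  # ~4.3-fold upregulation
```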
All experimental data in this study were analyzed with SPSS software, and at least three parallel samples were analyzed for each experiment. Data are expressed as mean ± standard deviation (SD); P < 0.05 was considered statistically significant.
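The paper states only that SPSS was used; the SciPy call below is a stand-in showing an equivalent mean ± SD summary and two-sample t-test on hypothetical triplicates (the data values and the choice of test are our assumptions).

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate measurements, for illustration only:
control = np.array([0.54, 0.58, 0.51])
treated = np.array([0.64, 0.66, 0.61])

# Mean +/- sample standard deviation (ddof=1), as reported in the paper
print(f"control: {control.mean():.2f} +/- {control.std(ddof=1):.2f}")
print(f"treated: {treated.mean():.2f} +/- {treated.std(ddof=1):.2f}")

# Independent two-sample t-test at the 0.05 significance threshold
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"P = {p_value:.4f} ({'significant' if p_value < 0.05 else 'n.s.'} at 0.05)")
```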
Effect of EGCG on proliferation and osteogenic differentiation of the HUMSCs
Epigallocatechin-3-gallate (EGCG) is a natural flavonoid that has been shown to be involved in bone metabolism (Jin et al. 2014; Wang et al. 2016). To assess the biocompatibility of EGCG with the HUMSCs, an MTT assay was carried out. The results in Fig. 1a show that, over the concentration range of 2.5–10.0 μM, EGCG had no significant effect on the activity of the HUMSCs compared with the control group, indicating that EGCG was not cytotoxic to the HUMSCs in this range. However, when the concentration of EGCG was greater than 10.0 μM, the activity of the HUMSCs was significantly decreased. These results indicated that 2.5–10.0 μM EGCG was biocompatible with the HUMSCs. To investigate the effect of EGCG on the proliferation of the HUMSCs, concentrations of 0, 2.5, 5.0 and 10.0 μM were tested. Figure 1b shows that the number of cells increased over time. Compared with the control group, the proliferation of the HUMSCs in the groups supplemented with 2.5 and 5.0 μM EGCG was significantly promoted from the 3rd day of culture, whereas in the group supplemented with 10.0 μM EGCG, proliferation was significantly inhibited from the 5th day onward. Therefore, 2.5–5.0 μM is the optimal concentration range for EGCG to promote cell proliferation. These results corroborate previous investigations showing stimulatory effects of EGCG on MSCs at 2.5–10.0 μM in vitro (Jin et al. 2014).
a Effect of EGCG on the activity of the HUMSCs after 48 h, determined by MTT assay. b Effect of EGCG on HUMSC proliferation. *P < 0.05, **P < 0.01 and ***P < 0.001
To investigate the effect of EGCG on the osteogenic differentiation of the HUMSCs and to identify the optimal concentration range, markers of osteogenic differentiation were examined at the cellular and molecular levels after culture in osteogenic induction medium supplemented with different concentrations of EGCG. ALP activity is commonly used as a marker of osteogenesis and is assumed to reflect the degree of osteogenic differentiation (Watanabe et al. 2018). As shown in Fig. 2a, both the control group and the EGCG-treated groups stained blue-purple with BCIP/NBT, indicating ALP expression in all groups. However, compared with the control group, the staining intensity increased to varying degrees in all EGCG-supplemented groups and was strongest at 5.0 μM EGCG, indicating that ALP activity was increased over this concentration range and most markedly after 5.0 μM EGCG treatment. In addition, Alizarin red staining was used to assess the calcium content of the constructs (Zhou et al. 2014). Scattered red calcium nodules were observed in the control group; with the addition of EGCG, the number of calcium nodules increased, most obviously at 5.0 μM, indicating that calcium deposition was also greatest at this concentration. Subsequent quantitative determination of ALP activity and calcium deposition was consistent with the qualitative staining results (Fig. 2b, c).
a The HUMSCs were cultured for 7 days for ALP staining and for 14 days for Alizarin red staining (scale bar: 100 μm). b Quantitative analysis of ALP activity in the HUMSCs cultured for 7 days. c Calcium contents in the HUMSCs cultured for 14 days. d The quantitative evaluation of osteogenic-related genes in the HUMSCs
The effects of different concentrations of EGCG on the expression of the osteogenic differentiation-related genes ALP, OCN, OPN, Col-I and Runx2 were investigated by RT-qPCR. As shown in Fig. 2d, the expression of the ALP, OCN and Runx2 genes was significantly upregulated by EGCG treatment at 2.5–10.0 μM compared with the control group, whereas the expression of the OPN and Col-I genes was upregulated by the addition of 5.0–10.0 μM EGCG. The expression of all five osteogenic differentiation-related genes was most significantly upregulated by 5.0 μM EGCG. Taken together, these results indicate that the markers of osteogenic differentiation in the HUMSCs at the molecular level were consistent with those at the cellular level.
Characterization of microspheres and scaffolds
In this study, EGCG was shown to promote the proliferation and osteogenic differentiation of the HUMSCs in vitro. However, EGCG is easily metabolized by cells during culture, which reduces its bioavailability (Li et al. 2016a, b). Therefore, ECM were prepared and embedded in CS/CMC/MMT scaffolds to form CS/CMC/MMT-ECM scaffolds and improve the bioavailability of EGCG. FT-IR and X-ray diffraction were used to verify whether the scaffolds had been successfully formed. With reference to Fig. 3, the absorption peaks in the FT-IR spectrum of chitosan are at 3450 cm−1 (O–H and N–H stretching), 2880 cm−1 (C–H stretching) and 1640 cm−1 (amide II band, N–H) (Koshani et al. 2021). The absorption peaks in the FT-IR spectrum of EGCG are at 3410 cm−1 (O–H stretching), 2970 cm−1 (C–H stretching), 1650 cm−1 (C=O stretching), 1570 cm−1 (aromatic C=C stretching) and 1300 cm−1 (C–O stretching). When the CM were formed, the peaks in the spectrum changed significantly: the O–H and N–H stretching vibration of CS at 3450 cm−1 shifted slightly to 3420 cm−1, and the N–H vibration of CS at 1640 cm−1 shifted slightly to 1630 cm−1, indicating an augmentation of hydrogen bonding (Azizian et al. 2018; Leena et al. 2017). These changes have been attributed to the reaction between the aldehyde groups of the cross-linker and the amino groups of chitosan (Chen et al. 2017). Compared with CM, a new peak at 1650 cm−1 was detected in ECM; the presence of this peak indicated the successful introduction of EGCG into the ECM. In addition, the characteristic peaks in the FT-IR spectrum of CMC, caused by the stretching vibration of carboxyl groups at 1710 cm−1 and the bending vibration of C–H at 1450 cm−1, and the characteristic stretching vibration peak of MMT caused by Al–OH at 795 cm−1, were present in the FT-IR spectrum of CS/CMC/MMT. Moreover, the characteristic peaks of CM, EGCG, ECM and CS/CMC/MMT appeared in the FT-IR spectra of CS/CMC/MMT-CM, CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM, confirming the successful fabrication of these scaffolds.
FT-IR spectra of CS, CMC, MMT, EGCG, CM, ECM, CS/CMC/MMT, CS/CMC/MMT-CM, CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds
The XRD pattern of CS had diffraction peaks at 2θ values of 11.2° and 20.3°, as shown in Fig. 4, reflecting the semi-crystalline structure of CS. Formation of CM weakens the intermolecular forces of chitosan and disrupts its crystalline order; therefore, upon the formation of CM, the diffraction peaks observed in the XRD pattern of CS completely disappeared (Leena et al. 2017). The XRD pattern of EGCG showed sharp diffraction peaks at 2θ values of 15.3°, 16.8°, 19.4°, 20.5°, 23.2° and 24.3°. However, when the ECM were formed, these diffraction peaks disappeared, indicating that EGCG was amorphized when emulsified into CM. The XRD pattern of CMC has a broad peak at a 2θ value of 20°, indicating the amorphous structure of CMC. The XRD pattern of MMT exhibited characteristic sharp peaks at 2θ values of 19.2°, 31.8°, 34.0°, 45.5°, 57.1° and 66.4°, reflecting the crystal structure of MMT. These characteristic peaks also appeared in CS/CMC/MMT, but with weaker intensity, possibly because the addition of CS and CMC decreased the crystallinity of MMT. Compared with CS/CMC/MMT, the shapes of the diffraction peaks of CS/CMC/MMT-CM, CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM hardly changed, which may be because the addition of amorphous CM, ECM or a trace amount of EGCG had no effect on the crystal structure of these scaffolds.
XRD patterns of CS, CMC, MMT, EGCG, CM, ECM, CS/CMC/MMT, CS/CMC/MMT-CM, CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds
The morphology of the microspheres and scaffolds was investigated by SEM. CM and ECM had a well-defined spherical shape, and the average particle size of the microspheres was about 40–60 μm (Fig. 5a). There was no significant difference between ECM and CM, implying that the addition of EGCG had no effect on the morphological characteristics of the microspheres. All scaffolds had interconnected porous structures, and CM and ECM were visible within the CS/CMC/MMT-CM and CS/CMC/MMT-ECM scaffolds (Fig. 5b). Compared with the CS/CMC/MMT scaffolds, the introduction of CM and ECM did not change the morphology of the CS/CMC/MMT-CM and CS/CMC/MMT-ECM scaffolds.
a SEM images of CM and ECM. b SEM images of CS/CMC/MMT, CS/CMC/MMT-CM, CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds. The inset shows a SEM image at 200× and 600× magnification
Analysis of physicochemical properties of different scaffolds
The porosity of a scaffold affects cell growth and nutrient permeability (Yao et al. 2018). The porosities of CS/CMC/MMT, CS/CMC/MMT-CM, CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM were 72.7 ± 2.2%, 76.0 ± 3.8%, 86.3 ± 4.8% and 88.2 ± 4.6%, respectively (Fig. 6a). The porosity of the CS/CMC/MMT-CM and CS/CMC/MMT-ECM scaffolds differed significantly from that of the CS/CMC/MMT and CS/CMC/MMT-EGCG scaffolds. Combined with the SEM images in Fig. 5b, it can be inferred that CM and ECM occupied the space of some open pores of the scaffolds, causing blockage of some open pores and thereby altering the porosity. In addition, as shown in Fig. 6b and c, the swelling ratio and retention ratio of CS/CMC/MMT-CM and CS/CMC/MMT-ECM were significantly lower than those of CS/CMC/MMT and CS/CMC/MMT-EGCG.
a The porosity of the prepared scaffolds. b The water uptake of the scaffolds. c The retention rates of the scaffolds. d Protein adsorption on the scaffolds. *P < 0.05, **P < 0.01. e The release curve of EGCG from the ECM. f The release curves of EGCG from the CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds
Adsorption of proteins on a scaffold facilitates cellular interactions, affecting cell adhesion, cell spreading and later cellular events such as proliferation and differentiation (Wilson et al. 2005). To evaluate the protein adsorption capacity of these scaffolds and the influence of introducing CM, EGCG and ECM, protein adsorption was measured at different time points. Notably, the protein adsorption capacity of CS/CMC/MMT was essentially the same as that of CS/CMC/MMT-EGCG, showing that directly entrapping EGCG in CS/CMC/MMT did not affect the protein adsorption capacity of the scaffold. However, the protein adsorption capacity of CS/CMC/MMT-CM and CS/CMC/MMT-ECM was significantly higher than that of CS/CMC/MMT (Fig. 6d), implying that CM and ECM significantly enhanced the protein adsorption capacity of the scaffolds.
EGCG is unstable in neutral and alkaline environments and is easily metabolized by cells, which reduces its bioavailability (Li et al. 2016a, b). Therefore, the sustained release of EGCG is of great significance for improving its bioavailability. To study the release of EGCG from the microspheres and scaffolds, the relationship between EGCG content and absorbance was determined by spectrophotometry; the standard curve equation was Y = 0.0127X − 0.0027 (R2 = 0.994). The release of EGCG from ECM reached 86 ± 5.6% on the 6th day and was complete on the 7th day (Fig. 6e). As shown in Fig. 6f, the release of EGCG from CS/CMC/MMT-EGCG, formed by directly encapsulating EGCG, reached 67.0 ± 6.6% on the 7th day and essentially stopped thereafter, whereas the release of EGCG from the CS/CMC/MMT-ECM scaffolds was 65.4 ± 2.5% on the 22nd day. Thus, embedding ECM in the scaffold greatly improved the sustained release of EGCG, and, compared with the CS/CMC/MMT-EGCG scaffolds, CS/CMC/MMT-ECM had a markedly more sustained release profile.
Cells proliferation and viability on different scaffolds
The MTT assay was used to investigate the effect of the scaffolds on HUMSC viability, and the results showed that the OD570 values of the four groups were similar (Fig. 7a), suggesting that these scaffolds were not cytotoxic. To further evaluate cell viability, the HUMSCs were cultured on the different scaffolds for 3 days and characterized by F-actin filament staining. Actin bundles were spindle-shaped on all four scaffolds (Fig. 8a), indicating that the cytoskeletal structure was well organized.
a The HUMSC viability in the scaffolds after 48 h, determined by MTT assay. b The HUMSC proliferation in different scaffolds. *P < 0.05, **P < 0.01 and ***P < 0.001
a Skeleton of the HUMSCs in the scaffolds; red indicates the cytoskeleton and blue indicates the nuclei (scale bar: 200 μm). b Live/dead staining of the HUMSCs after seeding on the scaffolds for 1, 4 and 7 days; green indicates live cells and red indicates dead cells (scale bar: 100 μm)
The proliferation of the HUMSCs on these scaffolds was then investigated. The CCK8 results showed that the OD450 values of the cells on all scaffolds increased continuously during culture (Fig. 7b). Moreover, on days 3, 5 and 7, the ability of the CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds to support cell proliferation was significantly higher than that of the CS/CMC/MMT and CS/CMC/MMT-CM scaffolds. In addition, as shown in Fig. 8b, the number of cells cultured on the different scaffolds increased continuously over the 1st, 4th and 7th days of culture. Large numbers of cells with healthy polygonal morphology were present on the CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds. This was consistent with the results in Fig. 7b, indicating that the CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds maintained the activity and proliferation of the HUMSCs well.
Effect of different scaffolds on osteoblast differentiation
To investigate the influence of these scaffolds on osteoblast differentiation at the cellular and molecular levels, the HUMSCs seeded on the scaffolds were cultured for 7 and 14 days, respectively. At the cellular level, ALP activity is assumed to reflect the degree of osteogenic differentiation (Watanabe et al. 2018), and Alizarin red staining was used to assess the calcium content of the constructs (Zhou et al. 2014). As shown in Fig. 9a and b, ALP staining and quantification revealed that ALP expression was enhanced in the CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds. Alizarin red staining showed that the number of calcium nodules increased significantly when the HUMSCs were cultured in the CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds, and the quantitative calcium results (Fig. 9c) were consistent with the qualitative staining. Since the CS/CMC/MMT-ECM scaffolds clearly promoted the differentiation of the HUMSCs into osteoblasts at the cellular level, we subsequently confirmed their osteogenic role at the molecular level. The osteogenic-related genes ALP, runt-related transcription factor 2 (Runx2), osteopontin (OPN), osteocalcin (OCN) and type I collagen (Col-I) were examined by RT-qPCR. When the HUMSCs were cultured in the CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds, the expression of the intracellular osteogenic-related genes ALP, OCN, OPN, Col-I and Runx2 was significantly upregulated (Fig. 9d).
a The HUMSCs were cultured for 7 days for ALP staining and for 14 days for Alizarin red staining (scale bar: 100 μm). b Quantitative analysis of ALP activity in HUMSCs cultured for 7 days. c Calcium contents in HUMSCs cultured for 14 days. d The quantitative evaluation of osteogenic-related genes in HUMSCs
In this study, EGCG was entrapped either as ECM or as the free monomer to form CS/CMC/MMT-ECM and CS/CMC/MMT-EGCG scaffolds, respectively. The addition of ECM changed the porosity, swelling ratio and retention ratio of the scaffolds. The protein adsorption capacities of CS/CMC/MMT, CS/CMC/MMT-CM, CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM were 0.54 ± 0.06 mg, 0.63 ± 0.05 mg, 0.56 ± 0.08 mg and 0.64 ± 0.03 mg, respectively, implying that CM and ECM significantly enhanced the protein adsorption capacity of the scaffolds. In addition, compared with the CS/CMC/MMT and CS/CMC/MMT-CM scaffolds, the CS/CMC/MMT-EGCG and CS/CMC/MMT-ECM scaffolds significantly promoted the proliferation and osteoblast differentiation of the HUMSCs. Moreover, the CS/CMC/MMT-ECM scaffolds had a stronger effect on promoting osteogenic differentiation of the HUMSCs, which might be related to their more sustained release of EGCG.
All data generated or analyzed during this study are included in this published article.
MSCs: Mesenchymal stem cells
HUMSCs: Human umbilical cord-derived mesenchymal stem cells
EGCG: Epigallocatechin-3-gallate
CMC: Carboxymethyl cellulose
MMT: Montmorillonite
CM: CS (chitosan) microspheres
ECM: EGCG-encapsulated chitosan microspheres
SEM: Scanning electron microscopy
MTT: 3-(4,5-Dimethylthiazol-2yl)-2,5-diphenyltetrazolium bromide
RT-qPCR: Reverse transcriptase real-time (quantitative) polymerase chain reaction
Runx2: Runt-related transcription factor 2
OPN: Osteopontin
OCN: Osteocalcin
Col-I: Type I collagen
Azizian S, Hadjizadeh A, Niknejad H (2018) Chitosan-gelatin porous scaffold incorporated with chitosan nanoparticles for growth factor delivery in tissue engineering. Carbohydr Polym 202:315–322
Cabezas J, Rojas D, Navarrete F, Ortiz R, Rivera G, Saravia F, Rodriguez-Alvarez L, Castro FO (2018) Equine mesenchymal stem cells derived from endometrial or adipose tissue share significant biological properties, but have distinctive pattern of surface markers and migration. Theriogenology 106:93–102
Chen H, Xing X, Tan H, Jia Y, Zhou T, Chen Y, Ling Z, Hu X (2017) Covalently antibacterial alginate-chitosan hydrogel dressing integrated gelatin microspheres containing tetracycline hydrochloride for wound healing. Mat Sci Eng C 70:287–295
Chen PF, Xia C, Mo J, Mei S, Lin XF, Fan SW (2018) Interpenetrating polymer network scaffold of sodium hyaluronate and sodium alginate combined with berberine for osteochondral defect regeneration. Mater Sci Eng C-Mater Biol Appl 91:190–200
Cheng YH, Dong JC, Bian Q (2019) Small molecules for mesenchymal stem cell fate determination. World J Stem Cells 11:1084–1103
Coimbra P, Ferreira P, de Sousa HC, Batista P, Rodrigues MA, Correia IJ, Gil MH (2011) Preparation and chemical and biological characterization of a pectin/chitosan polyelectrolyte complex scaffold for possible bone tissue engineering applications. Int J Biol Macromol 48:112–118
Cui ZK, Kim S, Baljon JJ, Doroudgar M, Lafleur M, Wu BM, Aghaloo T, Lee M (2017) Design and characterization of a therapeutic non-phospholipid liposomal nanocarrier with osteoinductive characteristics to promote bone formation. ACS Nano 11:8055–8063
Fabian L, Langenbach J (2013) Effects of dexamethasone, ascorbic acid and β-glycerophosphate on the osteogenic differentiation of stem cells in vitro. World J Stem Cells 10:1023–1034
Freeman FE, Stevens HY, Owens P, Guldberg RE, McNamara LM (2016) Osteogenic differentiation of mesenchymal stem cells by mimicking the cellular niche of the endochondral template. Tissue Eng Pt A 22:1176–1190
Geisberger G, Gyenge EB, Hinger D, Kach A, Maake C, Patzke GR (2013) Chitosan-thioglycolic acid as a versatile antimicrobial agent. Biomacromol 14:1010–1017
Gu LH, Zhang TT, Li Y, Yan HJ, Qi H, Li FR (2015) Immunogenicity of allogeneic mesenchymal stem cells transplanted via different routes in diabetic rats. Cell Mol Immunol 12:444–455
Haroun AA, Gamal-Eldeen A, Harding DR (2009) Preparation, characterization and in vitro biological study of biomimetic three-dimensional gelatin-montmorillonite/cellulose scaffold for tissue engineering. J Mater Sci Mater Med 20:2527–2540
Hosogane N, Huang ZP, Rawlins BA, Liu X, Boachie-Adjei O, Boskey AL, Zhu W (2010) Stromal derived factor-1 regulates bone morphogenetic protein 2-induced osteogenic differentiation of primary mesenchymal stem cells. Int J Biochem Cell B 42:1132–1141
Hou Z, Sang SM, You H, Lee MJ, Hong J, Chin KV, Yang CS (2005) Mechanism of action of (-)-epigallocatechin-3-gallate: auto-oxidation-dependent inactivation of epidermal growth factor receptor and direct effects on growth inhibition in human esophageal cancer KYSE 150 cells. Cancer Res 65:8049–8056
Hsu SH, Wang MC, Lin JJ (2012) Biocompatibility and antimicrobial evaluation of montmorillonite/chitosan nanocomposites. Appl Clay Sci 56:53–62
Hung CN, Mar K, Chang HC, Chiang YL, Hu HY, Lai CC, Chu RM, Ma CM (2011) A comparison between adipose tissue and dental pulp as sources of MSCs for tooth regeneration. Biomaterials 32:6995–7005
Jin P, Wu H, Xu G, Zheng L, Zhao J (2014) Epigallocatechin-3-gallate (EGCG) as a pro-osteogenic agent to enhance osteogenic differentiation of mesenchymal stem cells from human bone marrow: an in vitro study. Cell Tissue Res 356:381–390
Katti KS, Katti DR, Dash R (2008) Synthesis and characterization of a novel chitosan/montmorillonite/hydroxyapatite nanocomposite for bone tissue engineering. Biomed Mater 3:34122–34125
Kevadiya BD, Rajkumar S, Bajaj HC, Chettiar SS, Gosai K, Brahmbhatt H, Bhatt AS, Barvaliya YK, Dave GS, Kothari RK (2014) Biodegradable gelatin-ciprofloxacin-montmorillonite composite hydrogels for controlled drug release and wound dressing application. Coll Surf B Biointerfaces 122:175–183
Koc DA, Elcin AE, Elcin YM (2018) Strontium-modified chitosan/montmorillonite composites as bone tissue engineering scaffold. Mater Sci Eng C Mater Biol Appl 89:8–14
Komoto D, Furuike T, Tamura H (2019) Preparation of polyelectrolyte complex gel of sodium alginate with chitosan using basic solution of chitosan. Int J Biol Macromol 126:54–59
Kong Y, Zhao Y, Li D, Shen HW, Yan MM (2019) Dual delivery of encapsulated BM-MSCs and BMP-2 improves osteogenic differentiation and new bone formation. J Biomed Mater Res A 107:2282–2295
Koshani R, Tavakolian M, Ven T (2021) Natural emulgel from dialdehyde cellulose for lipophilic drug delivery. ACS Sustain Chem Eng 9:8680–8692
Kosinski M, Figiel-Dabrowska A, Lech W, Wieprzowski L, Strzalkowski R, Strzemecki D, Cheda L, Lenart J, Domanska-Janik K, Sarnowska A (2020) Bone defect repair using a bone substitute supported by mesenchymal stem cells derived from the umbilical cord. Stem Cells Int 2020:18–29
Kulandaivelu K, Mandal AKA (2016) Improved bioavailability and pharmacokinetics of tea polyphenols by encapsulation into gelatin nanoparticles. IET Nanobiotechnol 11:469–476
Lee H, Bae S, Yoon Y (2013) The anti-adipogenic effects of (-)epigallocatechin gallate are dependent on the WNT/beta-catenin pathway. J Nutr Biochem 24:1232–1240
Leena RS, Vairamani M, Selvamurugan N (2017) Alginate/gelatin scaffolds incorporated with silibinin-loaded chitosan nanoparticles for bone formation in vitro. Colloid Surface B 158:308–318
Lei C, Liu G, Gan Y, Fan Q, Fei Y, Zhang X, Tang T, Dai K (2012) The use of autologous enriched bone marrow MSCs to enhance osteoporotic bone defect repair in long-term estrogen deficient goats. Stem Cells Int 33:5076–5084
Li Z, Ramay HR, Hauch KD, Xiao D, Zhang M (2005) Chitosan-alginate hybrid scaffolds for bone tissue engineering. Biomaterials 26:3919–3928
Li M, Xu JX, Shi TX, Yu HY, Bi JP, Chen GZ (2016a) Epigallocatechin-3-gallate augments therapeutic effects of mesenchymal stem cells in skin wound healing. Clin Exp Pharmacol Physiol 43:1115–1124
Li Q, Tan W, Zhang C, Gu G, Guo Z (2016b) Synthesis of water soluble chitosan derivatives with halogeno-1,2,3-triazole and their antifungal activity. Int J Biol Macromol 91:623–629
Liu Z, Ge Y, Zhang L, Wang Y, Guo C, Feng K, Yang S, Zhai Z, Chi Y, Zhao J, Liu F (2020) The effect of induced membranes combined with enhanced bone marrow and 3D PLA-HA on repairing long bone defects in vivo. J Tissue Eng Regen Med 14:1403–1414
Liuyun J, Yubao L, Chengdong X (2009) Preparation and biological properties of a novel composite scaffold of nano-hydroxyapatite/chitosan/carboxymethyl cellulose for bone tissue engineering. J Biomed Sci 16:65–78
Lu XL, Ding Y, Niu QN, Xuan SJ, Yang Y, Jin YL, Wang H (2017) ClC-3 chloride channel mediates the role of parathyroid hormone [1–34] on osteogenic differentiation of osteoblasts. Plos One 12:269–283
Menon AH, Soundarya SP, Sanjay V, Chandran SV, Balagangadharan K, Selvamurugan N (2018) Sustained release of chrysin from chitosan-based scaffolds promotes mesenchymal stem cell proliferation and osteoblast differentiation. Carbohyd Polym 195:356–367
Nagamura-Inoue T, He H (2014) Umbilical cord-derived mesenchymal stem cells: their advantages and potential clinical utility. World J Stem Cells 6:195–202
Neybecker P, Henrionnet C, Pape E, Grossin L, Mainard D, Galois L, Loeuille D, Gillet P, Pinzano A (2020) Respective stemness and chondrogenic potential of mesenchymal stem cells isolated from human bone marrow, synovial membrane, and synovial fluid. Stem Cell Res Ther 11:316–328
Nistor MT, Vasile C, Chiriac AP (2015) Hybrid collagen-based hydrogels with embedded montmorillonite nanoparticles. Mater Sci Eng C Mater Biol Appl 53:212–221
Pittenger MF, Discher DE, Peault BM, Phinney DG, Hare JM, Caplan AI (2019) Mesenchymal stem cell perspective: cell biology to clinical progress. Npj Regen Med 4:1278–1289
Rostami Z, Khorashadizadeh M, Naseri M (2020) Immunoregulatory properties of mesenchymal stem cells: micro-RNAs. Immunol Lett 219:34–45
Safari B, Davaran S, Aghanejad A (2021) Osteogenic potential of the growth factors and bioactive molecules in bone regeneration. Int J Biol Macromol 175:544–557
Sainitya R, Sriram M, Kalyanaraman V, Dhivya S, Saravanan S, Vairamani M, Sastry TP, Selvamurugan N (2015) Scaffolds containing chitosan/carboxymethyl cellulose/mesoporous wollastonite for bone tissue engineering. Int J Biol Macromol 80:481–488
Sato K, Mera H, Wakitani S, Takagi M (2017) Effect of epigallocatechin-3-gallate on the increase in type II collagen accumulation in cartilage-like MSC sheets. Biosci Biotechnol Biochem 81:1241–1245
Shariati A, Nemati R, Sadeghipour Y, Yaghoubi Y, Baghbani R, Javidi K, Zamani M, Hassanzadeh A (2020) Mesenchymal stromal cells (MSCs) for neurodegenerative disease: a promising frontier. Eur J Cell Biol 99:159–172
Su N, Jin M, Chen L (2014) Role of FGF/FGFR signaling in skeletal development and homeostasis: learning from mouse models. Bone Res 2:978–991
Sun XX, Shen JF, Yu D, Ouyang XK (2019) Preparation of pH-sensitive Fe3O4@C/carboxymethyl cellulose/chitosan composite beads for diclofenac sodium delivery. Int J Biol Macromol 127:594–605
Thakur G, Singh A, Singh I (2015) Chitosan-montmorillonite polymer composites: formulation and evaluation of sustained release tablets of aceclofenac. Sci Pharm 84:603–617
Ullah I, Subbarao RB, Rho GJ (2015) Human mesenchymal stem cells—current trends and future prospective. Biosci Rep 35:487–496
Vila-Parrondo C, Garcia-Astrain C, Liz-Marzan LM (2020) Colloidal systems toward 3D cell culture scaffolds. Adv Colloid Interfac 283:2497–2508
Wang DW, Wang YH, Xu SH, Wang F, Wang BM, Han K, Sun DQ, Li LX (2016) Epigallocatechin-3-gallate protects against hydrogen peroxide-induced inhibition of osteogenic differentiation of human bone marrow-derived mesenchymal stem cells. Stem Cells Inter 138:1087–1096
Watanabe J, Yamada M, Niibe K, Zhang M, Kondo T, Ishibashi M, Egusa H (2018) Preconditioning of bone marrow-derived mesenchymal stem cells with N-acetyl-L-cysteine enhances bone regeneration via reinforced resistance to oxidative stress. Biomaterials 185:25–38
Wilson CJ, Clegg RE, Leavesley DI, Pearcy MJ (2005) Mediation of biomaterial-cell interactions by adsorbed proteins: a review. TiEng 11:1–18
Wu L, Li XX, Guan TM, Chen Y, Qi CW (2020) 3D bioprinting of tissue engineering scaffold for cell culture. Rapid Prototyp J 26:835–840
Xi J, Li Q, Luo X, Li J, Guo L, Xue H, Wu G (2018) Epigallocatechin-3-gallate protects against secondary osteoporosis in a mouse model via the Wnt/β-catenin signaling pathway. Mol Med Report 18:4555–4562
Yan S, Zhang Q, Wang J, Liu Y, Lu S, Li M, Kaplan DL (2013) Silk fibroin/chondroitin sulfate/hyaluronic acid ternary scaffolds for dermal tissue reconstruction. Acta Biomater 9:6771–6782
Yang T, Cui XJ, Kao YB, Wang HY, Wen JH (2016) Electrospinning PTMC/Gt/OA-HA composite fiber scaffolds and the biocompatibility with mandibular condylar chondrocytes. Coll Surf A 499:123–130
Yang J, Zhou M, Li WD, Lin F, Shan GQ (2020a) Preparation and evaluation of sustained release platelet-rich plasma-loaded gelatin microspheres using an emulsion method. ACS Omega 5:27113–27118
Yang S, Zhu B, Yin P, Zhao LS, Wang YZ, Fu ZG, Dang RJ, Xu J, Zhang JJ, Wen N (2020b) Integration of human umbilical cord mesenchymal stem cells-derived exosomes with hydroxyapatite-embedded hyaluronic acid-alginate hydrogel for bone regeneration. Acs Biomater Sci Eng 6:1590–1602
Yao ZA, Chen FJ, Cui HL, Lin T, Guo N, Wu HG (2018) Efficacy of chitosan and sodium alginate scaffolds for repair of spinal cord injury in rats. Neural Regen Res 13:502–509
Ylostalo JH (2020) 3D stem cell culture. Stem Cells Inter 9:2178–2188
Zhang Y, Yang WX, Devit A, Beucken JJJP (2019) Efficiency of coculture with angiogenic cells or physiological BMP-2 administration on improving osteogenic differentiation and bone formation of MSCs. J Biomed Mater Res A 107:643–653
Zhang N, Zhu JY, Ma QC, Zhao Y, Wang YC, Hu XY, Chen JH, Zhu W, Han ZC, Yu H (2020) Exosomes derived from human umbilical cord MSCs rejuvenate aged MSCs and enhance their functions for myocardial repair. Stem Cell Res Ther 11:129–138
Zhou C, Lin Y (2014) Osteogenic differentiation of adipose-derived stem cells promoted by quercetin. Cell Proliferat 47:124–132
Zhou Y, Wu Y, Jiang X, Zhang X, Xia L, Lin K, Xu Y (2015) The effect of quercetin on the osteogenesic differentiation and angiogenic factor expression of bone marrow-derived mesenchymal stem cells. Plos One 10:129593–129605
This work was supported by the National Key Research and Development Program of China, 2018YFC1105800.
Jin Wang and Wubo He contributed equally to this work
State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology, Shanghai, 200237, People's Republic of China
Jin Wang, Wubo He, Wen-Song Tan & Haibo Cai
Jin Wang
Wubo He
Wen-Song Tan
Haibo Cai
JW: conceptualization, methodology, investigation, writing—original draft, writing—review and editing. WH: investigation, writing—original draft, writing—review and editing. WST: resources. HC: supervision, writing—original draft, writing—review and editing, resources. All authors read and approved the final manuscript.
Correspondence to Haibo Cai.
Ethical approval and consent to participate
All procedures carried out in this study were approved by the State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology committee and followed the guidelines for the quality control and preclinical research of stem cells preparations, China (2015).
All authors have read and approved the manuscript before submitting it to Bioresources and Bioprocessing.
There are no competing interests to report.
Wang, J., He, W., Tan, WS. et al. The chitosan/carboxymethyl cellulose/montmorillonite scaffolds incorporated with epigallocatechin-3-gallate-loaded chitosan microspheres for promoting osteogenesis of human umbilical cord-derived mesenchymal stem cell. Bioresour. Bioprocess. 9, 36 (2022). https://doi.org/10.1186/s40643-022-00513-7
Chitosan microspheres
Article | Open Access | Published: 10 May 2018
Autophagy diminishes the early interferon-β response to influenza A virus resulting in differential expression of interferon-stimulated genes
Brieuc P. Perot1,2,3,
Jeremy Boussier1,2,4,5,
Nader Yatim1,2,
Jeremy S. Rossman ORCID: orcid.org/0000-0001-6124-41036,
Molly A. Ingersoll1,2 &
Matthew L. Albert ORCID: orcid.org/0000-0001-7285-69731,2,7
Cell Death & Disease volume 9, Article number: 539 (2018)
Influenza A virus (IAV) infection perturbs metabolic pathways such as autophagy, a stress-induced catabolic pathway that crosstalks with cellular inflammatory responses. However, the impact of autophagy perturbation on IAV gene expression or host cell responses remains disputed. Discrepant results may be a reflection of in vivo studies using cell-specific autophagy-related (Atg) gene-deficient mouse strains, which do not delineate modification of developmental programmes from more proximal effects on inflammatory response. In vitro experiments can be confounded by gene expression divergence in wild-type cultivated cell lines, as compared to those experiencing long-term absence of autophagy. With the goal of investigating cellular processes within cells that are competent or incompetent for autophagy, we generated a novel experimental cell line in which autophagy can be restored by ATG5 protein stabilization in an otherwise Atg5-deficient background. We confirmed that IAV induced autophagosome formation and p62 accumulation in infected cells and demonstrated that perturbation of autophagy did not impact viral infection or replication in ATG5-stabilized cells. Notably, the induction of interferon-stimulated genes (ISGs) by IAV was diminished when cells were autophagy competent. We further demonstrated that, in the absence of ATG5, IAV-induced interferon-β (IFN-β) expression was increased as compared to levels in autophagy-competent lines, a mechanism that was independent of IAV non-structural protein 1. In sum, we report that induction of autophagy by IAV infection reduces ISG expression in infected cells by limiting IFN-β expression, which may benefit viral replication and spread.
Macroautophagy (hereafter referred to as autophagy) is a catabolic pathway conserved among eukaryotes by which cytoplasmic elements are isolated within double-membrane autophagosomes that mature by fusing with the endo-lysosomal compartment1. The elongation of autophagosome membranes requires two ubiquitin-like conjugation systems1. First, autophagy-related 5 (ATG5) is conjugated to ATG12, which is required for formation of the second complex composed of phosphatidylethanolamine (PE) conjugated to microtubule-associated protein 1 light chain-3 (LC3). Free cytosolic LC3 is referred to as LC3-I, whereas the PE-conjugated form is termed LC3-II. Autophagy occurs in all nucleated cells, playing a key role in maintaining homeostasis2. In stress conditions, such as viral infection, autophagic activity may increase3,4,5.
Autophagy can increase or decrease viral fitness depending on the virus or model system studied6. One direct antiviral action mediated by autophagy is the degradation of viral components7. Autophagy can also alter antiviral cell pathways, including programmed cell death, sensing of virus-associated molecular patterns and cytokine secretion4, 8. For example, autophagy supports hepatitis C virus (HCV) replication and negatively regulates interferon-β (IFN-β) induction during HCV infection9,10,11,12.
Influenza A virus (IAV), a member of the Orthomyxoviridae family, causes yearly epidemic infections and sporadic pandemics. IAV-related symptoms are mainly the result of excessive inflammation including high levels of pro-inflammatory cytokines13. IAV virus-associated molecular patterns are sensed by Toll-like receptors, nucleotide oligomerization domain-like receptors and retinoic acid-induced gene I (RIG-I)-like receptors, which induce type I IFN and pro-inflammatory cytokine secretion14. IAV has evolved mechanisms to antagonize innate immune responses in infected cells, mainly via non-structural protein 1 (NS1), which inhibits RIG-I signalling14. While IAV stimulates autophagy, its matrix protein 2 (M2) has been proposed to block the maturation of autophagosomes, although this finding remains disputed15,16,17,18.
We investigated the impact of IAV-mediated autophagy perturbation on the host cell response to infection. We designed our study to circumvent limitations of techniques commonly used to study autophagy. Notably, chemical treatments used to manipulate autophagy impact other biological processes. For example, rapamycin, used to induce autophagy, inhibits the kinase activity of the mammalian target of rapamycin, impacting transcription, translation and mitochondrial metabolism19. Transfection of small interfering RNAs (siRNAs) to suppress autophagy genes can activate innate signalling pathways in a structure- or sequence-dependent manner20. Knockout (KO) or siRNA knockdown cell lines are subject to genetic drift, with compensatory mutations resulting in unanticipated off-target effects when compared to wild-type (WT) cell lines21,22,23. Finally, the ATG5 tet-off cell system is prone to bias due to the requirement of long-term exposure to doxycycline to repress autophagy24. Notably, doxycycline and related antibiotics can alter mitochondrial function, inflammation, proliferation, metabolism and, in some instances, induce cell death25,26,27,28,29,30,31,32,33.
We generated a new experimental model in which the capacity to undergo autophagy can be controlled through drug-induced stabilization of critical components of the autophagy pathway that are otherwise targeted for degradation. Importantly, this model does not induce autophagy but instead restores the capacity of a cell to undergo autophagy. We observed that autophagy was dispensable for IAV replication, but cells lacking a functional autophagy pathway had an enhanced type I IFN-induced inflammatory response at early time points post-infection. Together, our findings clarify the interplay of IAV infection, autophagy and host response. Moreover, the experimental model presented herein will establish a new path towards validating the role of autophagy during inflammatory processes.
A novel model to initiate autophagy through the induced stabilization of ATG5
Many experimental systems used to study autophagy result in off-target effects due to the disruption of bystander pathways. To avoid potential confounding artefacts, we generated novel expression systems and cell lines in which autophagy can be controlled through the induced stabilization of ATG5. We generated clonal populations of Atg5–/– mouse embryonic fibroblasts (MEFs) that stably expressed the ATG5 protein fused to a destabilization domain (ATG5DD), which is known to be rapidly degraded by the proteasome (Fig. 1a)34. The small, biologically inert, cell-permeable molecule, Shield1, interacts with the destabilization domain, preventing its degradation by the proteasome34, 35. Addition of Shield1 to Atg5–/– MEFs expressing ATG5DD rescued the degradation of the ATG5DD fusion protein (Fig. 1b). Notably, accumulated ATG5DD protein was primarily found within ATG5–ATG12 complexes, supporting that the fusion protein retained this function (Fig. 1b). Intra-incubator microscopy was used to measure cell confluence over time, with or without the addition of Shield1, revealing that ATG5DD accumulation, in nutrient-rich conditions, did not impact the kinetics of cell growth (Fig. 1c).
Fig. 1: Stabilization of ATG5 in Atg5–/– cells enables experimental control of autophagy.
a Schematic representation of Shield1 (Sh1) stabilization of ATG5 illustrates the rescue of destabilization domain (DD)-fused ATG5 (ATG5DD). b ATG5DD-expressing Atg5–/– cells were treated with ethanol vehicle (∅) or Sh1 for 20 h, followed by immunoblot analysis with anti-ATG5 antibody. c ATG5DD-expressing Atg5–/– cells were treated with Sh1 and images were obtained every hour for 60 h to assess cell growth. Points depict mean confluence at each time point and error bars depict standard deviation. d ATG5DD-expressing cells were treated for the indicated times with Sh1 or vehicle (∅). Protein extracts were subjected to immunoblot analysis using anti-LC3 and anti-GAPDH antibodies. e In the presence or absence of Sh1, cells were exposed to serum deprivation, an inhibitor of the mammalian target of rapamycin (PP242), a proton pump inhibitor (chloroquine, CQ) or a proteasome inhibitor (MG132). Wild-type (WT) and Atg5–/– MEFs (Atg5–/–) were used as positive and negative controls, respectively. After 4 h of culture, protein extracts were subjected to immunoblot analysis using anti-p62, anti-LC3 and anti-GAPDH antibodies. f ATG5DD cells, pretreated or not with Shield1 (Sh1), were infected with GFP-expressing chikungunya virus at an MOI of 0.1. The number of green cells was monitored through live imaging. Graph shows mean and standard deviation of three biological replicates, and data are representative of three experiments. ns, not significant; *q < 0.05, **q < 0.01 (two-tailed unpaired t-test followed by Holm's multiple testing correction). g–k ATG5DD expression was stabilized by Sh1 treatment and cells were infected with influenza A virus (IAV). After 20 h, cells were fixed, permeabilized and stained using anti-NP antibody to quantify IAV infection and anti-LC3 antibody to visualize autophagy puncta. Imaging flow cytometry permitted gating based on NP expression (representative dot plot, g) and quantification of autophagic vesicles using the bright detail intensity R3 (BDI R3, histogram, h). i Three representative images of single cells with three different BDI R3, corresponding to the indicated numbered arrows in h, are shown. The topography of LC3 within cells is shown in red. Scale bar, 10 μm. j Graphs plot mean percentage of cells with high autophagic vesicle content and standard deviation; analysis is based on images captured from >10,000 cells per experiment (n = 2 experiments). k Cells were analysed by immunoblot for p62, LC3 and GAPDH expression
We next tested whether ATG5DD stabilization restored the ability of the cell to undergo autophagy, as determined by the conversion of LC3-I to LC3-II, a measure of autophagosome and autophagolysosome accumulation36. Shield1 rescued basal autophagy as early as 1 h post addition (Fig. 1d). As expected, untreated ATG5DD cells were phenotypically similar to the parental Atg5–/– cell line. Both lines exhibited low levels of LC3-II conversion; however, following Shield1 treatment, modest levels of LC3-II could be detected in the ATG5DD cell line, similar to the levels of autophagy in WT cells (Fig. 1e). Furthermore, we observed that inducing autophagy by serum starvation or PP242 treatment or inhibiting autophagolysosome function using chloroquine led to increased LC3-II/LC3-I ratios within Shield1-treated cells (Fig. 1e). We also measured p62 expression, an adaptor protein that is degraded in the course of autophagy and whose accumulation in the presence of high levels of LC3-II is indicative of abortive autophagy36. We demonstrated that p62 levels were reduced following Shield1 treatment, with further reduction observed in cells exposed to serum starvation or PP242 (Fig. 1e). As expected, p62 was not degraded in autophagy-competent cells that were treated with chloroquine, which blocks autophagosome fusion with lysosomes36. As the DD is expected to lead to proteasomal degradation of ATG5DD, we tested whether proteasome inhibition by MG132 would result in Shield1-independent ATG5DD accumulation and rescue of autophagy competence. Indeed, MG132 treatment resulted in p62 degradation in both control and Shield1-treated cells (Fig. 1e). Together, these data established that ATG5DD was degraded by the proteasome and that Shield1 treatment restored the capacity to undergo autophagy in ATG5DD-expressing Atg5–/– cells. We validated that our cell-based system rescued autophagy by testing whether chikungunya virus (CHIKV) propagation was inhibited upon Shield1 treatment (Fig. 1f)37.
We next assessed autophagy in the context of IAV infection, exposing ATG5DD cells to IAV A/PR/8/1934 (H1N1, PR8 strain) for 20 h in the presence or absence of Shield1. Using imaging flow cytometry, we delineated infected and uninfected cells (Fig. 1g) and measured autophagic cells within each population (Fig. 1h). IAV-induced LC3 puncta accumulated in IAV-infected cells (green) but not in uninfected cells (purple) (Fig. 1i, j). Overall, LC3-II levels increased when Shield1 was present during infection (Fig. 1k). Notably, p62 was not degraded following Shield1 treatment during infection, consistent with previous reports that IAV inhibits autophagosome maturation17.
Autophagy does not impact viral infection or replication
As autophagy impacts viral replication in several infectious models7, 38, we used our inducible cell lines to investigate the influence of autophagy on IAV replication. We found that the IAV RNA content of infected samples at 5 h post-infection was not different between control and Shield1-treated cells (Fig. 2a). The expression of nucleoprotein (NP) in infected cells was measured by flow cytometry (Fig. 2b). The percentage of NP-expressing cells was similar between control and Shield1-treated cells 16 h post-infection, at low and higher multiplicities of infection (MOIs) (Fig. 2c). Moreover, the intensity of NP expression within IAV-infected cells was unchanged (Fig. 2d). M2 protein was also expressed at similar levels (Fig. 2e, f). Finally, haemagglutinin protein and NP RNA levels in supernatants were similar in autophagy-competent and -incompetent cells (Fig. 2g, h). Of note, low levels of LC3-II were observed in the absence of Shield1 when cells were infected by IAV (Fig. 1j). This is likely a result of lower levels of proteasome activity, which in turn permitted modest ATG5DD accumulation. To confirm that low levels of autophagy were not required for IAV replication, we infected Atg5–/– MEFs, showing that autophagy is indeed dispensable for infection (Fig. 2i). Taken together, these data demonstrate that restoration of autophagy did not impact IAV infection or replication.
Fig. 2: ATG5 stabilization does not impact IAV replication.
a Following 5 h infection, with or without Shield1 (Sh1) pretreatment, IAV RNA expression levels were determined using RT and qPCR primer/probe sets specific for the NP, NS1, PB1, M1 and M2 genes. b–d ATG5DD cells, with or without Sh1 treatment, were infected with IAV for 16 h at the indicated MOI. The gating strategy after nucleoprotein (NP) immunostaining and flow cytometry is shown for two samples: uninfected, and infected at an MOI of 7 (b). The percentage of NP-expressing cells (c) and the geometric mean fluorescent intensity (GMFI) of NP per cell (d) were determined by flow cytometry. e, f IAV M2 expression was determined using flow cytometry (e) and immunoblotting (f) following 16 or 20 h infection at the indicated MOI, respectively. g ATG5DD cells, pretreated or not with Shield1 (Sh1), were infected for 16 h at the indicated MOI. NP RNA expression in supernatants was analysed by RT–qPCR. h ATG5DD cells, pretreated or not with Shield1 (Sh1), were infected at MOI 3 for the indicated times; haemagglutination assays were used to quantify the number of haemagglutinin units (HAUs) per millilitre of supernatant. i Atg5+/+ or Atg5–/– MEFs were infected with IAV at the indicated MOI for 16 h. The percentage of NP-expressing cells was determined by flow cytometry. Graphs show mean and standard deviation of biological triplicates, and data are representative of two experiments. ns, not significant (one-tailed unpaired t-test followed by Holm's multiple testing correction)
Autophagy limits interferon-stimulated gene (ISG) induction independently of the key IAV anti-type I IFN protein NS1
We next wanted to determine whether autophagy impacts the host response to IAV. To test whether immune pathways were impacted by autophagy, we used the NanoString nCounter technology, a method that allows the quantitative measurement of single mRNA molecules, without the need for RT or amplification. We measured RNA from a set of 561 immunology-related genes, including cytokine and Toll-like receptor-related pathways (Supplementary Figure S1). ATG5DD cells, pretreated or not with Shield1 for 16 h, were infected with IAV for 4 or 12 h. Raw counts were normalized to the geometric mean value of five internal control genes (Ppia, Gapdh, Rpl19, Oaz1 and Polr2a), selected based on the application of the geNorm method39 (Supplementary Figure S2 and Materials and methods). From the results of three independent experiments, we found that the inability to undergo autophagy during infection led to higher expression of many pro-inflammatory genes as compared to autophagy-competent cells, including established ISGs, such as Psmb10, Cd274, Cxcl10, Irf1 and Tap1 (Fig. 3a and Supplementary Table S1). Of note, two members of the class I major histocompatibility complex (MHC I) pathway, Psmb10 and Tap1, were decreased in autophagy-competent cells (Fig. 3a). To identify biological processes and signalling pathways, rather than single genes, impacted by autophagy capacity in a robust fashion, we performed a gene set enrichment analysis (GSEA)40. Among the 129 gene sets tested (corresponding to sets from databases that included at least five genes measured in this experiment and with expression levels consistently above the lower limit of quantification), we identified four pathways that were significantly enriched in genes differentially expressed by autophagy-competent (Shield1-treated) vs. autophagy-deficient (control vehicle-treated) infected cells (Fig. 3b), ranked by decreasing order of enrichment: IFN signalling; IFN-α/β signalling; cytokine signalling; and IFN-γ signalling. Owing to shared gene expression among IFN-γ and IFN-α/β signalling pathways, and given that GSEA relies on unweighted gene set lists, this method is not suitable for distinguishing between type I and type II IFN responses. We, therefore, implemented a more quantitative approach, using recently published data comparing the gene signature of IFN-β- and IFN-γ-stimulated whole blood, analysed by the same nCounter technology41. The genes most significantly impacted by autophagy and present in the gene sets of both our study and the whole-blood approach (44 genes, t-test p-value < 0.05, Supplementary Table S1) were weighted by their t-statistic, which gave rise to an "autophagy" vector lying in a 44-dimensional feature space. We then used data from control, IFN-β or IFN-γ whole-blood stimulation of 25 healthy donors to create two new vectors by weighting each gene by its t-statistic (paired t-test, control vs. IFN-β or control vs. IFN-γ). All vectors were normalized to length 1, and the "autophagy" vector was projected onto the "IFN-β" and "IFN-γ" vectors (Fig. 3c). The scalar product 〈autophagy, IFN-β〉 was found to be greater than the 〈autophagy, IFN-γ〉 scalar product. Bootstrapping over the 25 donors from the whole-blood study confirmed the robustness of this result, as the difference between the two scalar products showed a consistent positive value (95% confidence interval = (0.014, 0.12)), suggesting that the autophagy signature was more characteristic of stimulation by IFN-β than by IFN-γ (Fig. 3d).
Moreover, Ifnb1 was the only IFN gene consistently detected in our cellular model after infection (Supplementary Figure S3). These results indicate that autophagy modulates IFN-β-stimulated genes following IAV infection.
Fig. 3: IAV-induced expression of type I IFN-stimulated genes is reduced when ATG5DD is stabilized.
a ATG5DD cells, pretreated for 16 h with Shield1 (Sh1), were infected with IAV PR8 at MOI 3 for 4 or 12 h followed by RNA extraction. mRNA levels of 561 genes (see Materials and methods) were quantified using Nanostring nCounter technology. Volcano plots show the p-value determined by two-tailed paired t-tests and fold change of gene expression in control vs. Sh1-treated cells. Iso z-value curves are depicted. Data were generated from three independent experiments. b Gene set enrichment analysis was performed after ranking genes according to their differential expression in control vs. Sh1-treated samples (see Materials and methods for computation of the t-statistic). Shown are normalized enrichment scores and p-values, computed by the GSEA method40. Each point represents a gene set (the 40 most enriched gene sets are shown), and sets with an enrichment false-discovery rate <0.2 are coloured and labelled. c The 44 genes most significantly impacted by Sh1 treatment (paired t-test p-value <0.05) were selected and weighted by their t-statistic, which gave rise to an "autophagy" vector lying in a 44-dimensional space (red). Data from whole-blood stimulation were used to create an IFN-β vector (control vs. IFN-β treatment t-statistic, green) and an IFN-γ vector (control vs. IFN-γ treatment t-statistic, purple), after which all vectors were normalized to length 1. d Bootstrapping over the 25 donors of the whole-blood study was performed and the difference 〈autophagy, IFN-β〉−〈autophagy, IFN-γ〉 between the two scalar products was computed for each iteration. Plotted is the distribution of the differences, with a 95% confidence interval of (0.014, 0.12)
We next tested whether IAV-induced autophagy impacted the inflammatory response at the protein level. During IAV infection, C-X-C motif chemokine ligand 10 (CXCL10) secretion was decreased by ATG5DD stabilization, with differential expression between Shield1-treated and -untreated cells detected as early as 5 h (Fig. 4a). Furthermore, surface expression of CD274 and the class I major histocompatibility protein H-2Kb was reduced by ATG5DD stabilization 16 h post-infection (Fig. 4b, c). Of note, surface expression of H-2Kb was also significantly, although to a lesser extent, impacted by autophagy in uninfected cells. Additionally, we measured PSMB10 expression, a subunit of the immunoproteasome, which was also reduced in autophagy-competent cells (Fig. 4d). All three of these proteins are induced by type I IFN42,43,44,45 and support the conclusion that IAV-induced autophagy negatively regulates IFN-β-induced inflammatory responses. We confirmed that Cd274 was indeed an ISG by blocking IFN-α/β receptor (IFNAR) signalling through the use of a neutralizing anti-IFNAR1 antibody (Fig. 4e). Cd274 was the most differentially expressed ISG at the RNA level. Therefore, we used this molecule, as well as H-2Kb expression, as functional readouts of the impact of autophagy for the remainder of our study. As autophagy limits vesicular stomatitis virus (VSV)-induced inflammation through dampening of cellular reactive oxygen species (ROS) content46, we tested whether restoring autophagy capacity led to changes in cellular ROS content. Measurement of both total and mitochondrial ROS revealed that Shield1 treatment did not impact ROS content, arguing that differences in ROS are not responsible for the hyperinflammatory phenotype of autophagy-incompetent cells (Supplementary Figure S4). We next investigated whether NS1, a key negative regulator of the type I IFN response47, played a role in the suppression of ISGs following infection. We infected cells with wild-type and ΔNS1 IAV PR8 in the presence or absence of Shield1. Stabilization of ATG5DD decreased CD274 expression following infection with either viral strain (Fig. 4f).
Fig. 4: ISG expression levels are suppressed by autophagy machinery during IAV infection.
a ATG5DD cells, pretreated or not with Shield1 (Sh1) for 16 h, were infected with IAV at the indicated MOIs for 5 h. CXCL10 concentration in the supernatants was measured by ELISA. b, c ATG5DD cells, pretreated or not with Sh1 for 20 h, were infected with IAV at the indicated MOIs for 16 h. Surface expression of CD274 (b) and H-2Kb (c) was measured by flow cytometry. d ATG5DD cells, pretreated or not with Sh1 for 20 h, were infected with IAV PR8 at the indicated MOIs for 20 h before analysing PSMB10 expression by immunoblot. e ATG5DD cells, pretreated or not with Sh1 for 16 h and with anti-IFNAR1 antibody for 1 h, were infected for 5 h at the indicated MOIs and Cd274 expression was assayed by RT–qPCR (calculated as 2^(CtHprt1−CtCd274)). f ATG5DD cells, pretreated or not with Sh1 for 20 h, were infected with PR8 or ΔNS1 PR8 at the indicated MOIs for 16 h before measuring CD274 surface expression by flow cytometry. g, h ATG5K130RDD (g) or ATG7DD (h) cells were treated as in b and c before monitoring of surface CD274 expression by flow cytometry. a–c, e–h Graphs show mean and standard deviation of three biological replicates, and data are representative of three experiments. ns, not significant; *q < 0.05, **q < 0.01, ***q < 0.001, ****q < 0.0001 (one-tailed unpaired t-test followed by Holm's multiple testing correction)
ATG5 mediates several non-autophagy-related phenotypes48. We therefore generated an Atg5–/– cell line that stably expressed the mutant ATG5 molecule, ATG5K130RDD. Lysine 130 of ATG5 is required for conjugation to ATG12; thus ATG5K130R cannot form the ATG5–ATG12 complex. We confirmed that ATG5K130RDD accumulated upon Shield1 treatment (Supplementary Figure S5). However, as expected, we did not detect a band corresponding to the ATG5–ATG12 complex in Shield1-treated cells. LC3-II was also undetectable (Supplementary Figure S5). We infected ATG5K130RDD cells in the presence or absence of Shield1 and observed that lysine 130 of ATG5 was required for suppression of CD274 expression (Fig. 4g). We also established Atg7–/– cell lines stably expressing ATG7DD, permitting controlled regulation of a distinct step in autophagy. The ATG7 protein is required for ATG5 conjugation to ATG12 during autophagosomal membrane elongation49. Confirming that ATG7 was stabilized, Shield1 treatment permitted ATG5–ATG12 complex formation, rendering the cell line competent for autophagy as measured by the LC3-II/LC3-I ratio (Supplementary Figure S5). Validating our findings in the ATG5DD cell line, we demonstrated that ATG7DD stabilization also resulted in reduced CD274 expression in infected cells (Fig. 4h). Together, we conclude that ATG5–ATG12 complex formation is a key step in limiting ISG induction in response to IAV infection.
Early post-infection, virus-induced autophagy decreases IFN-β expression leading to diminished ISG expression
Based on the differential expression of ISGs, we considered two possible hypotheses: that autophagy negatively impacted IFNAR signalling or, alternatively, that IAV-infected, autophagy-competent cells produced less IFN-β. To address the first possibility, we directly tested whether autophagic flux impacted IFNAR signalling. Following treatment with Shield1 or vehicle for 16 h, ATG5DD cells were exposed to increasing concentrations of recombinant IFN-β. CD274 and H-2Kb expression was equally increased following IFN-β treatment, suggesting that ATG5DD stabilization did not alter IFNAR signalling (Fig. 5a, b). To test our alternate hypothesis, we treated cells with IFNAR blocking antibody and observed by reverse transcription–quantitative polymerase chain reaction (RT–qPCR) that ATG5DD stabilization inhibited Ifnb1 expression independently of IFNAR signalling (Fig. 5c). Importantly, we previously determined that RT–qPCR was sensitive enough to detect fold changes as small as 1.4 with 80% power using triplicate samples, thus permitting confirmation of the results we observed in the nCounter analysis (Supplementary Figure S6). ATG5DD stabilization resulted in reduced IFN-β expression as early as 1.5 h post-infection (Fig. 5d). Early IFN-β expression in the context of virus infection relies on nuclear factor-kappaB (NF-κB) activation50. To determine the upstream signalling events impacted by autophagy, we measured NF-κB activation through IκBα degradation and an NF-κB reporter assay, observing that autophagy limited IAV-induced NF-κB activation (Fig. 5e–g). We then tested whether autophagy was induced by IAV at this early time point. The LC3-II/LC3-I ratio was increased at 1 h post-infection, correlating with decreased levels of p62 when Shield1 was present. These findings argue that autophagy was rapidly induced and that IAV-mediated inhibition of autophagosome maturation occurred later in the viral life cycle (Fig. 6a, see also Fig. 1j). Interestingly, the ATG5–ATG12 complex limits VSV-induced type I IFN51. We assessed whether the ATG5–ATG12 complex was sufficient to modulate type I IFN or, alternatively, whether the effect required autophagosome maturation. We treated cells with thapsigargin, which blocks autophagosome maturation without preventing ATG5–ATG12 conjugation52. As expected, thapsigargin treatment potentiated LC3-II accumulation and prevented p62 degradation in cells where ATG5DD was stabilized (Fig. 6b) but did not inhibit ATG5DD–ATG12 complex formation (Fig. 6b). Notably, treatment with thapsigargin abrogated the negative impact of autophagy on the expression of Ifnb1 and induced expression of CD274 (Fig. 6c, d). Thus we concluded that induction of autophagy by IAV at early time points in the viral life cycle inhibits IFN-β induction. This inhibitory effect was independent of NS1 and required autophagosome maturation.
Fig. 5: ATG5DD stabilization inhibits ISG expression via cell-intrinsic modulation of IFN-β expression and not through desensitization to IFNAR signalling.
a, b ATG5DD cells, pretreated or not with Shield1 (Sh1), were treated with recombinant mouse IFN-β at the indicated concentration for 6, 10 or 20 h, followed by surface expression analysis of CD274 (a) and H-2Kb (b) by flow cytometry. c ATG5DD cells were pretreated with anti-IFNAR1 antibodies followed by IAV infection. Ifnb1 expression was assayed by RT–qPCR. d In independent experiments, Ifnb1 induction was measured at 1.5 and 3.5 h post-infection in autophagy-competent (Sh1) vs. autophagy-null cells (∅). e, f ATG5DD cells, pretreated or not with Shield1 (Sh1), were infected with IAV for 1 h before monitoring IκBα degradation by immunoblot (e) and quantification with ImageJ (f). g ATG5DD cells transfected with an NF-κB transcription activity GFP reporter for 20 h were pretreated or not with Shield1 (Sh1) and infected with IAV. An Incucyte intra-incubator microscope allowed monitoring of GFP-positive cells. a–d, f, g Graphs show mean and standard deviation of three (a–d, f) or four (g) biological replicates, and data are representative of three experiments. ns, not significant; *q < 0.05, **q < 0.01 (one-tailed unpaired t-test followed by Holm's multiple testing correction)
Fig. 6: Autophagy must go to completion to inhibit IFN-β expression.
a ATG5DD cells were pretreated or not with Sh1 for 16 h and infected with IAV at the indicated MOIs. One hour post-infection, p62, LC3-I, LC3-II and GAPDH levels were measured by immunoblot. Three biological replicates were run for infected control or Sh1-treated samples. b ATG5DD cells were pretreated or not with Sh1 for 16 h and infected with IAV at the indicated MOIs, with or without thapsigargin (Thaps) for 5 h. p62, LC3-I, LC3-II, ATG5–ATG12 and GAPDH levels were measured by immunoblot. c ATG5DD cells were pretreated or not with Sh1 and infected with IAV at the indicated MOIs, with or without thapsigargin. Three hours post-infection, Ifnb1 expression was assayed by RT–qPCR. d ATG5DD cells were pretreated or not with Sh1 and infected with IAV. Sixteen hours post-infection, surface CD274 expression was monitored by flow cytometry. c, d Graphs show mean and standard deviation of three biological replicates, and data are representative of three experiments. ns, not significant; *q < 0.05, **q < 0.01, ***q < 0.001 (one-tailed unpaired t-test followed by Holm's multiple testing correction)
Our study describes the generation of a cellular model in which the capacity of a cell to undergo autophagy can be dynamically restored within 1 h after addition of a small, immunologically inert molecule. Currently available models, on the contrary, do not enable rapid manipulation of the autophagy pathway and are confounded by off-target effects24,25,26,27,28,29,30,31,32,33, 53,54,55. The development of a rapid autophagy restoration system permitted us to dissect the impact of autophagy perturbation on inflammatory responses to viral infection. One important caveat, however, is that, under selected conditions, we observed low levels of ATG5DD and autophagic activity in cells in which we did not rescue the destabilized ATG5DD. However, cells in which ATG5DD was stabilized through Shield1 had autophagy levels that were comparable to that of WT MEFs, and untreated cells exhibited a marked decrease in autophagic activity.
Using this model system, we demonstrated that the absence of ATG5 did not impact IAV infectivity or replication, yet the induction of ISGs, such as CD274 and MHC I, was diminished as a result of IAV-induced autophagy. With respect to regulation of MHC I, we suggest that the negative impact of autophagy (secondary to ATG5DD stabilization) is mediated by decreased expression of two MHC I presentation pathway genes, Psmb10 and Tap1 (see also Table S1). These findings are in agreement with a report that autophagy-deficient dendritic cells express higher levels of MHC I on their surface at steady state and more efficiently prime anti-IAV-specific CD8+ T-cell responses56. Of note, even though autophagy limited the induction of ISGs, we did not detect any impact on IAV replication at later time points. This may be a reflection of a key anti-IAV ISG, MX1, being inactive in our cells57, 58. Alternatively, we may not observe the impact of ISGs due to our experimental approach focussing on the first cycle of replication, a direct result of MEFs lacking the protease activity required to cleave haemagglutinin and generate replication-competent viral progeny59, 60.
The decreased expression of key proteins involved in immune regulation correlated with decreased Ifnb1 expression and lower NF-κB activation. Interestingly, the best-described mechanism by which IAV inhibits pattern recognition receptor signalling and subsequent IFN-β and ISG induction is through the action of NS161,62,63. Using IAV lacking expression of NS1, we confirm that the mechanisms reported here are novel and likely apply to early inflammatory responses occurring prior to NS1 expression. Importantly, while we showed that autophagy did not require NS1 to reduce IFN-β activity, inhibition of IFN-β by NS1 may rely in part on NS1-mediated autophagy stimulation in the infected cell64. Of note, even though we observed that autophagy competency did not impact the cell response to IFN-β within IAV-infected cells, autophagy perturbation may impact other cytokine pathways.
Future studies will be important to determine the role of autophagy during the early phases of infection. As the inflammatory response to IAV impacts viral propagation and symptom severity, we suggest that autophagy-mediated suppression of inflammation represents a host mechanism to limit acute pathology.
Cell lines, cell growth media, treatments and viruses
Atg5+/+ cells and Atg5–/– MEFs were obtained from Christian Munz, University of Zurich, Switzerland. Atg7–/– MEFs were obtained from Stephen Tait, University of Glasgow. All cell lines in this study were cultivated in complete growth medium, comprised of Dulbecco's modified Eagle's medium with high glucose, pyruvate and GlutaMAX and supplemented with non-essential amino acids, HEPES buffer, penicillin/streptomycin (all reagents, ThermoFisher Scientific) and 10% foetal calf serum (GE Healthcare, A15-502). Shield1 (Clontech, 632188) was added at a final concentration of 1 µM (stock maintained at 1 mM) in growth media. Ethanol, the solvent for Shield1, was used as a control at a 1:1000 dilution in growth media. Treatments were used at the following concentrations: chloroquine (Sigma, C6628), 50 µM; PP242 (Selleck Chemical, S2218), 1 µM; MG132 (Sigma, C2211), 10 µM; recombinant IFN-β, at the indicated concentrations (BioLegend, 581302); and thapsigargin (Sigma, T9033), 3 µg/mL. Blocking IFNAR antibody (BD Pharmingen, 561183) or isotype control (BD Pharmingen, 553447) was used at 10 µg/mL and added to culture media 1 h before infection for the duration of the experiment. Influenza A/PR/8/1934 (PR8) and ΔNS1 PR8 were purchased as purified allantoic fluid or purified antigen, respectively, from Charles River Laboratories (Spafas, CT, USA).
Lentivirus production and clonal stably modified cell line generation
The pLVX pTuner lentiviral vector (Clontech) carries a puromycin resistance gene and a destabilization domain at the 5′ end of the multiple cloning site. Mutagenesis PCR allowed the introduction of an AgeI site and three glycine codons (to increase flexibility between the DD and the ATG5 protein) at the 5′-terminus of the ATG5 coding sequence in the pmCherry–ATG5 plasmid (Plasmid #13095; Addgene). Mutagenesis PCR was performed using Phusion polymerase (ThermoFisher Scientific) following the manufacturer's protocol and using primers: ACCGGTGGAGGAGGAACAGATGACAAAGATGTGCTT and CATGGTACCGTCGACTG. The ATG5DD coding insert was then cleaved with AgeI and BamHI restriction enzymes and ligated into the pLVX pTuner lentiviral vector (all enzymes, New England Biolabs). The final plasmid (pATG5DD) was confirmed to have the expected sequence.
For the generation of the ATG5K130RDD coding plasmid, mutagenesis PCR was performed using Phusion polymerase to change codon 130 from AAA to CGA in the pATG5DD plasmid with the primers: CGAGAAGCTGATGCTTTAAAGCA and ATACACGACATAAAGTGAGCC. pBabeATG7DD, a retroviral vector coding for ATG7DD, was a generous gift from Douglas Green, St. Jude Children's Research Hospital and Stephen Tait, University of Glasgow. Lentiviruses (from pATG5DD and pATG5K130RDD) and retroviruses (pATG7DD) were produced and Atg5–/– or Atg7–/– MEFs were infected. Puromycin was used at 4 µg/mL for 1 week for selection (ThermoFisher Scientific, A11138-03) before single-cell cloning and phenotyping were performed.
Virus infection
For infection, adherent cells were washed with growth medium without foetal calf serum. Growth medium, without serum, containing IAV or CHIKV was added to cells and plates were incubated at 37 °C for 1 h with gentle shaking every 15 min. Cells were then washed with complete growth medium before adding fresh complete growth medium with or without additional treatments.
Immunoblotting
Cells were harvested by trypsinization, centrifuged and resuspended at 100 µL per million cells in lysis buffer: 1% Nonidet P40 substitute (Sigma, 74385) with protease inhibitor (Sigma, 11836145001). Following lysis on ice and clarification by centrifugation, protein concentration in the resulting supernatants was determined by BCA assay (ThermoFisher Scientific, 23225) according to the manufacturer's guidelines. In all, 30 µg of protein per sample were prepared in Lithium Dodecyl Sulphate sample buffer (ThermoFisher Scientific, NP0007) with dithiothreitol (20 mM final concentration) and loaded on a 4–12% gradient polyacrylamide gel (Biorad, 3450124). Following transfer, polyvinylidene fluoride membranes (BioRad, 1704157) were blocked for 1 h in 5% w/v non-fat dry milk in TBS with 0.05% Tween 20 (Sigma P5927) (blocking solution). Membranes were then incubated for 15 h in a 1:1000 dilution of the following antibodies in blocking solution with gentle shaking at 4 °C: anti-ATG5 (Abcam, ab108327), anti-GAPDH (CST, 2118), anti-LC3B (CST, 2775 S), anti-p62 (CST, 5114), anti-M2 (Abcam, ab5416), anti-IκBα (CST, 9242), and anti-PSMB10 (Abcam, ab77735) antibodies. Membranes were then incubated in a 1:1000 dilution of horseradish peroxidase-conjugated anti-mouse antibody (CST, 7076 S) or anti-rabbit antibody (CST, 7074 S) in blocking solution for 1 h at room temperature with gentle shaking. Membranes were developed with SuperSignal Enhanced Chemiluminescence Substrate (ThermoFisher Scientific, 34080) according to the manufacturer's guidelines and exposed to film.
Intra-incubator microscopy
To monitor cell growth, cells were plated at 10,000 cells per well in 24-well plates. To monitor infection of cells by CHIKV harbouring a green fluorescent protein (GFP) coding sequence after the subgenomic promoter (a kind gift of Marco Vignuzzi, Institut Pasteur), 200,000 cells were plated in 12-well plates 24 h before infection. To monitor NF-κB transcription factor activity, cells were transfected with the pNF-κB–GFP plasmid (a kind gift from Eliane Meurs, Institut Pasteur) using Lipofectamine 2000 (ThermoFisher Scientific, 11668027) according to the manufacturer's guidelines 24 h before plating (200,000 cells in 12-well plates) and 40 h before infection. Images were taken every hour using the Incucyte ZOOM System and analysis was performed using the Incucyte ZOOM software (EssenBioScience).
Flow cytometry and imaging flow cytometry
Cells were harvested and washed as for immunoblotting. After washing, cells were incubated for 20 min in 1:500 dilution of LIVE/DEAD Fixable Violet Dead Cell Stain (ThermoFisher, L34955) in phosphate-buffered saline (PBS), then 30 min in 1:100 antibody dilution for surface staining, all at 4 °C. Antibodies included allophycocyanin-conjugated anti-mouse CD274 (BD Pharmingen, 564715) or PE-conjugated anti-mouse H-2Kb (BD Pharmingen, 553570).
For intracellular staining (ICS), cells were fixed using BD Cytofix/Cytoperm Fixation and Permeabilization Solutions (BD Biosciences, 554722). Cells were then washed twice in BD Perm/wash buffer (PWB, BD Biosciences, 554723). Immunostaining was performed at 4 °C for 45 min with a 1:100 dilution of fluorescein isothiocyanate (FITC)-conjugated anti-IAV NP antibody (Abcam, ab20921) or a 1:500 dilution of anti-IAV M2 protein antibody (Abcam, ab5416) in PWB. For M2 staining, cells were then incubated in a 1:500 dilution of Alexa Fluor 647 goat anti-mouse IgG (H+L) (Jackson ImmunoResearch, 115-606-062) for 45 min at 4 °C. Cells were then washed twice in PWB and once in PBS before resuspension in PBS and acquisition using a BD LSR Fortessa flow cytometer. Data analysis was performed using FlowJo 9 (Flowjo, LLC).
For imaging flow cytometry, the antibodies used for ICS were: mouse anti-LC3 antibody (MBL International) at 1:500 dilution and rabbit FITC-conjugated anti-IAV NP antibody (Abcam, ab20921) at 1:100 dilution. Staining of LC3 was performed before washing and staining with Alexa Fluor 647 goat anti-mouse IgG (H+L) (Jackson ImmunoResearch, 115-606-062). After washes, the anti-NP antibody was introduced. Cells were then washed in PWB buffer twice and in PBS once before resuspension in PBS and acquisition using Amnis Imagestream X imaging flow cytometer. Data analysis was performed using the Ideas software (Amnis); the masking strategy and the metric (bright detail intensity R3 or BDI R3) used to measure autophagic activity were previously described65.
RNA extraction, RT and qPCR
For RNA extraction, cells were harvested and washed as for protein extraction. The High Pure RNA Isolation Kit (Roche, 11828665001) was used to extract RNA according to the provider's protocol. For IAV RNA extraction from supernatants, 200 µL of supernatant cleared of cell debris by centrifugation at 1500 × g for 5 min at 4 °C was used instead of the 200 µL cell suspension in PBS at the first step of Roche's protocol. RT was performed using Maxima reverse transcriptase (ThermoFisher Scientific, catalogue number EP0741) and random primers (ThermoFisher Scientific, SO142). Taqman primer/probe mixes were used for cDNA quantification of Hprt1 (Mm03024075_m1), Ifnb1 (Mm00439546_s1), Cd274 (Mm00452054_m1) and Cxcl10 (Mm00445235_m1). For viral gene detection, we designed the following primer/probe sets:
Gene | Primer 1 | Primer 2 | FAM MGB probe
NS1 | CACTGTGTCAAGCTTTCAGGTAGATT | GCGAAGCCGATCAAGGAAT | TTTCTTTGGCATGTCCG
M1 | TCCAGTGCTGGTCTGAAAAATG | GGATCACTTGAACCGTTGCAT | AAAATTTGCAGGCCTATCA
M2 | ACCGAGGTCGAAACGCCTAT | AAAAAAGACGATCAAGAATCCACAAT | TGCAGATGCAACGGT
NP | CGGAAAGTGGATGAGAGAACTCA | AGTCAGACCAGCCGTTGCAT | CCTTTATGACAAAGAAGAAA
PB1 | TGTCAATCCGACCTTACTTTTCTTAA | TGTTGACAGTATCCATGGTGTATCC | CCAGCACAAAATG
Custom gene expression assays were synthesized by ThermoFisher Scientific. In all, 2 µL of cDNA, diluted four-fold in water, was used per qPCR reaction. qPCR was performed using Taqman Fast Advanced Master Mix (ThermoFisher Scientific) according to the provider's protocol. A StepOnePlus Real-Time PCR System (ThermoFisher Scientific) was used for thermocycling and data acquisition, with the Fast Advanced Master Mix recommended cycling conditions. The StepOnePlus software (ThermoFisher Scientific) was used for analysis.
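To make the relative-expression arithmetic explicit (e.g., the 2^(CtHprt1 − CtCd274) readout used for Cd274 in Fig. 4e), a minimal sketch in R follows; the Ct values are hypothetical placeholders, not data from this study.

```r
# Minimal sketch of the delta-Ct calculation used for Cd274 (Fig. 4e):
# relative expression = 2^(Ct_Hprt1 - Ct_target). Ct values below are
# hypothetical placeholders, not data from this study.
ct <- data.frame(
  sample   = c("ctrl_1", "ctrl_2", "sh1_1", "sh1_2"),
  ct_hprt1 = c(22.1, 22.3, 22.0, 22.2),  # housekeeping gene Hprt1
  ct_cd274 = c(27.5, 27.2, 29.1, 29.4)   # target ISG Cd274
)
ct$rel_expr <- 2^(ct$ct_hprt1 - ct$ct_cd274)
print(ct)
```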
Haemagglutination assay
In all, 100 µL of two-fold serial dilutions (in PBS without CaCl2 or MgCl2) of infected cell supernatants were mixed with 100 µL of 0.5% guinea pig blood (Charles River Laboratories) in PBS without CaCl2 or MgCl2 in round-bottom 96-well plates before incubation for 2 h at 4 °C. The haemagglutinin titre was calculated as the highest dilution at which red blood cell agglutination was still visible.
Power calculation for RT–qPCR
The standard deviation of Ct values did not depend on their mean. We therefore modelled Ct values of technical replicates for gene i as independent normally distributed random variables with mean μ_i and standard deviation σ (the latter parameter being independent of the gene). σ was estimated using the pooled variance. RNA levels are compared using normalized data (ΔCt values) and, by error propagation, ΔCt values follow a normal distribution with standard deviation σ′ = √2 σ. The power curve was computed using the function power.t.test in R (CRAN), with parameters sd = σ′, power = 0.8, sig.level = 0.05 and delta = log2(fold change).
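As an illustration, the calculation above can be reproduced with the following R sketch, assuming a placeholder value for the pooled standard deviation σ (the study's actual estimate is not reported in the text).

```r
# Sketch of the power calculation described above. sigma is the pooled
# standard deviation of technical-replicate Ct values; the value below is
# a placeholder, not the study's estimate. By error propagation, delta-Ct
# values have standard deviation sigma_prime = sqrt(2) * sigma.
sigma        <- 0.15
sigma_prime  <- sqrt(2) * sigma
fold_changes <- seq(1.1, 2.0, by = 0.05)
# Samples per group needed for 80% power at alpha = 0.05
n_needed <- sapply(fold_changes, function(fc)
  power.t.test(sd = sigma_prime, power = 0.8, sig.level = 0.05,
               delta = log2(fc))$n)
plot(fold_changes, n_needed, type = "b",
     xlab = "Detectable fold change", ylab = "Samples per group")
```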
Nanostring nCounter assays, normalization and analysis
Infected cells were harvested and RNA was extracted as detailed above. The nCounter Mouse Immunology Kit was used, following Nanostring guidelines. Raw RNA counts were exported from the nSolver Analysis Software (Nanostring) and normalized to housekeeping genes to account for inter-sample variation in RNA quantity. The housekeeping gene pool was built from the 14 candidate control genes provided by Nanostring, following the geNorm method39. Briefly, for each pair of genes j ≠ k, the pairwise variation coefficient V_jk is defined as
$$V_{jk} = \mathrm{sd}_i\left(\log_2\left(\frac{a_{ij}}{a_{ik}}\right)\right),$$
where a_ij is the number of counts for gene j in sample i. The gene stability measure M_j for control gene j is the arithmetic mean of all pairwise variations V_jk for k ≠ j. M_j evaluates the degree of correlation of gene j to the other control genes (the smaller M_j is, the more correlated gene j is to the other control genes). Genes were ranked by increasing M (Figure S2A). To determine a threshold, the normalization factor NF_n (defined as the geometric mean of the housekeeping gene counts of a sample) was computed for each sample and each n, considering the n genes with lowest M as the housekeeping gene set (Figure S2B). Correlations between consecutive normalization factors increased and then decreased when adding the sixth gene with lowest M (Figure S2C). This threshold was confirmed by studying the pairwise variation between consecutive NF_n values (data not shown). The final housekeeping gene set consisted of the following five genes: Ppia, Gapdh, Rpl19, Oaz1 and Polr2a. Normalization was performed as follows: the scaling factor for a sample was defined as the ratio of the average across all geometric means to the geometric mean of the sample. For each sample, all gene counts were multiplied by the corresponding scaling factor.
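The selection and normalization steps can be summarized in the following R sketch on toy count data; sample and gene names, counts, and the number of genes retained here are illustrative only (the study used Nanostring's 14 candidates and retained five).

```r
# Sketch of the geNorm-style control-gene selection and normalization
# described above, on toy counts (rows = samples, columns = candidates).
set.seed(1)
counts <- matrix(rpois(6 * 5, lambda = 500), nrow = 6,
                 dimnames = list(paste0("s", 1:6), paste0("g", 1:5)))
lc <- log2(counts)

# Pairwise variation V_jk = sd over samples of log2(a_ij / a_ik)
ngene <- ncol(lc)
V <- matrix(NA, ngene, ngene)
for (j in 1:ngene) for (k in 1:ngene)
  if (j != k) V[j, k] <- sd(lc[, j] - lc[, k])

# Stability M_j = mean of V_jk over k != j (smaller = more stable)
M <- rowMeans(V, na.rm = TRUE)
stable <- order(M)                  # genes ranked by increasing M

# Normalization factor per sample: geometric mean of the control genes;
# the 3 most stable toy genes are kept here for illustration.
hk <- colnames(counts)[stable[1:3]]
nf <- exp(rowMeans(log(counts[, hk])))
scaling <- mean(nf) / nf            # average geometric mean / sample's
normalized <- counts * scaling      # scales every gene of each sample
```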
As protein and mRNA data are generally close to log-normally distributed66, normalized RNA counts were subsequently log-transformed. For each time point, only genes that were consistently expressed above the lower limit of quantification were tested. Paired t-tests, which have been shown to be extremely robust against non-normality67, were performed to compare Shield1-treated vs. untreated (log-transformed) normalized mRNA copy numbers. The z-score was defined as
$$z = -\log_{10}(p\text{-value}) \times \left|\log_2(\text{fold change})\right|.$$
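As a worked example of this score (used for the iso z-value curves in Fig. 3a), in R:

```r
# Combined significance/effect-size score defined above; pval and fc are
# a gene's paired t-test p-value and fold change.
z_score <- function(pval, fc) -log10(pval) * abs(log2(fc))
z_score(0.01, 1.5)  # e.g. p = 0.01 and a 1.5-fold change give z ~ 1.17
```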
Gene set enrichment analysis
Each gene was scored using the absolute value of the sum of the t-statistics from paired t-tests at 4 and 12 h. Genes were ranked by decreasing score, and gene set enrichment analysis was performed using GSEA v. 2.2.1 (Broad Institute), with the following settings: method, pre-ranked gene list; gene sets database, Reactome (c2.cp.reactome.v5.1.symbols.gmt); number of permutations, 10,000; enrichment statistic, classic; min set size, 5; max set size, 100; all other parameters as default.
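A minimal R sketch of this ranking step, with toy t-statistics standing in for the real values, would produce the pre-ranked list consumed by GSEA:

```r
# Sketch of the gene ranking used as GSEA pre-ranked input; t4 and t12
# are assumed named vectors of paired t-statistics at 4 and 12 h.
set.seed(1)
genes <- paste0("gene", 1:8)
t4  <- setNames(rnorm(8), genes)
t12 <- setNames(rnorm(8), genes)
score  <- abs(t4 + t12)                  # |sum of t-statistics|
ranked <- sort(score, decreasing = TRUE) # rank by decreasing score
# Write a tab-separated .rnk file for GSEA "pre-ranked" mode
write.table(data.frame(gene = names(ranked), score = ranked),
            file = "ranked_genes.rnk", sep = "\t", quote = FALSE,
            row.names = FALSE, col.names = FALSE)
```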
Quantitative approach to distinguish between IFN-β and IFN-γ signatures
Genes present in both nCounter panels (mouse immunology in this study and human immunology v.2 for the human whole-blood study41) and consistently expressed above the lower limit of quantification were considered. Those for which the EtOH vs. Shield1 t-test p-value was <0.05 at 12 h were selected and weighted by their t-statistic, resulting in a 44-dimensional vector that was subsequently normalized by its L2 norm. Similar vectors were obtained by weighting the same 44 genes by the t-statistics from t-tests comparing control vs. IFN-β or control vs. IFN-γ in the human whole-blood study, after which they were also normalized. The difference 〈autophagy, IFN-β〉−〈autophagy, IFN-γ〉 between the scalar products was computed for 100,000 iterations through bootstrapping over the 25 donors of the whole-blood study.
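A compact R sketch of this projection-and-bootstrap procedure follows, with random toy data standing in for the two nCounter datasets (the study used 100,000 iterations; 1,000 are used here for brevity).

```r
# Sketch of the projection-and-bootstrap comparison described above, with
# toy random data replacing the real t-statistics.
set.seed(1)
p <- 44; n_donors <- 25
unit <- function(v) v / sqrt(sum(v^2))   # L2 normalization

A <- unit(rnorm(p))                      # fixed "autophagy" vector
# Per-donor paired differences (stimulated - control), donors x genes
d_beta  <- matrix(rnorm(n_donors * p, mean = 0.5), n_donors, p)
d_gamma <- matrix(rnorm(n_donors * p, mean = 0.3), n_donors, p)
t_stat  <- function(d) apply(d, 2, function(x) t.test(x)$statistic)

boot_diff <- replicate(1000, {
  idx <- sample(n_donors, replace = TRUE)    # resample donors
  B <- unit(t_stat(d_beta[idx, , drop = FALSE]))   # IFN-beta vector
  G <- unit(t_stat(d_gamma[idx, , drop = FALSE]))  # IFN-gamma vector
  sum(A * B) - sum(A * G)                    # <A,B> - <A,G>
})
quantile(boot_diff, c(0.025, 0.975))         # bootstrap 95% CI
```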
ELISA
CXCL10 ELISAs (R&D Biosystems, DY466) were performed using cell supernatants clarified by spinning at 850 × g for 5 min.
Statistical analysis
Given that Shield1 treatment resulted in the reduced expression of ISGs (Fig. 3), statistical analysis in the subsequent figures (Figs. 4 and 5) was performed using one-tailed t-tests to test for downregulation of ISG expression upon Shield1 treatment. t-Tests were performed using GraphPad Prism 6 (Figs. 2, 4 and 5) or R (Fig. 3), and multiple testing correction was performed using R.
Geng, J. & Klionsky, D. J. The Atg8 and Atg12 ubiquitin-like conjugation systems in macroautophagy. 'Protein modifications: beyond the usual suspects' review series. EMBO Rep. 9, 859–864 (2008).
Ryter, S. W., Cloonan, S. M. & Choi, A. M. Autophagy: a critical regulator of cellular metabolism and homeostasis. Mol. Cells 36, 7–16 (2013).
Kroemer, G., Marino, G. & Levine, B. Autophagy and the integrated stress response. Mol. Cell 40, 280–293 (2010).
Perot, B. P., Ingersoll, M. A. & Albert, M. L. The impact of macroautophagy on CD8(+) T-cell-mediated antiviral immunity. Immunol. Rev. 255, 40–56 (2013).
Deretic, V. & Levine, B. Autophagy, immunity, and microbial adaptations. Cell Host Microbe 5, 527–549 (2009).
Chiramel, A. I., Brady, N. R. & Bartenschlager, R. Divergent roles of autophagy in virus infection. Cells 2, 83–104 (2013).
Dong, X. & Levine, B. Autophagy and viruses: adversaries or allies? J. Innate Immun. 5, 480–493 (2013).
Joubert, P. E. et al. Chikungunya-induced cell death is limited by ER and oxidative stress-induced autophagy. Autophagy 8, 1261–1263 (2012).
Vescovo, T. et al. Autophagy in HCV infection: keeping fat and inflammation at bay. Biomed. Res. Int. 2014, 265353 (2014).
Dreux, M., Gastaminza, P., Wieland, S. F. & Chisari, F. V. The autophagy machinery is required to initiate hepatitis C virus replication. Proc. Natl. Acad. Sci. USA 106, 14046–14051 (2009).
Ke, P. Y. & Chen, S. S. Activation of the unfolded protein response and autophagy after hepatitis C virus infection suppresses innate antiviral immunity in vitro. J. Clin. Invest. 121, 37–56 (2011).
Shrivastava, S., Raychoudhuri, A., Steele, R., Ray, R. & Ray, R. B. Knockdown of autophagy enhances the innate immune response in hepatitis C virus-infected hepatocytes. Hepatology 53, 406–414 (2011).
Ramos, I. & Fernandez-Sesma, A. Modulating the innate immune response to influenza A virus: potential therapeutic use of anti-inflammatory drugs. Front. Immunol. 6, 361 (2015).
Schmolke, M. & Garcia-Sastre, A. Evasion of innate and adaptive immune responses by influenza A virus. Cell. Microbiol. 12, 873–880 (2010).
Zhou, Z. et al. Autophagy is involved in influenza A virus replication. Autophagy 5, 321–328 (2009).
Ren, Y. et al. Proton channel activity of influenza A virus matrix protein 2 contributes to autophagy arrest. J. Virol. 90, 591–598 (2016).
Gannage, M. et al. Matrix protein 2 of influenza A virus blocks autophagosome fusion with lysosomes. Cell Host Microbe 6, 367–380 (2009).
Comber, J. D., Robinson, T. M., Siciliano, N. A., Snook, A. E. & Eisenlohr, L. C. Functional macroautophagy induction by influenza A virus without a contribution to major histocompatibility complex class II-restricted presentation. J. Virol. 85, 6453–6463 (2011).
Betz, C. & Hall, M. N. Where is mTOR and what is it doing there? J. Cell Biol. 203, 563–574 (2013).
Robbins, M., Judge, A. & MacLachlan, I. siRNA and innate immunity. Oligonucleotides 19, 89–102 (2009).
Gazdar, A. F., Gao, B. & Minna, J. D. Lung cancer cell lines: useless artifacts or invaluable tools for medical science? Lung Cancer 68, 309–318 (2010).
Hao, L. Y. & Greider, C. W. Genomic instability in both wild-type and telomerase null MEFs. Chromosoma 113, 62–68 (2004).
Masramon, L. et al. Genetic instability and divergence of clonal populations in colon cancer cells in vitro. J. Cell Sci. 119, 1477–1482 (2006).
Hosokawa, N., Hara, Y. & Mizushima, N. Generation of cell lines with tetracycline-regulated autophagy and a role for autophagy in controlling cell size. FEBS Lett. 580, 2623–2629 (2006).
Moullan, N. et al. Tetracyclines disturb mitochondrial function across eukaryotic models: a call for caution in biomedical research. Cell Rep. 10, 1681–1691 (2015).
Chatzispyrou, I. A., Held, N. M., Mouchiroud, L., Auwerx, J. & Houtkooper, R. H. Tetracycline antibiotics impair mitochondrial function and its experimental use confounds research. Cancer Res. 75, 4446–4449 (2015).
Di Caprio, R., Lembo, S., Di Costanzo, L., Balato, A. & Monfrecola, G. Anti-inflammatory properties of low and high doxycycline doses: an in vitro study. Mediat. Inflamm. 2015, 329418 (2015).
Wilcox, J. R., Covington, D. S. & Paez, N. Doxycycline as a modulator of inflammation in chronic wounds. Wounds 24, 339–349 (2012).
Son, K. et al. Doxycycline induces apoptosis in PANC-1 pancreatic cancer cells. Anticancer Res. 29, 3995–4003 (2009).
Alexander-Savino, C. V., Hayden, M. S., Richardson, C., Zhao, J. & Poligone, B. Doxycycline is an NF-kappaB inhibitor that induces apoptotic cell death in malignant T-cells. Oncotarget 7, 75954–75967 (2016).
Ahler, E. et al. Doxycycline alters metabolism and proliferation of human cell lines. PLoS ONE 8, e64561 (2013).
Onoda, T. et al. Doxycycline inhibits cell proliferation and invasive potential: combination therapy with cyclooxygenase-2 inhibitor in human colorectal cancer cells. J. Lab. Clin. Med. 143, 207–216 (2004).
Larrayoz, I. M. et al. Molecular effects of doxycycline treatment on pterygium as revealed by massive transcriptome sequencing. PLoS ONE 7, e39359 (2012).
Banaszynski, L. A., Chen, L. C., Maynard-Smith, L. A., Ooi, A. G. & Wandless, T. J. A rapid, reversible, and tunable method to regulate protein function in living cells using synthetic small molecules. Cell 126, 995–1004 (2006).
Maynard-Smith, L. A., Chen, L. C., Banaszynski, L. A., Ooi, A. G. & Wandless, T. J. A directed approach for engineering conditional protein stability using biologically silent small molecules. J. Biol. Chem. 282, 24866–24872 (2007).
Klionsky, D. J. et al. Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition). Autophagy 12, 1–222 (2016).
Joubert, P. E. et al. Chikungunya virus-induced autophagy delays caspase-dependent cell death. J. Exp. Med. 209, 1029–1047 (2012).
Lennemann, N. J. & Coyne, C. B. Catch me if you can: the link between autophagy and viruses. PLoS Pathog. 11, e1004685 (2015).
Vandesompele, J. et al. Accurate normalization of real-time quantitative RT-PCR data by geometric averaging of multiple internal control genes. Genome Biol. 3, RESEARCH0034 (2002).
Subramanian, A. et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl. Acad. Sci. USA 102, 15545–15550 (2005).
Urrutia, A. et al. Standardized whole-blood transcriptional profiling enables the deconvolution of complex induced immune responses. Cell Rep. 16, 2777–2791 (2016).
Kelly-Scumpia, K. M. et al. Type I interferon signaling in hematopoietic cells is required for survival in mouse polymicrobial sepsis by regulating CXCL10. J. Exp. Med. 207, 319–326 (2010).
Xu, H. C. et al. Type I interferon protects antiviral CD8+ T cells from NK cell cytotoxicity. Immunity 40, 949–960 (2014).
Liu, Y. et al. PD-L1 expression by neurons nearby tumors indicates better prognosis in glioblastoma patients. J. Neurosci. 33, 14231–14245 (2013).
Shin, E. C. et al. Virus-induced type I IFN stimulates generation of immunoproteasomes at the site of infection. J. Clin. Invest. 116, 3006–3014 (2006).
Tal, M. C. et al. Absence of autophagy results in reactive oxygen species-dependent amplification of RLR signaling. Proc. Natl. Acad. Sci. USA 106, 2770–2775 (2009).
Gack, M. U. et al. Influenza A virus NS1 targets the ubiquitin ligase TRIM25 to evade recognition by the host viral RNA sensor RIG-I. Cell Host Microbe 5, 439–449 (2009).
Subramani, S. & Malhotra, V. Non-autophagic roles of autophagy-related proteins. EMBO Rep. 14, 143–151 (2013).
Glick, D., Barth, S. & Macleod, K. F. Autophagy: cellular and molecular mechanisms. J. Pathol. 221, 3–12 (2010).
Balachandran, S. & Beg, A. A. Defining emerging roles for NF-kappaB in antivirus responses: revisiting the interferon-beta enhanceosome paradigm. PLoS Pathog. 7, e1002165 (2011).
Jounai, N. et al. The Atg5 Atg12 conjugate associates with innate antiviral immune responses. Proc. Natl. Acad. Sci. USA 104, 14050–14055 (2007).
Ganley, I. G., Wong, P. M., Gammoh, N. & Jiang, X. Distinct autophagosomal-lysosomal fusion mechanism revealed by thapsigargin-induced autophagy arrest. Mol. Cell 42, 731–743 (2011).
Bernardino, A. L., Kaushal, D. & Philipp, M. T. The antibiotics doxycycline and minocycline inhibit the inflammatory responses to the Lyme disease spirochete Borrelia burgdorferi. J. Infect. Dis. 199, 1379–1388 (2009).
Fujioka, S. et al. Stabilization of p53 is a novel mechanism for proapoptotic function of NF-kappaB. J. Biol. Chem. 279, 27549–27559 (2004).
Tae, H. J. et al. Chronic treatment with a broad-spectrum metalloproteinase inhibitor, doxycycline, prevents the development of spontaneous aortic lesions in a mouse model of vascular Ehlers-Danlos syndrome. J. Pharmacol. Exp. Ther. 343, 246–251 (2012).
Loi, M. et al. Macroautophagy proteins control MHC class I levels on dendritic cells and shape anti-viral CD8(+) T cell responses. Cell Rep. 15, 1076–1087 (2016).
Sellers, R. S., Clifford, C. B., Treuting, P. M. & Brayton, C. Immunological variation between inbred laboratory mouse strains: points to consider in phenotyping genetically immunomodified mice. Vet. Pathol. 49, 32–43 (2012).
Villalon-Letelier, F., Brooks, A. G., Saunders, P. M., Londrigan, S. L. & Reading, P. C. Host cell restriction factors that limit influenza A infection. Viruses 9, 376 (2017).
Bahadoran, A. et al. Immune responses to influenza virus and its correlation to age and inherited factors. Front. Microbiol. 7, 1841 (2016).
Kido, H. et al. Role of host cellular proteases in the pathogenesis of influenza and influenza-induced multiple organ failure. Biochim. Biophys. Acta 1824, 186–194 (2012).
Marc, D. Influenza virus non-structural protein NS1: interferon antagonism and beyond. J. Gen. Virol. 95, 2594–2611 (2014).
Liedmann, S. et al. Viral suppressors of the RIG-I-mediated interferon response are pre-packaged in influenza virions. Nat. Commun. 5, 5645 (2014).
Hutchinson, E. C. et al. Conserved and host-specific features of influenza virion architecture. Nat. Commun. 5, 4816 (2014).
Zhirnov, O. P. & Klenk, H. D. Influenza A virus proteins NS1 and hemagglutinin along with M2 are involved in stimulation of autophagy in infected cells. J. Virol. 87, 13107–13114 (2013).
de la Calle, C., Joubert, P. E., Law, H. K., Hasan, M. & Albert, M. L. Simultaneous assessment of autophagy and apoptosis using multispectral imaging cytometry. Autophagy 7, 1045–1051 (2011).
Lu, C. & King, R. D. An investigation into the population abundance distribution of mRNAs, proteins, and metabolites in biological systems. Bioinformatics 25, 2020–2027 (2009).
Rasch, D. A. M. K., Teuscher, F. & Guiard, V. How robust are tests for two independent samples? J. Stat. Plan. Inference 137, 2706–2720 (2007).
K* production in the $KN\to K \pi p$ reaction
Shao-Fei Chen, Bo-Chao Liu
School of Science, Xi'an Jiaotong University, Xi'an 710049, China
We investigate the $ K^* $ production in the $ KN\to K \pi p $ reaction using the effective Lagrangian approach and the isobar model. To describe this reaction, we first take into account the contributions from the $ \pi $ , $ \rho $ and $ \omega $ exchanges, as in previous studies. We find that although the experimental data can be generally described, there are some obvious discrepancies between the model and the experiments. To improve the model, we consider the contributions of the axial-vector meson and hyperon exchange. It is shown that a large contribution of the axial-vector meson exchange can significantly improve the results. This may indicate that the coupling of the axial-vector meson, e.g. $ a_1(1260) $ , is large in the $ KK^* $ channel. To verify our model, measurements of the angular distributions and spin density matrix elements of $ K^{*0} $ in the $ K_{\rm L} p\to K^{*0} p $ reaction would be helpful, and we make predictions for this reaction for a future comparison.
Keywords: polarization, effective Lagrangian approach, hadron-hadron interaction
[1] T. Nakano et al, Phys. Rev. Lett., 91: 012002 (2003) doi: 10.1103/PhysRevLett.91.012002
[2] T. Nakano et al, Phys. Rev. C, 79: 025210 (2009) doi: 10.1103/PhysRevC.79.025210
[3] D. G. Ireland et al, Phys. Rev. Lett., 100: 052001 (2008) doi: 10.1103/PhysRevLett.100.052001
[4] M. Abdel-Bary et al, Phys. Lett. B, 649: 252 (2007) doi: 10.1016/j.physletb.2007.04.023
[5] J. Z. Bai et al, Phys. Rev. D, 70: 012004 (2004) doi: 10.1103/PhysRevD.70.012004
[6] M. Z. Wang et al, Phys. Lett. B, 617: 141 (2005) doi: 10.1016/j.physletb.2005.05.008
[7] R. A. Schumacher, AIP Conf. Proc., 842: 409 (2006) doi: 10.1063/1.2220285
[8] T. Sekihara, H. C. Kim, and A. Hosaka, arXiv: 1910.09252
[9] A. Baldini, V. Flaminio, W. G. Moorhead et al, Landolt-Börnstein: Numerical Data and Functional Relationships in Science and Technology, Vol. 12, ed. H. Schopper (Berlin: Springer, 1988)
[10] The GlueX Collaboration et al, arXiv: 1707.05284
[11] R. W. Bland et al, Nucl. Phys. B, 18: 537 (1970)
[12] J. D. Jackson and H. Pilkuhn, Nuovo Cim., 33: 906 (1964) doi: 10.1007/BF02749903
[13] S. Goldhaber, W. Chinowsky, G. Goldhaber et al, Phys. Rev., 142: 913 (1966) doi: 10.1103/PhysRev.142.913
[14] Roger Woodward Bland, Ph. D. Thesis (1968)
[15] T. Hattori, Prog. Theor. Phys., 79: 3 (1988)
[16] M. Tanabashi et al, Phys. Rev. D, 98: 030001 (2018)
[17] Nikolai I. Kochelev, Dong-Pil Min, Yongseok Oh et al, Phys. Rev. D, 61: 094008 (2000)
[18] K. Nakayama, Y. Oh, and H. Haberzettl, Journal of the Korean Physical Society., 59: 32 (2011)
[19] R. Buettgen, K. Holinde, A. Mueller-Groeling et al, Nucl. Phys. A, 506: 586 (1990)
[20] F. Q. Wu, B. S. Zou, L. Li et al, Nucl. Phys. A, 735: 111 (2004)
[21] F. Q. Wu and B. S. Zou, Phys. Rev. D, 73: 114008 (2007)
[22] Y. Oh and H. Kim, Phys. Rev. C, 73: 065202 (2006)
[23] K. Nakayama, J. Speth, and T. -S. H. Lee, Phys. Rev. C, 65: 045210 (2002)
[24] R. Machleidt, J. Phys. G: Nucl. Part. Phys., 63: 024001 (2001)
[25] Y. Oh, K. Nakayama, and T.-S. H. Lee, Phys. Rept., 423: 49 (2006) doi: 10.1016/j.physrep.2005.10.002
[26] Bo-Chao Liu, J. Phys. G: Nucl. Part. Phys., 39: 105107 (2012) doi: 10.1088/0954-3899/39/10/105107
[27] V. G. J. Stoks and Th. A. Rijken, Phys. Rev. C, 59: 3009 (1999)
[28] Bo-Chao Liu and Ju-Jun Xie, Phys. Rev. C, 85: 038201 (2012)
[29] K. Schilling, P. Seyboth, and G. E. Wolf, Nucl. Phys. B, 15: 397 (1970) doi: 10.1016/0550-3213(70)90070-2
[30] C. Bourrely, J. Sofier, and E. Leader, Phys. Rept., 59: 95 (1980) doi: 10.1016/0370-1573(80)90017-4
[31] A. Berthon et al, Nucl. Phys. B, 63: 54 (1973)
[32] G. Giacomelli et al (Bologna-Glasgow-Rome-Trieste Collaboration), Nucl. Phys. B, 111: 365 (1976)
[33] Michael Birkel and Harald Fritzsch, Phys. Rev. D, 53: 6195 (1996)
[34] Leonard Gamberg and Gary R. Goldstein, Phys. Rev. Lett., 87: 242001 (2001) doi: 10.1103/PhysRevLett.87.242001
[35] Martin Vojík and Peter Lichard, arXiv: 1006.2919
[36] D. M. Asner et al, Phys. Rev. D, 61: 012002 (1999)
[37] T. E. Coan et al, Phys. Rev. Lett., 92: 232001 (2004) doi: 10.1103/PhysRevLett.92.232001
[38] Belle Collaboration et al, Phys. Lett. B, 542: 171 (2002)
[39] L. Roca, J. E. Palomar, and E. Oset, Phys. Rev. D, 70: 094006 (2004) doi: 10.1103/PhysRevD.70.094006
[40] J. J. Xie, C. Wilkin, and B. S. Zou, Phys. Rev. C, 77: 058202 (2008) doi: 10.1103/PhysRevC.77.058202
Citation: Shao-Fei Chen and Bo-Chao Liu, K* production in the $KN\to K \pi p$ reaction, Chinese Physics C, 2020, 44(3): 034107. doi: 10.1088/1674-1137/44/3/034107
Corresponding author: Bo-Chao Liu, [email protected]
1. Introduction
The $ KN $ interactions constitute an important sector of the studies of strong interactions. Due to the positive strangeness of the $ KN $ system, their interactions have some special features. One example is that no 3-quark state can be formed in the $ KN $ channel, which has attracted a lot of interest in finding the pentaquark state $ \Theta $ (or $ Z^* $ in the old literature) in $ KN $ interactions. Even though there is still no clear evidence for the existence of the pentaquark state in this channel [1-7], the proposed measurements of the $ K^+d $ interactions [8] at J-PARC are expected to provide a further test for the existence of $ \Theta $. Besides the search for the pentaquark state, the studies of resonance production in $ KN $ scattering reactions are also relevant. Even though resonance states have not been found in $ KN $ elastic scattering, they can be produced in inelastic processes with at least a three-body final state. In fact, the $ K^+N $ or $ K_{\rm L}N $ scattering processes were widely used for investigating the properties of hadron resonances during the 1960s and 1970s [9]. It is known that resonance production processes usually dominate these scattering processes in the resonance region. This feature makes these reactions suitable for investigating the properties of hadron resonances. Although such studies started a few decades ago, the understanding of these reactions is still not satisfactory. Due to the low statistics of the available experimental data and the absence of new data, the relevant studies almost ceased in the past decade. Recently, it was proposed to use the secondary $ K_{\rm L} $ beam to perform $ K_{\rm L} N $ scattering experiments at Jefferson Lab [10]. If such experiments could be done in the future, the obtained data would certainly prompt relevant studies and help clarify some problems in understanding the $ KN $ interactions. From the theoretical side, it is hence meaningful to recheck the previous studies and look for new perspectives and physical motivations for studying the relevant $ K_{\rm L} N $ scattering processes.
In this paper, we analyze the $ K^* $ production in the $ KN\to K \pi p $ reaction using the effective Lagrangian approach and the isobar model. It was shown that this reaction is dominated by the production of $ K^* $ and $ \Delta(1232) $ resonances in the low energy region, and the contributions of these two resonances could be separated [11]. Thus, this reaction offers a possibility to study the $ K^* $ production mechanism in $ KN $ interactions. Previous analyses of this reaction mainly focused on relatively high energies, and it was assumed that the reaction is dominated by the $ \pi $, $ \omega $ and $ \rho $ exchange. In Refs. [12, 13], it was argued that $ K^*N $ is produced partly via pion exchange and partly via vector meson exchange, and that with decreasing energy the pion exchange contribution gradually increases. In a later analysis [11, 14], the authors concluded that the reaction is dominated by vector meson exchange down to threshold, and no evidence of a significant increase of pseudoscalar exchange at low energy was seen. The same reaction was also studied in Ref. [15], and it was concluded that the $ \omega $ exchange does not play any role in this reaction, in contrast to the results in Refs. [11-14]. However, it seems that the parameters of the models [11, 12, 15] are not compatible with the values in recent literature. Therefore, further studies are required. In the present work, we first consider the contribution of the $ \pi $, $ \rho $ and $ \omega $ exchange, as in previous works. We fix the parameters that are relatively well known in the literature, while the others are fitted to the experimental data as free parameters. We find that although the experimental data can be generally described, there are some obvious discrepancies between the model and the experiments. The natural way to resolve the issue is to include in the model some other mechanism, e.g. hyperon and axial-vector meson exchange. In fact, we find that the inclusion of the axial-vector meson exchange can significantly improve the model, while the hyperon exchange does not seem important. Among the various axial-vector mesons, we focus on the role of $ a_1(1260) $ (hereafter referred to as $ a_1 $). Contrary to other axial-vector mesons, the branching ratio for the decay $ a_1\to K^*K $ was measured [16]. The $ a_1NN $ coupling can be estimated from its role in the nucleon axial-vector coupling [17]. Other axial-vector mesons, such as $ f_1(1285) $ or $ b_1(1235) $, may also give a contribution. However, we do not consider them explicitly since their couplings to the $ NN $ and $ KK^* $ channels are unknown. It should be mentioned that the $ \eta $ exchange is also allowed in this reaction. We do not consider it, since its contribution is expected to be small due to the vanishing $ \eta NN $ coupling [18]. To verify our model, it would be helpful to test its predictions for the $ K_{\rm L} N\to K^* N $ reaction. As shown below, various models give distinct predictions for this reaction. Thus, the measurements at JLab could provide valuable information about the reaction mechanism.
This article is organized as follows. In Sec. 2, we present the theoretical formalism used in the calculations. Numerical results and discussion are presented in Sec. 3, followed by a summary in the last section.
2. The formalism
In this work, we study the following two reactions:
$ \begin{split} (a) \quad K^+ p \to pK^{*+}( \to K^0 \pi^+), \\ (b)\quad K^+ n \to pK^{*0}( \to K^+ \pi^-). \end{split} $
The $ K^* $ production in the two reactions can be described by the Feynman diagrams shown in Fig. 1. It includes the $ t- $channel $ \pi $, $ \rho $, $ \omega $, $ a_1 $ exchange terms and the $ u- $channel $ \Lambda $, $ \Sigma $ exchange terms. To compute these contributions, the following interaction Lagrangian densities are needed [19-23]:
Figure 1. Model of the reactions $ K^+ p \to K^{*+} p \to K^0 \pi^+ p $ and $ K^+ n \to K^{*0} p \to K^+ \pi^- p $. $ p_1 $ and $ p_2 $ are the four-momenta of the initial and final nucleons.
$ \begin{split} {\cal L}_{K^* K \pi} = & {\rm i} G_V\{(\partial_\mu{\bar{K}})\vec{\tau}K^{*\mu}\cdot\vec{\pi} -{\bar{K}}\vec{\tau}K^{*\mu}\cdot(\partial_\mu\vec{\pi})\}+{h.c.}, \end{split} $
$ \begin{split} {\cal L}_{K^* K V} =& g_{K^*K V}\varepsilon^{\mu\nu\alpha\beta} (K^-\partial_{\alpha}K^{*+}_\beta\partial_{\mu}V_\nu \\ & +K^+\partial_{\mu}K^{*-}_\nu\partial_{\alpha}V_\beta), \end{split} $
$ \begin{array}{l} {\cal L}_{NNV} = -g_{NNV}\bar{N}\left(\gamma_\mu-\dfrac{k_V}{2 m_N}\sigma_{\mu\nu}\partial^\nu\right)V^\mu N , \end{array} $
$ \begin{split} {\cal L}_{NY K^*} = -g_{NY K^*}\bar{N}\left(\gamma_\mu Y K^{*\mu}-\frac{\kappa_{Y}}{2m_N}\sigma_{\mu\nu}Y\partial^{\nu}K^{*\mu}\right) +{h.c.}, \end{split} $
$ \begin{array}{l} {\cal L}_{NYK} = -g_{NYK}\bar{N}\gamma_5 Y K+{h.c.}, \end{array} $
$ \begin{array}{l} {\cal L}_{NN\pi} = -{\rm i} g_{NN\pi}\bar N \gamma_5 \vec{\tau}\cdot\vec{\pi} N, \end{array} $
where V indicates the vector meson $ \rho $ or $ \omega $, and Y represents $ \Lambda $ or $ \Sigma $. The $ \sigma_{\mu\nu} $ in Eqs. (3) and (4) is defined as
$ \begin{split} \sigma_{\mu\nu} = \frac{\rm i}{2}(\gamma_\mu\gamma_\nu-\gamma_\nu\gamma_\mu). \end{split} $
The coupling constants in the Lagrangians can be determined either by extracting them from the experimental data or by predictions of theoretical models. In Tables 1 and 2, we list the values of the coupling constants used in this work.
vertex        $g$ ($\kappa$)    $\Lambda_\alpha$ /GeV
$NN\omega$    15.85 (0.0)       1.5
$NN\rho$      3.25 (6.1)        1.31
$NN\pi$       13.07             1.72
Table 1. Parameters of the N-N-meson vertices (CD-Bonn model [24]).
vertex          $g_{K^*KM}$       Ref.      vertex           $g$ ($\kappa$)    Ref.
$KK^*\rho$      7.45 GeV$^{-1}$   [25]      $NK\Lambda$      −13.24            [21]
$KK^*\omega$    7.45 GeV$^{-1}$   [25]      $NK\Sigma$       3.58              [21]
$KK^*\pi$       3.02              [26]      $NK^*\Lambda$    −4.26 (2.66)      [27]
                                            $NK^*\Sigma$     −2.46 (−0.47)     [27]
Table 2. Coupling constants of the K*-K-meson and K*(K)-N-Y vertices used in this work.
The general invariant scattering amplitude for the reactions under study can be written as
$ \begin{array}{l} {\cal M}_i = \bar u(p_2)\; {\cal A}_i\; u(p_1), \end{array} $
where i denotes the various exchanged particles. $ \bar u(p_2) $ and $ u(p_1) $ are the spinors of the outgoing and incoming nucleons, respectively. With the effective Lagrangian densities given above, one can, for example, easily construct $ {\cal A}_i $ for the $ K^+ p \to K^{*+} p \to K^0 \pi^+ p $ reaction as
$ \begin{split} {\cal A}_V =& \sqrt{2}G_V\; g_{NNV}g_{KK^*V} \frac{-p^{K^0}_\nu+p^{\pi^+}_\nu}{p_{K^*}^2-m_{K^*}^2+{\rm i} m_{K^*}\Gamma}\varepsilon^{\mu\nu\alpha\beta} \\ &\times p^{K^*}_\mu p^V_\alpha\frac{1}{p_V^2-m_V^2} \left(\gamma_\beta-{\rm i} \frac{k_V}{2m_N}\sigma_{\beta\gamma}p^\gamma\right), \end{split} $
$ \begin{split} {\cal A}_Y =& {\rm i} \sqrt{2}G_V g_{YNK}g_{K^*NY}\gamma_5(-p^\alpha_{\pi^+}+p^\alpha_{K^0}) \frac{{\not\!\!\!p }_Y+m_Y}{p^2_Y-m^2_Y} \\&\times \frac{-g_{\mu\alpha}+\frac{p_\mu^{K^*}p_\alpha^{K^*}}{m^2_{K^*}}}{p^2_{K^*}-m^2_{K^*}+{\rm i} m_{K^*}\Gamma} \left(\gamma^\mu-{\rm i}\frac{\kappa_{Y}}{2m_N}\sigma^{\mu\nu}p_\nu^{K^*}\right), \end{split} $
$ \begin{split} {\cal A}_P =& \sqrt{2}G_V g_{NNP} g_{K^*KP}(p_{K^0}^\mu-p_{\pi^+}^\mu) \\ &\times \frac{-g_{\mu\nu}+\frac{p^{K^*}_\mu p^{K^*}_\nu}{p_{K^*}^2}}{p_{K^*}^2-m_{K^*}^2+{\rm i} m_{K^*}\Gamma}\cdot\frac{p_{K^+}^\nu-p_P^\nu}{p_P^2-m_P^2}\gamma_5, \end{split} $
where the subscript V (vector meson), Y (hyperon) and P (pseudoscalar meson) stand for the corresponding exchanged particles. The width of $ K^* $ is taken as $ \Gamma = 50.8 $ MeV and the coupling constant $ G_V = 3.02 $ [26] is used in the calculations.
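All resonant amplitudes above share the same Breit-Wigner $K^*$ propagator factor $1/(p_{K^*}^2-m_{K^*}^2+{\rm i}\, m_{K^*}\Gamma)$. The following minimal Python sketch evaluates this factor numerically; the width is the value quoted above, while the $K^*$ mass is an assumed PDG-like input, not a number taken from the text.

```python
import numpy as np

M_KSTAR = 0.892       # K*(892) mass in GeV; assumed PDG-like value, not quoted in the text
GAMMA_KSTAR = 0.0508  # K* width in GeV, as used in the text

def kstar_propagator(p2):
    """Complex Breit-Wigner denominator 1/(p^2 - m^2 + i*m*Gamma) of the K*,
    with p2 the invariant mass squared of the K-pi pair (GeV^2)."""
    return 1.0 / (p2 - M_KSTAR**2 + 1j * M_KSTAR * GAMMA_KSTAR)

# The magnitude peaks when the K-pi invariant mass sits on the K* pole:
for m in (0.80, 0.892, 1.00):
    print(f"m = {m:.3f} GeV,  |propagator| = {abs(kstar_propagator(m**2)):.2f}")
```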
To take into account the finite extension of hadrons, we also introduce the form factors in the amplitudes. For the N-N-meson vertex, we adopt the form factors used in the Bonn model [24],
$ \begin{array}{l} F^\alpha_N(q_{\rm ex}^2,M_{\rm ex}) = \dfrac{\Lambda_\alpha^2-M_{\rm ex}^2}{\Lambda_\alpha^2-q_{\rm ex}^2}\, , \end{array} $
where $ \Lambda_\alpha $ takes the values shown in Table 1. For the hyperon exchange vertex, we use the following form factor [20, 21, 28],
$ \begin{split} F_Y(q_{\rm ex}^2,M_{\rm ex}) = {\Lambda^4_Y\over \Lambda^4_Y+(q_{\rm ex}^2-M_{\rm ex}^2)^2}\, , \end{split} $
where Y is $ \Lambda $ or $ \Sigma $. For the $ K^* $-K-meson vertex, we take the following form factor [19]
$ \begin{array}{l} F^\alpha_{K^*}(q_{\rm ex}^2,M_{\rm ex}) = \dfrac{{\Lambda^{*}_\alpha} ^2-M_{\rm ex}^2}{{\Lambda^{*}_\alpha}^2-q_{\rm ex}^2}\, . \end{array} $
In the above formulae, $ q_{\rm ex} $ and $ M_{\rm ex} $ are the 4-momentum and mass of the exchanged particle. The index $ \alpha $ can be $ \pi $, $ \rho $, $ \omega $ and $ a_1 $, denoting the corresponding exchanged particles. The cutoff parameter $ \Lambda_\alpha $ is taken from the Bonn model (see Table 1). $ \Lambda_Y $ and $ \Lambda^*_\alpha $ are free parameters since they are not well constrained in previous studies.
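As a numerical illustration of how the cutoffs suppress the exchange amplitudes, the two form factor shapes of Eqs. (12)-(14) can be coded directly. This is a sketch only; the pion mass and the kinematic point are example inputs.

```python
import numpy as np

def ff_monopole(q2, m_ex, cutoff):
    """Monopole form factor of Eqs. (12) and (14):
    (Lambda^2 - M_ex^2) / (Lambda^2 - q_ex^2)."""
    return (cutoff**2 - m_ex**2) / (cutoff**2 - q2)

def ff_hyperon(q2, m_ex, cutoff):
    """Quartic form factor of Eq. (13) for u-channel hyperon exchange:
    Lambda^4 / (Lambda^4 + (q_ex^2 - M_ex^2)^2)."""
    return cutoff**4 / (cutoff**4 + (q2 - m_ex**2)**2)

m_pi = 0.138  # pion mass in GeV, assumed PDG-like value
q2 = -0.3     # an example spacelike momentum transfer in GeV^2
print(ff_monopole(q2, m_pi, 1.72))  # N-N-pi vertex with the Bonn cutoff of Table 1
print(ff_monopole(q2, m_pi, 0.48))  # K*-K-pi vertex with the Model I fitted cutoff (Table 3)
```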
The differential cross-section for this reaction can be represented by
$ \begin{split} {\rm d}\sigma =& {m_N \over 4F}\sum\limits_{s_i}\sum\limits_{s_f}|{\cal M}|^2{m_N {\rm d}^3p_2 \over E_2}{{\rm d}^3p_{K} \over E_{K}}{{\rm d}^3p_{\pi} \over E_{\pi}} \\ & \times\delta^4(p_1+p_{K^+}-p_2-p_{K}-p_{\pi}), \end{split} $
where $ F = (2\pi)^5 \sqrt{(p_1\cdot p_{K^+})^2-m_N^2m_{K^+}^2} $ and $ {\cal M} $ is the full amplitude.
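For a kaon beam incident on a nucleon at rest, the flux factor reduces to $F = (2\pi)^5\, m_N\, p_{\rm beam}$, since $p_1\cdot p_{K^+} = m_N E_{K^+}$ in the lab frame. A small numerical sketch (the masses are assumed PDG-like inputs):

```python
import numpy as np

M_N, M_K = 0.938, 0.494  # proton and kaon masses in GeV; assumed PDG-like values

def flux_factor(p_beam):
    """Flux F = (2*pi)^5 * sqrt((p1.pK)^2 - mN^2 mK^2) for a kaon of lab momentum
    p_beam (GeV) on a nucleon at rest; this equals (2*pi)^5 * mN * p_beam."""
    e_k = np.hypot(p_beam, M_K)  # kaon lab energy sqrt(p^2 + m^2)
    return (2 * np.pi) ** 5 * np.sqrt((M_N * e_k) ** 2 - (M_N * M_K) ** 2)

print(flux_factor(1.2))              # direct evaluation at p_K+ = 1.2 GeV
print((2 * np.pi) ** 5 * M_N * 1.2)  # closed form, identical
```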
The spin density matrix elements (SDMEs) of $ K^* $ can be extracted by analyzing the angular distributions of its decay products, which offer valuable information about the reaction mechanism. SDMEs can be defined as
$ \begin{split} \rho_{mm'} = \frac{\displaystyle\sum\limits_{s_i,s_f}M(K^+p_{s_i}\to K^*_{m}p_{s_f})\,M^*(K^+p_{s_i}\to K^{*}_{m'}p_{s_f})}{\displaystyle\sum\limits_{s_i,s_f,m}|M(K^+p_{s_i}\to K^{*}_{m}p_{s_f})|^2},\\ \end{split} $
where $ s_i $, $ s_f $ and m denote the spin polarizations of the corresponding particles. When using Eq. (16) to calculate the SDMEs, we treat the reaction as a quasi two-body process, i.e. $ K^+ p\to K^* p $, and ignore the decay of $ K^* $①. Note that three kinds of quantization axes are used in the literature to describe the spin polarization of $ K^* $: the s-channel helicity frame (Helicity frame), the t-channel helicity frame (Gottfried-Jackson frame), and the Adair frame. These three frames are not independent and can be related through frame transformations [29, 30]. In the present study, we consider the Helicity frame and the Gottfried-Jackson frame, since the experimental data for these two frames are available [31]. In this work, we follow the conventions in Ref. [30].
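Given a table of helicity amplitudes, Eq. (16) is a one-line contraction. The sketch below computes the normalized density matrix from an array of complex amplitudes; the index ordering along the last axis is an assumption made for the example.

```python
import numpy as np

def sdme(amp):
    """Spin density matrix of the K*, Eq. (16).
    amp[s_i, s_f, m]: complex amplitudes for initial/final nucleon spins and
    K* spin projection m = -1, 0, +1 (assumed ordering along the last axis)."""
    rho = np.einsum('ifm,ifn->mn', amp, amp.conj())  # sum over nucleon spins
    return rho / np.trace(rho).real                  # normalize so that tr(rho) = 1

# Toy check: a pure m = 0 amplitude (pion-exchange-like in the Gottfried-Jackson
# frame, cf. the discussion of Fig. 4 below) gives rho_00 = 1.
toy = np.zeros((2, 2, 3), dtype=complex)
toy[:, :, 1] = 1.0
print(sdme(toy).real.round(3))
```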
3. The fitting process
With the formulae presented in the last section, the full amplitude can be written as
$ \begin{split} {\cal M} =& {\cal M}_\omega + {\rm e}^{{\rm i}\phi_\pi} {\cal M}_\pi+ {\rm e}^{{\rm i}\phi_\rho} {\cal M}_\rho + {\rm e}^{{\rm i}\phi_\Lambda} {\cal M}_\Lambda \\&+ {\rm e}^{{\rm i}\phi_\Sigma} {\cal M}_\Sigma+ {\rm e}^{{\rm i}\phi_{a_1}} {\cal M}_{a_1}, \ \end{split} $
for the reaction $ K^+ p \to K^{*+} p \to K^0 \pi^+ p $ and
$ \begin{array}{l} {\cal M} = 2{\rm e}^{{\rm i}\phi_\pi}{\cal M}_\pi+2 {\rm e}^{{\rm i}\phi_\rho} {\cal M}_\rho+ {\rm e}^{{\rm i}\phi_\Lambda} {\cal M}_\Lambda - {\rm e}^{{\rm i}\phi_\Sigma} {\cal M}_\Sigma+2 {\rm e}^{{\rm i}\phi_{a_1}} {\cal M}_{a_1}, \ \end{array} $
for the reaction $ K^+ n \to K^{*0} p \to K^+ \pi^- p $. The relative phases among the amplitudes are introduced since they should in general be complex, and in a model based on tree-level calculations the relative phases cannot be determined②. Thus, it is better to set them as free parameters and examine their effect on the results. In the full amplitude, we have taken $ \phi_\omega = 0 $. The constant coefficients in Eq. (18) are the isospin factors due to the different charge channels. To fix the undetermined parameters, we fit them to the experimental data using the CernLib Minuit code.
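The original fit used the Fortran CernLib Minuit; the sketch below shows the same mechanics with the Python package iminuit on synthetic data. The quadratic model_dsigma and all numbers are illustrative stand-ins, not the amplitude of Eqs. (17)-(18).

```python
import numpy as np
from iminuit import Minuit  # modern descendant of the CernLib Minuit used in the text

rng = np.random.default_rng(0)
cos_theta = np.linspace(-0.9, 0.9, 19)
err = np.full_like(cos_theta, 0.05)

def model_dsigma(x, a, b, c):
    """Hypothetical smooth angular shape; a placeholder for the real |M|^2."""
    return a + b * x + c * x**2

data = model_dsigma(cos_theta, 1.0, 0.4, 0.3) + rng.normal(0.0, 0.05, cos_theta.size)

def chi2(a, b, c):
    return np.sum(((data - model_dsigma(cos_theta, a, b, c)) / err) ** 2)

chi2.errordef = Minuit.LEAST_SQUARES  # chi^2-type cost function

m = Minuit(chi2, a=1.0, b=0.0, c=0.0)
m.limits["c"] = (-5.0, 5.0)            # bounds play the role of physical cutoff ranges
m.migrad()                             # minimize
print(m.values, m.errors)
print(m.fval / (cos_theta.size - 3))   # chi^2 per degree of freedom, cf. Tables 3-5
```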
3.1. Model I
In this section, we only consider the $ \pi $, $ \rho $ and $ \omega $ exchange to describe the reaction. We consider this scenario because in previous works such a model was widely used [11-15]. To evaluate the amplitudes, all parameters in the model need to be determined. Here, we use the parameters from the CD-Bonn model for the N-N-meson vertices as listed in Table 1. For the $ K^* $-K-meson vertices, the coupling constants are usually evaluated using the SU(3) relations. Hence, we fix them using the SU(3) predictions [25, 26]. The cutoff parameters $ \Lambda_\pi^* $, $ \Lambda_\rho^* $ and $ \Lambda_\omega^* $ in the $ K^* $-K-meson vertices are not well determined, so we treat them as free parameters. Furthermore, we find that the parameter $ \phi_\pi $ is not relevant since the interference terms between the pseudoscalar and vector meson exchange vanish. Thus, we have four parameters $ \phi_\rho $, $ \Lambda^*_\pi $, $ \Lambda^*_\rho $ and $ \Lambda^*_\omega $ for fitting. The results are listed in Table 3.
$\phi_\rho$       $\Lambda^*_\omega$ /GeV   $\Lambda^*_\pi$ /GeV   $\Lambda^*_\rho$ /GeV   $\chi^2/dof$
$-0.84\pm0.23$    $2.02\pm0.13$             $0.48\pm0.01$          $1.08\pm 0.01$          5.18
Table 3. Fit results for the parameters in Model I.
The fit results for the angular distributions and SDMEs are shown by the black dashed lines in Fig. 2, which shows that the model gives just a rough description of the experimental data. It is interesting to compare our results with previous studies. In Ref. [13], the authors analyzed the $ K^+p\to K^{*+} p $ reaction for $ p_{K^+} = 1.96 $ and $ 3.0 $ GeV and considered the $ \pi $, $ \rho $ and $ \omega $ exchange. They found that $ K^*p $ is produced partly via pion exchange and partly via vector meson exchange. Their results were based on a rather poor fit, and the role of the $ \rho $ exchange was not discussed. In Ref. [12], the authors analyzed the reaction at the same energies and found that although the $ \omega $ exchange plays an important role, the $ \pi $ exchange contribution becomes more important at low energies. The statistics of the experimental data analyzed in these two works is rather limited. In a later analysis [11, 14], the authors analyzed both the angular distributions and the SDME data and argued that the reaction is dominated by vector meson exchange down to threshold. However, the role of $ \pi $ exchange was not clarified in this work and the parameters of the model are not compatible with the commonly used values. After these analyses, new experimental data were published in Refs. [31, 32]. The only theoretical work concerning these data was presented in Ref. [15]. However, the authors did not consider the SDME data at all, and came to the conclusion that the $ \omega $ exchange did not play a role in this reaction. Obviously, the understanding of these reactions is still unsatisfactory.
Figure 2. (color online) Fit results for the angular distributions (top) and SDMEs (middle) of $K^*$ in the reaction $K^+ p \to p K^{*+}(\to K^0\pi^+)$, and the angular distributions of $K^*$ in the reaction $K^+ n \to pK^{*0}( \to K^+ \pi^-)$(bottom), in the center-of-mass frame for various beam momenta. The dashed (black), dash-dotted (blue) and solid (red) lines correspond to the results of Model I, Model IIA and Model IIB, respectively. The angular distributions of $K^*$ were calculated with the amplitudes listed in Eqs. (9)-(11). $\theta_{K^*}$ is defined as $\pi-\theta_p$ with $\theta_p$ the scattering angle of the final proton in the center-of-mass frame. The experimental data are from Refs. [31, 32].
To illustrate why the present model cannot reproduce the data very well, we study the contributions of the individual Feynman diagrams. In Fig. 3 and Fig. 4, we plot the angular distributions and SDMEs of $ K^* $ in the $ K^+p\to K^{*+} p $ reaction at $ p_K = 1.2 $ GeV, where the individual contributions are shown. It can be seen that the forward enhancement in the angular distributions favors the $ \pi $ exchange, since such an enhancement cannot be provided by the vector meson exchange. On the contrary, the SDME data clearly favor the $ \omega $ exchange and exclude the possibility that the $ \pi $ exchange dominates the reaction. In fact, based on this finding the authors of Ref. [14] argued that the vector meson exchange should be dominant. Nevertheless, this argument could result in a poor prediction of the angular distributions at the forward angles. In our results (black dashed lines in Fig. 2), the poor description of the angular distributions and the Re$ \rho_{10} $ data at forward angles illustrates the limitations of the present model. On the one hand, the SDME data force the fit toward a relatively small value of the cutoff of the $ K^*K\pi $ vertex, which suppresses the $ \pi $ exchange and leads to a poor prediction of the angular distributions at the forward angles. On the other hand, the problems in reproducing the Re$ \rho_{10} $ data at forward angles show that the $ \pi $ exchange is still too large. These problems originate from the conflicting demands of the angular distribution and SDME data. It seems that, considering only the $ \pi $, $ \rho $ and $ \omega $ exchange, the angular distribution and SDME data cannot be simultaneously well described. As a byproduct, the results shown in Fig. 4 also explain why the interference terms between the $ \pi $ and $ \rho $ (or $ \omega $) exchange vanish. This is most clearly illustrated by the result for $ \rho_{00} $ measured in the Gottfried-Jackson frame. For the $ \pi $ exchange, the resultant $ \rho_{00}^{\rm G-J} $ is 1, while for the vector meson exchange $ \rho_{00}^{\rm G-J} $ is 0. This means that the $ K^* $ mesons induced by the $ \pi $ and $ \rho $ ($ \omega $) exchanges are in orthogonal spin states. Thus, there are no interference terms between them.
Figure 3. (color online) Contribution of the individual meson exchange diagrams in the $ K^+p\to pK^{*+}(\to\pi^+K^0) $ reaction for $ p_{K^+} = 1.2 $ GeV in Model I.
Figure 4. (color online) Density matrix elements induced by the individual diagrams for $ p_{K^+} = 1.2 $ GeV, compared with the experimental data [31].
3.2. Model II
In Model I, it was shown that by only considering the $ \pi $, $ \rho $ and $ \omega $ exchange, one cannot give a satisfactory description of the experimental data. In the following, we include the contributions of the $ a_1 $, $ \Lambda $ and $ \Sigma $ exchange diagrams.
We include the contribution of these new diagrams in two steps to show their effect on improving the model. First, we only include the hyperon exchange diagrams (Model IIA). In this case, to evaluate the amplitudes, the coupling constants and cutoff parameters of the $ KNY $ and $ K^*NY $ vertices need to be determined. The coupling constants are relatively well known from $ KN $ scattering or other strangeness production processes [22, 27], and we use the values from the literature as listed in Table 2. Since we are dealing with the u-channel hyperon exchange diagrams, the cutoff parameters are poorly known. Therefore, we treat the cutoff parameters in the $ K^*NY $ vertex, $ \Lambda_\Lambda $ and $ \Lambda_\Sigma $, as free parameters. We now have 9 free parameters in total. The fit parameters are shown in Table 4 and the corresponding results are shown in Fig. 2 by the blue dash-dotted line. The obtained $ \chi^2/dof $ for this fit is 4.32, which shows that with five more fit parameters the improvement is rather limited. An explicit study of the magnitude of the hyperon exchange terms shows that their contribution is small (as shown in Fig. 5), which means that the hyperon exchange diagrams play only a minor role in this reaction.
parameter         value        parameter                 value
$\phi_\pi$        1.18±0.16    $\Lambda_\Lambda$ /GeV    0.65±0.01
$\phi_\rho$       0.56±0.22    $\Lambda_\Sigma$ /GeV     2.50±0.13
$\phi_\Lambda$    3.20±0.14    $\Lambda_\omega^*$ /GeV   1.69±0.07
$\phi_\Sigma$     5.10±0.12    $\Lambda_\pi^*$ /GeV      0.52±0.01
                               $\Lambda_\rho^*$ /GeV     1.11±0.03
Table 4. Fit parameters obtained with Model IIA ($ \chi^2/dof = 4.32 $).
Figure 5. (color online) Contribution of the individual diagrams in the $ K^+ p \to pK^{*+}(\to \pi^+K^0) $ reaction for $ p_{K^+} = 1.2 $ GeV in Model IIA.
As a next step, we include the contribution of the $ a_1 $ exchange diagram (Model IIB). The Lagrangians and the coupling constants for the $ a_1KK^* $ and $ a_1NN $ vertices are discussed in the Appendix. As noted in the Introduction, there are in fact some other axial-vector meson exchanges that may contribute to this reaction. However, due to the poor knowledge of the relevant couplings, we do not consider them explicitly and assume that their contribution is partly absorbed in the $ a_1 $ exchange amplitude. The number of free parameters is 13 in this fit. The best fit parameters are shown in Table 5 and the corresponding results for the observables are shown in Fig. 2 by the red solid line. The obtained $ \chi^2/dof $ is 1.98, which shows that the contribution of the $ a_1 $ exchange significantly improves the fit results. The improvements occur both in the angular distributions and SDMEs. We have also checked that the u-channel diagrams are not important for this fit. If we turn off their contribution, the $ \chi^2/dof $ only slightly increases. However, the relative phases among the amplitudes are important for describing the data. In fact, we have fixed the relative phases according to the SU(3) relations and found that $ \chi^2/dof $ significantly increases②.
parameter         value        parameter                 value
$\phi_\pi$        2.06±0.19    $\Lambda_{a_1}$ /GeV      2.04±0.08
$\phi_\rho$       2.79±0.14    $\Lambda_\omega^*$ /GeV   1.48±0.09
$\phi_\Lambda$    2.44±0.19    $\Lambda^*_\pi$ /GeV      0.55±0.04
$\phi_\Sigma$     2.05±0.40    $\Lambda^*_\rho$ /GeV     1.04±0.05
$\phi_{a_1}$      3.86±0.15    $\Lambda_\Lambda$ /GeV    0.58±0.02
$\theta_{a_1}$    3.59±0.76    $\Lambda_\Sigma$ /GeV     2.50±0.11
$g_{a_1KK^*}$     18.26±2.23
Table 5. Fit parameters obtained with Model IIB ($ \chi^2/dof = 1.98 $).
The individual contributions in Model IIB are shown in Fig. 6. It is found that in this model the $ \pi $, $ \omega $ and $ a_1 $ exchanges play an important role. The strength of the $ a_1 $ exchange is comparable to that of the $ \pi $ and $ \omega $. The significant contribution of the $ a_1 $ is due to the large fitted coupling constant $ g_{a_1KK^*} $ and mixing angle $ \theta_{a_1} $. The present knowledge of these two parameters is rather poor. As discussed in the Appendix, to constrain these two parameters we also take into account in the fit the experimental partial decay width of $ a_1\to KK^* $. With the fitted coupling constant, the obtained partial width $ \Gamma_{a_1KK^*} $ is 79.92 MeV, and the corresponding branching ratio is 18.80% using $ \Gamma_{a_1} = 425 $ MeV [16]. The fitted decay branching ratio is larger than the experimental value (2.2%-15%) [16], indicating that the experimental data favor a large contribution from the axial-vector meson exchange. The large partial decay width of $ a_1 $ obtained in the fit and the value of $ \chi^2/dof $ show that there is room for improvement of the model. One possibility is that some other axial-vector meson is also important but is ignored in the present model. Possible contributions may come from $ f_1(1285) $, $ h_1 $, $ b_1 $ or some other higher mass mesons, whose contributions are difficult to include due to the lack of knowledge of their couplings to $ NN $ and $ KK^* $. It should also be noted that to identify the $ a_1KK^* $ coupling or the partial decay width of $ a_1 $ in the $ KK^* $ channel, one should also take into account the uncertainties of $ g_{a_1NN} $. If a larger value of $ g_{NNa_1} $ is used, the obtained $ \Gamma_{a_1\to KK^*} $ can be reduced. The current knowledge of this coupling constant comes mainly from the analysis of the axial-vector form factor of the nucleon based on the axial-vector meson dominance model [33]. In these studies, the uncertainties of the extracted $ g_{a_1NN} $ are not well controlled [17, 33, 34]. Therefore, it is still not possible to draw a decisive conclusion about $ g_{a_1KK^*} $. However, the significant improvement of $ \chi^2/dof $ compared to Model I shows that the axial-vector meson exchange may be important and deserves further study.
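As a quick arithmetic check of the numbers quoted above:

```python
gamma_partial, gamma_total = 79.92, 425.0  # MeV, values quoted in the text and Ref. [16]
print(f"BR(a1 -> K K*) = {gamma_partial / gamma_total:.2%}")  # prints 18.80%
```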
Figure 6. (color online) Contribution of the individual diagrams in the reactions $ K^+ p \to pK^{*+}( \to K^0 \pi^+) $(a) and $ K^+ n \to pK^{*0}( \to K^+ \pi^-) $(b) for $ p_{K^+} = 1.2 $ GeV based on Model IIB.
It is interesting to note that compared to $ K^+p \to p K^{*+} $, all models give a fairly good description of the angular distribution data for the $ K^+n \to p K^{*0} $ reaction. It is hence important to check whether the models also describe well the SDME data of this reaction. Unfortunately, such a comparison is still not possible due to the absence of data, indicating the need for new and updated data for the relevant reactions. One candidate reaction is $ K_{\rm L} N \to K^{*} N $. In fact, this has already been suggested using the secondary $ K_{\rm L} $ beam at JLab [10]. The new data for the $ K_{\rm L}N \to K^*N $ reaction could verify our models and help to better understand the reaction mechanism. We give the predictions of the angular distributions and SDMEs for the $ K_{\rm L} p \to K^{*0} p $ reaction in Fig. 7. The fact that the models result in distinct predictions indicates that they can be distinguished once the data for the $ K_{\rm L}N \to K^*N $ reaction are available.
Figure 7. (color online) Predictions of the angular distributions and SDMEs (Helicity frame) of $ K^* $ in the $ K_{\rm L} p \to K^{*0} p $ reaction in the center-of-mass frame for various beam momenta, based on Model I (black dashed line), Model IIA (blue dash-dotted line) and Model IIB (red solid line).
4. Summary
In the present work, we investigated the $ K^* $ production in the $ KN\to K \pi N $ reaction using the effective Lagrangian approach. We calculated the contributions of the $ \pi $, $ \rho $, $ \omega $, hyperon and axial-vector meson exchanges. The available experimental data, such as the angular distributions and spin density matrix elements, were analyzed. It was found that Model IIB is favored by the existing experimental data, in which the pseudoscalar meson ($ \pi $), vector meson ($ \omega $) and axial-vector meson ($ a_1 $) exchanges are important for the understanding of this reaction. In order to identify the role of the axial-vector meson exchange, measurements of $ K_{\rm L} p\to K^{*0} p $ would be helpful. Model predictions for this reaction were also presented for a future comparison.
Appendix A: Lagrangians and coupling constants for the $ a_1NN $ and $ a_1KK^* $ vertices
In this Appendix, we present the Lagrangians and coupling constants for the $ a_1NN $ and $ a_1KK^* $ vertices. For the $ a_1NN $ vertex, we use the following Lagrangian [17],
$\tag{A1} \begin{array}{l} {\cal L}_{NNa_1} = g_{NNa_1}\bar{N}\gamma_\mu \gamma_5 N a_1^\mu . \end{array} $
The coupling constant $ g_{NNa_1} $ is difficult to determine directly. In practice, it can be evaluated from the nucleon axial-vector coupling constant based on the axial-vector meson dominance model. The uncertainty of this method comes from the model itself and from the ratio of $ g_A/g_V $ used in the model. For example, one gets $ g_{NNa_1} = 6.70\pm 1.0 $ [17, 33] and $ g_{NNa_1} = 7.49\pm 1.0 $ [34] by using different values of $ g_A/g_V $. Bearing the uncertainties in mind, we use $ g_{NNa_1} = 6.70 $ in this work.
The Lagrangian for the $ a_1KK^* $ vertex is [35],
$\tag{A2} \begin{array}{l} {\cal L}_{a_1KK^*} = \frac{g_{a_1KK^*}}{\sqrt{2}}({\cal L}_1 \cos\theta_{a_1}+{\cal L}_2 \sin\theta_{a_1}) , \end{array} $
$ \begin{array}{l} {\cal L}_1 = \partial^\nu \bar{K} a_1^\mu K^*_{\mu\nu}+h.c.,\\ {\cal L}_2 = \bar{K} \partial^\mu a_1^\nu K^*_{\mu\nu}+h.c., \end{array} $
and $ K^*_{\mu\nu} = \partial_\mu K^*_\nu-\partial_\nu K^*_\mu $. With the Lagrangians given above, the $ a_1 $ exchange amplitude can be expressed as
$\tag{A3} \begin{split} {{\cal A}_{{a_1}}} =&{ - \sqrt 2 {G_V}{g_{{a_1}NN}}{g_{{a_1}K{K^*}}}\frac{{ - {g_{\mu \nu }} + \frac{{{p_{{K^*},\mu }}{p_{{K^*},\nu }}}}{{p_{{K^*}}^2}}}}{{p_{{K^*}}^2 - m_{{K^*}}^2 + {\rm i}{m_{{K^*}}}\Gamma }}}\\ &\times{\left[ {\left( {p_{{K^*}}^\alpha p_K^\nu {\rm cos}\theta + p_{{K^*}}^\alpha p_{{a_1}}^\nu {\rm sin}\theta } \right)\frac{{ - {g_{\alpha \beta }} + \frac{{{p_{{a_1},\alpha }}{p_{{a_1},\beta }}}}{{p_{{a_1}}^2}}}}{{p_{{a_1}}^2 - m_{{a_1}}^2}}} \right.}\\ &{\left. { {\left. { - ({p_{{K^*}}} \cdot {p_K}{\rm cos}\theta + {p_{{K^*}}} \cdot {p_{{a_1}}}{\rm sin}\theta } \right)\frac{{ - {g_{\nu \beta }} + \frac{{{p_{{a_1},\nu }}{p_{{a_1},\beta }}}}{{p_{{a_1}}^2}}}}{{p_{{a_1}}^2 - m_{{a_1}}^2}}} } \right]}\\ &\times{( - p_{{\pi ^ + }}^\mu + p_{{K^0}}^\mu ){\gamma _\beta }{\gamma _5}.} \end{split}$
In previous studies, the coupling constant $ g_{a_1KK^*} $ and the mixing angle $ \theta_{a_1} $ are rarely studied. In practice, one can extract the coupling constant from the partial decay width. For the $ a_1 \to KK^* $ process, there are indeed some experimental values [36-38]. However, one cannot determine both of these parameters with one input. In this work, we choose to set them as free parameters and obtain them from the simultaneous fit of the data for the reactions $ K^+ p\to K^{*+} p $ and $ K^+ n\to K^{*0} p $ and their partial widths.
Since $ a_1 $ lies below the $ KK^* $ threshold, the decay is due to its width. The partial decay width $ \Gamma_{a_1\to KK^*} $ can be calculated as [39]
$\tag{A4} \begin{split} \Gamma_{a_1\to KK^*} =& \frac{1}{\pi^2}\int {\rm d}s_{a_1}\, {\rm d}s_{K^*}\, {\rm Im}\left(\frac{1}{s_{K^*}-M_{K^*}^2+{\rm i}M_{K^*}\Gamma_{K^*}}\right) \\ &\times {\rm Im}\left(\frac{1}{s_{a_1}-M_{a_1}^2+{\rm i}M_{a_1}\Gamma_{a_1}}\right) \Gamma_{a_1KK^*}(\sqrt{s_{a_1}},\sqrt{s_{K^*}}) \\ &\times \Theta(\sqrt{s_{a_1}}-\sqrt{s_{K^*}}-M_{K}), \end{split} $
where
$\tag{A5} \begin{array}{l} \Gamma_{a_1KK^*} = \frac{q}{12\pi M_A^2}\sum|M_{a_1\rightarrow KK^*}|^2 \end{array} $
with $ q $ the momentum of the decay products in the rest frame of the decaying system (of invariant mass $ M_A $) and $ M_{a_1\rightarrow KK^*} $ the decay amplitude. To make the integral converge, it is necessary to include form factors. We follow Ref. [40] and add
$\tag{A6} \begin{array}{l}\left(\frac{\Lambda_{a_1}^2}{\Lambda_{a_1}^2+|s_A-m_{a_1}^2|}\right)^2\cdot \left(\frac{\Lambda_{K^*}^2}{\Lambda_{K^*}^2+|s_V-m_{K^*}^2|}\right) \end{array} $
in the amplitude. Note that we use a dipole form factor for $ a_1 $ since the $ a_1KK^* $ coupling involves both the S-wave and D-wave.
To obtain the results presented in the text, we use $ \Lambda_{a_1} = \Lambda_{K^*} = 1.0 $ GeV as in Ref. [40]. The partial decay width of $ a_1\rightarrow KK^* $ obtained in Model IIB is 79.92 MeV and the corresponding decay branching ratio is 18.80%. Note that we have also tried in the fit the values of 1.5 and 2.0 GeV for $ \Lambda_{a_1} $ and $ \Lambda_{K^*} $. The corresponding values for the partial decay width are 106.35 and 121.81 MeV, respectively, with a worse $ \chi^2/dof $.
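A numerical sketch of the double spectral integral of Eq. (A4) is given below. It is schematic: the spin-summed $|M|^2$ of Eq. (A5) is replaced by a constant placeholder, the form factors of Eq. (A6) are omitted, and the mass values and integration ranges are illustrative assumptions rather than the paper's inputs.

```python
import numpy as np
from scipy import integrate

M_A1, G_A1 = 1.23, 0.425    # a1(1260) mass and total width in GeV (assumed PDG-like)
M_KS, G_KS = 0.892, 0.0508  # K* mass and width in GeV
M_K = 0.494                 # kaon mass in GeV

def im_bw(s, m, g):
    """Im 1/(s - m^2 + i m g): the spectral weight appearing twice in Eq. (A4)."""
    return -m * g / ((s - m**2) ** 2 + (m * g) ** 2)

def gamma_fixed_mass(m_a, m_v, msq=1.0):
    """Fixed-mass width of Eq. (A5), Gamma = q/(12 pi m_a^2) * sum|M|^2, with the
    spin-summed |M|^2 replaced by a constant placeholder; the real expression
    depends on g_{a1KK*} and theta_{a1} and is not reproduced here."""
    q2 = (m_a**2 - (m_v + M_K) ** 2) * (m_a**2 - (m_v - M_K) ** 2) / (4 * m_a**2)
    return np.sqrt(max(q2, 0.0)) / (12 * np.pi * m_a**2) * msq

def gamma_a1_kkstar(s_max=4.0):
    """Double spectral integral of Eq. (A4); integration ranges are illustrative."""
    def integrand(s_v, s_a):
        m_a, m_v = np.sqrt(s_a), np.sqrt(s_v)
        if m_a < m_v + M_K:  # theta function: decay kinematically closed
            return 0.0
        return im_bw(s_v, M_KS, G_KS) * im_bw(s_a, M_A1, G_A1) * gamma_fixed_mass(m_a, m_v)
    val, _ = integrate.dblquad(integrand, 0.6, s_max, lambda s_a: 0.4, lambda s_a: s_max)
    return val / np.pi**2

print(gamma_a1_kkstar())  # in arbitrary units, since |M|^2 is a placeholder
```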
Search results for "Ruth Charney" (1-10 of 3445 matches)
An introduction to right-angled Artin groups
Ruth Charney
Abstract: Recently, right-angled Artin groups have attracted much attention in geometric group theory. They have a rich structure of subgroups and nice algorithmic properties, and they give rise to cubical complexes with a variety of applications. This survey article is meant to introduce readers to these groups and to give an overview of the relevant literature.
Automorphisms of higher-dimensional right-angled Artin groups
Ruth Charney, Karen Vogtmann
Abstract: We study the algebraic structure of the automorphism group of a general right-angled Artin group. We show that this group is virtually torsion-free and has finite virtual cohomological dimension. This generalizes results proved by the authors and John Crisp (arXiv:math/0610980) for two-dimensional right-angled Artin groups.
Random groups arising as graph products
Ruth Charney, Michael Farber
Mathematics, 2010, DOI: 10.2140/agt.2012.12.979
Abstract: In this paper we study the hyperbolicity properties of a class of random groups arising as graph products associated to random graphs. Recall that the construction of a graph product is a generalization of the constructions of right-angled Artin and Coxeter groups. We adopt the Erdős-Rényi model of a random graph and find precise threshold functions for the hyperbolicity (or relative hyperbolicity). We also study automorphism groups of right-angled Artin groups associated to random graphs. We show that with probability tending to one as $n\to \infty$, random right-angled Artin groups have finite outer automorphism groups, assuming that the probability parameter $p$ is constant and satisfies $0.2929 < p < 1$.
Length functions of 2-dimensional right-angled Artin groups
Ruth Charney, Max Margolis
Abstract: Morgan and Culler proved that a minimal action of a free group on a tree is determined by its translation length function. We prove an analogue of this theorem for 2-dimensional right-angled Artin groups acting on CAT(0) rectangle complexes.
Metric characterizations of spherical and Euclidean buildings
Ruth Charney, Alexander Lytchak
Mathematics, 2001, DOI: 10.2140/gt.2001.5.521
Abstract: A building is a simplicial complex with a covering by Coxeter complexes (called apartments) satisfying certain combinatorial conditions. A building whose apartments are spherical (respectively Euclidean) Coxeter complexes has a natural piecewise spherical (respectively Euclidean) metric with nice geometric properties. We show that spherical and Euclidean buildings are completely characterized by some simple, geometric properties.
Subgroups and quotient groups of automorphism groups of RAAGs
Abstract: We study subgroups and quotients of outer automorphism groups of right-angled Artin groups (RAAGs). We prove that for all RAAGs, the outer automorphism group is residually finite and, for a large class of RAAGs, it satisfies the Tits alternative. We also investigate which of these automorphism groups contain non-abelian solvable subgroups.
Relative hyperbolicity and Artin groups
Ruth Charney, John Crisp
Abstract: This paper considers the question of relative hyperbolicity of an Artin group with regard to the geometry of its associated Deligne complex. We prove that an Artin group is weakly hyperbolic relative to its finite (or spherical) type parabolic subgroups if and only if its Deligne complex is a Gromov hyperbolic space. For a 2-dimensional Artin group the Deligne complex is Gromov hyperbolic precisely when the corresponding Davis complex is Gromov hyperbolic, that is, precisely when the underlying Coxeter group is a hyperbolic group. For Artin groups of FC type we give a sufficient condition for hyperbolicity of the Deligne complex which applies to a large class of these groups for which the underlying Coxeter group is hyperbolic.
Divergence and quasimorphisms of right-angled Artin groups
Jason Behrstock, Ruth Charney
Abstract: We give a group theoretic characterization of geodesics with superlinear divergence in the Cayley graph of a right-angled Artin group A(G) with connected defining graph G. We use this to determine when two points in an asymptotic cone of A(G) are separated by a cut-point. As an application, we show that if G does not decompose as the join of two subgraphs, then A(G) has an infinite-dimensional space of non-trivial quasimorphisms. By the work of Burger and Monod, this leads to a superrigidity theorem for homomorphisms from lattices into right-angled Artin groups.
Convexity of parabolic subgroups in Artin groups
Ruth Charney, Luis Paris
Abstract: We prove that any standard parabolic subgroup of any Artin group is convex with respect to the standard generating set.
Contracting Boundaries of CAT(0) Spaces
Ruth Charney, Harold Sultan
Abstract: As demonstrated by Croke and Kleiner, the visual boundary of a CAT(0) group is not well-defined, since quasi-isometric CAT(0) spaces can have non-homeomorphic boundaries. We introduce a new type of boundary for a CAT(0) space, called the contracting boundary, made up of rays satisfying one of five hyperbolic-like properties. We prove that these properties are all equivalent and that the contracting boundary is a quasi-isometry invariant. We use this invariant to distinguish the quasi-isometry classes of certain right-angled Coxeter groups.
Adiabatic process in QM and thermodynamics?
I have come across the phrase 'adiabatic process' in two different contexts, that of QM and that of thermodynamics.
In QM, an adiabatic process is one that is slow compared to the timescale $$t=\frac{\hbar}{E_n-E_m}$$ and in which the probability that the system is in a given eigenstate remains the same.
In thermodynamics, an adiabatic process is one in which there is no heat transfer to or from the surroundings.
I was wondering how these two definitions are linked?
quantum-mechanics thermodynamics statistical-mechanics
Quantum spaghettification
They are linked via the Gibbs-Shannon entropy given by: $$S_G=-\sum_i P_i \ln(P_i)$$ where $P_i$ is the probability that the system will be in the $i$th eigenstate. The relationship between the Gibbs-Shannon entropy and the standard thermodynamic entropy is: $$S=k_BS_G$$ In the quantum mechanical 'adiabatic process' we are told that the probabilities $P_i$ do not change; this means that $S_G$ does not change and hence $S$ also does not change (the thermodynamic definition of an adiabatic process).
So an adiabatic process in the quantum-mechanical sense of the word is in essence equivalent to that of the thermodynamic meaning.
$\begingroup$ Not quite. A unitary evolution in QM or (condensed matter) QFT, to which the procedure of "adiabatically turning on an interaction" is usually applied, is always an "adiabatic process" in the sense of your definition, since the probabilities of energy eigenstates, in general of evolved states, and the entropy itself are always conserved for any state/density matrix $\rho$: $\langle E_i|\rho(t)|E_i\rangle = \langle E_i|\rho(0)|E_i\rangle$, $\langle \Psi(t)|\rho(t)|\Psi(t)\rangle = \langle \Psi(0)|\rho(0)|\Psi(0)\rangle$, $S(\rho(t)) = S(\rho(0))$ for $\rho(t) = U(t) \rho(0) U^\dagger(t)$. $\endgroup$ – udrv Mar 16 '16 at 15:33
$\begingroup$ Your observation would be applicable perhaps to open quantum systems, so you need to specify the context to which the condition $t = \hbar/(E_n - E_m)$ applies. $\endgroup$ – udrv Mar 16 '16 at 15:34
|
CommonCrawl
|
Béla Szőkefalvi-Nagy Medal 2000
Translations on graphs
Juhani Nieminen, Matti Peltola
Abstract. The concept of translations on graphs is introduced. Graphs where every block is a complete graph, median graphs, and the covering graphs of finite distributive lattices are characterized by means of special translations and other mappings.
AMS Subject Classification (1991): 05C12, 06B10
Keyword(s): convexes of graphs, translation, medians, lattices
Received October 21, 1998, and in revised form March 1, 2000. (Registered under 2746/2009.)
Direct systems of localizations of polynomial rings
Souad Ameziane, Othman Echi, Ihsen Yengui
Abstract. We study a class of direct systems of rings satisfying a lifting property $(L)$ in order to generalize some properties known in $R[\infty ]$, $R(\infty )$ and $R\langle\infty \rangle $. Moreover, the following theorem is given, generalizing the fact that $R\langle\infty \rangle $ and $R(\infty )$ are stably strong $S$-domains if $R$ has finite valuative dimension. If $A=\lim_\to(S_j^{- 1}R[\Lambda_j],f_{kj})$ is a locally finite-dimensional domain, $f_{kj}$ are $R$-homomorphisms, and t.d.$[A:R]=\infty $, then $A$ is a stably strong $S$-domain. Finally, we present another characterization of rings satisfying the valuative altitude formula.
AMS Subject Classification (1991): 13C05, 13F05, 13F20
Received April 29, 1999, and in revised form March 2, 2000. (Registered under 2747/2009.)
Endomorphism monoids in varieties of bands
M. Demlová, V. Koubek
Abstract. Let ${\msbm W}$ be a proper subvariety of a variety ${\msbm V}$. We say that a functor $F\colon{\cal K}\to{\msbm V}$ is a ${\msbm W}$-relatively full embedding if $F$ is faithful, $\mathop{\rm Im} (Ff)\notin{\msbm W}$ for any ${\cal K}$-morphism $f$, and if $f\colon Fa\to Fb$ is a homomorphism for ${\cal K}$-objects $a$ and $b$ then either $\mathop{\rm Im} (f)\in{\msbm W}$ or $f=Fg$ for some ${\cal K}$-morphism $g\colon a\to b$. A variety of algebras ${\msbm V}$ is called var-relatively universal if there exist a proper subvariety ${\msbm W}$ of ${\msbm V}$ and a ${\msbm W}$-relatively full embedding from the category of all graphs and compatible mappings into ${\msbm V}$. We prove that a variety ${\msbm V}$ of bands is var-relatively universal if and only if ${\msbm V}$ contains the variety of all left semi-normal bands or the variety of all right semi-normal bands.
AMS Subject Classification (1991): 18B15, 20M07, 20M15
Keyword(s): full embedding, lattice of varieties of bands, determinacy
Received July 27, 1999, and in revised form April 4, 2000. (Registered under 2748/2009.)
Rees matrix semigroups over semigroupoids and the structure of a class of abundant semigroups
Mark V. Lawson
Abstract. McAlister proved that every locally inverse regular semigroup is a locally isomorphic image of a regular Rees matrix semigroup over an inverse semigroup. In this paper, we show how this result can be generalised to a class of locally adequate abundant semigroups.
Received September 28, 1995, and in final form June 5, 2000. (Registered under 2749/2009.)
When dim$=$const implies subspace$=$const
J. M. Szucs
Abstract. If the dimension of the space spanned by the vectors $\langle f_{1}^{(s)}(x)$, $\ldots $, $f_{n}^{(s)}(x)\rangle $, $s=0,1,\ldots,k$, of $n$ real-valued functions $f_{1},\ldots,f_{n}$ and of their first $k$ derivatives is independent of $x\in I$ (an interval $\subseteq{\msbm R}{}$) and is at most $k$, then the space itself is independent of $x\in I$. This was proved by Curtiss and Moszner assuming the continuity of $f_{1}^{(k)},\ldots,f_{n}^{(k)}$. Their proofs are simplified and extended to operator-valued maps. The extension relies on this generalization of a theorem of Peano: Let $T\colon I\to L(V,W)$ be a differentiable map from a nondegenerate interval $I\subseteq{\msbm R}$ to the space $L(V,W)$ of linear operators from a real finite-dimensional vector space $V$ to another such space $W$. Then $\mathop{\rm range}T(x)$, $x\in I$, is constant if and only if $\mathop{\rm range}T^{\prime }(x)\subseteq\mathop{\rm range}T(x)$, $\dim\mathop{\rm range}T(x)=\mathop{\rm const}$, $x\in I$.
Received June 23, 1999, and in revised form December 2, 1999. (Registered under 2750/2009.)
An equivalent norm on BMO spaces
Stevo Stević
Abstract. Let $p>0$. A Borel function $f$, locally integrable in the unit ball $B$, is said to be a $BMO_p(B)$ function if $$||f||_{BMO_p}=\sup_{B(a,r)\subset B}\big(\frac{1}{V(B(a,r))}\int_{B(a,r)}|f(x)-f_{B(a,r)}|^pdV(x)\big)^{1/p}<+\infty,$$ where the supremum is taken over all balls $B(a,r)$ in $B$, and $f_{B(a,r)}$ is the mean value of $f$ over $B(a,r)$. Let ${\cal H}(B)$ denote the set of harmonic functions in the open unit ball $B$, and let $f_{a,r}(x)$ denote $f(a+rx)$ for an arbitrary function $f$. The main result of this paper is to prove the following theorem: Let $u\in{\cal H}(B)$, $p>1$. Then a) $$\eqalign{||u||_{BMO_p}^p=\sup_{{a\in B}\atop{0< r< 1-|a|}}\frac{p(p-1)}{2n(n-2)} \int_B\big(&|u_{a,r}(x)-u_{a,r}(0)|^{p-2} |\nabla u_{a,r}(x)|^2\times\cr &\times(2|x|^{2-n}+(n-2)|x|^2-n)\big)dV_N(x)}$$ for $n\geq3$, and b) $$\eqalign{||u||_{BMO_p}^p=\sup_{{a\in B}\atop{0< r< 1-|a|}}p(p-1)\int_B\big(&|u_{a,r}(x)-u_{a,r}(0)|^{p-2} |\nabla u_{a,r}(x)|^2\times\cr &\times\big(\ln\frac {1}{|x|}-1+|x|\big)\big)dV_N(x)}$$ for $n=2$.
AMS Subject Classification (1991): 31B05, 31C05
Received March 8, 1999. (Registered under 2751/2009.)
The uniform asymptotic stability for functional differential equations with finite delay
Younhee Ko
Abstract. We consider a system of functional differential equations $x'(t)=F(t,x_t)$ and obtain conditions on a Liapunov functional and a Liapunov function to ensure the stability of the zero solution of functional differential equation with finite delay.
AMS Subject Classification (1991): 34K20
Keyword(s): Uniform asymptotic stability, functional differential equations
Received April 1, 1998, and in revised form December 16, 1999. (Registered under 2752/2009.)
Baire property implies continuity for solutions of functional equations --- even with few variables
Antal Járai
Abstract. It is proved that --- under certain conditions --- solutions $f$ of the functional equation $$ f(x)=h(x,y,f(g_1(x,y)),\ldots,f(g_n(x,y))), (x,y)\in D\subset{{\msbm R}^n}\times{\msbm R}^l $$ having Baire property are continuous, even if $1\le l\le n$. As a tool we introduce new function classes which --- roughly speaking --- interpolate between Baire property and continuity.
AMS Subject Classification (1991): 39B05, 54E52
Received August 12, 1999, and in revised form April 5, 2000. (Registered under 2753/2009.)
A functional equation on complementary means
Zoltán Daróczy, Che Tat Ng
Abstract. Let $M$ be a mean on $[a,b]$ and let $\hat M(x,y):=x+y-M(x,y)$ $(x,y\in[a,b])$ be the mean which is complementary to $M$ with respect to the arithmetic mean. A function $f\colon[a,b]\to{\msbm R}$ is called {\it $M$-associate } if it possesses the following property: If $x,y\in[a,b]$ satisfy $M(x,y)=(x+y)/2$ and $f(x )=f\left((x+y)/2\right )$, then $f(y)=f(x)$. We consider the functional equation $$ f(M(x,y))=f(\hat M( x,y)) (x,y\in[a,b]) $$ with and without $f$ being $M$-associate.
AMS Subject Classification (1991): 39B22, 39B12, 26A18
Keyword(s): quasi-arithmetic mean, functional equation
The maximal Cesàro operator on Hardy spaces
Hubert Berens, Luoqing Li
Abstract. We will prove that the maximal Cesàro operator $\sigma_*^\delta$ is bounded from $H^p({{\msbm R}})$ to $L^p({{\msbm R}})$ when $\delta >\delta_p:=1/p-1$, $0< p\leq1$, while $\sigma_*^{\delta_p}$ maps $H^p({{\msbm R}})$ boundedly into {\sl weak}-$L^p({{\msbm R}})$ for $0< p< 1$. The weak type estimate is best possible in the sense that it cannot be strengthened to strong type. The results extend and strengthen those of [7], [11], and [1].
AMS Subject Classification (1991): 42A38, 42A08, 42B30
Keyword(s): Fourier transforms, Cesàro means, Hardy spaces
Extensions of operator valued positive definite functions on an interval of ${\msbm Z}^2$ with the lexicographic order
Ramón Bruzual, Marisela Domínguez
Abstract. We prove that an operator valued positive definite function defined on an interval of ${\msbm Z}^2$ with the lexicographic order can be extended to a positive definite function on the whole discrete plane.
Keyword(s): operator valued positive definite functions, semigroup of operators, lexicographic order
Received April 29, 1999, and in revised form November 8, 1999. (Registered under 2756/2009.)
A Kreĭn space approach to representation theorems and generalized Friedrichs extensions
Andreas Fleige, Seppo Hassi, Henk de Snoo
Abstract. Let ${\eufm t}[\cdot,\cdot ]$ be a densely defined symmetric sesquilinear form in a Hilbert space ${\eufm H}$ with inner product $(\cdot,\cdot )$. Assume that for some $\lambda\in {\msbm R}$ the form ${\eufm t}[\cdot,\cdot ]-\lambda(\cdot,\cdot )$ induces a Kreĭn space structure on $\mathop{\rm dom}{\eufm t}$, which can be continuously embedded in ${\eufm H}$. Then there exists a unique selfadjoint operator $T_{\eufm t}$ in ${\eufm H}$ such that $\mathop{\rm dom}T_{\eufm t}\subset\mathop{\rm dom}{\eufm t}$ and ${\eufm t}[f,g]=(T_{\eufm t}f,g)$, $f \in\mathop{\rm dom}T_{\eufm t}$, $g \in\mathop{\rm dom}{\eufm t}$. This generalizes the first representation theorem in T. Kato [Kato] to a non-semibounded situation. Based on the theory of definitizable operators in Kreĭn spaces an analog of the second representation theorem in [Kato] will be given. These results provide an approach to generalized Friedrichs extensions for a class of non-semibounded symmetric operators with defect numbers $(1,1)$, which is analogous to the classical theory in the semibounded case.
AMS Subject Classification (1991): 46C20, 47A67, 47B50; 47B25
Keyword(s): Sesquilinear form, representation theorem, Kreĭn space, singular critical point, generalized Friedrichs extension
Weighted integrals of analytic functions
Aristomenis G. Siskakis
Abstract. We derive a formula of a weight $v$ in terms of a given weight $w$ such that the estimate $$ \int_{\msbm D}| f(z)| ^pw(z) dm(z) \sim | f(0)| ^p + \int_{\msbm D}| f'(z)| ^pv(z) dm(z) $$ is valid for all analytic functions $f$ on the unit disc.
AMS Subject Classification (1991): 46E15, 30E99
Received February 15, 1999, and in revised form March 3, 2000. (Registered under 2758/2009.)
Continuous and compact imbeddings of weighted Sobolev spaces
Pankaj Jain, Bindu Bansal, P. K. Jain
Abstract. Continuous and compact imbeddings of weighted Sobolev spaces $W^{1,p}(\Omega; v)$ and $W_0^{1,p}(\Omega; v)$ into the weighted Lebesgue space $L^q (\Omega; w)$, where $1\le q< p< \infty $, have been considered, where the weights $v$ and $w$ are some functions of the distance measured either from the boundary $\partial\Omega $ of $\Omega $ or from a point $x_0 \in\partial \Omega $ and $\Omega $ is a bounded domain of ${{\msbm R}^N}$ in the class ${{\cal C}^{0,1}}$, ${{\cal K}(x_0), {\cal K}^{0,1}(x_0)}$.
Received October 26, 1998, and in final form September 15, 1999. (Registered under 2759/2009.)
The irreducible decomposition of Cowen--Douglas operators and operator weighted shifts
Chun Lan Jiang, Jue Xian Li
Abstract. This paper concerns strongly irreducible decomposition and irreducible decomposition for Cowen--Douglas operators and operator weighted shifts. We characterize strong irreducibility of an operator weighted shift by the Jacobson radical of its commutant. Moreover, we show that every Cowen--Douglas operator and operator weighted shift has a unique finite irreducible decomposition up to unitary equivalence.
AMS Subject Classification (1991): 46H30, 47A10, 47A55, 47A58
Received November 18, 1998, and in revised form July 6, 1999. (Registered under 2760/2009.)
Dvoretzky's type result for operators on Banach spaces
Vladimír Müller
Abstract. Let $\lambda_1,\ldots,\lambda_n$ be elements of the essential approximate point spectrum of a bounded Banach space operator. Then there are corresponding approximate eigenvectors $x_1,\ldots,x_n$ such that the norm on the subspace generated by them is almost symmetric. The result can be used in the Scott Brown technique for Banach space operators. Another application is for the local behaviour of operators.
Received July 1, 1999, and in revised form December 13, 1999. (Registered under 2761/2009.)
On simultaneous triangularization of commutants
Reza Yahaghi
Abstract. The main purpose of this paper is to present a reducibility result based on a recent theorem of Turovskii on semi-groups of compact quasinilpotent operators. More precisely, we prove that every non-zero triangularizable family of compact operators has a hyperinvariant subspace, and then we present several sufficient conditions for simultaneous triangularization of a family of compact operators together with its commutant. We also give a different proof of Shulman's theorem. The finite-dimensional version of the results is also mentioned and emphasized.
AMS Subject Classification (1991): 47A15, 47D03, 20M20
Keyword(s): Volterra semigroup (algebra), hyperinvariant subspace, Commutant, Triangularization
On the structure of c.n.u. bi-isometries
Dan Popovici
Abstract. As shown by Berger, Coburn and Lebow [1] and recently rediscovered by Douglas and Foiaş [3] every c.n.u. bi-isometry is unitarily equivalent with a certain isometric pair on $H^2({\msbm T},{\cal E})$ ($\cal E$ is a Hilbert space) defined in terms of two operators on ${\cal E}$, $U$ unitary and $P$ orthogonal projection. It is our aim in this paper to characterize the structure of a double commuting c.n.u. bi-isometry related to the Wold-Słociński decomposition [10] in terms of a representative $\{U,P\} $ of its complete unitary invariant. Some results concerning the minimal unitary extension are also given.
A note on real parts of some semi-hyponormal operators
Muneo Cho, Tadasi Huruya, Young Ok Kim, Jun Ik Lee
Abstract. We have two typical examples of semi-hyponormal but not hyponormal operators. In this paper, we show that these examples have the following property: Re $\sigma(T) = \sigma(\mathop{\rm Re }T)$.
Received October 24, 1999, and in revised form April 19, 2000. (Registered under 2764/2009.)
Remarks, examples and spectral properties of generalized Toeplitz operators
Carmen H. Mancera, Pedro J. Paúl
Abstract. An operator $X\colon{\cal H}_1 \to{\cal H}_2$ is said to be a generalized Toeplitz operator with respect to given contractions $T_1$ and $T_2$ if $X=T_2XT_1^*$. The purpose of this line of research, started by Douglas, Sz.-Nagy and Foiaş, and Pták and Vrbová, is to study which properties of classical Toeplitz operators depend on their characteristic relation. Following this spirit, we give some clarifying examples and a new characterization of analytic Toeplitz operators that complement the work done by Pták and Vrbová, as well as some spectral properties of generalized Toeplitz operators that complement the work done by Sz.-Nagy and Foiaş. As a by-product we prove that the spectrum of a function $\phi\in H^\infty $ equals the approximate point spectrum of its Toeplitz operator.
Keyword(s): Toeplitz operators, spectral properties, minimal isometric dilation
Received December 21, 1998, and in revised form November 18, 1999. (Registered under 2765/2009.)
Composition operators with multivalent symbol
Rebecca G. Wahl
Abstract. If $\varphi $ is an analytic map of the unit disk $D$ into itself, the composition operator $C_{\varphi }$ on the Hardy space $H^2(D)$ is defined by $C_{\varphi}(f) = f\circ\varphi$. For a certain class of composition operators with multivalent symbol $\varphi$, we identify a subspace of $H^2(D)$ on which $C^*_{\varphi}$ behaves like a weighted shift. We reproduce the description of the spectrum found in [Kam75] and show for this class of composition operators that the interior of the spectrum is a disk of eigenvalues of $C^*_{\varphi}$ of infinite multiplicity.
Received February 9, 1999. (Registered under 2766/2009.)
Elementary operators. II
Matej Brešar, Lajos Molnár, Peter Šemrl
Abstract. The concept of an elementary operator between two algebras was recently introduced and this paper continues and extends the study of this concept. Elementary operators on some function algebras are computed. Jordan elementary operators are introduced and, in particular, their form on standard operator algebras is described.
AMS Subject Classification (1991): 47B47, 46E25, 16W99
The manifold of minimal partial isometries in the space ${\cal L}(H,K)$ of bounded linear operators
José M. Isidro
Abstract. Given a complex Hilbert space $X$ and the von Neumann algebra ${\cal L}(X)$, we study the Riemannian geometry of the manifold ${\cal P}(X)$ consisting of all minimal projections in ${\cal L}(X)$. To do it we take the Jordan--Banach triple approach (briefly, the JB$^*$-triple approach) because this setting provides a unifying framework for many other situations and simplifies the study previously made by other authors. We then apply this method to study the differential geometry of the manifold of minimal partial isometries in ${\cal L}(H, K)$, the space of bounded linear operators between the complex Hilbert spaces $H$ and $K$ with $\dim H \leq\dim K$.
AMS Subject Classification (1991): 17C36, 53C22
Keyword(s): Partial isometries, JB*-triples, Affine connections, Geodesics, Riemannian distance
Received July 5, 1999, and in revised form November 12, 1999. (Registered under 2768/2009.)
Asymptotic freeness almost everywhere for random matrices
Fumio Hiai, Dénes Petz
Abstract. Voiculescu's asymptotic freeness result for random matrices is improved to the sense of almost everywhere convergence. The asymptotic freeness almost everywhere is first shown for standard unitary matrices based on the computation of multiple moments of their entries, and then it is shown for rather general unitarily invariant selfadjoint random matrices (in particular, standard selfadjoint Gaussian matrices) by applying the first result to the unitary parts of their diagonalization. Bi-unitarily invariant non-selfadjoint random matrices are also treated via polar decomposition.
AMS Subject Classification (1991): 15A52, 62E20, 60F99
Keyword(s): random matrices, free probability, asymptotic freeness
|
CommonCrawl
|
SN 2016coi/ASASSN-16fp: An example of residual helium in a type Ic supernova? (1709.03593)
S. J. Prentice, C. Ashall, P. A. Mazzali, J.-J. Zhang, P. A. James, X.-F. Wang, J. Vinko, S. Percival, L. Short, A. Piascik, F. Huang, J. Mo, L.-M. Rui, J.-G. Wang, D.-F. Xiang, Y.-X. Xin, W.-M. Yi, X.-G. Yu, Q. Zhai, T.-M. Zhang, G. Hosseinzadeh, D. A. Howell, C. McCully, S. Valenti, B. Cseh, O. Hanyecz, L. Kriskovics, A. Pal, K. Sarneczky, A. Sodor, R. Szakats, P. Szekely, E. Varga-Verebelyi, K. Vida, M. Bradac, D. E. Reichart, D. Sand, L. Tartaglia
May 9, 2018 astro-ph.HE
The optical observations of the Ic-4 supernova (SN) 2016coi/ASASSN-16fp, from $\sim 2$ to $\sim450$ days after explosion, are presented along with analysis of its physical properties. The SN shows the broad lines associated with SNe Ic-3/4 but with a key difference. The early spectra display a strong absorption feature at $\sim 5400$ Å which is not seen in other SNe Ic-3/4 at this epoch. This feature has been attributed to He I in the literature. Spectral modelling of the SN in the early photospheric phase suggests the presence of residual He in a C/O dominated shell. However, the behaviour of the He I lines is unusual when compared with He-rich SNe, showing relatively low velocities and weakening rather than strengthening over time. The SN is found to rise to peak $\sim 16$ d after core-collapse, reaching a bolometric luminosity of $L_{\rm p} \sim 3\times10^{42}$ erg s$^{-1}$. Spectral models, including the nebular epoch, show that the SN ejected $2.5$--$4$ $M_\odot$ of material, with $\sim 1.5$ $M_\odot$ below 5000 km s$^{-1}$, and with a kinetic energy of $(4.5-7)\times10^{51}$ erg. The explosion synthesised $\sim 0.14$ $M_\odot$ of $^{56}$Ni. There are significant uncertainties in $E(B-V)_{\rm host}$ and the distance, however, which will affect $L_{\rm p}$ and $M_{\rm Ni}$. SN 2016coi exploded in a host similar to the Large Magellanic Cloud (LMC) and away from star-forming regions. The properties of the SN and the host galaxy suggest that the progenitor had $M_\mathrm{ZAMS}$ of $23$--$28$ $M_\odot$ and was stripped almost entirely down to its C/O core at explosion.
SN 2016X: A Type II-P Supernova with A Signature of Shock Breakout from Explosion of A Massive Red Supergiant (1801.03167)
F. Huang, X.-F. Wang, G. Hosseinzadeh, P. J. Brown, J. Mo, J.-J. Zhang, K.-C. Zhang, T.-M. Zhang, D.-A. Howell, I. Arcavi, C. McCully, S. Valenti, L.-M. Rui, H. Song, D.-F. Xiang, W.-X. Li, H. Lin, L.-F. Wang
Jan. 8, 2018 astro-ph.SR, astro-ph.HE
We present extensive ultraviolet (UV) and optical photometry, as well as dense optical spectroscopy, for the type II Plateau (IIP) supernova SN 2016X that exploded in the nearby ($\sim$ 15 Mpc) spiral galaxy UGC 08041. The observations span the period from 2 to 180 days after the explosion; in particular, the Swift UV data probably captured the signature of shock breakout associated with the explosion of SN 2016X. It shows very strong UV emission during the first week after explosion, with a contribution of $\sim$ 20 -- 30% to the bolometric luminosity (versus $\lesssim$ 15% for normal SNe IIP). Moreover, we found that this supernova has an unusually long rise time of about 12.6 $\pm$ 0.5 days in the $R$ band (versus $\sim$ 7.0 days for typical SNe IIP). The optical light curves and spectral evolution are quite similar to those of the fast-declining type IIP object SN 2013ej, except that SN 2016X has a relatively brighter tail. Based on the evolution of the photospheric temperature as inferred from the Swift data in the early phase, we derive that the progenitor of SN 2016X has a radius of about 930 $\pm$ 70 R$_{\odot}$. This large-size star is expected to be a red supergiant star with an initial mass of $\gtrsim$ 19 -- 20 M$_{\odot}$ based on the mass--radius relation of the Galactic red supergiants, and it represents one of the largest and most massive progenitors found for SNe IIP.
Massive stars exploding in a He-rich circumstellar medium - IX. SN 2014av, and characterization of Type Ibn SNe (1509.09069)
A. Pastorello, X.-F. Wang, F. Ciabattari, D. Bersier, P. A. Mazzali, X. Gao, Z. Xu, J.-J. Zhang, S. Tokuoka, S. Benetti, E. Cappellaro, N. Elias-Rosa, A. Harutyunyan, F. Huang, M. Miluzio, J. Mo, P. Ochner, L. Tartaglia, G. Terreran, L. Tomasella, M. Turatto
Nov. 13, 2015 astro-ph.SR
We present spectroscopic and photometric data of the Type Ibn supernova (SN) 2014av, discovered by the Xingming Observatory Sky Survey. Stringent pre-discovery detection limits indicate that the object was detected for the first time about 4 days after the explosion. A prompt follow-up campaign arranged by amateur astronomers allowed us to monitor the rising phase (lasting 10.6 days) and to accurately estimate the epoch of the maximum light, on 2014 April 23 (JD = 2456771.1 +/- 1.2). The absolute magnitude of the SN at the maximum light is M(R) = -19.76 +/- 0.16. The post-peak light curve shows an initial fast decline lasting about 3 weeks, and is followed by a slower decline in all bands until the end of the monitoring campaign. The spectra are initially characterized by a hot continuum. Later on, the temperature declines and a number of lines become prominent mostly in emission. In particular, later spectra are dominated by strong and narrow emission features of He I typical of Type Ibn supernovae (SNe), although there is a clear signature of lines from heavier elements (in particular O I, Mg II and Ca II). A forest of relatively narrow Fe II lines is also detected showing P-Cygni profiles, with the absorption component blue-shifted by about 1200 km/s. Another spectral feature often observed in interacting SNe, a strong blue pseudo-continuum, is seen in our latest spectra of SN 2014av. We discuss in this paper the physical parameters of SN 2014av in the context of the Type Ibn supernova variety.
|
CommonCrawl
|
Yi's Knowledge Base
#Nonparametric Methods #Statistics #Distribution
Other Single Sample Inferences
Nonparametric methods
Explore whether the sample is consistent with a specified distribution at the population level. Kolmogorov's test, Lilliefors test and Shapiro-Wilk test are introduced, as well as tests for runs or trends.
Previously we talked a lot about location inference, which is looking at the mean or median of the population distribution, or in fancier words, inferences about centrality. In this chapter we explore whether our sample is consistent with being a random sample from a specified (continuous) distribution at the population level.
Under $H_0$, the population distribution is completely specified with all the relevant parameters, such as a normal distribution with given mean and variance, or a uniform distribution with given lower and upper limits, or at least it's some specific family of distributions.
Kolmogorov's test
Kolmogorov's test is a widely used procedure. The idea is to look at the empirical CDF $S_n(x)$, which is a step function that has jumps of $\frac{1}{n}$ at the observed data points.
We mentioned before that as $n \rightarrow \infty$, it becomes "close" to the true CDF, $F(x) = P(X \leq x)$. So, $S_n(x)$ is a consistent estimator for $F(x)$.
If our data really are a random sample from the distribution $F(x)$, we should see evidence of that in $S_n(x)$: $F(x)$ and $S_n(x)$ should be "close". Under $H_0$, $F(x)$ is completely specified, so it's known. $S_n(x)$ is determined by the data, so it's known as well, which means we can compare them explicitly!
The logic of the test statistic is that if $x_1, x_2, \cdots, x_n$ are a sample from a population with distribution $F(x)$, then the maximum difference between the CDF under $H_0$ and the empirical CDF should be small. The larger the maximum difference is, the more evidence against $H_0$. It's generally a good idea to plot the empirical CDF together with the hypothesized one to see visually how close they are [1]:
The red line is our data $S_n(x)$, and the blue line is the hypothesized empirical distribution $F(x)$.
Our test statistic is the maximum vertical distance between $F(x)$ and $S_n(x)$, or
$$ T = \sup_x \left|F(x) - S_n(x) \right| $$
for a two-tailed alternative. Deriving the exact distribution in this case is much more complex. In R, the function ks.test() does the job. You have to specify what distribution to compare to, e.g. ks.test(dat, "pnorm", 2, 4) to test whether dat looks like a sample from a normal distribution with mean 2 and standard deviation 4.
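As a quick check of the mechanics, the statistic can also be computed by hand and compared against ks.test() (a minimal sketch; the sample dat and the hypothesized parameters are made up for illustration):

set.seed(1)
dat <- rnorm(30, mean = 2, sd = 4)      # toy sample
x <- sort(dat)
n <- length(x)
F0 <- pnorm(x, mean = 2, sd = 4)        # hypothesized CDF at the data points
# sup |F(x) - S_n(x)|: compare F0 to the empirical CDF just after and just before each jump
T_stat <- max(pmax(abs(F0 - (1:n) / n), abs(F0 - (0:(n - 1)) / n)))
T_stat
ks.test(dat, "pnorm", 2, 4)$statistic   # should agree with T_stat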
More typically, though, we won't know the values of the parameters that define the distribution. In other words, we have unknown parameters that need to be estimated. If we use the Kolmogorov test with estimated values (from the sample) of the parameters, the distribution of the test statistic $T$ changes.
Lilliefors test for normality
The Lilliefors test is a simple modification of Kolmogorov's test. We have a sample $x_1, x_2, \cdots, x_n$ from some unknown distribution $F(x)$. Compute the sample mean and sample standard deviation as estimates of $\mu$ and $\sigma$, respectively:
$$ \begin{gather*} \bar{x} = \frac{1}{n}\sum\limits_{i=1}^n x_i \\
S = \sqrt{\frac{1}{n-1} \sum\limits_{i=1}^n (x_i - \bar{x})^2} \end{gather*} $$
Use these to compute "standardized" or "normalized" versions of the data to test for normality:
$$ \begin{aligned}Z_i = \frac{x_i - \bar{x}}{S} && i = 1, 2, \cdots, n\end{aligned} $$
Compare the empirical CDF of the $Z_i$ to the CDF of $N(0, 1)$, as with the Kolmogorov procedure. Alternatively, use the original data and compare to $N(\bar{x}, S^2)$. Here $H_0$: random sample comes from a population with the normal distribution with unknown mean and standard deviation, and $H_1$: the population distribution is non-normal.
This is a composite test of normality (testing multiple things simultaneously). We can obtain the distribution of the test statistics via simulation. In R, we can use the function nortest::lillie.test().
We computed $\bar{x}$ and $S$ and used those as estimators for the normal mean and s.d. in the population; we then basically follow the Kolmogorov procedure with $\bar{x}$ and $S$.
Lilliefors vs. Kolmogorov - procedurally very similar, but the reference distribution for the test statistic changes because we estimate the population mean and standard deviation.
Lilliefors found this reference distribution by simulation in the late 1960s [2]. The idea was to generate random normal variates. For various values of sample size $n$, these random numbers are grouped into "samples". For example, if $n = 8$, a simulated sample of size 8 from $N(0, 1)$ (under $H_0$) is generated. The $Z_i$ values are computed as described earlier. The empirical CDF is compared to the $N(0, 1)$ CDF, and the maximum vertical discrepancy is found and recorded. Repeat this thousands of times to build up the simulated reference distribution for the test statistic under $H_0$ when $n=8$. Repeat for many different sample sizes. As the number of simulations increases for a given sample size, the approximation improves.
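That simulation is only a few lines of R (a sketch under the setup above; 10,000 replicates and $n = 8$ are illustrative choices):

lillie_stat <- function(n) {
  x <- rnorm(n)                      # simulated sample under H0
  z <- sort((x - mean(x)) / sd(x))   # standardized values
  F0 <- pnorm(z)                     # N(0, 1) CDF
  max(pmax(abs(F0 - (1:n) / n), abs(F0 - (0:(n - 1)) / n)))
}

set.seed(123)
ref <- replicate(10000, lillie_stat(8))  # reference distribution for n = 8
quantile(ref, 0.95)                      # simulated 5% critical value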
Test for the exponential
Let's look at a different example, the exponential. Our $H_0$: random sample comes from the exponential distribution
$$ F(x) = \begin{cases} 1 - e^{-\frac{x}{\theta}}, & x \geq 0 \\
0, & x < 0 \end{cases} $$
where $\theta$ is an unknown parameter, vs. $H_1$: distribution is not exponential. Another composite null. We can compute
$$ Z_i = \frac{x_i}{\bar{x}} $$
where we use $\bar{x}$ to estimate $\theta$. Consider the empirical CDF of $Z_1, Z_2, \cdots, Z_n$. Compare it to
$$ F(x) = 1 - e^{-x} $$
and find the maximum vertical distance between the two. This is the test statistic for the Lilliefors test for exponentiality. Tables for the exact distribution for this case exist, but not in general. The R package KScorrect tests against many hypothesized distributions.
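A hand-rolled version of this statistic might look like the following (a sketch; the input x is assumed to be a vector of positive observations):

lillie_exp_stat <- function(x) {
  z <- sort(x / mean(x))             # theta estimated by the sample mean
  n <- length(z)
  F0 <- 1 - exp(-z)                  # standard exponential CDF
  max(pmax(abs(F0 - (1:n) / n), abs(F0 - (0:(n - 1)) / n)))
}

set.seed(7)
lillie_exp_stat(rexp(25, rate = 2))  # toy data; compare to simulated critical values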
Another Test for Normality
The Shapiro-Wilk test is another important test for normality which is used quite often in practice. We again have a random sample $x_1, x_2, \cdots, x_n$ with unknown distribution $F(x)$. $H_0$: $F(x)$ is a normal distribution with unspecified mean and variance, vs. $H_1$: $F(x)$ is non-normal.
The idea essentially is to look at the correlation between the ordered sample values (order statistics from the sample) and the expected order statistics from $N(0, 1)$. If the null holds, we'd expect this correlation to be near 1. Smaller values are evidence against $H_0$. A Q-Q plot has the same logic as this test.
For the test more specifically:
$$ W = \frac{1}{D} \left[ \sum\limits_{i=1}^k a_i \left( x_{(n - i + 1)} - x_{(i)} \right) \right]^2 $$
$k = \frac{n}{2}$ if $n$ is even, otherwise $k = \frac{n-1}{2}$
$x_{(j)}$ are the order statistics for the sample
$a_j$ are the expected order statistics from $N(0, 1)$, obtained from tables
$D = \sum\limits_{i=1}^n (x_i - \bar{x})^2$
We may also see it written as
$$ W =\frac {\left[ \sum\limits_{i=1}^n a_i x_{(i)} \right]^2}{D} $$
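In practice the test is a one-liner in base R; on toy data:

set.seed(42)
shapiro.test(rnorm(50))   # large p-value expected: the data are normal
shapiro.test(rexp(50))    # small p-value expected: the data are strongly skewed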
With large samples, the chance of rejecting $H_0$ increases - even small departures from normality will be detected and will formally lead to rejecting $H_0$, even if the data are "normal enough" for practical purposes. Many parametric tests (such as the t-test) are pretty robust to departures from normality.
The takeaway here is to always think about what you're doing. Don't apply tests blindly - think about results, what they really mean, and how you will use them.
Runs or Trends
The motivation here is that many basic analyses make the assumption of a random sample, i.e. independent, identically distributed observations (i.i.d). When this assumption doesn't hold, we need a different analysis strategy (e.g. time series, spatial statistics, etc.) depending on the characteristics of the data.
Cox-Stuart test
When the data are taken over time (ordered in time), there may be a trend in the observations. Cox and Stuart [3] proposed a simple test for a monotonically increasing or decreasing trend in the data. Note that monotonic doesn't mean linear, but simply a consistent tendency for values to increase or decrease.
The procedure is based on the sign test. Consider a sample of independent observations $x_1, x_2, \cdots, x_n$. If $n = 2m$, take the differences
$$ x_{m+1} - x_1, x_{m+2} - x_2, \cdots, x_{2m} - x_m $$
If $n = 2m + 1$, omit the middle value $x_{m+1}$ and calculate $x_{m+2} - x_1, \cdots$.
If there is an increasing trend over time, we'd expect the observations earlier in the series will tend to be smaller, so the differences will tend to be positive, and vice versa if there is an decreasing trend. If there's no monotonic trend, the observations differ by random fluctuations about the center, and the differences are equally likely to be positive or negative.
Under $H_0$ of no monotonic trend, the $+/-$ signs of the differences are $Bin(m, 0.5)$. That's a sign test scenario!
Example: The U.S. Department of Commerce publishes estimates obtained for independent samples each year of the mean annual mileages covered by various classes of vehicles in the U.S. The figures for cars and trucks (in $1000$s of miles) for the years $1970-1983$ are:
Trucks 11.5 11.5 12.2 11.5 10.9 10.6 11.1
Is there evidence of a monotonic trend in each case?
$$ \begin{aligned} &H_0: \text{no monotonic trend} \\
\text{vs. } &H_1: \text{monotonic trend} \end{aligned} $$
We don't specify increasing or decreasing because we don't have that information, so it's a two-sided alternative.
For cars, all the differences are negative. When $X \sim Bin(7, 0.5)$, $P(X = 7) = 0.5^7 = 0.0078125$. We have a two-sided alternative, so we need to consider also $X=0$, which by symmetry has the same probability, so we get a p-value $\approx 0.0156$. This is reasonably strong evidence against $H_0$.
For trucks, we have 4 negative differences and 3 positive differences, which is supportive of $H_0$, in fact, the most supportive you could be with just 7 differences.
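The sign-test arithmetic for both series is quick to verify with binom.test(), the base-R exact binomial test:

# Cars: all 7 differences negative
binom.test(x = 7, n = 7, p = 0.5)$p.value   # two-sided p = 0.015625
# Trucks: 4 negative and 3 positive differences
binom.test(x = 4, n = 7, p = 0.5)$p.value   # p = 1: no evidence of a monotonic trend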
Runs test
Note that the sign test does not account for, or "recognize", the pattern in the signs for the trucks. There is evidence for some sort of trend, but since it's not monotonic, the sign test can't catch it. It also can't find periodic, cyclic, and seasonal trends, because it only counts the number of successes / failures. We need a different type of procedure.
One possibility is the runs test, which looks for patterns in the successes / failures. We're looking for patterns that may indicate a "lack of randomness" in the data. Suppose we toss a coin 10 times and see
$$ \text{H, T, H, T, H, T, H, T, H, T} $$
We'd suspect non-randomness because of the constant switching back and forth. Similarly, if we saw $$ \text{H, H, H, T, T, T, T, H, H, H} $$
We'd suspect non-randomness because of too few switches, or too "blocky".
For tests of randomness, both the numbers and lengths of runs are relevant. In the first case we have 10 runs of length 1 each, and in the second case we have 3 runs - one of length 3, followed by one of length 4, and another of length 3. Too many runs and too few are both indications of lack of randomness. Let
$$ \begin{aligned} &r\text{: the number of runs in the sequence} \\
&N \text{: the length of the sequence} \\
&m \text{: the number of observations of type 1 (e.g. H)} \\
&n = N - m \text{: the observations of type 2} \end{aligned} $$
Our hypotheses are $$ \begin{aligned} &H_0: \text{independence / randomness} \\
\text{vs. } &H_1: \text{not }H_0 \end{aligned} $$
We reject $H_0$ if $r$ is too big or too small. To get a handle on this, we need to think about the distribution of the number of runs for a given sequence length. We'd like to know
$$ P(R = r) = \frac{\text{# of ways of getting r runs}}{\text{total # of ways of arranging H/Ts}} $$
This is conceptually easy, but doing this directly would be tedious for even a moderate $N$. We can use combinatorics to work it out. The denominator is the number of ways to pick $m$ out of $N$: $\binom{N}{m}$. As for the numerator, we need to think about all the ways to arrange $m$ Hs and $n$ Ts to get $r$ runs in total:
$$ \begin{gather*} P(R = 2s + 1) = \frac{\binom{m-1}{s-1} \binom{n-1}{s} + \binom{m-1}{s} \binom{n-1}{s-1}}{\binom{N}{m}} \\
P(R = 2s) = \frac{2 \cdot \binom{m-1}{s-1} \binom{n-1}{s-1} }{\binom{N}{m}} \end{gather*} $$
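These formulas translate directly into R via choose() (a sketch; fine for small $m$ and $n$, where choose() conveniently returns 0 when an index is out of range):

# P(R = r) for a sequence of m type-1 and n type-2 observations
runs_prob <- function(r, m, n) {
  N <- m + n
  if (r %% 2 == 0) {                 # even number of runs: r = 2s
    s <- r / 2
    2 * choose(m - 1, s - 1) * choose(n - 1, s - 1) / choose(N, m)
  } else {                           # odd number of runs: r = 2s + 1
    s <- (r - 1) / 2
    (choose(m - 1, s - 1) * choose(n - 1, s) +
       choose(m - 1, s) * choose(n - 1, s - 1)) / choose(N, m)
  }
}

sum(sapply(2:10, runs_prob, m = 5, n = 5))   # sanity check: should be 1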
In principle, we can use these formulas to compute tail probabilities of events, and hence p-values, if $m$ and $n$ aren't too large (both $\leq 20$). We could run into numerical issues if this isn't the case, and computing the tail probabilities is tedious, so we also have a normal approximation:
$$ \begin{gather*} E(R) = 1 + \frac{2mn}{N} \\
Var(R) = \frac{2mn(2mn - N)}{N^2(N-1)} \end{gather*} $$
Asymptotically,
$$ Z = \frac{R - E(R)}{\sqrt{Var(R)}} \dot\sim N(0, 1) $$
We can still improve the approximation with a continuity correction: add $\frac{1}{2}$ to the numerator if $R < E(R)$ and subtract $\frac{1}{2}$ if $R > E(R)$.
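The large-sample test, continuity correction included, might be sketched as:

runs_test_normal <- function(r, m, n) {
  N <- m + n
  ER <- 1 + 2 * m * n / N
  VR <- 2 * m * n * (2 * m * n - N) / (N^2 * (N - 1))
  cc <- if (r < ER) 0.5 else if (r > ER) -0.5 else 0   # continuity correction
  z <- (r - ER + cc) / sqrt(VR)
  2 * pnorm(-abs(z))                                   # two-sided p-value
}

runs_test_normal(r = 3, m = 5, n = 5)   # few runs: evidence of clumping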
The overall question of interest is randomness, or the lack thereof, so the test is two-sided by nature. There are two extremes of run behavior:
clustering or clumping of types - a small number of long runs is evidence of clumping (one-sided).
alternating pattern of types - a large number of runs is evidence of an alternating pattern (again, from a one-sided perspective).
Runs test for multiple categories
We may also take a simulation-based approach. The goal is to find critical values, or p-values empirically based on simulation, rather than using the normal approximation.
The procedure is to generate a large number of random sequences of length $N$, with $m$ of type 1 events and $n$ of type 2 events (e.g. use R to generate a random sequence of 0's and 1's, the probabilities for $m$ and $n$ comes from the original data - essentially permuting the original sequence). Count the number of runs in each sequence, and this number is what we found for our test statistic based on the data. The generated data is what we expect if the null is reasonable. Gathering all of these together gives an empirical distribution for the number of runs you might expect to see in a sequence of length $N$ ($m$ of type 1, $n$ of type 2) if $H_0$ is reasonable.
If $N$ (hence also $m, n$) is small, we can compute the exact probabilities. Also, if $N$ is small or moderate, if you generate a lot of random sequences, you will see a lot of repeated sequences.
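A permutation version of the test for the two-category case could look like the following sketch (count_runs() uses rle(); 2,000 shuffles is an illustrative choice). Because sample() simply permutes the labels, the same code extends unchanged to more than two categories:

count_runs <- function(x) length(rle(x)$lengths)   # number of runs in a sequence

runs_test_perm <- function(x, B = 2000) {
  r_obs <- count_runs(x)
  r_sim <- replicate(B, count_runs(sample(x)))     # permute the observed sequence
  mean(abs(r_sim - mean(r_sim)) >= abs(r_obs - mean(r_sim)))  # two-sided empirical p-value
}

set.seed(1)
runs_test_perm(c("H","T","H","T","H","T","H","T","H","T"))   # alternating: small p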
What if we have more than 2 types of events? Smeeton and Cox [4] described a method for estimating the distribution of the number of runs by simulating permutations of the sequence of categories. Suppose we have $k$ different outcomes / events, and let $n_i$ denote the number of observations of type $i$. We have $N = \sum\limits_{i=1}^k n_i$ - the total length of the sequence, and $p_i = \frac{n_i}{N}$ - the proportion of observations of type $i$.
We can again use the simulation approach here: generate a lot of sequences of length $N$, with $n_1$ of type 1, $n_2$ of type 2, …, $n_k$ of type $k$, and count the number of runs in each sequence.
p-values: suppose we have $1000$ random sequences of length $N$, and the number of runs ranges from $5$ to $25$. From the $1000$ simulations, we record how many showed $5$ runs, $6$ runs, …, $25$ runs. If we observed $12$ runs in our data, one tail probability is $P(R \leq 12)$, and we find the probability for the other tail by symmetry (e.g. matching $(5, 6)$ on one tail with $(24, 25)$ on the other).
Normal approximation: Schuster and Gu [5] proposed an asymptotic test based on the normal distribution that makes use of the mean and variance of the number of runs: $$ \begin{gather*} E(R) = N \left( 1 - \sum\limits_{i=1}^k {p_i^2}\right) + 1 \\
Var(R) = N \left[ \sum\limits_{i=1}^k {\left(p_i^2 - 2p_i^3 \right)} + \left(\sum\limits_{i=1}^k {p_i^2} \right)^2\right] \end{gather*} $$
Use these in a normal approximation, as before:
$$ Z = \frac{R - E(R)}{\sqrt{Var(R)}} \dot\sim N(0, 1) $$
where $R$ denotes the number of runs observed in our sample. Barton and David [6] suggest that the normal approximation is adequate for $N > 12$, no matter what the number of categories is.
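A direct transcription of these moments (a sketch; counts is a vector of the category counts $n_1, \ldots, n_k$):

runs_test_multi <- function(r, counts) {
  N <- sum(counts)
  p <- counts / N
  ER <- N * (1 - sum(p^2)) + 1
  VR <- N * (sum(p^2 - 2 * p^3) + sum(p^2)^2)
  z <- (r - ER) / sqrt(VR)
  2 * pnorm(-abs(z))                 # two-sided p-value
}

runs_test_multi(r = 12, counts = c(8, 7, 5))   # toy example with k = 3 categories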
So far we've been talking about inferences on single samples. Next we'll take a step further and discuss paired samples.
[1] Below is the R code for generating the plot:
library(tidyverse)
set.seed(42)
tibble(
  Type = c(rep("Empirical", 1000), rep("Data", 10)),
  Value = c(
    round(rnorm(1000, mean = 60, sd = 15)),   # large sample approximating F(x)
    round(rnorm(10, mean = 60, sd = 15))      # the "observed" data for S_n(x)
  )
) %>%
  ggplot(aes(Value, color = Type)) +
  stat_ecdf(geom = "step") +
  theme_minimal()
[2] Lilliefors, H. W. (1967). On the Kolmogorov-Smirnov test for normality with mean and variance unknown. Journal of the American Statistical Association, 62(318), 399-402.
[3] Cox, D. R., & Stuart, A. (1955). Some quick sign tests for trend in location and dispersion. Biometrika, 42(1/2), 80-95.
[4] Smeeton, N., & Cox, N. J. (2003). Do-it-yourself shuffling and the number of runs under randomness. The Stata Journal, 3(3), 270-277.
[5] Schuster, E. F., & Xiangjun, G. (1997). On the conditional and unconditional distributions of the number of runs in a sample from a multisymbol alphabet. Communications in Statistics-Simulation and Computation, 26(2), 423-442.
[6] Barton, D. E., & David, F. N. (1957). Multiple runs. Biometrika, 44(1/2), 168-178.
|
CommonCrawl
|
BMC Health Services Research
Active and adaptive case finding to estimate therapeutic program coverage for severe acute malnutrition: a capture-recapture study
Sheila Isanaka ORCID: orcid.org/0000-0002-4503-28611,2,
Bethany L. Hedt-Gauthier3,4,
Halidou Salou5,
Fatou Berthé6,
Rebecca F. Grais2 &
Ben G. S. Allen7
BMC Health Services Research volume 19, Article number: 967 (2019) Cite this article
Coverage is an important indicator to assess both the performance and effectiveness of public health programs. Recommended methods for coverage estimation for the treatment of severe acute malnutrition (SAM) can involve active and adaptive case finding (AACF), an informant-driven sampling procedure, for the identification of cases. However, as this procedure can yield a non-representative sample, exhaustive or near exhaustive case identification is needed for valid coverage estimation with AACF. Important uncertainty remains as to whether an adequate level of exhaustivity for valid coverage estimation can be ensured by AACF.
We assessed the sensitivity of AACF and a census method using a capture-recapture design in northwestern Nigeria. Program coverage was estimated for each case finding procedure.
The sensitivity of AACF was 69.5% (95% CI: 59.8, 79.2) and 91.9% (95% CI: 85.1, 98.8) with census case finding. Program coverage was estimated to be 40.3% (95% CI 28.6, 52.0) using AACF, compared to 34.9% (95% CI 24.7, 45.2) using the census. Depending on the distribution of coverage among missed cases, AACF sensitivity of at least ≥70% was generally required for coverage estimation to remain within ±10% of the census estimate.
Given the impact incomplete case finding and low sensitivity can have on coverage estimation in potentially non-representative samples, adequate attention and resources should be committed to ensure exhaustive or near exhaustive case finding.
ClinicalTrials.gov ID NCT03140904. Registered on May 3, 2017.
Program coverage is a measure of how many individuals in need are receiving treatment or an intervention. It is an important indicator to assess the performance of public health programs and is essential to inform program planning and prioritization of limited resources. Coverage, combined with program effectiveness, is critical to assess how many of those in need are accessing treatment or prevention activities and achieving the desired outcome.
In the management of severe acute malnutrition (SAM), several practical methods for treatment coverage estimation have been proposed [1] that identify cases using active and adaptive case finding (AACF). AACF is an informant-driven sampling method that yields a sample of individuals who possess specific characteristics and have been referred by others, starting with a "seed" or key informant(s) to begin the referral chain [1, 2]. Similar methods have commonly been used when sampling hard-to-reach populations such as injection drug users [3] or men who have sex with men [4]. When sampling children with SAM, AACF has two advantages: it is active and therefore does not rely on cases self-presenting as in central point sampling, thus avoiding cases not arriving due to stigma associated with the illness, distance or other factors [5]; and it is efficient as only houses of suspected cases, not all houses, in a sampling area are visited. AACF is particularly suitable for conditions with symptoms that can be visibly identified and that are rare and therefore require a larger sampling area in order to reach an adequate sample size. However, as this method can yield a non-representative sample, AACF should be exhaustive or nearly exhaustive to yield valid estimates of coverage [1].
Although practical guidelines have been proposed to indicate when sample exhaustion has been reached during AACF [1, 2], there is uncertainty around whether the method can ensure an adequate exhaustivity in operational settings [6]. Debate surrounding the practical validity of the case finding method thus remains. To inform the continued use of AACF in the estimation of SAM treatment coverage, we assessed the exhaustivity of AACF to identify SAM cases in northwestern Nigeria.
Study setting
This study was conducted in the Wamako Local Government Area in Sokoto State of northwestern Nigeria in 2017. The region is characteristic of the rural Sahel and has a stable population with a high burden of acute malnutrition (global acute malnutrition: 10.4% (95% CI: 7.5, 14.2%) in 2015 [7]). From 2013 to 2017, International Medical Corps supported the Sokoto State Ministry of Health to deliver treatment of uncomplicated SAM at five outpatient centers, with community surveillance and outreach teams in approximately 430 villages (average village size: 483 people) [8].
For this study, we defined sensitivity as the probability of a sampling method to correctly identify a child in the community that has SAM or is recovering from SAM. We assessed the sensitivity of AACF using a capture-recapture design [9, 10]. Capture-recapture designs were first used in ecological studies to estimate animal populations [10] and have more recently been applied to assess the total case population of health conditions using two independent sources, such as two disease registers [10, 11]. In a capture-recapture study, two case finding methods are used to determine the size of the total case population, and with that information, the sensitivity of each case finding method can be estimated [9]. In this study, AACF was compared to a census method where all households were visited and all children 6–59 months screened on sequential days. While often considered to be a gold standard, the census method may miss cases, for example due to routine absence from the household on the day of recapture. The capture-recapture study design does not require that either case finding procedure find 100% of cases to estimate the total number of cases in the study population or method-specific sensitivity [9]. However to be valid, the capture-recapture study must adhere to five assumptions: closed population; ability to perfectly match cases captured in both methods; in both methods, perfect classification (perfect diagnosis of SAM and coverage status); within a method, any child with SAM has equal probability of capture; and independence of capture between methods [9,10,11] (Table 1).
Table 1 Descriptions of the assumptions underlying AACF and study procedures to reduce potential violations
Current operational guidance on the use of capture-recapture studies to validate SAM case finding recommends that the estimated number of cases found in both samples be greater than seven and the number of total cases found across both samples be greater than the estimated SAM population [9, 12]. A priori, we estimated AACF would capture 40% of SAM cases (sensitivity of AACF = 40%) and that a census would capture 80% of SAM cases (sensitivity of census = 80%). This would require 24 SAM cases to exist to meet the first condition. Assuming an average village size of 483 [8], a SAM prevalence of 2.7% [7] and the proportion of children aged 6-59 months to be 20% of the population [7], nine villages were estimated to be necessary to identify 24 SAM cases. Given the time and resources available, 15 villages were ultimately sampled to be sure that the minimum sample size would be reached for the first condition above.
Study procedures
AACF method
Prior to case finding, a SAM screening definition was developed using qualitative methods [2] [see Additional file 1]. Semi-structured interviews were first conducted in four villages. An interview guide was used to identify context-specific terms related to SAM, which were triangulated and used to devise a screening definition. This screening definition was then iteratively tested and revised with new information over three days until no new information was found. The resulting screening definition included terminology in two local languages (Hausa and Fulani) to describe the signs and symptoms of SAM as well as associated illnesses. Stigmatizing terms were identified to ensure they were avoided, and teams were aware if used by informants. Information on local beliefs about the etiology, health-seeking behaviors and the types of individuals with knowledge about children with SAM were also collected. This additional information was collected to allow enumerators to target individuals during case finding that would be more knowledgeable about the location of SAM cases.
During case finding, the context-specific screening definition iteratively developed for the study, as well as photos of malnourished children and packets of ready-to-use therapeutic food (RUTF) used for the treatment of SAM, were presented in each sampled village to help orient key informants towards suspected SAM or recovering cases. Key informants included traditional birth attendants, village leaders, caregivers, grandmothers, traditional healers, community nutrition volunteers, children and health center staff. The houses of all suspected cases were visited for individual evaluation. In each household, a brief household interview was conducted to ensure no child aged 6–59 months was sleeping or absent. All children present were assessed for SAM, defined as mid-upper arm circumference (MUAC) < 11.5 cm and/or bilateral pitting edema (Table 2). To identify recovering cases in the household, caregivers were asked if any child was undergoing treatment for SAM. RUTF sachets were presented to confirm enrollment. All identified SAM and recovering cases were confirmed to be resident in the village, and if so, name, age and sex were recorded to facilitate matching between case finding methods. Any identified SAM case not undergoing treatment was referred to the nearest outpatient center for treatment. In this study, AACF was considered exhaustive when teams were referred back to two cases already identified and all areas of the village had been visited.
Table 2 Case definitions used during case finding and coverage estimation [13]
Census method
During census case finding, the survey teams systematically visited each household in the village. Following the same household-level procedures as AACF, a household census was completed to identify all children 6–59 months of age, and all children present were evaluated using the standard case definition for SAM and recovering cases (Table 2). Census case finding was considered exhaustive when all households in the sampled village were visited.
Sensitivity was calculated as the proportion of all SAM and recovering cases that were correctly identified as such and was estimated using the Chapman modification to the Lincoln-Petersen estimator [14]. The numerator was defined as the number of cases found using each method, and the denominator as the total case population (N). The total case population (N) was estimated using Eq. 1 below [9, 14] with the observed numbers of cases identified using each method (a, b, and c in Table 3).
Table 3 2 × 2 table showing types of cases found in both samples
$$ N=\frac{\left(a+b+1\right)\times \left(a+c+1\right)}{a+1}-1 \tag{1} $$
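As a numeric check of Eq. 1, a minimal sketch (here a = cases found by both methods, b = cases found only by AACF, c = cases found only by the census, and method sensitivity is taken as the number of cases a method found divided by the estimated total; the input values come from Table 4 in the Results):

chapman <- function(a, b, c) {
  N <- (a + b + 1) * (a + c + 1) / (a + 1) - 1   # estimated total case population
  list(N = N,
       sens_aacf   = (a + b) / N,                # AACF found a + b cases
       sens_census = (a + c) / N)                # census found a + c cases
}

chapman(a = 52, b = 7, c = 23)   # N ~ 85; AACF sensitivity ~ 0.69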
SAM treatment coverage was estimated using each case finding procedure according to current guidelines [1, 15]. To better understand the influence of the AACF sensitivity on coverage estimation, in a sensitivity analysis we calculated program coverage at varying levels of AACF sensitivity and distribution of coverage among missed cases and compared to coverage estimated using the census method.
In our study 59 SAM and recovering cases were found using AACF and 75 were found using the census method. Of those cases, 52 were found using both case finding methods, seven were found using only AACF, and 23 found only using the census method (Table 4). Three children were not found by either method. From this, we estimate the total SAM and recovering case population size across the 15 sampled villages to be 85. The sensitivity of our AACF method was 69.5% (95% CI: 59.8, 79.2) and for the census method was 91.9% (95% CI: 85.1, 98.8). The estimated SAM treatment coverage was 40.3% (95% CI 28.6, 52.0) using AACF and 34.9% (95% CI 24.7, 45.2) using the census method.
Table 4 Cases found during active and adaptive and census case finding
In sensitivity analyses, we found that AACF yielded coverage estimates very similar to that produced using the census method when either the AACF method had high sensitivity (e.g. 90–100%) or when the program coverage in the cases missed by AACF was approximately the same as the overall coverage of 34.9% (Table 5). In our study, six out of the 23 (26%) cases not found by AACF were covered by the program. This resulted in a non-significant over-estimate of coverage in this example (40.3% with AACF vs. 34.9% with census).
Table 5 Estimated coverage by sensitivity of AACF and corrected for the unobserved coverage of missed cases
AACF has been proposed as an efficient case finding method to estimate SAM treatment coverage. In this study, we estimated the sensitivity of AACF to be 69.5% (95% CI: 59.8, 79.2), or more specifically that AACF as applied in this study correctly identified approximately 7 out of 10 SAM and recovering cases.
Field-friendly approaches for obtaining coverage estimates are now available to help nutrition program managers directly measure treatment coverage [1]. These methodologies allow for routine assessment by program staff and support community engagement through participatory methods [16]. The current operational guidance on these methods for SAM coverage estimation introduces various case finding procedures. Selection of the most appropriate procedure is necessarily context-dependent, but in practice, AACF is often considered the default method. However, as AACF is an informant-driven procedure and may yield non-representative samples, AACF case finding should be exhaustive or nearly exhaustive to produce valid SAM coverage estimates. The operational guidance specific to AACF suggests 75% sensitivity to be adequate but offers limited guidance to know exactly when this has been achieved [9]. Notwithstanding implementation of a parallel capture-recapture study to measure sensitivity, guidance suggests simply that "sampling stops only when you are sure that you have found all SAM cases in the community" and "case-finding was considered to be exhaustive when no new leads to potential cases were forthcoming and when information given by different sources (e.g., key informants and carers) identified children that had already been seen by the team" [1]. In this study, we applied a stricter definition of exhaustivity, which required teams to be referred back to cases already identified at least two times, and significant resources were committed to support exhaustivity, including iterative development of a sound case definition and appropriate training of enumerators to support complete case identification.
Despite these efforts, the AACF missed a total of 26 of a potential 85 cases (30.6%), including 23 cases found using census and an estimated three cases found by neither method. The missed cases lowered AACF sensitivity below the level suggested to be acceptable by operational guidance (75%) [9]. There is little published evidence that quantifies AACF sensitivity in the context of SAM treatment; however, a report of capture-recapture studies (2003–2011), including six comparing AACF to house-to-house case finding and 17 to a central location screening method, showed sensitivities of above 75% in 20/23 (87%) studies [17]. The authors of that report acknowledge that surveys analyzed were provided from early adopters of the coverage methodology, and that subsequent results using procedures locally adapted from these early studies in other settings may not replicate these findings.
The impact of incomplete case finding (e.g. low sensitivity) on coverage estimation is not well understood, and incomplete case-finding could result in bias in either direction depending on the distribution of coverage among missed cases. Sensitivity analyses suggested that case finding should generally have a sensitivity of ≥70% in order to avoid bias of more than 10% in coverage estimation, depending on the distribution of coverage among missed cases. Program managers using AACF should consider the resources and technical capacity needed to ensure such case finding sensitivity can be achieved for valid coverage estimation and consider alternative methods (e.g. census) if necessary.
This study has a number of strengths. First, the sample size ensured greater precision to estimate sensitivity and coverage estimates. Second, careful planning was made to ensure that the five assumptions underlying the capture-recapture design were adhered to (Table 1). For example, four individual-level identifiers were collected from confirmed cases in order to allow cases in both samples to be effectively matched. A well-developed and tested local case definition ensured key informants were able to orientate enumerators towards SAM and recovering cases. The same objective case definition was applied during both case finding methods and survey enumerators were trained, standardized and supervised in anthropometric assessment, assuring a correct and equal diagnosis in both methods. To maintain independence of capture between methods, the census method systematically assessed all households in a sampled village, irrespective of case finding results using AACF the previous day. Finally, efforts were made by teams to ensure each child had an equal risk of being captured, for example by finding the child if absent from the household but known to be in the village.
Despite using the same village boundaries, avoiding known market and treatment days and encouraging carers of cases to remain at home the following day, we were unable to guarantee a perfectly "closed population" to ensure the same population was present during both samples. Violation of the assumption of a closed population meant that seven cases were found during AACF and not during the census method the following day, and an additional seven cases were absent from the village during AACF. The direction of bias in the coverage point estimate due to such missed cases depend on the distribution of coverage among these children. In future use of AACF, absent cases could be reduced by informing village authorities and carers of children aged 6–59 months to stay at home between certain hours when the survey team were to visit.
With limited operational guidance on how to define and achieve exhaustivity, future coverage assessments that use AACF should take care to develop a strong screening definition and ensure exhaustivity by all reasonable measures. This may require dedicating additional personnel to each village during case finding, communicating with village leaders prior to arriving in the village and applying strict criteria to determine when exhaustivity has been reached, such as continuing case finding until re-directed to several cases already found that day. If there is any doubt that the sensitivity of AACF is adequate, a census method, such as door-to-door sampling, might also be considered, as recommended in the operational guidance [1]. In this study, 15 days were needed to complete AACF and 14.5 days to complete census case finding. As such, a census may not present substantially greater logistical or financial burden. We further note that AACF requires the development and testing of a local screening definition, and in diverse study populations, this process may need to be repeated among different sub-groups that speak different languages or represent different socio-cultural contexts. In such settings, the census method, which does not require context-specific adaptations, may offer comparative efficiency. In contrast, in a large homogenous population where the same screening definition could reasonably be used for case finding across many villages, AACF may prove to be a more efficient approach than a systematic census. These results may apply to assessing coverage of SAM treatment in other rural settings, though AACF is still not recommended for assessing coverage of moderate acute malnutrition treatment (where cases are less recognizable and may not be readily identifiable by key informants), or in urban or camp settings where community cohesion may be limited and key informants may not be aware of incident cases [18].
Given the impact incomplete case finding and low sensitivity can have on coverage estimation in potentially non-representative samples, adequate resources and capacity should be committed to ensure exhaustive or near exhaustive case finding.
The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
AACF:
Active and adaptive case finding
MUAC:
Mid-upper arm circumference
OTP:
Outpatient therapeutic program
RUTF:
Ready-to-use therapeutic food
SAM:
Severe acute malnutrition
SQUEAC:
Semi-quantitative evaluation of access and coverage
Myatt M, Sadler K. Semi-Quantitative Evaluation of Access and Coverage (SQUEAC)/ Simplified Lot Quality Assurance Sampling Evaluation of Access and Coverage (SLEAC) Technical Reference. 2012;(October):1–241. Available from: www.fantaproject.org
Myatt M, Woodhead S. Developing an active and adaptive case-finding procedure for use in coverage assessments of therapeutic feeding programs [internet]. 2016. Available from: http://www.coverage-monitoring.org/wp-content/uploads/2016/01/Developing-an-active-and-adaptive-case-finding-procedure-for-use-in-coverage-assessments-of-therapeutic-feeding-programs.pdf
Thompson SK, Collins LM. Adaptive sampling in research on risk-related behaviors. Drug Alcohol Depend. 2002;68:57–67.
Kendall C. An empirical comparison of respondent-driven sampling, time location sampling, and snowball sampling for behavioural surveillance in men who have sex with men, Fortaleza, Brazil. AIDS Behav. 2008;12:S97–104.
Bliss JR, Njenga M, Stoltzfus RJ, Pelletier DL. Stigma as a barrier to treatment for child acute malnutrition in Marsabit County, Kenya. Matern Child Nutr. 2016;12(1):125–38.
Epicentre. Open Review of Coverage Methodologies: Questions, comments and ways forward. 2015. Available from: http://www.coverage-monitoring.org/wp-content/uploads/2015/03/Open-Review-of-Coverage-Methodologies-Questions-Comments-and-Way-Forwards.pdf
International Medical Corps. SMART Nutrition and Mortality Survey Report in Wamakko and Binji. 2015;
Sokoto State Ministry of Health. Nigeria National Population Commission Census. 2006.
Myatt M, Wegerdt J, Zanchettin M. Using capture-recapture studies to investigate the performance of case-finding procedures. 2016; Available from: http://www.coverage-monitoring.org/wp-content/uploads/2016/01/Developing-an-active-and-adaptive-case-finding-procedure-for-use-in-coverage-assessments-of-therapeutic-feeding-programs.pdf
Hook EB, Regal RR. Capture-recapture methods in epidemiology: methods and limitations. Epidemiol Rev. 1995;17(2):243–64.
Tilling K. Capture-recapture methods - useful or misleading? Int J Epidemiol. 2001;30(1):12–4.
Seber GAF. The effects of trap response on tag recapture estimates. Biometrics. 1970;26(1):13–22.
Nigerian Federal Ministry of Health, Family Health Department ND. National Guidelines for Community Management of Acute Malnutrition. 2011.
Chapman D. Some properties of the hypergeometric distribution with applications to zoological sample censuses. Berkeley: University of California Press; 1951. p. 131–59.
Balegamire BS, Siling K, Alvarez Moran J-L, Guevarra E, Woodhead S, Norris A, et al. A single coverage estimator for use in SQUEAC, SLEAC, and other CMAM coverage assessments. Field Exchange. 2015;(49). Available from: https://www.ennonline.net/fex/49/singlecoverage.
Blanárová L, Rogers E, Magen C, Woodhead S. Taking severe acute malnutrition treatment Back to the community: practical experiences from nutrition coverage surveys. Front Public Heal. 2016;4(September):1–5.
Myatt M, Fieschi L, Ouma C, Guevarra E, Emary C. A review of historical data on the case-finding sensitivity of active and adaptive case-finding procedures for severe acute malnutrition; 2016.
Guerrero S, Kyalo K, Yishak Y, Kirichu S, Sebinwa U, Norris A. Debunking urban myths : access & coverage of SAM-treatment programmes in urban contexts. Field Exhange [Internet]. 2013;(46) Available from: https://www.ennonline.net/fex/46/debunking.
We sincerely thank Mark Myatt for his careful review of the methods and early version of the manuscript.
No specific funding was provided for preparation of this manuscript.
Department of Nutrition and Global Health and Population at Harvard School of Public Health, Boston, USA
Sheila Isanaka
Department of Research, Epicentre, Paris, France
Rebecca F. Grais
Department of Global Health and Social Medicine (Harvard Medical School), Boston, USA
Bethany L. Hedt-Gauthier
Department of Biostatistics (Harvard School of Public Health), Boston, USA
Epicentre Niger, Maradi, Niger
Halidou Salou
Epicentre Nigeria, Sokoto, Nigeria
Fatou Berthé
Technical Rapid Response Team and International Medical Corps, Washington, DC, USA
Ben G. S. Allen
SI and BGSA contributed to the conception and design of the study, analysis and interpretation of data, and drafted the manuscript. BHG contributed to the analysis and interpretation of data and critically reviewed the manuscript. RG contributed to the conception of the study, interpretation of data, and critically reviewed the manuscript for important intellectual content. HS and FB contributed to the design of the study and critically reviewed the manuscript. All authors read and approved the final manuscript.
Correspondence to Sheila Isanaka.
Ethics approval was provided by the Harvard T.H. Chan School of Public Health and the Sokoto State Ministry of Health.
Additional file 1. Supplementary Methods Appendix 1
Isanaka, S., Hedt-Gauthier, B.L., Salou, H. et al. Active and adaptive case finding to estimate therapeutic program coverage for severe acute malnutrition: a capture-recapture study. BMC Health Serv Res 19, 967 (2019) doi:10.1186/s12913-019-4791-9
Active and adaptive
Case finding
Capture recapture
SQUEAC
Therapeutic feeding program
Community-based management of acute malnutrition
Quality, performance, safety and outcomes
JPL ephemerides: effect of Saturn, Uranus and Neptune
According to Folkner et al. (2014, The Planetary and Lunar Ephemerides DE430 and DE431, IPN Progress Report 42-196, February 15, 2014), the JPL ephemerides consider the following effects:
The modeled accelerations of bodies due to interactions of point masses with the gravitational field of nonspherical bodies include: (a) the interaction of the zonal harmonics of the Earth (through fourth degree) and the point mass Moon, Sun, Mercury, Venus, Mars, and Jupiter; (b) the interaction between the zonal, sectoral, and tesseral harmonics of the Moon (through sixth degree) and the point mass Earth, Sun, Mercury, Venus, Mars, and Jupiter; (c) the second-degree zonal harmonic of the Sun (J2) interacting with all other bodies.
However, the JPL Horizons website states that the effects of all 8 planets are considered.
Question: Do JPL Horizons ephemerides consider the effects of Saturn, Uranus and Neptune?
jpl-horizons ephemeris
Leeloo
ipnpr.jpl.nasa.gov/progress_report/42-196/196C.pdf appears to be the link if anyone wants to read the original document – barrycarter Oct 6 '19 at 15:07
This text appears in a very specific section of the document titled "Point Mass Interaction with Extended Bodies". Because the Earth is nonspherical (it's closer to an ellipsoid), the Moon's gravity is stronger where the Moon is closer, so the lunar gravitational effect can't be modeled by treating the Earth and Moon as point masses. Apparently, they extend this all the way out to Jupiter, but not as far as Saturn. I'm guessing that Saturn is far enough away that it can be treated as a point mass. For the general orbits, all 8 planets + Pluto (+ more) are considered. – barrycarter Oct 6 '19 at 15:13
For the positions, as page 2 notes, "Perturbations from 343 asteroids have been included in the dynamical model.", so it's much more complete. – barrycarter Oct 6 '19 at 15:16
@barrycarter I didn't understand. For the positions they take the planets as point masses, but up to Jupiter they also take the oblateness into account? – Leeloo Oct 6 '19 at 15:42
This is a really interesting discussion! In this answer I included very approximate corrections for non-point-mass-to-point-mass effects. I only selectively "turned on" the Sun's J2 for Mercury, and the Earth's J2 for the Moon, because my numerical accuracy was low, but these references discuss turning on more J2's and also some higher-order multipole moments. – uhoh Oct 7 '19 at 1:01
The equations of motion for how bodies move in the Solar System, which are then fitted to the observational data of positions and ranges to produce the ephemeris, involve a nested set of terms that account for increasingly subtle and smaller effects.
As detailed in the documentation for DE430 and DE431 and the introduction in Section III these are:
the basic N-body gravitational attraction between all the bodies, treated as point-masses
the effects of the non-spherical oblateness of the Sun (its figure as it is described) on the other bodies of the Solar System
the effects of the static non-spherical shape of the Earth and the Moon on each other and on the planets Mercury to Jupiter
the effects of the time-varying shape (tides) raised on the Earth by the Sun and the Moon back on the Moon's orbit.
For 1., this is a generalized version of the classical two-body force/acceleration $F=\frac{Gm_1m_2}{r^2}$ (e.g. as in these course notes), extended to include multiple (N) bodies (Newtonian N-body equations of motion) and generalized beyond Newtonian gravity so that general relativity can be included (the so-called parametrized post-Newtonian (PPN) metric). The acceleration on a particular body is summed over everything else: the Sun, the Moon, the planets Mercury through Pluto and the 343 largest asteroids. So this is where the statement you quote comes from: all the planets (plus more) are included in the basic equations of force/acceleration.
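To make the point-mass term concrete, here is a minimal Python sketch of item 1 only (pure Newtonian pairwise attraction, no PPN corrections, no figure effects; the two-body values below are rough illustrative numbers, not the actual DE430 setup):

    import numpy as np

    G = 6.674e-11  # gravitational constant, SI units

    def point_mass_accelerations(positions, masses):
        # Newtonian acceleration on each body from all the others,
        # treating every body as a point mass.
        n = len(masses)
        acc = np.zeros_like(positions)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                r = positions[j] - positions[i]  # vector from body i to body j
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
        return acc

    # Rough Sun-Earth example; Earth's acceleration comes out near 5.9e-3 m/s^2
    pos = np.array([[0.0, 0.0, 0.0], [1.496e11, 0.0, 0.0]])
    m = np.array([1.989e30, 5.972e24])
    print(point_mass_accelerations(pos, m))

The real integrator sums this kind of term over the Sun, the Moon, the nine major bodies and the 343 asteroids mentioned above, with the relativistic PPN corrections added on top.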
In addition to the basic equations from 1., the effects of non-spherical, non-point-mass bodies are included, as detailed in Section III B and quoted in your question. These effects are:
the non-spherical Earth (up to 4th degree in the spherical harmonics expansion of the non-spherical Earth) on the Moon, the Sun, the planets Mercury - Jupiter (all treated as point masses)
the non-spherical Moon (up to 6th degree) on the Earth, the Sun, the planets Mercury - Jupiter (all treated as point masses)
the effect of the second order oblateness of the Sun on everything else
These effects are going to be much smaller than the main gravitational effect from 1. For example, we very rarely need to take into account the $J_2$ effect of the Earth when calculating the effects on Near Earth Object trajectories, and this is the largest of the non-spherical effects (the higher harmonics are weaker still). An additional issue is that we don't have very good gravity data that would reveal higher harmonics for the outer planets, as this can normally only be measured by close-orbiting spacecraft, and Uranus, Neptune and Pluto have only received brief, distant flybys. (I suspect additional gravity data may be coming out for Saturn based on the 'Grand Finale' orbits of the Cassini spacecraft, but this is likely still being worked on, based on these abstracts.)
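For reference, these oblateness terms come from the standard zonal-harmonic expansion of a body's external potential; keeping only the dominant $J_2$ term,

$$U(r,\phi) = -\frac{GM}{r}\left[1 - J_2\left(\frac{R}{r}\right)^2 \frac{3\sin^2\phi - 1}{2}\right],$$

where $R$ is the body's equatorial radius and $\phi$ the latitude of the perturbed body. The $(R/r)^2$ factor shows why these corrections die off quickly with distance, and hence why the distant giant planets can safely be treated as point masses in the sum above.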
astrosnapper
great thorough answer! fyi I've just asked Have gravitation multipoles of Jupiter and Saturn beyond J2 been measured or at least estimated? – uhoh Oct 8 '19 at 8:24
Elliptic curve point addition Python
And finally, here are the two functions to compute negation and addition on the elliptic curve. The addition function is based directly on the formulas you gave (after correcting the sign of Z.y), makes use of inv_mod_p to perform the divisions modulo p, and does a final reduction modulo p for the computed x and y coordinates of P+Q.

Addition of two points on an elliptic curve over the field of real numbers: to find the coordinates of the third point of intersection, simply calculate the slope between P and Q, and extrapolate it using the general equation of the elliptic curve.

Elliptic curve point addition over a finite field in Python: from the Wikipedia page https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplication and my textbook (Information Security, by Mark Stamp), I came up with the following code, which breaks off mid-function: def point_add(N_x, N_y, Q_x, Q_y, p): m = (Q_y - N_y) * pow((Q_x - N_x), p-2, p); ret_x = (m ** 2 - N_x - Q_x) ... (a possible completion is sketched below).

So now we have a concrete understanding of the algorithm for adding points on elliptic curves, and a working Python program to do this for rational numbers or floating point numbers (if we want to deal with precision issues). Next time we'll continue this train of thought and upgrade our program (with very little work!) to work over other simple number fields. Then we'll delve into the cryptographic issues, and talk about how one might encode messages on a curve.

A typical small library exposes: mul(point, scalar) - returns the point multiplied by a scalar; add(point, point) - returns the addition of two points; inv(point) - returns the inverse of a point; valid(point) - returns 1 if the point is on the curve, 0 otherwise; compress(point) - returns 33 bytes: the sign of the Y coordinate (0x02 or 0x03) and the X coordinate (32 bytes); decompress(bytestring33) - returns the unpacked point.
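The fragment above stops mid-function. A minimal completion for the distinct-point case (P ≠ Q and P ≠ -Q) might look like the following; the names and argument order follow the fragment, and the use of pow(x, p-2, p) as a modular inverse assumes p is prime (Fermat's little theorem):

    def point_add(N_x, N_y, Q_x, Q_y, p):
        # Slope of the secant line through N and Q; division mod p is
        # done as multiplication by the modular inverse (valid for prime p).
        m = (Q_y - N_y) * pow(Q_x - N_x, p - 2, p)
        ret_x = (m ** 2 - N_x - Q_x) % p
        ret_y = (m * (N_x - ret_x) - N_y) % p
        return ret_x, ret_y

The doubling case (N equal to Q) and the point at infinity still need separate handling, as discussed further down this page.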
Re commutativity: geometrically, when you add two (unequal) points on an elliptic curve, you draw a secant line through the two points and find the point where it intersects the curve again, then reflect this point across the x-axis to get the sum. But two points determine a line, so it doesn't matter whether you compute P + Q or Q + P; the secant line is the same either way.

Point addition over an elliptic curve in 𝔽ₚ: the arithmetic done during a point addition is done using the addition and multiplication operations in the field; when you are using a prime field, that is equivalent to doing addition and multiplication modulo the prime (23 in this case).
The special thing about these curves is that they have a built-in arithmetic operation that is the analogue of addition. You can add and subtract points, and this operation is both associative and commutative (an abelian group). How does addition work? Note: addition of points on elliptic curves is not intuitive. This kind of addition is defined the way it is because it has certain nice properties. It's weird, but it works.

Defining secp256k1: secp256k1 refers to the parameters of the elliptic curve used in Bitcoin's public-key cryptography. The name encodes the specific parameters of the curve: sec stands for Standards for Efficient Cryptography; p indicates that what follows are the parameters of the curve; 256 is the length in bits of the field size.

We want a function sum that will compute the sum of two points on an elliptic curve, using the curve's group structure. Before we start, we have to decide how we want to describe the curve and arbitrary points. We can start by assuming that the curve is given in Weierstrass form y² = x³ + ax² + bx + c, so that the curve is determined by the tuple (a, b, c).

For the elliptic curve y² = x³ + ax + b with a = -7 and b = 10, i.e. y² = x³ - 7x + 10, and the two given points P = (x_P, y_P) = (1, 2) and Q = (x_Q, y_Q) = (3, 4), find the sum R = P + Q = (x_R, y_R). The slope is m = (y_P - y_Q)/(x_P - x_Q) = (2 - 4)/(1 - 3) = 1; then x_R = m² - x_P - x_Q = 1 - 1 - 3 = -3 and y_R = m(x_P - x_R) - y_P = 1·(1 + 3) - 2 = 2, so R = (-3, 2).

Arbitrary elliptic curve arithmetic: the Point class allows arbitrary arithmetic to be performed over curves. The two main operations are point addition and point multiplication (by a scalar), which can be done via the standard Python operators (+ and * respectively).
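A few lines of Python can check that worked example over the rationals (no modulus involved); this helper assumes distinct points with different x coordinates:

    from fractions import Fraction

    def add_distinct(P, Q):
        # Chord rule on y^2 = x^3 + a*x + b; a and b drop out of the slope.
        m = Fraction(P[1] - Q[1], P[0] - Q[0])
        x = m * m - P[0] - Q[0]
        y = m * (P[0] - x) - P[1]
        return (x, y)

    print(add_distinct((1, 2), (3, 4)))  # -> (Fraction(-3, 1), Fraction(2, 1))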
Elliptic curve point addition over a finite field in Python
(* Task: Elliptic_curve_arithmetic *) (* Using the secp256k1 elliptic curve (a=0, b=7), define the addition operation on points on the curve. Extra credit: define the full elliptic curve arithmetic (still not modular, though) by defining a multiply function. *) (*** Helpers ***) type ec_point = Point of float * float | Inf
Elliptic curve point addition over a finite field in Python. Tags: python, math, cryptography, elliptic-curve. In short, I'm trying to add two points on an elliptic curve y^2 = x^3 + ax + b over a finite field Fp. I already have a working implementation over R, but do not know how to alter the general formulas I've found in order for them to sustain addition over Fp. When P does not equal Q, and Z ...
Bases: ecpy.curves.Curve. An elliptic curve defined by the equation a*x² + y² = 1 + d*x²*y². The given domain must be a dictionary providing the following keys/values: name (str): curve unique name; size (int): bit size; a (int): a equation coefficient; d (int): d equation coefficient; field (int): field value.
The value of nP is our public key, and the value of n is our private key. For point addition, we take two points on the elliptic curve and add them together (R = P + Q).
Point Addition. We're given an algorithm for efficiently adding points on an elliptic curve (better than doing it geometrically every time!), which we need to implement: Using the above curve, and the points P = (493, 5564), Q = (1539, 4742), R = (4403,5202), find the point S(x,y) = P + P + Q + R by implementing the above algorithm. So let's do it! I'm going to create some simple helper classes to represent elliptic curves and points on them
[python] basics of elliptic curve cryptography. GitHub Gist.
To do any meaningful operations on an elliptic curve, one has to be able to do calculations with points of the curve. The two basic operations to perform with on-curve points are point addition, R = P + Q, and point doubling, R = P + P. Elliptic curve scalar multiplication is the operation of successively adding a point along an elliptic curve to itself repeatedly. It is used in elliptic curve cryptography as a means of producing a one-way function. The literature presents this operation as scalar multiplication (as written, for example, in the Hessian form of an elliptic curve). A widespread name for this operation is also elliptic curve point multiplication, but this can convey the wrong impression of being a multiplication of two points. Two points on an elliptic curve (EC points) can be added, and the result is another point; this operation is known as EC point addition. If we add a point G to itself, the result is G + G = 2*G; if we add G again to the result, we obtain 3*G, and so on. This is how EC point multiplication is defined; a sketch follows below.
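Here is the sketch: the standard left-to-right double-and-add loop in Python. It assumes point_add and point_double helpers like the ones discussed elsewhere on this page, and uses None for the point at infinity:

    def scalar_mult(k, P, point_add, point_double):
        # Processes the bits of k from most to least significant,
        # doubling at every step and adding P where the bit is 1.
        R = None  # point at infinity (the identity)
        for bit in bin(k)[2:]:
            if R is not None:
                R = point_double(R)
            if bit == '1':
                R = P if R is None else point_add(R, P)
        return R

This computes k*G with O(log k) curve operations instead of k - 1 repeated additions, which is what makes scalar multiplication practical for cryptographic sizes of k.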
Specifically: the elements of the group are the points of an elliptic curve; the identity element is the point at infinity 0; the inverse of a point P is the point symmetric to it about the x-axis; and addition is given by the following rule: given three aligned, non-zero points P, Q and R, their sum is P + Q + R = 0.

Point addition operations are handled modulo a public prime, whereas signing and verification are handled modulo the order of the elliptic curve group, which is the total number of points over that finite field. Points of an elliptic curve over a finite field, brute force method: the curve equation, base point and modulus are publicly known information (a small enumeration sketch follows below).
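Since the curve equation and the modulus are public, enumerating all points by brute force is easy for small p; a toy sketch (the parameters are illustrative only):

    def curve_points(a, b, p):
        # All affine points on y^2 = x^3 + a*x + b over F_p.
        return [(x, y) for x in range(p) for y in range(p)
                if (y * y - (x ** 3 + a * x + b)) % p == 0]

    pts = curve_points(-7, 10, 19)  # the worked-example curve, reduced mod 19
    print(len(pts) + 1)             # group order, counting the point at infinity

For cryptographic field sizes this is hopeless, which is why point counting in practice relies on algorithms such as Schoof's.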
Elliptic curve point addition in projective coordinates. Introduction: elliptic curves are a mathematical concept that is useful for cryptography, such as in SSL/TLS and Bitcoin. Using the so-called group law, it is easy to add points together and to multiply a point by an integer, but very hard to work backwards to divide a point by a number; this asymmetry is the basis for elliptic curve cryptography. Functions: add(p1, p2) adds two elements of the elliptic curve group; gen() returns the generator of the elliptic curve group; inf() returns the point at infinity; order() ...
Point Addition in Python - secp256k1 Python
Visualization of point addition on an elliptic curve in simple Weierstrass form, over the real numbers and over a finite field. The underlying math is explained in the next article.
Elliptic Curve Cryptography via Jacobi Coordinates, CS 463/480 lecture, Dr. Lawlor. If you benchmark a naive ECDH key exchange, it's actually quite slow, taking 0.120 seconds (!) per exchange. This would limit a server to about 8 connections per second, which is far too slow.
Here's some python3 code to directly implement elliptic curve point addition and multiplication, including the special cases with the identity element:

    # An elliptic curve has these fields:
    #   p: the prime used to mod all coordinates
    #   a: linear part of curve: y^2 = x^3 + ax + b
    #   b: constant part of curve
    #   G: a curve point (G.x, G.y) used as a generator (starting point)
    class ECcurve:
        def ...  # (the class body is truncated in the source)
Elliptic Curve Subgroups. This chapter provides notes on subgroup generation from reduced elliptic curve groups, Ep(a,b). Python programs are provided to perform point addition, scalar multiplication, and subgroup generation.
Elliptic Curve Point Doubling
How do you add two points P and Q on an elliptic curve over a finite field F_p? For example: adding the points (1, 4) and (2, 5) on the curve y² = x³ + 2x + 2 over F_11. I know one way involves drawing a straight line through the two points P and Q and getting a third point R = P + Q, which means using the straight-line equation together with the elliptic curve. Elliptic curve groups are additive groups; that is, their basic function is addition. The addition of two points on an elliptic curve is defined geometrically. The negative of a point P = (xP, yP) is its reflection in the x-axis: the point -P is (xP, -yP). Notice that for each point P on an elliptic curve, the point -P is also on the curve.
... addition) of points of elliptic curves is currently gaining momentum and has a tendency to replace public key cryptography based on the infeasibility of factorization of integers, or on the infeasibility of the computation of discrete logarithms. For example, the US government has recommended that its governmental institutions use mainly elliptic curve cryptography (ECC). The main advantage of ...

[Interactive calculator for the elliptic curve E(F_p): y² = x³ + Ax + B mod p, with p prime (a Fermat primality test only, so avoid Carmichael numbers); B is calculated so that the point P is on the curve; inputs are A, the points P and Q, and a number n; outputs are the resulting point and the order of P, for fair sizes of p (less than 1000).]
Elliptic curve double and add implementation in python
Source code in Python is included in these notes. The main goal of these notes is to bring together three topics: operations for the multiplication of elliptic curve points, addition of two different points, and duplication (doubling) of a point. Elliptic curves with rational points. Download free 'Algebra and Geometry with Python' notes in PDF.
The steps are: (1) find the point where the line drawn through \(P_1\) and \(P_2\) intersects the elliptic curve a third time; (2) reflect the resulting point over the x-axis. As you can see, point addition is not easily predictable. We can calculate point addition easily enough with a formula, but intuitively, the result of point addition can be almost anywhere given two starting points.
However, for curves of rank 1, I know that the methods used are pretty good, and can often be used to prove that the list is complete (via linear forms in elliptic logs, for example). You might try looking at the LMFDB documentation to see if they explain how reliable their lists of integral points are. $\endgroup$ - Joe Silverman Oct 6 '17 at 13:1
Elliptic Curve Crypto, Point Doubling. Originally published by Short Tech Stories on July 4th 2017; 4,505 reads; @garciaj.uk, Sr App Engineer. Hi guys, last article we spoke about addition, one of the most important invented operations in elliptic curve arithmetic. There's ...
ECC (Elliptic Curve Cryptography) is a modern and efficient type of public key cryptography. Its security is based on the difficulty of solving discrete logarithms on the field defined by specific equations computed over a curve. ECC can be used to create digital signatures or to perform a key exchange.

Point addition is essentially an operation which takes any two given points on a curve and yields a third point which is also on the curve. The maths behind this gets a bit complicated, but think of it in these terms: plot two points on an elliptic curve, then draw a straight line which goes through both points. That line will intersect the curve at some third point, and that third point gives the result.

ECPy (pronounced ekpy) is a pure Python elliptic curve library providing ECDSA, EdDSA (Ed25519), ECSchnorr and Borromean signatures, as well as point operations. Full HTML documentation is available. ECDSA sample:

    from ecpy.curves import Curve, Point
    from ecpy.keys import ECPublicKey, ECPrivateKey
    from ecpy.ecdsa import ECDSA
    cv = Curve.get_curve('secp256k1')
    pu_key = ECPublicKey(Point(...  # truncated in the source

Point addition on an elliptic curve (Sage question, asked 2016-11-02; tags: torsion, EllipticCurve, number_fields): I have the following code where I want to add a 4-torsion point given by P = [15 + 36*B, 27*a*(a^2 - 4*B - 5)], with B^2 = -2 and a^4 - 5*a^2 - 32 = 0, and Q = [r, s] on my elliptic curve E as given below: E = EllipticCurve([-3267, 45630]); k.<B> = NumberField(x^2 - 2); k.<a> = NumberField(x^4 - 5*x^2 - 32)
Elliptic Curves as Python Objects - Math ∩ Programming
In elliptic curve cryptography one uses the fact that it is computationally infeasible to calculate the number x knowing only the points P and R; this is often described as the problem of computing a discrete logarithm.
In Elliptic Curve Cryptography, operations are performed on the coordinate points of an elliptic curve. To perform addition of two distinct points, the following calculation is used (Figure 1(a) shows a graphical representation of point addition):

    P(x1, y1) + Q(x2, y2) = R(x3, y3)        (1)
    x3 = (λ² − x1 − x2) mod p                (2)
    y3 = (λ(x1 − x3) − y1) mod p             (3)
    where λ = (y2 − y1)/(x2 − x1) mod p.
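When P = Q the secant degenerates, and the slope comes from the tangent line instead: λ = (3x1² + a)/(2y1) mod p. A small helper for that doubling case, using the same conventions as the formulas above (a is the curve coefficient, p a prime):

    def point_double(x1, y1, a, p):
        # Tangent-line slope: lambda = (3*x1^2 + a) / (2*y1) mod p
        lam = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p
        x3 = (lam * lam - 2 * x1) % p
        y3 = (lam * (x1 - x3) - y1) % p
        return x3, y3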
Point addition on an elliptic curve (Sage question, asked 2016-11-12). I have a point (X, Y) on my elliptic curve that I want to add to the point [51, 108]. I have tried the following code:

    kX.<X> = FunctionField(k)
    R.<Y> = kX[]
    kY.<Y> = kX.extension(Y^2 - X^3 + 3267*X - 45630)
    E = EllipticCurve(kY, [-3267, 45630])
    Q = E([X, Y])
    P1 = E([51, 108])
    W = P1 + Q; W

Unfortunately it keeps giving ...
To define the addition of points on elliptic curves, we need to first define the ∗ operation (figure: the ∗ operation). Brian Rhee, MIT PRIMES, Elliptic Curves, Factorization, and Cryptography. Addition of points on elliptic curves, continued: to add P and Q, take the third intersection point P∗Q, join it to O by a line, and then take the third intersection point to be P + Q. In other words, set P + Q = O∗(P∗Q).
II. Intuition about elliptic curves: mod. With mod, an elliptic curve is no longer a curve; instead it is turned into a group of discrete points. With mod(P), the result D is capped by P, therefore we can control the magnitude of the output. In Figure 1.2, we start from an initial point G(3, 10), i.e. the point labeled 1, and use EC multiplication.
Contents: Elliptic Curves (Definition; Coding Elliptic Curves in Python; Point Addition; Conclusion); Elliptic Curve Cryptography (Elliptic Curves over Finite Fields; Closure; Commutativity; Associativity); Serialization (Big- and Little-Endian Redux; Conclusion); Transactions (Parsing; Script Outputs); Script (Mechanics of Script; How Script Works; Parsing the Script).
This idea is mainly based on the ElGamal encryption scheme and elliptic curves. We will create a Python implementation of this concept. May the curve be with you. Curve configuration: elliptic curves satisfy the equation y² = x³ + ax + b; here, a and b specify the characteristic features of the curve. Also, we define elliptic curves over prime fields to produce points with integer coordinates.

Elliptic curves are an excellent example of such a group. There is no sensible ordering for points on an elliptic curve, and we don't know how to do division efficiently. The best we can do is add a point to itself over and over until we hit the target point, and it could easily happen that the number of additions required is exponentially larger than the number of bits in the inputs. What we really want is a polynomial time algorithm for ...
Fast elliptic curve point operations in Python - GitHub
... ing the points on an elliptic curve. Of course, the text omits failed ideas and backtracking; it chooses the next step with incredible accuracy. Most of the definitions, theorems, and proofs come from the elementary introduction to elliptic curves by Charlap and Robbins [2]. In that sense, the present text can be seen as a rearranged and commented version of their introduction.

The magic of elliptic curve cryptography: finite fields are one thing and elliptic curves another. We can combine them by defining an elliptic curve over a finite field. All the equations for an elliptic curve work over a finite field; by "work", we mean that we can do the same addition, subtraction, multiplication and division as defined.

Elliptic curves and the ECIES: an elliptic curve over GF(2ⁿ) is defined by the simplified Weierstrass equation y² + xy = x³ + ax² + b, where a ≠ 0 and b ≠ 0 [2]. It is possible to define several operations on the points of the elliptic curve, namely point negation, addition, and doubling.

Elliptic-curve point addition and doubling are governed by fixed formulas. The most time-consuming operation in classical ECC is elliptic-curve scalar multiplication: given an integer n and an elliptic-curve point P, compute nP. It is easy to find the opposite of a point, so we assume n > 0. Scalar multiplication is the inverse of the ECDLP (given P and nP, compute n).

We introduce a software tool for the automatic generation of addition circuits for ordinary binary elliptic curves, a prominent platform group for digital signatures. The resulting circuits reduce the number of \(T\)-gates by a factor \(13/5\) compared to the best previous construction, without increasing the number of qubits or \(T\)-depth.
addition on finite elliptic curves - Cryptography Stack Exchange
Math on the elliptic curve uses familiar mathematical operations such as addition and subtraction, but the effect of these operations is defined by the curve. Instead of having the set of rational or whole numbers as possible values, the allowed discrete values are defined by the curve: any point on the curve is a possible value, and each number is in the set of points that make up the curve.

An elliptic curve (EC) is a function in which the square of the y coordinate is equal to a third-degree polynomial of the x coordinate. An interesting property of elliptic curves is that any two points on an EC define a line that also hits the curve in one more place. The sum of the first two points is defined as the mirror image (over the x-axis) of that third point.

This thesis covers basic ideas about elliptic curves and their properties, essentially the group law and the computation of addition inside an elliptic curve, followed by the definition of torsion points and divisors, which are necessary for its most important part.

Software optimization of binary elliptic curve arithmetic using modern processor architectures, Manuel Bluhm, June 17, 2013 (Department of Mathematics, University of Haifa, Prof. Dr. Shay Gueron; Embedded Security Group, Ruhr University Bochum, Prof. Dr.-Ing. Christof Paar). Abstract: this work provides an efficient and protected implementation of the binary elliptic curve point multiplication. Let E be an elliptic curve over GF(2ⁿ) and P(x, y) a point on E; the point operations (negation, addition, doubling) are defined as follows.
To add two points on an elliptic curve together, you first find the line that goes through those two points. Then you determine where that line intersects the curve at a third point. Then you reflect that third point across the x-axis (i.e. multiply the y-coordinate by -1) and whatever point you get from that is the result of adding the first two points together
... a point randomization method proposed by Joye and Tymen [20] against differential analysis. 2 Background: in this section, we give a brief overview of elliptic curve cryptography (see [1, 3, 4, 15] for more details) and the double-base number system. 2.1 Elliptic Curve Cryptography, Definition 1: an elliptic curve E over a field K is defined by ...

Elliptic curves: in 1985, cryptographic algorithms were proposed based on elliptic curves. An elliptic curve is the set of points that satisfy a specific mathematical equation, and such curves are symmetrical. Uses: websites make extensive use of ECC to secure customers' hypertext transfer protocol connections.

Elliptic curves can be defined over any field K; the formal definition of an elliptic curve is a non-singular (no cusps, self-intersections, or isolated points) projective algebraic curve over K of genus 1 with a given point defined over K. If the characteristic of K is neither 2 nor 3, then every elliptic curve over K can be written in the form y² = x³ + px + q, where p, q ∈ K such that the RHS ...
We use exactly the same addition rule for these Edwards elliptic curves (the warped circle): the red dot plus the blue dot equals the purple dot. The sum of the red, blue, and green dots is the identity (the point at 3 o'clock). The moving curve that passes through the red, blue, green, and black dots is a rectangular hyperbola, with asymptotes parallel to the coordinate axes, as before. To add two points (red and blue), fit them to a hyperbola, and use the hyperbola to find the sum.

[Thesis front matter: figure list ("2.2 Geometric addition of elliptic curve points, P+Q=R"; "2.3 Geometric doubling of elliptic curve point, 2P=R"; "elliptic curve point multiplication methods over finite fields") and a list of abbreviations (AES: Advanced Encryption Standard; DES: Data Encryption Standard; DL: Discrete Logarithm; DLP: Discrete Logarithm Problem; DSA: Digital Signature Algorithm; EC: Elliptic ...).]
Elliptic Curve point addition (𝔽ₚ) - Andrea Corbellini
... additions. Thus, the elliptic curve discrete logarithm problem is the following: given the public key kP, find the private key k. The work of [6] gives a comprehensive explanation of the elliptic curve mathematical foundation and its implementation. (Fig. 1: ECC point addition.) III. Finite fields: to make operations on an elliptic curve accurate and more efficient, the curve cryptography is defined over finite fields.

Video 1: point addition R = P + Q of points P ≈ [−2.13, 1.05] and Q ≈ [0.12, 1.93] on the elliptic curve in simple Weierstrass form y² = x³ − 2x + 2 over ℝ, and of points P = [15, 9] and Q = [1, 1] over GF(p).

The following are 15 code examples showing how to use ecdsa.ellipticcurve.Point(). These examples are extracted from open source projects. You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example.
elliptic curves - ECC - Point Addition/Point Doubling
If you have to include comments like "# doubling the point" and "# normal point addition" then it is really time to introduce methods. Those are pretty useful functions for elliptic curve calculations anyway. – Maarten Bodewes, Mar 3 '20

I am doing an experiment to prove the associativity of the addition of points on an elliptic curve. So far, I have produced code which allows me to move points on my curve. To find their sum, I ...

    def add_point(self, P, Q):
        # Returns the sum of P and Q.
        # This function ignores the default curve attached to P and Q,
        # and assumes P and Q are on this curve.
        #   Args: P (Point): first point to add
        #         Q (Point): second point to add
        #   Returns: Point: a new point R = P + Q
        #   Raises: ECPyException with "Point not on curve" if R is not
        #           on the curve, meaning either P or Q was not on it.
        raise NotImplementedError('Abstract method add_point')
code golf - Addition on Elliptic Curves - Code Golf Stack Exchange
Abstract: This paper describes the Verilog implementation of the point addition and doubling used in elliptic curve point multiplication. Based on the theory of elliptic curve cryptography, this paper has carried out modular addition, elliptic curve point doubling and addition, modular squaring, and projective-to-affine coordinate conversion. Elliptic curve cryptography is a public-key encryption technique based on elliptic curve theory; it can be used to make faster, smaller, and more efficient cryptographic keys.
... squarings, and 1 multiplication by a curve constant) point doubling and 7M+3S+1D point addition algorithms. Furthermore, the new addition algorithm provides an efficient way to protect against side channel attacks which are based on simple power analysis (SPA). Keywords: efficient elliptic curve arithmetic, unified addition, side channel attack.
... ing Bitcoin, Song just passes FieldElement objects into Point's constructor and lets the Python interpreter's type inference do the rest. The mathematical code in the Point class ...
The first version (EllipticCurvePoint - see repo) takes integer parameters and implements equality and addition operators, as well as a method to check whether a point is on a given curve. In the sample code, Song creates a Point class that takes both curve parameters (a, b) and point parameters (x, y) in its __init__ function and then raises an error if the point is not on the curve
... Fq: the number of points (x, y) in Fq × Fq which satisfy the elliptic curve equation (taken over all finite fields) characterizes the isogeny class of the curve E. If instead we take E as a curve over some finite field Fq from the beginning, then the number of points of E/Fq can help to solve the discrete logarithm problem for two points P and Q on E.
We can add points that lie on the curve in the 𝔽ₚ space, and the resulting point will always remain in this space; the same holds when multiplying a point by a scalar n. The addition is implemented in Python as follows (a reconstruction is sketched below).
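A plausible reconstruction of such an implementation, handling the identity, inverse and doubling cases explicitly (None stands for the point at infinity, p must be prime, and the curve is y² = x³ + ax + b over F_p):

    def ec_add(P, Q, a, p):
        # Full affine group law.
        if P is None:
            return Q
        if Q is None:
            return P
        if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
            return None  # P + (-P) = point at infinity
        if P == Q:
            lam = (3 * P[0] ** 2 + a) * pow(2 * P[1], p - 2, p)
        else:
            lam = (Q[1] - P[1]) * pow(Q[0] - P[0], p - 2, p)
        x = (lam ** 2 - P[0] - Q[0]) % p
        y = (lam * (P[0] - x) - P[1]) % p
        return (x, y)

Because the inputs and the modulus stay in F_p, the result (when finite) is again a pair of residues mod p, which is exactly the closure property described above.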
These calculations are in Python style. The above-mentioned elliptic curve and the points {5, 8} and {9, 15} are visualized below. Multiplying an EC point by an integer: two points on an elliptic curve can be added and the result is another point; this operation is known as EC point addition.
Elliptic Curve in Python - secp256k1 Python
Generic addition of points self and other. An elliptic curve; the points support additive infix notation for group operations, as well as multiplication by integers. This is a template class that must be instantiated with the field and the parameters A and B. Use it, for example, as follows:

    # Instantiate the elliptic curve template as y^2 = x^3 + 3x + 4 over GF(7)
    E = EllipticCurve(...
In 1985, elliptic curves were used independently by Neal Koblitz [Kob, 1987] and Victor Miller [Mil, 1986] to design public key cryptographic systems. Their proposal was to use the group of points on an elliptic curve (EC) defined over a finite field to implement discrete log cryptosystems. Since then, lots of research ...
The easiest way to understand elliptic curves (EC), point addition, scalar multiplication and the trapdoor function, explained with simple graphs and animations. Tags: cryptography, elliptic-curve, math, sagemath, python.
Elliptic curve addition: geometrically, the main rule to add two points on an elliptic curve is to draw a line passing through those points; that line will intersect the curve in another point, and the inverse of this intersection point is the result of the point addition.
... istic threshold signatures, zk-SNARKs and other simpler forms of zero-knowledge proofs is the elliptic curve pairing. Elliptic curve pairings (or bilinear maps) are a recent addition to a 30-year ...
... number of points on the elliptic curve to make the cryptosystem secure. Point addition: consider two distinct points J and K such that J = (xJ, yJ) and K = (xK, yK), and let L = J + K where L = (xL, yL). Then:

    xL = s² − xJ − xK mod p
    yL = −yJ + s(xJ − xL) mod p
    s = (yJ − yK)/(xJ − xK) mod p,

where s is the slope of the line through J and K.
... element under addition. Figure 1: addition of two points P and Q on the curve y² = x³ − 3x + 3. The addition operator is defined over E(Fp), and it can be seen that E(Fp) forms an abelian group under addition. The addition operation in E(Fp) is specified as follows: P + O = O + P = P, for all P ∈ E(Fp).

Elliptic curves: in a cryptographic setting (we'll avoid abstract mathematics for now), an elliptic curve is any polynomial equation of the form y² = x³ + Ax + B, where A, B ∈ F and F is some field. Bitcoin's curve: Satoshi chose a curve called secp256k1 for Bitcoin's elliptic curve public key cryptography. The curve has the form ...

The software we introduce synthesizes, for a given curve and curve point, an optimized addition circuit, and outputs this circuit as a .qc file. This file can then be processed further.

In elliptic curve math, there is a point called the point at infinity, which roughly corresponds to the role of zero in addition. On computers, it's sometimes represented by x = y = 0 (which doesn't satisfy the elliptic curve equation, but it's an easy separate case that can be checked).

Introduction, conditions for this to work: the point should have coordinates in the base field, in order for the arithmetic to work over it. Definition: an elliptic curve over a field K is a smooth projective cubic curve over K equipped with a K-rational base point. (Caution: there exist more general and less general definitions.)
Elliptic Curve Point Addition Example - herongyang.com
2.2.1 Addition and doubling of points on elliptic curves. As was shown earlier in the formulations of points on an elliptic curve, adding points on an elliptic curve is not the same as adding points in the plane. Scalar multiplication of a point on the curve, say mP with m = 2185, will be evaluated a...

Abstract: We show the relationships among efficient formulas based on the Weierstrass equation for elliptic curve point addition over binary extension fields. We give a simple proof that there can be no Weierstrass point addition function in terms of x coordinates only (for distinct points), though there are formulas that include the x coordinate of a fourth point, or an indication of the y coordinate.

There is a rule for adding two points on an elliptic curve E(p) to give a third elliptic curve point. Together with this addition operation, the set of points E(p) forms a group, with O serving as its identity. It is this group that is used in the construction of elliptic curve cryptosystems. The addition rule can be explained geometrically.

In this paper we revisit the addition of elliptic curves and give an algebraic proof of the associative law by use of MATHEMATICA. The existing proofs of the associative law are rather complicated and hard to understand for beginners; an "elementary" proof based on algebra has not been given as far as we know. Undergraduates or non-experts can master the addition of elliptic curves.
GitHub - AntonKueltz/fastecdsa: Python library for fast elliptic curve crypto
An elliptic curve is the set of points that satisfy a specific mathematical equation. The equation for an elliptic curve looks something like this: y² = x³ + ax + b. That graphs to something that looks a bit like the Lululemon logo tipped on its side. There are other representations of elliptic curves, but technically an elliptic curve is the set of points satisfying an equation in two variables.
Elliptic curves over any field can be divided into the two classes of ordinary and supersingular elliptic curves. Every ordinary elliptic curve over the finite field F_{3^m} can be written in the Weierstrass form y² = x³ + ax² + b, where a, b ∈ F_{3^m} and ab ≠ 0. It is known [21] that every ordinary elliptic curve over F_{3^m} with a point of order three can be written in the form E_b: y² = x³ + x² + ...
The cryptography library currently already supports the most basic elliptic curve operation you need: scalar multiplication. Sometimes you need fancier operations. For example, this supports direct point addition and point subtraction, which is used for example in SPAKE2 to achieve blinding. Right now these are focused on a tight binding between Python and C (specifically, OpenSSL).
Derive equations for point addition & point doubling.
The addition of points on an elliptic curve E satis es the following properties: 1. (commutativity) P 1 + P 2 = P 2 + P 1 for all P 1;P 2 on E. 2. (existence of identity) P + 1= P for all points P on E. 3. (existence of inverses) Given P on E, there exists P0on E with P +P0= 1. This point P0will usually be denoted P. 4. (associativity) (P 1 + P 2) + P 3 = P 1 + (P 2 + P 3) for all P 1;P 2;P 3.
Python class Curve implemented in the script in order to per-form elliptic curve operations, and is necessary to check if one of the candidates for the private key matches the public key. Other elliptic curves can be used by giving their explicit parameters. — pubkey_point: the public key point of the signer, given as tw
Elliptic curve arithmetic - Rosetta Cod
will study elliptic curves over an arbitrary field K because most of the theory is not harder to study in a general setting - it might even become clearer. 1.1 Weistrass equations An elliptic curve over a a field K is a pair (E;O), where Eis a cubic equation in the projective geometry and O2Ea point of the curve called the base point, o
g operation in ECC Fourth Level: ECC protocol ECDSA, ECDH, ECMQV, ElGamal.
Elliptic curve cryptography is a modern public-key encryption technique based on mathematical elliptic curves and is well-known for creating smaller, faster, and more efficient cryptographic keys. For example, Bitcoin uses ECC as its asymmetric cryptosystem because of its lightweight nature. In this introduction to ECC, I want to focus on the high-level ideas that make ECC work
In addition, Tan et al.'s 3PAKE protocol has high computation cost due to the involvement of the additional elliptic curve scalar point multiplications and symmetric cryptosystem. We then designed a computation efficient 3PAKE protocol for mobile commerce environment to resolve the security pitfalls of the Tan's 3PAKE protocol Elliptic curves. An elliptic curve E over ℤp (p ≥ 5) is defined by an equation of the form y^2 = x^3 + ax + b, where a, b ∈ ℤp and the discriminant ≢ 0 (mod p), together with a special point 풪 called the point at infinity.The set E(ℤp) consists of all points (x, y), with x, y ∈ ℤp, which satisfy the above defining equation, together with 풪 Elliptic Curves over GF(p) Basically, an Elliptic Curve is represented as an equation of the following form. y 2 = x 3 + ax + b (Weierstrass Equation) Pre-condition: 4a 3 + 27b 2 ≠ 0 (To have 3 distinct roots) Addition of two points on an elliptic curve would be a point on the curve, too. Adding two points on an elliptic curve is demonstrated. This is a python package for doing fast elliptic curve cryptography, specifically digital signatures. Security. There is no nonce reuse, no branching on secret material, and all points are validated before any operations are performed on them. Timing side challenges are mitigated via Montgomery point multiplication. Nonces are generated per RFC6979_. The default curve used throughout the. Choosing Elliptic Curve Cryptosystems (ECC) •Choice of different curves and Coordinate Systems •Affects the formulas for point doubling, addition, and negation •Affects the minimum number of Galois Field multiplications, additions, subtractions, and inversions, required to perform point operation ECadd is addition in elliptic curves and ECdouble is sort of point doubling, right. These are invented so if you try and do regular math say you take the generator point multiply the X and Y by the private key you're not going to come up with the right thing. You have to be doing elliptic curve multiplication, elliptic curve addition and most importantly which is the mod inverse elliptic curve.
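To make the chord-and-tangent formulas concrete, here is a minimal, illustrative Python sketch of affine point addition over F_p (function and variable names are ours, not taken from any of the libraries above); the constants are the published secp256k1 parameters, and the point at infinity is handled as the separate case described above:

```python
# Minimal affine point addition on y^2 = x^3 + ax + b over F_p.
# Parameters below are secp256k1 (a = 0, b = 7); the point at infinity
# is represented as None, the easy separate case mentioned in the text.

p = 2**256 - 2**32 - 977  # the secp256k1 prime
a, b = 0, 7

def ec_add(P, Q):
    """Add two points P, Q on the curve; None plays the role of O."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x3 - x1) - y1) % p
    return (x3, y3)

# Spot-check with the standard secp256k1 generator G: both ways of
# computing 3G must agree (an instance of the associative law).
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
assert ec_add(ec_add(G, G), G) == ec_add(G, ec_add(G, G))
```

Production libraries such as fastecdsa add the side-channel protections discussed above; this sketch only illustrates the group law itself.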
Covariance matrix
A bivariate Gaussian probability density function centered at (0, 0), with covariance matrix [ 1.00, 0.50 ; 0.50, 1.00 ].
Sample points from a multivariate Gaussian distribution with a standard deviation of 3 in roughly the lower left-upper right direction and of 1 in the orthogonal direction. Because the x and y components co-vary, the variances of x and y do not fully describe the distribution. A 2×2 covariance matrix is needed; the directions of the arrows correspond to the eigenvectors of this covariance matrix and their lengths to the square roots of the eigenvalues.
In probability theory and statistics, a covariance matrix (also known as dispersion matrix or variance–covariance matrix) is a matrix whose element in the i, j position is the covariance between the i th and j th elements of a random vector (that is, of a vector of random variables). Each element of the vector is a scalar random variable, either with a finite number of observed empirical values or with a finite or infinite number of potential values specified by a theoretical joint probability distribution of all the random variables.
Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the x and y directions contain all of the necessary information; a 2×2 matrix would be necessary to fully characterize the two-dimensional variation.
Because the covariance of the i th random variable with itself is simply that random variable's variance, each element on the principal diagonal of the covariance matrix is the variance of one of the random variables. Because the covariance of the i th random variable with the j th one is the same thing as the covariance of the j th random variable with the i th one, every covariance matrix is symmetric. In addition, every covariance matrix is positive semi-definite.
Throughout this article, boldfaced unsubscripted X and Y are used to refer to random vectors, and unboldfaced subscripted Xi and Yi are used to refer to random scalars.
If the entries in the column vector
$$\mathbf{X} = \begin{bmatrix} X_1 \\ \vdots \\ X_n \end{bmatrix}$$
are random variables, each with finite variance, then the covariance matrix Σ is the matrix whose (i, j) entry is the covariance
$$\Sigma_{ij} = \operatorname{cov}(X_i, X_j) = \mathrm{E}\left[(X_i - \mu_i)(X_j - \mu_j)\right]$$
where
$$\mu_i = \mathrm{E}(X_i)$$
is the expected value of the ith entry in the vector X. In other words,
$$\Sigma = \begin{bmatrix} \mathrm{E}[(X_1-\mu_1)(X_1-\mu_1)] & \mathrm{E}[(X_1-\mu_1)(X_2-\mu_2)] & \cdots & \mathrm{E}[(X_1-\mu_1)(X_n-\mu_n)] \\ \mathrm{E}[(X_2-\mu_2)(X_1-\mu_1)] & \mathrm{E}[(X_2-\mu_2)(X_2-\mu_2)] & \cdots & \mathrm{E}[(X_2-\mu_2)(X_n-\mu_n)] \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{E}[(X_n-\mu_n)(X_1-\mu_1)] & \mathrm{E}[(X_n-\mu_n)(X_2-\mu_2)] & \cdots & \mathrm{E}[(X_n-\mu_n)(X_n-\mu_n)] \end{bmatrix}.$$
The inverse of this matrix, $\Sigma^{-1}$, is the inverse covariance matrix, also known as the concentration matrix or precision matrix;[1] see precision (statistics). The elements of the precision matrix have an interpretation in terms of partial correlations and partial variances.
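As a quick numerical illustration of the definition above, a short NumPy sketch (the sample size and all names are ours) that builds Σ from centred samples and checks it against NumPy's built-in estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 draws of a 3-dimensional random vector X with known structure.
X = rng.multivariate_normal(mean=[0.0, 1.0, 2.0],
                            cov=[[1.0, 0.5, 0.0],
                                 [0.5, 2.0, 0.3],
                                 [0.0, 0.3, 1.5]],
                            size=1000)

mu = X.mean(axis=0)                       # entrywise means mu_i = E(X_i)
Xc = X - mu                               # centred data
Sigma = Xc.T @ Xc / (len(X) - 1)          # (i, j) entry estimates cov(X_i, X_j)

assert np.allclose(Sigma, np.cov(X, rowvar=False))  # matches NumPy's estimator
print(np.diag(Sigma))  # diagonal holds the variances of each component
```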
Generalization of the variance
The definition above is equivalent to the matrix equality
$$\Sigma = \mathrm{E}\left[\left(\mathbf{X} - \mathrm{E}[\mathbf{X}]\right)\left(\mathbf{X} - \mathrm{E}[\mathbf{X}]\right)^{\mathrm{T}}\right]$$
This form can be seen as a generalization of the scalar-valued variance to higher dimensions. Recall that for a scalar-valued random variable X
$$\sigma^2 = \operatorname{var}(X) = \mathrm{E}\left[(X - \mathrm{E}(X))^2\right] = \mathrm{E}\left[(X - \mathrm{E}(X)) \cdot (X - \mathrm{E}(X))\right].$$
Indeed, the entries on the diagonal of the covariance matrix $\Sigma$ are the variances of each element of the vector $\mathbf{X}$.
Correlation matrix
A quantity closely related to the covariance matrix is the correlation matrix, the matrix of Pearson product-moment correlation coefficients between each of the random variables in the random vector $\mathbf{X}$, which can be written
$$\operatorname{corr}(\mathbf{X}) = \left(\operatorname{diag}(\Sigma)\right)^{-\frac{1}{2}} \, \Sigma \, \left(\operatorname{diag}(\Sigma)\right)^{-\frac{1}{2}}$$
where $\operatorname{diag}(\Sigma)$ is the matrix of the diagonal elements of $\Sigma$ (i.e., a diagonal matrix of the variances of $X_i$ for $i = 1, \dots, n$).
Equivalently, the correlation matrix can be seen as the covariance matrix of the standardized random variables $X_i / \sigma(X_i)$ for $i = 1, \dots, n$.
Each element on the principal diagonal of a correlation matrix is the correlation of a random variable with itself, which always equals 1. Each off-diagonal element is between −1 and 1 inclusive.
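A small NumPy sketch of this rescaling (the helper name corr_from_cov is ours):

```python
import numpy as np

def corr_from_cov(Sigma):
    """Rescale a covariance matrix to the correlation matrix:
    corr = diag(Sigma)^(-1/2) * Sigma * diag(Sigma)^(-1/2)."""
    d = np.sqrt(np.diag(Sigma))           # standard deviations sigma(X_i)
    return Sigma / np.outer(d, d)

Sigma = np.array([[4.0, 2.0],
                  [2.0, 9.0]])
R = corr_from_cov(Sigma)
print(R)                                  # off-diagonal: 2 / (2 * 3) = 0.333...
assert np.allclose(np.diag(R), 1.0)       # unit diagonal, as stated above
```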
Conflicting nomenclatures and notations
Nomenclatures differ. Some statisticians, following the probabilist William Feller, call the matrix $\Sigma$ the variance of the random vector $\mathbf{X}$, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector $\mathbf{X}$. Thus
$$\operatorname{var}(\mathbf{X}) = \operatorname{cov}(\mathbf{X}) = \mathrm{E}\left[(\mathbf{X} - \mathrm{E}[\mathbf{X}])(\mathbf{X} - \mathrm{E}[\mathbf{X}])^{\mathrm{T}}\right].$$
However, the notation for the cross-covariance between two vectors is standard:
$$\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \mathrm{E}\left[(\mathbf{X} - \mathrm{E}[\mathbf{X}])(\mathbf{Y} - \mathrm{E}[\mathbf{Y}])^{\mathrm{T}}\right].$$
The var notation is found in William Feller's two-volume book An Introduction to Probability Theory and Its Applications,[2] but both forms are quite standard and there is no ambiguity between them.
The matrix Σ{\displaystyle \Sigma } is also often called the variance-covariance matrix since the diagonal terms are in fact variances.
Properties
For $\Sigma = \mathrm{E}\left[\left(\mathbf{X} - \mathrm{E}[\mathbf{X}]\right)\left(\mathbf{X} - \mathrm{E}[\mathbf{X}]\right)^{\mathrm{T}}\right]$ and $\boldsymbol{\mu} = \mathrm{E}(\mathbf{X})$, where X is a random p-dimensional variable and Y a random q-dimensional variable, the following basic properties apply:[3]
$$\Sigma = \mathrm{E}(\mathbf{X}\mathbf{X}^{\mathrm{T}}) - \boldsymbol{\mu}\boldsymbol{\mu}^{\mathrm{T}}$$
$\Sigma$ is positive-semidefinite and symmetric.
$$\operatorname{cov}(\mathbf{A}\mathbf{X} + \mathbf{a}) = \mathbf{A}\operatorname{cov}(\mathbf{X})\mathbf{A}^{\mathrm{T}}$$
$$\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{cov}(\mathbf{Y}, \mathbf{X})^{\mathrm{T}}$$
$$\operatorname{cov}(\mathbf{X}_1 + \mathbf{X}_2, \mathbf{Y}) = \operatorname{cov}(\mathbf{X}_1, \mathbf{Y}) + \operatorname{cov}(\mathbf{X}_2, \mathbf{Y})$$
If p = q, then $\operatorname{var}(\mathbf{X} + \mathbf{Y}) = \operatorname{var}(\mathbf{X}) + \operatorname{cov}(\mathbf{X}, \mathbf{Y}) + \operatorname{cov}(\mathbf{Y}, \mathbf{X}) + \operatorname{var}(\mathbf{Y})$
$$\operatorname{cov}(\mathbf{A}\mathbf{X} + \mathbf{a}, \mathbf{B}^{\mathrm{T}}\mathbf{Y} + \mathbf{b}) = \mathbf{A}\operatorname{cov}(\mathbf{X}, \mathbf{Y})\mathbf{B}$$
If $\mathbf{X}$ and $\mathbf{Y}$ are independent or uncorrelated, then $\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \mathbf{0}$
where $\mathbf{X}, \mathbf{X}_1$ and $\mathbf{X}_2$ are random p×1 vectors, $\mathbf{Y}$ is a random q×1 vector, $\mathbf{a}$ is a q×1 vector, $\mathbf{b}$ is a p×1 vector, and $\mathbf{A}$ and $\mathbf{B}$ are q×p matrices.
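These identities lend themselves to quick simulation checks; below is a short NumPy sketch (matrix sizes and names are ours) verifying the affine-transformation rule $\operatorname{cov}(\mathbf{AX} + \mathbf{a}) = \mathbf{A}\operatorname{cov}(\mathbf{X})\mathbf{A}^{\mathrm{T}}$ on simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, n = 3, 2, 200_000

Sigma = np.array([[2.0, 0.6, 0.2],
                  [0.6, 1.0, 0.0],
                  [0.2, 0.0, 0.5]])
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)   # rows are draws of X

A = rng.normal(size=(q, p))               # q x p matrix, as in the property list
a = rng.normal(size=q)                    # shift vector (irrelevant to covariance)
Y = X @ A.T + a                           # draws of AX + a

lhs = np.cov(Y, rowvar=False)             # empirical cov(AX + a)
rhs = A @ Sigma @ A.T                     # A cov(X) A^T
print(np.max(np.abs(lhs - rhs)))          # small; shrinks as n grows
```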
This covariance matrix is a useful tool in many different areas. From it a transformation matrix can be derived, called a whitening transformation, that allows one to completely decorrelate the data or, from a different point of view, to find an optimal basis for representing the data in a compact way (see Rayleigh quotient for a formal proof and additional properties of covariance matrices). This is called principal component analysis (PCA) and the Karhunen–Loève transform (KL-transform).
Block matrices
The joint mean $\boldsymbol{\mu}_{X,Y}$ and joint covariance matrix $\boldsymbol{\Sigma}_{X,Y}$ of $\mathbf{X}$ and $\mathbf{Y}$ can be written in block form
$$\boldsymbol{\mu}_{X,Y} = \begin{bmatrix} \boldsymbol{\mu}_X \\ \boldsymbol{\mu}_Y \end{bmatrix}, \qquad \boldsymbol{\Sigma}_{X,Y} = \begin{bmatrix} \boldsymbol{\Sigma}_{XX} & \boldsymbol{\Sigma}_{XY} \\ \boldsymbol{\Sigma}_{YX} & \boldsymbol{\Sigma}_{YY} \end{bmatrix}$$
where $\boldsymbol{\Sigma}_{XX} = \operatorname{var}(\mathbf{X})$, $\boldsymbol{\Sigma}_{YY} = \operatorname{var}(\mathbf{Y})$, and $\boldsymbol{\Sigma}_{XY} = \boldsymbol{\Sigma}_{YX}^{\mathrm{T}} = \operatorname{cov}(\mathbf{X}, \mathbf{Y})$.
$\boldsymbol{\Sigma}_{XX}$ and $\boldsymbol{\Sigma}_{YY}$ can be identified as the variance matrices of the marginal distributions for $\mathbf{X}$ and $\mathbf{Y}$ respectively.
If $\mathbf{X}$ and $\mathbf{Y}$ are jointly normally distributed,
$$\mathbf{x}, \mathbf{y} \sim \mathcal{N}(\boldsymbol{\mu}_{X,Y}, \boldsymbol{\Sigma}_{X,Y}),$$
then the conditional distribution for $\mathbf{Y}$ given $\mathbf{X}$ is given by
$$\mathbf{y} \mid \mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}_{Y|X}, \boldsymbol{\Sigma}_{Y|X}),$$ [4]
defined by the conditional mean
$$\boldsymbol{\mu}_{Y|X} = \boldsymbol{\mu}_Y + \boldsymbol{\Sigma}_{YX}\boldsymbol{\Sigma}_{XX}^{-1}\left(\mathbf{x} - \boldsymbol{\mu}_X\right)$$
and conditional variance
$$\boldsymbol{\Sigma}_{Y|X} = \boldsymbol{\Sigma}_{YY} - \boldsymbol{\Sigma}_{YX}\boldsymbol{\Sigma}_{XX}^{-1}\boldsymbol{\Sigma}_{XY}.$$
The matrix $\boldsymbol{\Sigma}_{YX}\boldsymbol{\Sigma}_{XX}^{-1}$ is known as the matrix of regression coefficients, while in linear algebra $\boldsymbol{\Sigma}_{Y|X}$ is the Schur complement of $\boldsymbol{\Sigma}_{XX}$ in $\boldsymbol{\Sigma}_{X,Y}$.
The matrix of regression coefficients may often be given in transpose form, $\boldsymbol{\Sigma}_{XX}^{-1}\boldsymbol{\Sigma}_{XY}$, suitable for post-multiplying a row vector of explanatory variables $\mathbf{x}^{\mathrm{T}}$ rather than pre-multiplying a column vector $\mathbf{x}$. In this form they correspond to the coefficients obtained by inverting the matrix of the normal equations of ordinary least squares (OLS).
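A compact NumPy sketch of this conditioning formula (the helper name and index convention are ours):

```python
import numpy as np

def condition_gaussian(mu, Sigma, ix, iy, x_obs):
    """Conditional mean and covariance of Y given X = x_obs for a joint
    Gaussian, using mu_{Y|X} = mu_Y + S_YX S_XX^{-1} (x - mu_X) and the
    Schur complement S_{Y|X} = S_YY - S_YX S_XX^{-1} S_XY."""
    Sxx = Sigma[np.ix_(ix, ix)]
    Syy = Sigma[np.ix_(iy, iy)]
    Syx = Sigma[np.ix_(iy, ix)]
    W = Syx @ np.linalg.inv(Sxx)          # matrix of regression coefficients
    mu_cond = mu[iy] + W @ (x_obs - mu[ix])
    Sigma_cond = Syy - W @ Syx.T          # uses S_XY = S_YX^T by symmetry
    return mu_cond, Sigma_cond

mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 2.0, 0.3],
                  [0.2, 0.3, 1.5]])
m, S = condition_gaussian(mu, Sigma, ix=[0], iy=[1, 2], x_obs=np.array([0.7]))
print(m, S, sep="\n")
```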
As a linear operator
Applied to one vector, the covariance matrix maps a linear combination, c, of the random variables, X, onto a vector of covariances with those variables: $\mathbf{c}^{\mathrm{T}}\Sigma = \operatorname{cov}(\mathbf{c}^{\mathrm{T}}\mathbf{X}, \mathbf{X})$. Treated as a bilinear form, it yields the covariance between the two linear combinations: $\mathbf{d}^{\mathrm{T}}\Sigma\mathbf{c} = \operatorname{cov}(\mathbf{d}^{\mathrm{T}}\mathbf{X}, \mathbf{c}^{\mathrm{T}}\mathbf{X})$. The variance of a linear combination is then $\mathbf{c}^{\mathrm{T}}\Sigma\mathbf{c}$, its covariance with itself.
Similarly, the (pseudo-)inverse covariance matrix provides an inner product, $\langle c - \mu \mid \Sigma^{+} \mid c - \mu \rangle$, which induces the Mahalanobis distance, a measure of the "unlikelihood" of c.
Which matrices are covariance matrices?
From the identity just above, let $\mathbf{b}$ be a $(p \times 1)$ real-valued vector, then
$$\operatorname{var}(\mathbf{b}^{\mathrm{T}}\mathbf{X}) = \mathbf{b}^{\mathrm{T}}\operatorname{var}(\mathbf{X})\mathbf{b},$$
which must always be nonnegative since it is the variance of a real-valued random variable. From the symmetry of the covariance matrix's definition it follows that only a positive-semidefinite matrix can be a covariance matrix. Conversely, every symmetric positive semi-definite matrix is a covariance matrix. To see this, suppose M is a p×p positive-semidefinite matrix. From the finite-dimensional case of the spectral theorem, it follows that M has a nonnegative symmetric square root, which can be denoted by M^{1/2}. Let $\mathbf{X}$ be any p×1 column vector-valued random variable whose covariance matrix is the p×p identity matrix. Then
$$\operatorname{var}(\mathbf{M}^{1/2}\mathbf{X}) = \mathbf{M}^{1/2}\left(\operatorname{var}(\mathbf{X})\right)\mathbf{M}^{1/2} = \mathbf{M}.$$
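A minimal sketch of this construction, using scipy.linalg.sqrtm for the symmetric square root M^{1/2} (all names ours):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)

M = np.array([[2.0, 0.8],
              [0.8, 1.0]])               # symmetric positive semi-definite
M_half = np.real(sqrtm(M))               # nonnegative symmetric square root M^(1/2)

Z = rng.standard_normal((100_000, 2))    # X with identity covariance
Y = Z @ M_half                           # draws of M^(1/2) X (M_half is symmetric)

print(np.cov(Y, rowvar=False))           # approaches M as the sample grows
```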
How to find a valid correlation matrix
In some applications (e.g., building data models from only partially observed data) one wants to find the "nearest" correlation matrix to a given symmetric matrix (e.g., of observed covariances). In 2002, Higham[5] formalized the notion of nearness using a weighted Frobenius norm and provided a method for computing the nearest correlation matrix.
Complex random vectors
The variance of a complex scalar-valued random variable with expected value μ is conventionally defined using complex conjugation:
$$\operatorname{var}(z) = \operatorname{E}\left[(z - \mu)(z - \mu)^{*}\right]$$
where the complex conjugate of a complex number $z$ is denoted $z^{*}$; thus the variance of a complex number is a real number.
If $Z$ is a column vector of complex-valued random variables, then the conjugate transpose is formed by both transposing and conjugating. In the following expression, the product of a vector with its conjugate transpose results in a square matrix, as its expectation:
$$\operatorname{E}\left[(Z - \mu)(Z - \mu)^{\dagger}\right],$$
where $Z^{\dagger}$ denotes the conjugate transpose, which is applicable to the scalar case since the transpose of a scalar is still a scalar. The matrix so obtained will be Hermitian positive-semidefinite,[6] with real numbers on the main diagonal and complex numbers off-diagonal.
Estimation
If $\mathbf{M}_{\mathbf{X}}$ and $\mathbf{M}_{\mathbf{Y}}$ are centred data matrices of dimension n-by-p and n-by-q respectively, i.e. with n rows of observations of p and q columns of variables, from which the column means have been subtracted, then, if the column means were estimated from the data, sample covariance matrices $\mathbf{Q}_{\mathbf{X}}$ and $\mathbf{Q}_{\mathbf{XY}}$ can be defined to be
$$\mathbf{Q}_{\mathbf{X}} = \frac{1}{n-1}\mathbf{M}_{\mathbf{X}}^{\mathrm{T}}\mathbf{M}_{\mathbf{X}}, \qquad \mathbf{Q}_{\mathbf{XY}} = \frac{1}{n-1}\mathbf{M}_{\mathbf{X}}^{\mathrm{T}}\mathbf{M}_{\mathbf{Y}}$$
or, if the column means were known a priori,
$$\mathbf{Q}_{\mathbf{X}} = \frac{1}{n}\mathbf{M}_{\mathbf{X}}^{\mathrm{T}}\mathbf{M}_{\mathbf{X}}, \qquad \mathbf{Q}_{\mathbf{XY}} = \frac{1}{n}\mathbf{M}_{\mathbf{X}}^{\mathrm{T}}\mathbf{M}_{\mathbf{Y}}$$
These empirical sample covariance matrices are the most straightforward and most often used estimators of the covariance matrices, but other estimators also exist, including regularised or shrinkage estimators, which may have better properties.
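Both estimators are one-liners on a centred data matrix; a small NumPy sketch (all names ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 500, 4
X = rng.normal(size=(n, p))

Mx = X - X.mean(axis=0)                   # centred data matrix (column means estimated)
Q_unbiased = Mx.T @ Mx / (n - 1)          # divide by n-1 when means come from the data

true_mean = np.zeros(p)                   # here the column means are known a priori
Mx_known = X - true_mean
Q_known = Mx_known.T @ Mx_known / n       # divide by n when means are known

assert np.allclose(Q_unbiased, np.cov(X, rowvar=False))
```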
As a parameter of a distribution
If a vector of n possibly correlated random variables is jointly normally distributed, or more generally elliptically distributed, then its probability density function can be expressed in terms of the covariance matrix.
In financial economics
The covariance matrix plays a key role in financial economics, especially in portfolio theory and its mutual fund separation theorem and in the capital asset pricing model. The matrix of covariances among various assets' returns is used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification.
See also
Covariance mapping
Gramian matrix
Eigenvalue decomposition
Quadratic form (statistics)
Weisstein, Eric W., "Covariance Matrix", MathWorld.
Evaluation of multi-hazard map produced using MaxEnt machine learning technique
Narges Javidan1,
Ataollah Kavian1,
Hamid Reza Pourghasemi2,
Christian Conoscenti3,
Zeinab Jafarian4 &
Jesús Rodrigo-Comino5,6
Scientific Reports volume 11, Article number: 6496 (2021) Cite this article
Natural hazards are diverse and uneven in time and space, and understanding their complexity is key to saving human lives and conserving natural ecosystems. Condensing the outputs obtained after each modelling analysis is key to presenting the results to stakeholders, land managers, and policymakers. So, the main goal of this survey was to present a method to synthesize three natural hazards in one multi-hazard map and to evaluate it for hazard management and land use planning. To test this methodology, we took as study area the Gorganrood Watershed, located in the Golestan Province (Iran). First, an inventory map of three different types of hazards, including floods, landslides, and gullies, was prepared using field surveys and different official reports. To generate the susceptibility maps, a total of 17 geo-environmental factors were selected as predictors using the MaxEnt (Maximum Entropy) machine learning technique. The accuracy of the predictive models was evaluated by drawing receiver operating characteristic (ROC) curves and calculating the area under the ROC curve (AUCROC). The MaxEnt model not only performed superbly in the degree of fitting, but also obtained significant results in predictive performance. The variable importance for the three studied types of hazards showed that river density, distance from streams, and elevation were the most important factors for floods, respectively. Lithological units, elevation, and annual mean rainfall were relevant for detecting landslides. On the other hand, annual mean rainfall, elevation, and lithological units were used for gully erosion mapping in this study area. Finally, by combining the flood, landslide, and gully erosion susceptibility maps, an integrated multi-hazard map was created. The results demonstrated that 60% of the area is subject to hazards, with landslides reaching a proportion of up to 21.2% of the whole territory. We conclude that this type of multi-hazard map may be a useful tool for local administrators to identify areas susceptible to hazards at large scales, as we demonstrated in this research.
Natural disasters are serious threats to human life and property all over the world. Preventing natural catastrophes is not conceivable, yet their drawbacks can be alleviated by developing suitable preparation plans and mitigation measures1. Considerable morphological changes in landforms due to active tectonics or climate change can constrain human activities2,3,4,5,6,7,8,9. However, humans too can drastically modify natural ecosystems in negative ways. For example, deforestation, non-sustainable agricultural management, or human-made constructions can increase soil mobilization and the transportation of sediments, resulting in extreme land degradation processes10,11,12,13. Studying land degradation processes in combination allows us to understand environmental issues and threats that are difficult to assess individually because of their complexity.
Events such as gully erosion, landslides, and floods are physical phenomena, active over geological times but uneven in time and space14,15,16,17. They are considered hazard events, whether induced by humans or not, and all of them are key global issues threatening human life, resources, and goods18,19,20,21. Moreover, they have different drawbacks in different places and, because of their correlated consequences, these catastrophes have adverse long-term effects. When these consequences have a considerable impact on human life and activities, they become natural disasters23,24. Since human interventions in natural ecosystems can trigger catastrophes that endanger human life and have significant economic consequences, the awareness of society is vital to reduce them25,26,27. Mitigating the effects of potential catastrophes and preparing the proper infrastructure for tackling them requires notably accurate information about the vulnerability and susceptibility of a specific territory to environmental hazards22. In general, natural disasters occur more frequently than the human capability to restore the effects of past events allows28. Hence, it is necessary to plan for and manage natural catastrophes to decrease both the economic losses and the loss of human life. To achieve this goal, it is key to consider natural catastrophe predictive maps throughout the land use planning stages.
Since natural hazards are difficult to predict, most studies focus on a single hazard to be mapped. Unfortunately, however, several hazards often occur at the same time in one place. Therefore, there is a need for integrated studies, although they are more complex and difficult to represent in one synthesized map or article. During the last years, major developments have been made to quantify the feedbacks, mechanisms, and interconnections among different hazards and factors29,30,31. Among the most relevant achievements are the multi-hazard mapping initiative (MMI), started by the Federal Emergency Management Agency (FEMA) to provide multi-hazard advisory maps32,33, and the novel UN framework for catastrophe risk decrement, which sturdily highlights the necessity of a multi-hazard approach34.
These models show the spatial pattern of a natural phenomenon, environmental elements, or some human activities, and provide information on the spatial distribution of natural hazards such as floods, landslides, and erosion; they are important tools for planners and environmental managers to identify susceptible areas and to prioritize their mitigation response efforts28,35,36,37,38. A multi-hazard susceptibility map (MHSM) represents susceptibility and hazard information together on a single combined map. Because of the large number of maps and their probable variance in the area covered by different scales, applying a single hazard map to supply information on every single hazard is complicated for planners39. Alternatively, a MHSM arising from the synthesis of different hazard maps would give proper information on a particular area and could help land planners to analyze all hazards from a holistic point of view. The multi-hazard map is an accurate tool to create awareness in mitigating multiple hazards40 and also for the selection of appropriate land uses and the evaluation of susceptible areas. The United Nations has emphasized the significance of multi-hazard assessment, stating that it "is an essential element of a safer world in the twenty-first century". Nevertheless, analyzing a multi-hazard map is complicated and poses major challenges, as does the analysis of susceptibility41.
In this regard, several studies have focused on multi-hazard evaluation via GIS-based methods that make it possible to analyze various data and to improve natural hazard models for a specific area41,42,43,44. Representative, for example, is the research conducted by Sheikh et al. in the Golestan Province at a large scale to assess multi-hazard-based management using a coupled TOPSIS–Mahalanobis distance approach45. Also, there are several inventive, statistical, and deterministic methods that can be used for a single hazard or even multi-hazards28,46,47,48,49,50,51,52,53,54,55,56. Data-mining models with predictive skills and progressive pattern learning have recently been suggested and can present a good platform to synthesize and analyze the information for the definition of potential hazard areas57,58,59,60,61,62,63. The most common methods proposed in the literature are artificial neural networks64,65, frequency ratio (FR)17,66, logistic regression67, index of entropy68, fuzzy logic69, and multivariate adaptive regression splines31,70. Also, the development of machine learning and statistical techniques, including support vector machines (SVM), random forests (RF), boosted regression trees (BRT), and maximum entropy (MaxEnt), has contributed significantly to the field of natural hazards71,72,73. Among these, maximum entropy has been successfully used for assessing different types of natural hazards such as landslides55,67, floods74, gully erosion75 and soil salinity76 due to its fast and easy implementation and its robust mathematical functions and theoretical background.
So, the main aims of the current research are: (1) to explore the ability of the MaxEnt model to predict the spatial occurrence of flood, landslides, and gully erosion; (2) to better understand the relationships between these processes and their controlling factors; and (3) to design a methodological perspective for preparing a combined multi-hazard map for land use planning and hazard mitigation. To achieve these goals, we present a study case in the Gorganrood Watershed, which has witnessed several landslides, gully erosion, and floods that have been a matter of debate in recent years77,78.
The Gorganrood Watershed is located in the Golestan Province, situated in the north-eastern part of Iran, and covers an area of 10,197 km2. The study area lies between latitudes 36° 34′ and 38° 15′ N and longitudes 54° 5′ and 56° 8′ E (Fig. 1). Topographically, it is characterized by steep slopes, up to 69° in mountainous regions. The central and western parts are generally plain and flat areas, and the elevation ranges between 95 and 3652 m78. The annual mean rainfall is approximately 231–848 mm78. The southern section has a typical mountain climate, and the central and northern regions have a Mediterranean climate. The average minimum and maximum temperatures are 11 and 18.5 °C, respectively80. In the last decade, this area has been challenged with different natural hazards such as erosion, landslides, and floods, and was therefore selected as an appropriate application site for the multi-hazard probability assessment (MHPA).
Location of the study area, sampling points and elevation.
According to national reports, the study area is covered by some prone lithological formations such as dark grey shale, sandstone, and Quaternary deposits78. Regarding infrastructure, the area contains main cities, villages, and 1218 km of national roads which, to some extent, can be exposed to a variety of natural hazard occurrences78. Figure 2 presents some photographs of gully erosion locations (a, b), landslides (c, d), and a flood (e) in the Golestan Province.
Some examples of gully erosion (a, b) and landslides (c, d) in the Golestan Province, Iran. *(a) and (c) were obtained from Google Earth, and (b) and (d) were taken by Narges Javidan.
In this area, one of the deadliest flash floods occurred on 10th August 2001; it left around 300 people dead and 381 injured, and 4000 buildings suffered heavy losses79. Moreover, 99 million cubic meters of sediments were deposited behind the dams, and damage was also done to rangelands, forests, and residential units78. Some authors estimated that 430,000 hectares were affected by erosion caused by gullies, landslides, etc. from 1990 to 2005. In this province, about 5–6 t/ha/year of soil erodes in forest areas80. Also, because of the steep slopes in the study area, landslides are among the most destructive events, often destroying gardens and agricultural land and damaging roads and natural resources55. The resulting loss of soil has imposed substantial costs, reduced agricultural potential, and caused the migration of people from the villages of this region.
Figure 3 illustrates the methodological flowchart of the approach used for the MHPM analysis with the MaxEnt model. The flowchart comprises four main steps: (1) preparing thematic layers (17 geo-environmental conditioning factors); (2) gully erosion, flood, and landslide susceptibility modelling using the MaxEnt machine learning technique; (3) validation of the susceptibility maps using the ROC-AUC curve; and (4) combining the flood, landslide, and gully erosion susceptibility maps to prepare a multi-hazard probability map for land use planning and hazard mitigation.
Flowchart of the methodology used for the MHPM in Gorganrood Watershed, Golestan Province, Iran. *Own elaboration.
Landslides, gully erosion, and flood inventory mapping
A key step for susceptibility mapping is the preparation of an inventory of hazard landforms81. The landslide, gully erosion, and flood inventories for the Gorganrood Watershed were compiled from field investigations and national and regional documents from various organizations, including the Water Resources Organization and the Department of Natural Resources Management of Golestan. Considering that some hazard locations lie in mountainous areas and may be missed by field investigation, Google Earth images were used for landslide identification as well. The inventory map for gully erosion is a collection of occurrences (283 gully locations), whereas the landslide inventory map contains 351 landslide locations. Some authors have highlighted that, by analyzing past documents of flood occurrences, future flood events in an area can be estimated82. So, in this study, a flood inventory map containing 127 flood locations was prepared. In general, a random partition algorithm83,84 was used to separate training points from validation points. In the current study, 70% of each hazard was used in model building (training) and the remaining 30% were used for validation. Three replications and three sample data sets (S1, S2, and S3) were used to perform these processes. These datasets were arranged to evaluate the robustness of the built models and data sensitivity81,85,86. Equal data sets (positives/negatives) are applied, which include all the positive cells (hazard locations) and the same number of randomly inferred negative cells (non-hazard locations).
Flood, landslides, and gully erosion conditioning factors
It is essential to determine the factors affecting different natural hazards and human-made fatalities in order to perform flood, landslide, and gully erosion susceptibility maps separately87. A good understanding of the main hazard-related factors is needed to recognize the susceptible areas. For this aim, the conditioning factors for the different hazards were selected from the literature55,59,84,88,89. In this study, ArcGIS 10.5 (ESRI, USA) and the System for Automated Geoscientific Analyses (SAGA) software were used to produce and display these data layers. For the application of the MaxEnt machine learning model, all the factors were converted to a raster grid with 30 × 30 m grid cells. All the conditioning factors were primarily continuous, and some of them were classified into different categories based on expert knowledge and literature review36,90,91,92.
As in Suppl. Material 1, the predicting factors used in this study for the three different types of hazards are as follows: (a) Digital Elevation Model/elevation (m), (b) slope aspect, (c) slope percent, (d) land use, (e) plan curvature, (f) profile curvature, (g) TWI, (h) lithological units, (i) drainage density (mm), (j) soil texture, (k) distance to streams (m), (l) annual mean rainfall, and (m) relative slope position; for landslides also (n) distance to faults (m), (o) stream power index, and (p) LS factor; and (q) distance to roads (m) was employed for gully erosion. Table 2 also shows the predicting factors used in this work for the three hazards.
The DEM (Digital Elevation Model) of the study area with a 30 m pixel size was produced using digital contour data prepared by the Department of Natural Resources Management of Iran (Suppl. Material 1 a-c). From this DEM, geomorphological layers such as slope93,94,95,96, hillslope aspect97,98,99,100,101,102, and curvature layers103,104 were derived using ArcGIS 10.5 software (ESRI, USA). The slope curvature map was compiled with three categories: convex, concave, and flat. Positive curvature indicates convex (> +0.1), negative curvature indicates concave (< −0.1), and zero curvature represents flat (−0.1 to +0.1). Profile and plan curvature each take positive and negative values, with a different interpretation in each index. Positive and negative values in profile curvature denote convexity (increasing flow velocity) and concavity (reducing flow velocity), respectively. On the contrary, positive and negative values in plan curvature denote concavity (flow convergence) and convexity (flow divergence), respectively54,105. Values close to zero represent neutral curvature in both cases.
Land use/land cover (Suppl. Material 1 d) plays a significant role in the operation of hydrological and geomorphological processes by directly or indirectly influencing evapotranspiration, infiltration, run-off generation, and sediment dynamics102,106. The land use/land cover map of the study area at 1:100,000 scale was obtained from the Natural Resources Office of Golestan Province and modified using Google Earth images. The land use/land cover of the study area comprises lake, residential areas, forest lands, rangelands, dry farming, irrigation farming, rocky lands, and saline lands. Soil texture is generally recognized as a weighty controlling factor in the mechanisms of infiltration and runoff generation and is effective in hazard occurrence107,108,109.
This layer was created by digitizing the soil texture map of Golestan Province (1:100,000 scale) obtained from the Agriculture Department, Iran. The soil texture in the study area comprises sandy-loam, clay-loam, sandy-clay-loam, silty-clay, silty-clay-loam, and silty-loam (Suppl. Material 1 g). The topographic position index (TPI) approach was applied to assess topographic slope position and to automate zone ordination; it creates a single-band raster characterizing quantities measured upon elevation110. It is an algorithm increasingly applied to measure topographic slope positions and displays the corresponding position of each cell (Suppl. Material 1 h).
Moore and Grayson111 and Grabs et al.112 mentioned that the TWI (topographic wetness index) represents the tendency of gravitational forces to move water downslope and the spatial distribution of wetness conditions. This factor was prepared using Eq. (2):
$$TWI=\ln\left(\frac{\alpha}{\tan\beta}\right)$$
where α is the cumulative upslope area draining through a point (per unit contour length) and tan β is the slope angle at the point. In this survey, the TWI map was prepared in SAGA-GIS and its value ranges from 1.20 to 22.92 (Suppl. Material 1 i).
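For illustration, a minimal NumPy sketch of Eq. (2) on toy rasters (array names and values are ours; the study itself computed TWI in SAGA-GIS):

```python
import numpy as np

def twi(spec_catchment_area, slope_deg, eps=1e-6):
    """TWI = ln(alpha / tan(beta)): alpha is the specific catchment area
    (upslope area per unit contour length), beta the local slope angle."""
    beta = np.radians(slope_deg)
    return np.log(spec_catchment_area / (np.tan(beta) + eps))  # eps avoids /0 on flats

# Tiny toy rasters (values in m and degrees, purely illustrative).
alpha = np.array([[120.0, 45.0], [300.0, 15.0]])
slope = np.array([[5.0, 12.0], [2.0, 25.0]])
print(twi(alpha, slope))
```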
Distance to streams is one of the key conditioning factors due to its importance for flood magnitude and the spread of landslides and gully erosion48,113. Proximity layers were produced using the Euclidean distance function in ArcGIS 10.5 software, varying from 0 to 11,720 m for roads (Suppl. Material 1 r), 0–15,080 m for streams (Suppl. Material 1 j), and 0–55,212 m for faults (Suppl. Material 1 s). The roads and rivers were derived from the national topographic map at the scale of 1:50,000, whereas faults were extracted from the geology map at 1:100,000 scale. Based on field studies, landslides are distributed typically near linear features, especially faults and roads. Landslide hazard level is closely related to the proximity to faults and roads, which affect not only surface structures but also terrain permeability100,101,114. Places where water flow concentrates may be appropriate for hosting gullies, and road construction undoubtedly has a strongly negative impact on slope stability115. The drainage density (Suppl. Material 1 k) is also one of the main conditioning factors that strongly contribute to the occurrence of many hazards59. According to Tehrany et al.88, a high drainage density causes a larger surface runoff ratio. The drainage pattern of a region is influenced by different factors such as the structure and nature of the soil, geological formation, infiltration rate, slope degree, and vegetation cover condition83. To convert the drainage network pattern into a measurable quantity, the drainage density was determined using the "line density" extension in ArcGIS 10.5 software. Rainfall-triggered landslides have brought great damage to communication infrastructures, properties, and pasture biomass production116,117,118. The annual mean rainfall map of the Gorganrood Watershed was prepared based on rainfall data extracted from the Regional Water Organization of Golestan Province. This map was created using fifty-three stations and a statistical period of 2001–2016 based on the Inverse Distance Weight (IDW) interpolation method (Eq. 3). The resulting map ranges from 384 to 810 mm/year and was prepared in a raster format of 30 × 30 m in ArcGIS 10.5 as an input layer for the hazard assessment (Suppl. Material 1 l).
$$\lambda_i=\frac{D_i^{-\alpha}}{\sum_{i=1}^{n} D_i^{-\alpha}}$$
where $\lambda_i$ is the weight of point i, $D_i$ is the distance between point i and the unknown point, and α is the weighting power119.
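A small Python sketch of Eq. (3) (the function name and toy gauge data are ours; the study itself performed the interpolation in ArcGIS):

```python
import numpy as np

def idw(xy_stations, values, xy_target, alpha=2.0):
    """Inverse Distance Weighting: lambda_i = D_i^(-alpha) / sum_j D_j^(-alpha)."""
    d = np.linalg.norm(xy_stations - xy_target, axis=1)
    if np.any(d == 0):                       # target coincides with a station
        return values[np.argmin(d)]
    w = d ** -alpha
    w /= w.sum()                             # the lambda_i weights sum to 1
    return np.dot(w, values)

# Toy example: three rain gauges (x, y in km) and annual rainfall (mm/year).
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])
rain = np.array([420.0, 610.0, 550.0])
print(idw(stations, rain, np.array([3.0, 2.0])))
```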
Assuming that discharge is associated with the specific catchment area, the erosive power of water flow can be measured by the stream power index (SPI) (Suppl. Material 1 m)111:
$$SPI=A_s\times\tan\beta$$
where $A_s$ represents the specific catchment area in meters and $\beta$ is the slope gradient in degrees. The SPI index is one of the most important factors controlling slope erosion processes, since the erosive power of running water directly influences river cutting and slope toe erosion120. Areas with high stream power indices have a great potential for erosion because the index represents the potential energy available to entrain sediment121. The relative slope position (RSP) tool can calculate several terrain indices from the digital elevation model (Suppl. Material 1 n); general information on the computational concept can be found in122. The discrepancy between the value of one cell and the average value of the 8 surrounding cells defines the TRI (Terrain Ruggedness Index)51. First, the two input neighborhood rasters (using a 3 × 3 neighborhood for min and max) were produced from the DEM; afterwards, the equation was run in the Raster Calculator (Suppl. Material 1 o). Lithological units (Suppl. Material 1 p) play a dominant role in determining gully erosion and landslides in each area99,102,123,124, because gully erosion is particularly dependent on the lithology properties and the various lithological units demonstrate important differences in landslide instability. Also, lithology is assumed to be a necessary factor in the spatial and temporal variations of drainage basin hydrology125, and lithological units have different susceptibility to active hydrological processes. In this study, the lithological map of the area was produced according to the available geological maps at a scale of 1:100,000 obtained from the Geological Survey Department, Iran. A variety of lithological formations cover the Gorganrood Watershed; they are classified into 24 groups (Table 1).
Table 1 Litology of the Gorganrood Watershed.
The LS (slope-length) factor plays a significant role in soil erosion and the occurrence of natural hazards122 and is known as a parameter used in the RUSLE equation to account for the effect of topography on erosion126. The topographic factor depends on the slope steepness factor (S) and the slope length factor (L) and was estimated based on the slope and specific catchment area as follows127:
$$LS=\left(\frac{A_s}{22.13}\right)^{0.4}\times\left(\frac{\sin\beta}{0.0896}\right)^{1.3}$$
where $A_s$ is the specific catchment area and $\beta$ is the slope in degrees. The LS factor map for the Gorganrood Watershed (Suppl. Material 1 q) was extracted using the SAGA-GIS software122.
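A minimal NumPy sketch of the SPI and LS formulas on toy rasters (names and values are ours; the authors computed these layers in GIS software):

```python
import numpy as np

def spi(spec_catchment_area, slope_deg):
    """Stream power index: SPI = A_s * tan(beta)."""
    return spec_catchment_area * np.tan(np.radians(slope_deg))

def ls_factor(spec_catchment_area, slope_deg):
    """RUSLE topographic factor: LS = (A_s/22.13)^0.4 * (sin(beta)/0.0896)^1.3."""
    beta = np.radians(slope_deg)
    return (spec_catchment_area / 22.13) ** 0.4 * (np.sin(beta) / 0.0896) ** 1.3

A_s = np.array([[120.0, 45.0], [300.0, 15.0]])   # specific catchment area (m)
slope = np.array([[5.0, 12.0], [2.0, 25.0]])     # slope (degrees)
print(spi(A_s, slope))
print(ls_factor(A_s, slope))
```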
Multi-collinearity test
The factors were tested for correlation among them as independent variables (Table 2). When the correlation between two independent variables is considerably high, it creates a problem in the modelling process known as multi-collinearity. The VIF (variance inflation factor) and tolerance are two significant indices for multicollinearity diagnosis. VIF is the reciprocal of tolerance; conversely, tolerance is 1 − R² for the regression of that variable against all the other independent variables, without the dependent variable128. A VIF greater than 5 or 10 and/or a tolerance lower than 0.10 indicates a multicollinearity problem129,130.
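A small self-contained sketch of the VIF/tolerance diagnostics (names and the toy predictors are ours; statistical packages provide equivalent routines):

```python
import numpy as np

def vif_and_tolerance(X):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j
    on all the other predictors; tolerance is its reciprocal."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        tol = 1.0 - r2
        out.append((1.0 / tol, tol))
    return out

rng = np.random.default_rng(4)
elev = rng.normal(size=300)
slope = 0.9 * elev + 0.1 * rng.normal(size=300)   # nearly collinear with elevation
rain = rng.normal(size=300)
for name, (vif, tol) in zip(["elev", "slope", "rain"],
                            vif_and_tolerance(np.column_stack([elev, slope, rain]))):
    print(f"{name}: VIF={vif:.1f}, tolerance={tol:.3f}")  # flag VIF>5 or tol<0.10
```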
Table 2 Predicting factors for the three selected hazards in the study area.
Maximum entropy (MaxEnt) model
The MaxEnt model was applied in the Maxent software (version 13.0.6.0) for modelling landslides, floods, and gully erosion and for the calculation of hazard values. Phillips et al.131 proposed the MaxEnt model for predictive modelling of geographical species distributions based on the most important environmental conditions when presence data are available131,132. Maximum entropy estimation can also be explained from a decision-theoretic viewpoint as a sturdy Bayes estimation. MaxEnt relies on a machine learning response that makes predictions from incomplete data133,134. The MaxEnt output is produced in ASCII format as a continuous prediction of presence ranging from 0 to 1134. For running the MaxEnt model, the validation and training datasets were processed in Excel format, and the conditioning factors were converted from raster to ASCII format, which is required by the Maxent software135. During the model run, a random selection algorithm was used for model training in the calibration phase, with 70% of the datasets randomly selected83. This machine learning technique allows the investigation of the relationship between a dependent variable (landslide, flood, and gully occurrence) and several independent variables (conditioning factors/geo-environmental factors). Details are given in Phillips et al.132.
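The study used the standalone Maxent software; purely as a rough analogue of the presence/background workflow (not the authors' pipeline, and with synthetic data and names of our own), a minimal scikit-learn sketch of fitting a probabilistic susceptibility model on a 70/30 split:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Toy stand-in data: rows = cells, columns = conditioning factors
# (e.g., elevation, rainfall, drainage density); 1 = hazard, 0 = background.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# 70/30 random split, mirroring the training/validation partition in the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
susceptibility = clf.predict_proba(X_te)[:, 1]   # continuous 0-1 score per cell
print(susceptibility[:5])
```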
Assessing the importance of variables using the MaxEnt model
In the current study, sensitivity analyses136,137 have been used as an exploratory technique to define the effect of variable variations on model outputs, allowing a quantitative evaluation of the relative importance of uncertainty sources. To assess the uncertainty of the projected maps in this study, a Jackknife test was executed to investigate the effects of removing any of the conditioning factors on the three susceptibility maps138. The Jackknife test can be used to assess the relative strength of every predictor variable131,138,139. In consonance with the Jackknife test outcomes, variables with zero values (SPI and soil for flood modelling; SPI, LS, profile curvature, and aspect for gully erosion; and TWI for landslides) were eliminated. Therefore, the remaining variables were used to run the final model for all three hazards.
Evaluation of the predictive performance of the three hazard models
The validation step is the most important process of modelling140. The prediction accuracy of the built hazard models was evaluated by the ROC curve. In this approach, the AUC can evaluate the prediction accuracy quantitatively119,141. The ROC curve is a methodical technique that has been used to describe the proficiency of deterministic and probabilistic prediction systems142.
The prediction accuracy of the models based on the AUC value can be classified into three classes of accuracy following the classification proposed by Hosmer & Lemeshow143: AUC thresholds of 0.7, 0.8, and 0.9 were adopted for acceptable, excellent, and outstanding performance, respectively81,89.
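A minimal sketch of this AUC-based grading with scikit-learn (the toy hold-out labels and scores are ours):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: 1 for observed hazard cells, 0 for non-hazard cells (30% hold-out);
# y_score: the model's continuous susceptibility values for the same cells.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.91, 0.78, 0.35, 0.66, 0.42, 0.12, 0.58, 0.49])

auc = roc_auc_score(y_true, y_score)
label = ("outstanding" if auc >= 0.9 else
         "excellent" if auc >= 0.8 else
         "acceptable" if auc >= 0.7 else "poor")
print(f"AUC = {auc:.2f} ({label})")   # Hosmer & Lemeshow style interpretation
```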
The multi-hazard mapping adoption process
The combination of the three hazard maps (flood, gully erosion, and landslides) was used to create the multi-hazard probability map. First, the MaxEnt model was constructed for every hazard considered in this study. Afterwards, the multi-hazard probability map was prepared by synthesizing the three individual susceptibility maps according to their four classes in the ArcGIS 10.5 environment; this multi-hazard susceptibility map was ultimately classified into eight classes.
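One simple way to encode the eight combinations is a per-cell bitmask over the three binary (high-susceptibility) layers; a minimal NumPy sketch (the encoding and toy grids are ours, not necessarily the authors' GIS workflow):

```python
import numpy as np

# Classified rasters: True where a cell falls in the high/very-high class of
# each single-hazard susceptibility map (toy 2x2 grids, names ours).
flood = np.array([[1, 0], [0, 1]], dtype=bool)
gully = np.array([[0, 0], [1, 1]], dtype=bool)
slide = np.array([[1, 0], [0, 1]], dtype=bool)

# Encode the 2^3 = 8 combinations as one integer code per cell:
# 0 = no hazard, 1 = flood only, 2 = gully only, 4 = landslide only,
# 3 = flood+gully, 5 = flood+landslide, 6 = gully+landslide, 7 = all three.
multi = flood.astype(int) + 2 * gully.astype(int) + 4 * slide.astype(int)
print(multi)
```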
Results of the multi-collinearity test
According to the results in Table 3a,b,c, the TRI (Terrain Ruggedness Index) was eliminated for landslides, floods, and gully erosion, having VIF > 5 and tolerance < 0.1. The other factors were used in the subsequent analyses, and the results show that there is no multi-collinearity among the remaining independent variables in the present study.
Table 3 The results of multi-collinearity test.
Application of the MaxEnt model
The susceptibility map for each hazard and each dataset in the study area was produced using both continuous and categorical data sets. Finally, the MaxEnt model was built using all three training groups of the sample data sets (i.e., S1, S2, S3) in the training step. Susceptibility maps of floods, gully erosion, and landslides in the study area are presented in Fig. 4. Four susceptibility classes, low (L), moderate (M), high (H), and very high (VH), were derived from the output susceptibility maps using the natural breaks classification method.
Susceptibility mapping for (a) floods, (b) gully erosion, and (c) landslides using the MaxEnt model in the study area.
Also, Fig. 5 shows the relative distribution of the averages of the flood, landslide, and gully erosion susceptibility classes for the three categories of sample data sets. Besides, the statistical characteristics of the probabilistic predictions of the three hazards and all sample data sets are shown in Table 4.
Relative distributions of the average of four susceptibility classes for the flood, landslides, and gully erosion susceptibility maps.
Table 4 Statistical characteristics of the probability values obtained from ME models.
Sensitivity and response curves analysis
A sensitivity analysis134,139 was performed to investigate the relative strengths of every predictor variable on the results of predicted maps using the Jackknife test. Suppl. Material 2 shows the results of the Kappa-based Jackknife test using AUC on test data (S1) for flood, landslides, and gully erosion.
Suppl. Material 3 illustrates the response curves of one data set (S1) for some of the important conditioning factors used for three hazards (landslides, flood, and gully erosion) assessment.
The MaxEnt model performance
The results of the MaxEnt model (based on all three sample data sets) show different ranges of hazard susceptibility values. The results of the goodness-of-fit are shown in Table 5. Figure 6a–c shows the AUROC values for the three forecasted hazard maps based on one data set. The hazard samples applied to model evaluation must be different from the hazard points used for training. In this current work, 30% of the hazard occurrence points (30% of the flood, landslide, and gully erosion samples) were assigned to the validation phase (Table 5). The outcomes of the robustness assessment according to AUROC are illustrated in Fig. 7.
ROC curves of one data set (S1) for three hazards (a) landslides, (b) floods, and (c) gullies.
Robustness of the MaxEnt model in training and validation steps based on AUC.
Table 5 Predictive performance of models based on three sample data sets (S1, S2, and S3) in the training and validation step.
Multi-hazard probability map (MHPM)
The individual probability maps (i.e., gully erosion, floods, and landslides) created using the MaxEnt model were used to produce the multi-hazard susceptibility map by synthesizing the three hazard maps, which was finally classified into eight classes: landslide-gully-flood, landslide-flood, landslide-gully, gully-flood, gully erosion, floods, landslides, and no hazard.
Figure 8 shows the MHPM of the Gorganrood Watershed for the three hazards. The results demonstrate that 40% of the area is located in the low to very low susceptibility zones, whereas 60% of the area is subject to flood, landslide, and gully occurrence. It is also clear that landslides are the most widespread hazard (21.2%) in the Gorganrood Watershed (i.e., the area covered by landslides is larger than that of the other hazards) (Fig. 9).
Multi-hazard probability map (MHPM).
Percent of association of each hazard in the MHPM.
Mapping hazards using combined diverse multi-risks
In the current study, after applying the multi-collinearity test, the susceptibility map for each hazard and each dataset was generated using the retained independent variables. Four susceptibility classes were derived from the output susceptibility maps using the natural breaks classification method47,144.
As Fig. 6 shows, for flood susceptibility mapping according to the MaxEnt model, less than 6% of the study area has high or very high susceptibility, whereas for landslides approximately 14.8% and 8.1% of the study area were classified as high and very high, respectively. For the gully erosion susceptibility map, 8.2% of the pixels in the study area fell into the high and very high susceptibility classes. For all three hazard models, the highest percentage belongs to the low class.
Two techniques, one-by-one predictor-removal (OOPR) and only-one-predictor-involved (OOPI), based on the Jackknife test were used to identify the key hazard predictors. When isolated, the most influential predictor variables are drainage density, distance to streams, and DEM/elevation for floods; DEM/elevation, lithological units, and annual mean rainfall for landslides; and annual mean rainfall, DEM/elevation, and lithological units for gully erosion. In other words, elevation was the main controlling factor among all variables for the three hazards, whilst the lithological units were identified as the most important independent variable for gully erosion and landslides.
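A small sketch of the OOPI/OOPR bookkeeping follows; a logistic model stands in for MaxEnt, the predictor names are hypothetical, and the data are synthetic, so only the logic of the two Jackknife variants is illustrated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)
names = ["elevation", "rainfall", "lithology_score"]  # illustrative names

for i, name in enumerate(names):
    # OOPI: train with only predictor i; OOPR: train with all but i.
    only = LogisticRegression(max_iter=1000).fit(X[:, [i]], y)
    rest = np.delete(X, i, axis=1)
    without = LogisticRegression(max_iter=1000).fit(rest, y)
    auc_only = roc_auc_score(y, only.predict_proba(X[:, [i]])[:, 1])
    auc_without = roc_auc_score(y, without.predict_proba(rest)[:, 1])
    print(f"{name}: OOPI AUC={auc_only:.3f}, OOPR AUC={auc_without:.3f}")
```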
Therefore, as stated by Convertino et al.145, sensitivity analysis (SA) allows modelers and managers to identify the conditioning factors (i.e., input variables) that reduce the variance of the model output the most, which is vital for understanding the model structure.
According to Suppl. Material 3, in the response curve of drainage density, the flood values increased drastically with increasing river density; most floods occurred at drainage densities between 0.3 and 0.6 km/km2. In other words, flood susceptibility increased in areas with very high drainage density (i.e., increased runoff transport capacity of the drainage network). Based on the outcomes of previous studies, high drainage density leads to a high surface runoff ratio. Several factors affect the drainage pattern of an area, including slope degree, infiltration rate, vegetation cover, the structure and nature of the soil, and the geological formation83.
Regarding distance to streams, floods occurred very close to rivers owing to the upsurge in flood susceptibility there. It is one of the important predictor factors because of its influence on flood velocity and magnitude. Owing to the high concentration of flow around the stream network, the chance of flood incidence is greatest close to the streams113. Several studies have highlighted that elevation, drainage density, and distance to streams are the most significant predictors of flood occurrence103,145,146.
The flood values decreased in places with high elevation. This follows from the response curves of elevation, which demonstrate that floods happen in low-lying areas and plains. In areas with low elevation, an abundant amount of water enters the stream network and results in flood incidents147. The natural behavior of flooding, which occurs mostly in flat areas rather than in highly elevated regions, supports the current results. Correspondingly, based on148, these areas have a larger upslope contributing area and more runoff production. Most landslide occurrences happened at elevations between 300 and 500 m, where steep slopes are located. Elevation did not contribute directly to landslide occurrence, but other factors such as precipitation and erosion processes, both logically affected by elevation, are relevant and should be considered too149. As mentioned by other authors150, lithological unit structures and attributes constitute fundamental factors in landslide events. Different lithological units, depending on their types and characteristics, have dissimilar landslide potentials. In the response curves of lithological units to landslides, most of the hazard occurred in group 21, with the Dorud Formation, red sandstone, and shale with subordinate sandy limestone.
The underground hydrostatic level and water pressure surge because of rainfall151. Rainfall is one of the significant operating predictors in landslide mapping because landslide initiation is strongly connected with it151,152. According to the results, the maximum percentage of landslides is concentrated in areas with rainfall ranging from 650 to 690 mm. The gully extension hazard rises with high amounts of precipitation153. The response curve of rainfall to gullies demonstrates that the maximum amount of gully erosion happened at rainfall between 450 and 500 mm.
Moreover, elevation has an essential role in the spatial variation of hydrological conditions, for example the runoff production rate, soil moisture, surface flow, and slope stability63. Based on the survey of regions with elevation below 200 m and flat areas, our results show that they are more prone to this type of erosion, which can be ascribed to the vegetation cover154. Consequently, areas with high elevation showed a lower probability of gully erosion incidence.
Lithological units are considered the most influential predictors of gully occurrence155, because the parent materials have different hydraulic conductivities and shear stabilities. In this study, the gully occurrences within the group 1 and group 13 lithological units, with the Sanganeh and Sarcheshmeh Formations respectively, confirmed this point.
Consequently, a relatively higher contribution to susceptibility prediction was obtained for some categorical data sets. However, the lesser contributions of other categorical layers do not mean that those layers were unusable for susceptibility mapping. As discussed in previous research139,156, all of these categorical layers affected the final prediction result and were therefore considered simultaneously with the continuous data sets.
The goodness-of-fit results in Table 5 confirm that the AUC-ROC performance values for the applied models range from 0.970 to 0.974 (average = 0.972) for floods and from 0.913 to 0.919 (average = 0.915) for landslides, while for gully erosion the minimum AUC-ROC value is 0.916 and the maximum is 0.926 (average = 0.920). Consequently, high proficiency was achieved for all three natural hazards studied in this research.
In the present work, 30% of the hazard occurrence points were reserved for the validation phase (Table 5). The outcomes of the MaxEnt model demonstrate that the AUROC ranges between 0.933 and 0.936 (average = 0.935) for floods. For landslides, the AUROC values vary from 0.870 to 0.885 (average = 0.879), whereas for gully erosion they range from 0.918 to 0.922 (average = 0.920). There is strong agreement between the output hazard maps of the MaxEnt model and the distribution of hazard occurrence points. According to Hosmer & Lemeshow143, the MaxEnt model revealed excellent performance for all datasets81,89. Hence, based on the estimated AUROC values, the employed model showed reasonable prediction proficiency in forecasting the hazard spatial potentiality maps. Considering that the accuracy values are almost identical when the data sets change, there were only minor variations, and the model was robust and entirely stable for the three hazards.
The robustness outcomes according to AUROC are illustrated in Fig. 7. As can be observed, the MaxEnt model for landslides had the maximum robustness value (0.015) in the validation step, indicating the minimum stability and robustness in comparison with the other hazards (0.004 and 0.003). Furthermore, from a model stability viewpoint, the almost perfect agreement between training and validation AUC values demonstrates that the applied model is highly stable and that over-fitting has been avoided157. Based on this result, it is evident that the MaxEnt model can be applied as an efficient machine learning technique in susceptibility assessment for floods, landslides, and gully erosion.
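Reading Fig. 7 as the spread of AUC across the three sample sets (an assumed interpretation), the robustness number quoted above can be reproduced by a one-line helper, sketched below.

```python
def robustness(aucs):
    """Robustness as the spread of AUC across the S1-S3 sample sets;
    a smaller spread indicates a more stable model."""
    return max(aucs) - min(aucs)

# Landslide validation AUCs: min/max from the text, middle value illustrative.
print(robustness([0.870, 0.882, 0.885]))  # -> 0.015
```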
These results are consistent with the studies of other authors75,157,158,159,160, who aimed to prepare susceptibility maps of natural disasters.
In this study, we conclude that MaxEnt is useful for modelling natural hazards (e.g., flood, landslide, and gully erosion occurrence) with nonlinear relationships. This machine learning model does not need prior elimination of outliers or data transformation and can fit complex nonlinear relationships between hazard conditioning factors and hazard susceptibility. It is also able to automatically analyze interaction effects among the conditioning factors (i.e., predictors)54. Our results demonstrate that 40% of the area is located in the low to very low susceptibility zones, whereas 60% of the area is subject to flood, landslide, and gully occurrence. Landslides are the most widespread hazard (21.2%) in the Gorganrood Watershed (Fig. 9).
The most vulnerable areas for human activities are those belonging to more than one hazard class. Flat areas with low elevation are more prone to gully erosion and floods in the Gorganrood Watershed, whilst higher-elevation areas with steep slopes are more susceptible to landslides.
In a similar study, Pourghasemi et al.160 employed the SWARA-ANFIS-GWO model to produce a multi-hazard susceptibility map of Lorestan Province; most sections were susceptible to landslide and flood incidents together (33.7%), although 17.1% of the study region was in the no-hazard class. In other studies, various models have been employed for multi-hazard susceptibility mapping. The surveyed hazards included floods, earthquakes, and landslides160; landslides, earthquakes, and floods161; landslides, floods, and forest fires162; and landslides, floods, earthquakes, forest fires, subsidence, and drought163, and several further studies are underway to develop multi-hazard risk28,35,36,37.
Conclusions and final remarks
Using the MaxEnt machine learning technique in the Gorganrood Watershed, we generated individual flood, landslide, and gully erosion susceptibility maps and, subsequently, combined them to estimate a multi-hazard probability map (MHPM). The results showed that almost 40% of the area lies in the low to very low susceptibility zones, but 60% of the area is subject to flood, landslide, and gully occurrences. Landslides are the most common hazard in the Gorganrood Watershed (21.2%). Hazard and risk strategies should be put in place before future occurrences. This research therefore demonstrates that the MHPM approach may be applied in other territories for land use planning and hazard mitigation, providing new opportunities for insurance purposes. To date, there are still no libraries of multi-hazard probability maps available for the three hazards considered in the current study (i.e., flood, landslides, and gully erosion). A conformity procedure is needed to elaborate and design mitigation practices, provided that individual actions are harmonized with existing policies. A large number of individual hazard maps with different spatial coverages and resolutions is confusing for planners; an integrated multi-hazard probability map therefore provides homogenized data about frequent natural hazards in an area.
Risk and hazard management should be taken into account before studying disaster management. The flood, gully erosion, and landslide susceptibility maps and the hazard-based probability mapping provided here can be valuable platforms for starting such activities and for further land use planning. Multi-hazard evaluation makes it possible to reduce hazard risk and gives fundamental information to stakeholders; it can also provide a comprehensive vision of the changes happening in the environment. Accordingly, a multi-hazard probability map can be applied for comprehensive and integrated land use planning and, consequently, for watershed management.
Mahendra, R., Mohanty, P., Bisoyi, H., Kumar, T. S. & Nayak, S. Assessment and management of coastal multi-hazard vulnerability along the Cuddalore-Villupuram, east coast of India using geospatial techniques. Ocean Coast. Manag. 54, 302–311 (2011).
Cerdà, A. Effect of climate on surface flow along a climatological gradient in Israel: a field rainfall simulation approach. J. Arid Environ. 38, 145–159 (1998).
Bathrellos, G. D., Skilodimou, H. D. & Maroukian, H. The spatial distribution of Middle and Late Pleistocene cirques in Greece. Geogr. Ann. Ser. A Phys. Geogr. 96, 323–338 (2014).
Skilodimou, H. D., Bathrellos, G. D., Maroukian, H. & Gaki-Papanastassiou, K. Late Quaternary evolution of the lower reaches of Ziliana stream in south Mt. Olympus (Greece). Geogr. Fisica Din. Quatern. 37, 43–50 (2014).
Rodrigo-Comino, J. et al. Soil science challenges in a new era: a transdisciplinary overview of relevant topics. Air Soil Water Res. 13, 1178622120977491 (2020).
García-Ruiz, J. M. Why Geomorphology is a Global Science (2015).
Ochoa-Cueva, P., Fries, A., Montesinos, P., Rodríguez-Díaz, J. A. & Boll, J. Spatial estimation of soil erosion risk by land-cover change in the Andes of southern Ecuador. Land Degrad. Dev. 26, 565–573 (2015).
Serrano-Muela, M. P. et al. An exceptional rainfall event in the central western Pyrenees: spatial patterns in discharge and impact. Land Degrad. Dev. 26, 249–262 (2015).
Torres, L., Abraham, E. M., Rubio, C., Barbero-Sierra, C. & Ruiz-Pérez, M. Desertification research in Argentina. Land Degrad. Dev. 26, 433–440 (2015).
Cerdà, A. Soil water erosion on road embankments in eastern Spain. Sci. Total Environ. 378, 151–155. https://doi.org/10.1016/j.scitotenv.2007.01.041 (2007).
Rodrigo-Comino, J., Terol, E., Mora, G., Gimenez-Morera, A. & Cerdà, A. Vicia sativa Roth can reduce soil and water losses in recently planted vineyards (Vitis vinifera L.). Earth Syst. Environ. 1, 2. https://doi.org/10.1007/s41748-020-00191-5 (2020).
Vorlaufer, T., Falk, T., Dufhues, T. & Kirk, M. Payments for ecosystem services and agricultural intensification: Evidence from a choice experiment on deforestation in Zambia. Ecol. Econ. 141, 95–105. https://doi.org/10.1016/j.ecolecon.2017.05.024 (2017).
Kavian, A., Hoseinpoor Sabet, S., Solaimani, K. & Jafari, B. Simulating the effects of land use changes on soil erosion using RUSLE model. Geocarto Int. 32(1), 97–111 (2017).
Achour, Y. & Pourghasemi, H. R. How do machine learning techniques help in increasing accuracy of landslide susceptibility maps?. Geosci. Front. 11, 871–883. https://doi.org/10.1016/j.gsf.2019.10.001 (2020).
Arnaud, P., Bouvier, C., Cisneros, L. & Dominguez, R. Influence of rainfall spatial variability on flood prediction. J. Hydrol. 260, 216–230 (2002).
Castillo, C. & Gómez, J. A. A century of gully erosion research: urgency, complexity and study approaches. Earth Sci. Rev. 160, 300–319. https://doi.org/10.1016/j.earscirev.2016.07.009 (2016).
Kelarestaghi, A. & Ahmadi, H. Landslide susceptibility analysis with a bivariate approach and GIS in Northern Iran. Arab. J. Geosci. 2(1), 95–101 (2009).
Braud, I. et al. Flash floods, hydro-geomorphic response and risk management. J. Hydrol. Flash Floods Hydro-Geomorphic Response Risk Manag. 541, 1–5. https://doi.org/10.1016/j.jhydrol.2016.08.005 (2016).
Korup, O., Densmore, A. L. & Schlunegger, F. The role of landslides in mountain range evolution. Geomorphology 120, 77–90. https://doi.org/10.1016/j.geomorph.2009.09.017 (2010).
Martínez-Casasnovas, J. A., Ramos, M. C. & García-Hernández, D. Effects of land-use changes in vegetation cover and sidewall erosion in a gully head of the Penedès region (northeast Spain). Earth Surf. Proc. Land. 34, 1927–1937. https://doi.org/10.1002/esp.1870 (2009).
Kavian, A. et al. Assessing the hydrological effects of land-use changes on a catchment using the Markov chain and WetSpa models. Hydrol. Sci. J. 65(15), 2604–2615 (2020).
Cutter, S. L., Mitchell, J. T. & Scott, M. S. Revealing the vulnerability of people and places: a case study of Georgetown County, South Carolina. Ann. Assoc. Am. Geogr. 90, 713–737 (2000).
Alcántara-Ayala, I. Geomorphology, natural hazards, vulnerability and prevention of natural disasters in developing countries. Geomorphology 47, 107–124 (2002).
Soulard, C. E., Acevedo, W., Stehman, S. V. & Parker, O. P. Mapping extent and change in surface mines within the United States for 2001 to 2006. Land Degrad. Dev. 27, 248–257 (2016).
Martínez-Graña, A. M., Goy, J. L. & Zazo, C. Cartographic procedure for the analysis of aeolian erosion hazard in natural parks (Central System, Spain). Land Degrad. Dev. 26, 110–117 (2015).
Strohmeier, S., Laaha, G., Holzmann, H. & Klik, A. Magnitude and occurrence probability of soil loss: a risk analytical approach for the plot scale for two sites in lower Austria. Land Degrad. Dev. 27, 43–51 (2016).
Weinzierl, T., Wehberg, J., Böhner, J. & Conrad, O. Spatial assessment of land degradation risk for the Okavango River Catchment Southern Africa. Land Degrad. Dev. 27, 281–294 (2016).
Guzzetti, F., Carrara, A., Cardinali, M. & Reichenbach, P. Landslide hazard evaluation: a review of current techniques and their application in a multi-scale study Central Italy. Geomorphology 31, 181–216 (1999).
Friedel, M. J. Modeling hydrologic and geomorphic hazards across post-fire landscapes using a self-organizing map approach. Environ. Model. Softw. 26, 1660–1674 (2011).
Mazzorana, B., Comiti, F. & Fuchs, S. A structured approach to enhance flood hazard assessment in mountain streams. Nat. Hazards 67, 991–1009 (2013).
Javidan, N., Kavian, A., Pourghasemi, H. R., Conoscenti, C. H. & Jafarian, Z. Gully erosion susceptibility mapping using multivariate adaptive regression splines replications and sample size scenarios. Water 11, 1–21. https://doi.org/10.3390/w11112319 (2019).
FEMA (Federal Emergency Management Agency). Multi-Hazard Identification and Risk Assessment. US Gov. Print (1997).
FEMA. HAZUS-MH MR3 Technical Manual: Multi-Hazard Loss Estimation Methodology, Earthquake Model (2003).
UNISDR & CRED. The Human Cost of Natural Disasters: A Global Perspective (2015).
Bathrellos, G. D., Kalivas, D. & Skilodimou, H. D. GIS-based landslide susceptibility mapping models applied to natural and urban planning in Trikala Central Greece. Estud. Geol. 65, 49–65 (2009).
Das, H., Sonmez, H., Gokceoglu, C. & Nefeslioglu, H. Influence of seismic acceleration on landslide susceptibility maps: a case study from NE Turkey (the Kelkit Valley). Landslides 10, 433–454 (2013).
Youssef, A. M. Landslide susceptibility delineation in the Ar-Rayth area, Jizan, Kingdom of Saudi Arabia, using analytical hierarchy process, frequency ratio, and logistic regression models. Environ. Earth Sci. 73, 8499–8518 (2015).
Chousianitis, K. et al. Assessment of earthquake-induced landslide hazard in Greece: from arias intensity to spatial distribution of slope resistance demandassessment of earthquake-induced landslide hazard in Greece. Bull. Seismol. Soc. Am. 106, 174–188 (2016).
Bender, S. Primer on natural hazard management in integrated regional development planning. Organization of American States, Department of Regional Development and Environment. Executive Secretariat for Economic and Social Affairs, Washington, DC (1991).
USAID. Primer on Natural Hazard Management in Integrated Regional Development Planning. Department of Regional Development and Environment Executive Secretariat for Economic and Social Affairs Organization of American States. With Support from the Office of Foreign Disaster Assistance United States Agency for International Development Washington, D.C. (Chapter 6) (1991).
Kappes, M. S., Keiler, M., von Elverfeldt, K. & Glade, T. Challenges of analyzing multi-hazard risk: a review. Nat. Hazards 64, 1925–1958 (2012).
El Morjani, Z. E. A., Ebener, S., Boos, J., Ghaffar, E. A. & Musani, A. Modelling the spatial distribution of five natural hazards in the context of the WHO/EMRO Atlas of Disaster Risk as a step towards the reduction of the health impact related to disasters. Int. J. Health Geogr. 6, 8 (2007).
FEMA (Federal Emergency Management Agency). Using HAZUS-MH for risk assessment. HAZU-MH risk assessment and user group series. FEMA 433 (2004).
Schmidt, J. et al. Quantitative multi-risk analysis for natural hazards: a framework for multi-risk modelling. Nat. Hazards 58, 1169–1192 (2011).
Sheikh, V., Kornejady, A. & Ownegh, M. Application of the coupled TOPSIS–Mahalanobis distance for multi-hazard-based management of the target districts of the Golestan Province Iran. Nat. Hazards 96, 1335–1365 (2019).
Assimakopoulos, J., Kalivas, D. & Kollias, V. A GIS-based fuzzy classification for mapping the agricultural soils for N-fertilizers use. Sci. Total Environ. 309, 19–33 (2003).
Ayalew, L., Yamagishi, H. & Ugawa, N. Landslide susceptibility mapping using GIS-based weighted linear combination, the case in Tsugawa area of Agano River, Niigata Prefecture Japan. Landslides 1, 73–81 (2004).
Ayalew, L. & Yamagishi, H. The application of GIS-based logistic regression for landslide susceptibility mapping in the Kakuda-Yahiko Mountains Central Japan. Geomorphology 65, 15–31 (2005).
Fernández, D. & Lutz, M. Urban flood hazard zoning in Tucumán Province, Argentina, using GIS and multicriteria decision analysis. Eng. Geol. 111, 90–98 (2010).
Peng, S.-H., Shieh, M.-J. & Fan, S.-Y. Potential hazard map for disaster prevention using GIS-based linear combination approach and analytic hierarchy method. J. Geogr. Inf. Syst. 4, 403 (2012).
Karaman, H. & Erden, T. Net earthquake hazard and elements at risk (NEaR) map creation for city of Istanbul via spatial multi-criteria decision analysis. Nat. Hazards 73, 685–709 (2014).
Althuwaynee, O. F., Pradhan, B., Park, H.-J. & Lee, J. H. A novel ensemble bivariate statistical evidential belief function with knowledge-based analytical hierarchy process and multivariate statistical logistic regression for landslide susceptibility mapping. CATENA 114, 21–36 (2014).
Karaman, H. Integrated multi-hazard map creation by using AHP and GIS. Geomatics Engineering Department, Istanbul Technical University, Recent Advances on Environmental and Life Science (2015).
Kornejady, A., Ownegh, M. & Bahremand, A. Landslide susceptibility assessment using maximum entropy model with two different data sampling methods. CATENA 152, 144–162 (2017).
Kornejady, A., Ownegh, M., Rahmati, O. & Bahremand, A. Landslide susceptibility assessment using three bivariate models considering the new topo-hydrological factor: HAND. Geocarto Int. 33, 1155–1185 (2018).
Chen, W. et al. Spatial prediction of landslide susceptibility using an adaptive neuro-fuzzy inference system combined with frequency ratio, generalized additive model, and support vector machine techniques. Geomorphology 297, 69–85 (2017).
Devkota, K. C. et al. Landslide susceptibility mapping using certainty factor, index of entropy and logistic regression models in GIS and their comparison at Mugling-Narayanghat road section in Nepal Himalaya. Nat. Hazards 65, 135–165 (2013).
Pourghasemi, H. R., Jirandeh, A. G., Pradhan, B., Xu, C. & Gokceoglu, C. Landslide susceptibility mapping using support vector machine and GIS at the Golestan Province Iran. J. Earth Syst. Sci. 122, 349–369 (2013).
Pourghasemi, H., Moradi, H., Aghda, S. F., Gokceoglu, C. & Pradhan, B. GIS-based landslide susceptibility mapping with probabilistic likelihood ratio and spatial multi-criteria evaluation models (North of Tehran, Iran). Arab. J. Geosci. 7, 1857–1878 (2014).
Chen, W. et al. A novel hybrid artificial intelligence approach based on the rotation forest ensemble and naïve Bayes tree classifiers for a landslide susceptibility assessment in Langao County, China. Geomat. Nat. Hazards Risk 8, 1955–1977 (2017).
Chen, W. et al. A novel ensemble approach of bivariate statistical-based logistic model tree classifier for landslide susceptibility assessment. Geocarto Int. 33, 1398–1420 (2018).
Chen, W. et al. GIS-based landslide susceptibility evaluation using a novel hybrid integration approach of bivariate statistical based random forest method. CATENA 164, 135–149 (2018).
Shirzadi, A. et al. Novel GIS based machine learning algorithms for shallow landslide susceptibility mapping. Sensors 18(11), 1–28. https://doi.org/10.3390/s18113777 (2018).
Conforti, M., Aucelli, P. P., Robustelli, G. & Scarciglia, F. Geomorphology and GIS analysis for mapping gully erosion susceptibility in the Turbolo stream catchment (Northern Calabria, Italy). Nat. Hazards 56, 881–898 (2011).
Park, S., Choi, C., Kim, B. & Kim, J. Landslide susceptibility mapping using frequency ratio, analytic hierarchy process, logistic regression, and artificial neural network methods at the Inje area Korea. Environ. Earth Sci. 68, 1443–1464 (2013).
Tehrany, M. S., Lee, M.-J., Pradhan, B., Jebur, M. N. & Lee, S. Flood susceptibility mapping using integrated bivariate and multivariate statistical models. Environ. Earth Sci. 72, 4001–4015 (2014).
Mousavi, S. Z., Kavian, A., Solaimani, K., Mousavi, S. R. & Shirzadi, A. GIS based spatial prediction of landslide susceptibility using logistic regression model. Geomat. Nat. Hazards Risk 2(1), 33–50 (2011).
Naghibi, S. A., Pourghasemi, H. R., Pourtaghi, Z. S. & Rezaei, A. Groundwater qanat potential mapping using frequency ratio and Shannon's entropy models in the Moghan watershed Iran. Earth Sci. Inform. 8, 171–186 (2015).
Pourghasemi, H. R., Pradhan, B. & Gokceoglu, C. Application of fuzzy logic and analytical hierarchy process (AHP) to landslide susceptibility mapping at Haraz watershed Iran. Nat. Hazards 63, 965–996 (2012).
Gómez Gutiérrez, A., Conoscenti, C., Angileri, S., Rotigliano, E. & Schnabel, S. Using topographical attributes to model the spatial distribution of gullying from two Mediterranean basins: advantages and limitations. Nat. Hazards 10, 291–314 (2015).
Lee, S. Soil erosion assessment and its verification using the universal soil loss equation and geographic information system: a case study at Boun Korea. Environ. Geol. 45, 457–465 (2004).
Catani, M., Dell'Acqua, F. & De Schotten, M. T. A revised limbic system model for memory, emotion and behaviour. Neurosci. Biobehav. Rev. 37, 1724–1737 (2013).
Conoscenti, C. et al. Assessment of susceptibility to earth-flow landslide using logistic regression and multivariate adaptive regression splines: a case of the Belice River basin (western Sicily, Italy). Geomorphology 242, 49–64 (2015).
Siahkamari, S., Haghizadeh, A., Zeinivand, H., Tahmasebipour, N. & Rahmati, O. Spatial prediction of flood-susceptible areas using frequency ratio and maximum entropy models. Geocarto Int. 33, 927–941 (2018).
Zakerinejad, R. & Märker, M. Prediction of Gully erosion susceptibilities using detailed terrain analysis and maximum entropy modeling: a case study in the Mazayejan Plain, Southwest Iran. Geogr. Fisica Din. Quaternaria 37, 67–76 (2014).
Douaik, A., Van Meirvenne, M. & Tóth, T. Soil salinity mapping using spatio-temporal kriging and Bayesian maximum entropy with interval soft data. Geoderma 128, 234–248 (2005).
Saghafian, B., Farazjoo, H., Bozorgy, B. & Yazdandoost, F. Flood intensification due to changes in land use. Water Resour. Manag. 22, 1051–1067 (2008).
[CONRWMGP] Central Office of Natural Resources and Watershed Management in Golestan Province. Detailed action plan. Iran; p. 230 (2009).
Sharifi, F. & Mahdavi, M. Technical report on investigating causes of summer flooding on North-east of Golestan-Iran deputy of watershed management-Iran. Iran. J. Watershed Manag. 60, 85–110 (2001).
Water Resources Company of Golestan [WRCG]. Precipitation and temperature reports; [cited 2013 August 11]. http://www.gsrw.ir/Default.aspx (2013).
Conoscenti, C. et al. Gully erosion susceptibility assessment by means of GIS-based logistic regression: a case of Sicily (Italy). Geomorphology 204, 399–411 (2014).
Manandhar, B. Flood Plain Analysis and Risk Assessment of Lothar Khola. Master of Science Thesis in Watershed Management. Tribhuvan University Institute of Forestry Pokhara, Nepal (2010).
Pourtaghi, Z. S. & Pourghasemi, H. R. GIS-based groundwater spring potential assessment and mapping in the Birjand Township, southern Khorasan Province Iran. Hydrogeol. J. 22, 643–662 (2014).
Rahmati, O., Pourghasemi, H. R. & Melesse, A. M. Application of GIS-based data driven random forest and maximum entropy models for groundwater potential mapping: a case study at Mehran Region Iran. CATENA 137, 360–372 (2016).
Angileri, S. E. et al. Water erosion susceptibility mapping by applying stochastic gradient treeboost to the Imera Meridionale river basin (Sicily, Italy). Geomorphology 262, 61–76 (2016).
Cama, M., Lombardo, L., Conoscenti, C. & Rotigliano, E. Improving transferability strategies for debris flow susceptibility assessment: application to the Saponara and Itala catchments (Messina, Italy). Geomorphology 288, 52–65 (2017).
Kia, M. B. et al. An artificial neural network model for flood simulation using GIS: Johor River Basin Malaysia. Environ. Earth Sci. 67, 251–264 (2012).
Tehrany, M. S., Pradhan, B. & Jebur, M. N. Flood susceptibility mapping using a novel ensemble weights-of-evidence and support vector machine models in GIS. J. Hydrol. 512, 332–343 (2014).
Conoscenti, C. et al. Exploring the effect of absence selection on landslide susceptibility models: a case study in Sicily Italy. Geomorphology 261, 222–235 (2016).
Jiménez-Perálvarez, J., Irigaray, C., El Hamdouni, R. & Chacón, J. Landslide-susceptibility mapping in a semi-arid mountain environment: an example from the southern slopes of Sierra Nevada (Granada, Spain). Bull. Eng. Geol. Environ. 70, 265–277 (2011).
Saponaro, A. et al. Landslide susceptibility analysis in data-scarce regions: the case of Kyrgyzstan. Bull. Eng. Geol. Environ. 74, 1117–1136 (2015).
Jaafari, A., Najafi, A., Pourghasemi, H., Rezaeian, J. & Sattarian, A. GIS-based frequency ratio and index of entropy models for landslide susceptibility assessment in the Caspian forest, northern Iran. Int. J. Environ. Sci. Technol. 11, 909–926 (2014).
Nagarajan, R., Roy, A., Kumar, R. V., Mukherjee, A. & Khire, M. Landslide hazard susceptibility mapping based on terrain and climatic factors for tropical monsoon regions. Bull. Eng. Geol. Environ. 58, 275–287 (2000).
Gallardo-Cruz, J. A., Pérez-García, E. A. & Meave, J. A. β-Diversity and vegetation structure as influenced by slope aspect and altitude in a seasonally dry tropical landscape. Landscape Ecol. 24, 473–482 (2009).
Geroy, I. et al. Aspect influences on soil water retention and storage. Hydrol. Process. 25, 3836–3842 (2011).
Lucà, F., Conforti, M. & Robustelli, G. Comparison of GIS-based gullying susceptibility mapping using bivariate and multivariate statistics: Northern Calabria South Italy. Geomorphology 134, 297–308 (2011).
Ercanoglu, M. & Gokceoglu, C. Assessment of landslide susceptibility for a landslide-prone area (north of Yenice, NW Turkey) by fuzzy approach. Environ. Geol. 41, 720–730 (2002).
Sidle, R. & Ochiai, H. Processes, Prediction, and Land Use. Water Resources Monograph. American Geophysical Union, Washington (2006).
Yalcin, A. GIS-based landslide susceptibility mapping using analytical hierarchy process and bivariate statistics in Ardesen (Turkey): comparisons of results and confirmations. CATENA 72, 1–12 (2008).
Vahidnia, M. H., Alesheikh, A. A., Alimohammadi, A. & Hosseinali, F. A GIS-based neuro-fuzzy procedure for integrating knowledge and data in landslide susceptibility mapping. Comput. Geosci. 36, 1101–1114 (2010).
Poiraud, A. Landslide susceptibility–certainty mapping by a multi-method approach: a case study in the Tertiary basin of Puy-en-Velay (Massif central, France). Geomorphology 216, 208–224 (2014).
Meinhardt, M., Fink, M. & Tünschel, H. Landslide susceptibility analysis in central Vietnam based on an incomplete landslide inventory: comparison of a new method to calculate weighting factors by means of bivariate statistics. Geomorphology 234, 80–97 (2015).
Khosravi, K., Nohani, E., Maroufinia, E. & Pourghasemi, H. R. A GIS-based flood susceptibility assessment and its mapping in Iran: a comparison between frequency ratio and weights-of-evidence bivariate statistical models with multi-criteria decision-making technique. Nat. Hazards 83, 947–987 (2016).
Moghaddam, D. D., Rezaei, M., Pourghasemi, H., Pourtaghie, Z. & Pradhan, B. Groundwater spring potential mapping using bivariate statistical model and GIS in the Taleghan watershed Iran. Arab. J. Geosci. 8, 913–929 (2015).
Jenness, J. DEM surface tools for ArcGIS (2013).
Maestre, F. T. & Cortina, J. Spatial patterns of surface soil properties and vegetation in a Mediterranean semi-arid steppe. Plant Soil 241, 279–291 (2002).
Cosby, B., Hornberger, G., Clapp, R. & Ginn, T. A statistical exploration of the relationships of soil moisture characteristics to the physical properties of soils. Water Resour. Res. 20, 682–690 (1984).
Gyssels, G., Poesen, J., Nachtergaele, J. & Govers, G. The impact of sowing density of small grains on rill and ephemeral gully erosion in concentrated flow zones. Soil Tillage Res. 64, 189–201 (2002).
Vandekerckhove, L., Poesen, J. & Govers, G. Medium-term gully headcut retreat rates in Southeast Spain determined from aerial photographs and ground measurements. CATENA 50, 329–352 (2003).
De Reu, J. et al. Application of the topographic position index to heterogeneous landscapes. Geomorphology 186, 39–49 (2013).
Moore, I. D. & Grayson, R. B. Terrain-based catchment partitioning and runoff prediction using vector elevation data. Water Resour. Res. 27, 1177–1191 (1991).
Grabs, T., Seibert, J., Bishop, K. & Laudon, H. Modeling spatial patterns of saturated areas: a comparison of the topographic wetness index and a dynamic distributed model. J. Hydrol. 373, 15–23 (2009).
Glenn, E. P. et al. Roles of saltcedar (Tamarix spp.) and capillary rise in salinizing a non-flooding terrace on a flow-regulated desert river. J. Arid Environ. 79, 56–65 (2012).
Kamp, U., Growley, B. J., Khattak, G. A. & Owen, L. A. GIS-based landslide susceptibility mapping for the 2005 Kashmir earthquake region. Geomorphology 101, 631–642 (2008).
Jungerius, P., Matundura, J. & Van De Ancker, J. Road construction and gully erosion in West Pokot Kenya. Earth Surf. Process. Landf. 27, 1237–1247 (2002).
Shimizu, M. In International Symposium on Landslides. 5. 771–776.
Lan, H., Zhou, C., Wang, L., Zhang, H. & Li, R. Landslide hazard spatial analysis and prediction using GIS in the Xiaojiang watershed, Yunnan China. Eng. Geol. 76, 109–128 (2004).
Duc, D. M. Rainfall-triggered large landslides on 15 December 2005 in Van Canh district, Binh Dinh province Vietnam. Landslides 10, 219–230 (2013).
Bui, D. T., Pradhan, B., Lofman, O., Revhaug, I. & Dick, O. B. Landslide susceptibility mapping at Hoa Binh province (Vietnam) using an adaptive neuro-fuzzy inference system and GIS. Comput. Geosci. 45, 199–211 (2012).
Nefeslioglu, H. A., Gokceoglu, C. & Sonmez, H. An assessment on the use of logistic regression and artificial neural networks with different sampling strategies for the preparation of landslide susceptibility maps. Eng. Geol. 97, 171–191 (2008).
Kakembo, V., Xanga, W. & Rowntree, K. Topographic thresholds in gully development on the hillslopes of communal areas in Ngqushwa Local Municipality, Eastern Cape South Africa. Geomorphology 110, 188–194 (2009).
Böhner, J. & Selige, T. Spatial Prediction of Soil Attributes Using Terrain Analysis and Climate Regionalisation (2006).
Song, Y. et al. Susceptibility assessment of earthquake-induced landslides using Bayesian network: a case study in Beichuan China. Comput. Geosci. 42, 189–199 (2012).
Zhu, A.-X. et al. An expert knowledge-based approach to landslide susceptibility mapping using GIS and fuzzy logic. Geomorphology 214, 128–138 (2014).
Miller, J. R., Ritter, D. F. & Kochel, R. C. Morphometric assessment of lithologic controls on drainage basin evolution in the Crawford Upland, South-Central Indiana. Am. J. Sci. 290, 569–599 (1990).
Renard, K. G. Predicting Soil Erosion by Water: A Guide to Conservation Planning with the Revised Universal Soil Loss Equation (RUSLE). (United States Government Printing, 1997).
Moore, I. D. & Burch, G. J. Physical basis of the length-slope factor in the Universal Soil Loss Equation. Soil Sci. Soc. Am. J. 50, 1294–1298 (1986).
Farrar, D. E. & Glauber, R. R. Multicollinearity in regression analysis: the problem revisited. Rev. Econ. Stat. 49, 92–107 (1967).
O'brien, R. M. A caution regarding rules of thumb for variance inflation factors. Qual. Quant. 41, 673–690 (2007).
Ozdemir, A. Using a binary logistic regression method and GIS for evaluating and mapping the groundwater spring potential in the Sultan Mountains (Aksehir, Turkey). J. Hydrol. 405, 123–136 (2011).
Phillips, S. J., Anderson, R. P. & Schapire, R. E. Maximum entropy modeling of species geographic distributions. Ecol. Model. 190, 231–259 (2006).
Phillips, S. J., Dudík, M. & Schapire, R. E. In Proceedings of the Twenty-First International Conference on Machine Learning. 83.
Medley, K. A. Niche shifts during the global invasion of the Asian tiger mosquito, Aedes albopictus Skuse (Culicidae), revealed by reciprocal distribution models. Glob. Ecol. Biogeogr. 19, 122–133 (2010).
Moreno, R., Zamora, R., Molina, J. R., Vasquez, A. & Herrera, M. Á. Predictive modeling of microhabitats for endemic birds in South Chilean temperate forests using Maximum entropy (Maxent). Ecol. Inform. 6, 364–370 (2011).
Boubli, J. & De Lima, M. Modeling the geographical distribution and fundamental niches of Cacajao spp. and Chiropotes israelita in Northwestern Amazonia via a maximum entropy algorithm. Int. J. Primatol. 30, 217–228 (2009).
Archer, G., Saltelli, A. & Sobol, I. Sensitivity measures, ANOVA-like techniques and the use of bootstrap. J. Stat. Comput. Simul. 58, 99–120 (1997).
Chen, Y. et al. CaliBayes and BASIS: integrated tools for the calibration, simulation and storage of biological simulation models. Brief Bioinform. 11, 278–289 (2010).
Yost, A. C., Petersen, S. L., Gregg, M. & Miller, R. Predictive modeling and mapping sage grouse (Centrocercus urophasianus) nesting habitat using maximum entropy and a long-term dataset from Southern Oregon. Ecol. Inform. 3, 375–386 (2008).
Park, N.-W. Using maximum entropy modeling for landslide susceptibility mapping with multiple geoenvironmental data sets. Environ. Earth Sci. 73, 937–949 (2015).
Chung, C.-J.F. & Fabbri, A. G. Validation of spatial prediction models for landslide hazard mapping. Nat. Hazards 30, 451–472 (2003).
Maier, H. R. & Dandy, G. C. Neural networks for the prediction and forecasting of water resources variables: a review of modelling issues and applications. Environ. Model. Softw. 15, 101–124 (2000).
Swets, J. A. Measuring the accuracy of diagnostic systems. Science 240, 1285–1293 (1988).
Hosmer, D. W. Wiley Series in Probability and Statistics, Chap. 2. Multiple Logistic Regression. Applied Logistic Regression, 31–46 (2000).
Akgun, A., Dag, S. & Bulut, F. Landslide susceptibility mapping for a landslide-prone area (Findikli, NE of Turkey) by likelihood-frequency ratio and weighted linear combination models. Environ. Geol. 54, 1127–1143 (2008).
Convertino, M., Muñoz-Carpena, R., Chu-Agor, M. L., Kiker, G. A. & Linkov, I. Untangling drivers of species distributions: global sensitivity and uncertainty analyses of MaxEnt. Environ. Model. Softw. 51, 296–309 (2014).
Bui, D. T. et al. Hybrid artificial intelligence approach based on neural fuzzy inference model and metaheuristic optimization for flood susceptibility modeling in a high-frequency tropical cyclone area using GIS. J. Hydrol. 540, 317–330 (2016).
Lee, S., Kim, J.-C., Jung, H.-S., Lee, M. J. & Lee, S. Spatial prediction of flood susceptibility using random-forest and boosted-tree models in Seoul metropolitan city, Korea. Geomat. Nat. Hazards Risk 8, 1185–1203 (2017).
Mojaddadi, H., Pradhan, B., Nampak, H., Ahmad, N. & Ghazali, A. H. B. Ensemble machine-learning-based geospatial approach for flood risk assessment using multi-sensor remote-sensing data and GIS. Geomat. Nat. Hazards Risk 8, 1080–1102 (2017).
Rozos, D., Pyrgiotis, L., Skias, S. & Tsagaratos, P. An implementation of rock engineering system for ranking the instability potential of natural slopes in Greek territory. An application in Karditsa County. Landslides 5, 261–270 (2008).
Yalcin, A. An Investigation on Ardesen (Rize) Region on the Basis of Landslide Susceptibility, Ph. D. Dissertation. Karadeniz Technical University, Trabzon, Turkey (2005).
Shahabi, H., Ahmad, B. B. & Khezri, S. Application of satellite remote sensing for detailed landslide inventories using frequency ratio model and GIS. Int. J. Comput. Sci. 9, 108–117 (2012).
Shahabi, H., Khezri, S., Ahmad, B. B. & Hashim, M. Landslide susceptibility mapping at central Zab basin, Iran: a comparison between analytical hierarchy process, frequency ratio and logistic regression models. CATENA 115, 55–70 (2014).
Svoray, T., Michailov, E., Cohen, A., Rokah, L. & Sturm, A. Predicting gully initiation: comparing data mining techniques, analytical hierarchy processes and the topographic threshold. Earth Surf. Proc. Land. 37, 607–619 (2012).
Daba, S., Rieger, W. & Strauss, P. Assessment of gully erosion in eastern Ethiopia using photogrammetric techniques. CATENA 50, 273–291 (2003).
Dai, F., Lee, C., Li, J. & Xu, Z. Assessment of landslide susceptibility on the natural terrain of Lantau Island Hong Kong. Environ. Geol. 40, 381–391 (2001).
Marmion, M., Hjort, J., Thuiller, W. & Luoto, M. A comparison of predictive methods in modelling the distribution of periglacial landforms in Finnish Lapland. Earth Surf. Proc. Land. 33, 2241–2254 (2008).
Golkarian, A. & Rahmati, O. Use of a maximum entropy model to identify the key factors that influence groundwater availability on the Gonabad Plain Iran. Environ. Earth Sci. 77, 369 (2018).
Pournader, M., Ahmadi, H., Feiznia, S., Karimi, H. & Peirovan, H. R. Spatial prediction of soil erosion susceptibility: an evaluation of the maximum entropy model. Earth Sci. Inf. 11, 389–401 (2018).
Moghaddam, D. D., Pourghasemi, H. R. & Rahmati, O. Natural Hazards GIS-Based Spatial Modeling Using Data Mining Techniques 59–78 (Springer, 2019).
Pourghasemi, H. R., Gayen, A., Panahi, M., Rezaie, F. & Blaschke, T. Multi-hazard probability assessment and mapping in Iran. Sci. Total Environ. 692, 556–571 (2019).
Skilodimou, H. D., Bathrellos, G. D., Chousianitis, K., Youssef, A. M. & Pradhan, B. Multi-hazard assessment modeling via multi-criteria analysis and GIS: a case study. Environ. Earth Sci. 78, 47 (2019).
Pourghasemi, H. R., Gayen, A., Edalat, M., Zarafshar, M. & Tiefenbacher, J. P. Is multi-hazard mapping effective in assessing natural hazards and integrated watershed management?. Geosci. Front. 11, 1203–1217 (2020).
Pourghasemi, H. R. et al. Assessing and mapping multi-hazard risk susceptibility using a machine learning technique. Sci. Rep. 10, 1–11 (2020).
The authors would like to thank the Regional Water Authority and Natural Resources and Watershed Management Department of Golestan province for providing the discharge and meteorological data and some initial maps. We would also like to thank Sari Agricultural Sciences and Natural Resources University (SANRU) for funding the project.
Open Access funding enabled and organized by Projekt DEAL.
Department of Watershed Management, Faculty of Natural Resources, Sari Agricultural Sciences and Natural Resources University (SANRU), Sari, 48441-74111, Iran
Narges Javidan & Ataollah Kavian
Department of Natural Resources and Environmental Engineering, College of Agriculture, Shiraz University, Shiraz, 71441- 65186, Iran
Hamid Reza Pourghasemi
Department of Earth and Marine Sciences (DISTEM), University of Palermo, Palermo, 90123, Italy
Christian Conoscenti
Department of Range Management, Sari Agricultural Sciences and Natural Resources University (SANRU), Sari, 48441-74111, Iran
Zeinab Jafarian
Department of Physical Geography, University of Trier, 54296, Trier, Germany
Jesús Rodrigo-Comino
Soil Erosion and Degradation Research Group, Department of Geography, Valencia University, Blasco Ibàñez, 28, 46010, Valencia, Spain
N.J., A.K., H.R.P., C.C., Z.J., J.R.C. designed experiments, ran models, analyzed results, wrote, and reviewed the manuscript. All authors reviewed the final manuscript.
Correspondence to Ataollah Kavian or Jesús Rodrigo-Comino.
Supplementary information.
Javidan, N., Kavian, A., Pourghasemi, H.R. et al. Evaluation of multi-hazard map produced using MaxEnt machine learning technique. Sci Rep 11, 6496 (2021). https://doi.org/10.1038/s41598-021-85862-7
The computation of the pitch damping stability derivatives of supersonic blunt cones using unsteady sensitivity equations
Chenxi Guo & Yu-xin Ren (ORCID: orcid.org/0000-0002-3047-4923)
The numerical methods for computing the stability derivatives of an aircraft by solving unsteady sensitivity equations, which were proposed in our previous papers, are extended to three-dimensional problems in this paper. Both the static and dynamic derivatives of a hypersonic blunt cone undergoing pitching oscillation around a fixed point were computed using the new methods. The predicted static and dynamic derivatives were found to be in reasonable agreement with the experimental data. With the present method, it is possible to distinguish the components of the dynamic derivatives caused by different state parameters. It is found that \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) usually have opposite signs and tend to cancel each other, which makes \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \) much smaller than its individual components. Another feature of this method is that the moment-of-pressure derivatives proposed in the present paper can be used to quantify the contribution of each part of the blunt cone to the overall stability. It is found that the head region is crucial for static stability and the body region contributes most to dynamic stability.
The determination of the stability characteristics of atmospheric flight vehicles is the basis of the control system design, which is one of the most essential yet challenging stages in the whole process of aircraft development. A poor understanding or prediction of the stability characteristics may lead to a rise in costs and detrimental effects on the performance of the aircraft [1, 2]. Therefore, it is necessary for designers to have proper knowledge of the stability characteristics, of which the stability derivatives are the key parameters.
The concept of stability derivatives was introduced by Bryan [3] based on the assumption of linear relations between the aerodynamic forces/moments and the instantaneous values of the disturbances of the kinematic variables. This model was very successful for flight at small angles of attack and with small disturbances. However, under extreme flight conditions involving high angles of attack, high pitch rates and/or gust responses, it is important to develop aerodynamic models accounting for nonlinear and unsteady effects. Some extended aerodynamic models can be found in [4,5,6,7,8,9,10] and elsewhere.
No matter which aerodynamic model is adopted, accurate evaluation of the aerodynamic response is essential [11], which can be carried out by wind tunnel experiments, flight tests, and computational fluid dynamics (CFD) simulations. The CFD approaches have great potential for simulating wide ranges of flight conditions, predicting complete sets of data for the stability characteristics analysis, and removing the interference effects of the model support [12], and they have developed considerably in recent years.
The CFD methods for computing the stability derivatives can be roughly divided into two categories. The first is computing the flow fields first and then evaluating the stability derivatives using a certain parameter identification technique. These methods are very close to the experimental methods for evaluating the stability derivatives; the only difference is that the aerodynamic forces and moments are computed instead of measured. One example of this type is the forced-oscillation approach, in which the stability derivatives are computed by integrating the periodic solutions of the force/moment coefficients [13]. Ronch et al. [14,15,16] used this approach to systematically study the stability derivatives of several aircraft models using various CFD solvers, such as RANS, harmonic balance, and linear frequency methods. There are several limitations to this approach. Firstly, only combinations of the stability derivatives such as \( {C}_{m_{\alpha }}-{k}^2{C}_{m_{\dot{q}}} \) and \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \) can be computed. To separate \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \), an additional plunging motion must be considered besides the pitching motion; this approach is, however, not rigorous and introduces additional errors. Secondly, the stability derivatives are assumed to be constant, and their dependence on the reduced frequency is not known. More general approaches in terms of parameter identification techniques have been studied in [17, 18]. Nevertheless, these approaches all require pre-assumed aerodynamic models.
The second category of CFD methods directly computes the stability derivatives by solving the flow equations together with the static sensitivity equations. This category of sensitivity-equation-based methods was put forward by Godfrey and Cliff [19]. Limache [20] followed this approach and computed the pitch-rate derivatives of an airfoil under steady motion. These methods are capable of computing various stability derivatives directly without relying on parameter identification techniques. However, their application is confined to static stability derivatives, since their basis is the static sensitivity equations. Similar methods based on the automatic-differentiation adjoint approach can be found in [3].
Ren [21] developed a sensitivity-equation-based method for computing the stability derivatives that accounts for the unsteady effects. This method is based on an extension of the conventional stability derivative model. Taking the relation between the pitching moment Cm(t) and the motion time history of the angle of attack α(t) as an example, he demonstrated that if α(t) can be expressed as a convergent Taylor series and the pitching moment is a function of α(t) and its time derivatives of various orders,
$$ {C}_m(t)={C}_m\left(\alpha (t),\dot{\alpha}(t),\ddot{\alpha}(t),\cdots \right), $$
the unsteady sensitivity equations can be derived. Then the stability derivatives can be computed directly from the solution of the sensitivity equations. This method does not rely on the linear or linearized aerodynamic model and takes the unsteady effects into consideration. Furthermore, this method is capable of predicting all stability derivatives from a single maneuver because of the use of information obtained from the sensitivity equations. In [22], this method was extended to compute the stability derivatives associated with supersonic flow with shock waves. The behavior of the solution of the sensitivity equations in the vicinity of shock waves was analyzed. In these papers, only simple two-dimensional cases were studied. Further studies are needed to validate the proposed method using more realistic three-dimensional test cases.
In this paper, the longitudinal stability derivatives of a blunt cone in supersonic flows are studied by solving the three-dimensional unsteady sensitivity equations. The results are compared with the experimental data to demonstrate the validity of this method. Besides further validation of the unsteady sensitivity equation based method for computing the stability derivatives, the main purpose of the present paper is to analyze the behaviors of \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) which, instead of the combination \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \), can be computed individually using the present method. Based on the distributions of sensitivity variables solved by the sensitivity equations, the contributions to the overall stability of any part of the cone can be evaluated quantitatively.
Sensitivity equations and the numerical approaches
In the non-inertial frame of reference fixed on the aircraft, the three-dimensional Euler equations in conservation form are shown in Eq. (1) as follows
$$ \frac{\partial \mathbf{U}}{\partial t}+\frac{\partial \mathbf{F}}{\partial x}+\frac{\partial \mathbf{G}}{\partial y}+\frac{\partial \mathbf{H}}{\partial z}=\mathbf{R} $$
where U is the vector of conservative variables and F, G, and H are the inviscid fluxes:
$$ \mathbf{U}=\left[\begin{array}{c}\rho \\ \rho u\\ \rho v\\ \rho w\\ \rho E\end{array}\right]\quad \mathbf{F}=\left[\begin{array}{c}\rho u\\ \rho uu+p\\ \rho vu\\ \rho wu\\ u\left(\rho E+p\right)\end{array}\right]\quad \mathbf{G}=\left[\begin{array}{c}\rho v\\ \rho uv\\ \rho vv+p\\ \rho wv\\ v\left(\rho E+p\right)\end{array}\right]\quad \mathbf{H}=\left[\begin{array}{c}\rho w\\ \rho uw\\ \rho vw\\ \rho ww+p\\ w\left(\rho E+p\right)\end{array}\right] $$
R is the source term due to the motion of the aircraft, in the following form
$$ \mathbf{R}={\left[{R}_{\rho },{R}_{Vx},{R}_{Vy},{R}_{Vz},{R}_E\right]}^T $$
$$ R_{\rho }=0,\qquad R_E=-\rho\, \mathbf{V}_r\cdot \left(\mathbf{a}_0+\dot{\omega}\times \mathbf{r}+\omega \times \left(\omega \times \mathbf{r}\right)\right) $$
and RVx, RVy and RVz are the components of
$$ {\mathbf{R}}_V=-\rho \left({\mathbf{a}}_0+\dot{\omega}\times \mathbf{r}+\omega \times \left(\omega \times \mathbf{r}\right)+2\omega \times {\mathbf{V}}_r\right). $$
In these equations and definitions, ρ is the density, p is the pressure, E is the energy, u, v and w are the velocity components of Vr, a0 is the acceleration vector of the origin of the moving frame, and p, q and r are the components of ω, which is the angular velocity of the moving frame.
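To make the source terms concrete, a short sketch follows that evaluates R_V from the vectors defined above using numpy cross products; the function name and the illustrative inputs are ours, not the authors'.

```python
import numpy as np

def momentum_source(rho, a0, omega, omega_dot, r, v_rel):
    """Non-inertial momentum source per the equation above:
    R_V = -rho (a0 + omega_dot x r + omega x (omega x r) + 2 omega x V_r)."""
    return -rho * (a0 + np.cross(omega_dot, r)
                   + np.cross(omega, np.cross(omega, r))
                   + 2.0 * np.cross(omega, v_rel))

# Illustrative values only (a pure pitch rate q about the y-axis).
print(momentum_source(1.0, np.zeros(3), np.array([0.0, 0.1, 0.0]),
                      np.zeros(3), np.array([1.0, 0.0, 0.0]),
                      np.array([100.0, 0.0, 0.0])))
```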
Here, the unsteady sensitivity equations for computing the stability derivatives are briefly reviewed. According to Von Karman and Burgers [19], the unsteady aerodynamic forces and moments depend on the time histories of the motion of the aircraft. In longitudinal motions, this relationship in terms of pitching moment is
$$ {C}_m(t)={C}_m\left(\alpha \left(\tau \right),q\left(\tau \right),V\left(\tau \right)\right),\quad \tau \in \left(-\infty, t\right]. $$
If, in Eq. (2), the moment coefficient depends only on a short period of the past history [23], it is sufficient to assume that
$$ {C}_m(t)={C}_m\left(\alpha (t),\dot{\alpha}(t),\ddot{\alpha}(t),\dots, q(t),\dot{q}(t),\ddot{q}(t),\dots, V(t),\dot{V}(t),\ddot{V}(t),\dots \right). $$
In [21], besides Eq. (3), it is further assumed that, at a fixed point in the non-inertial frame of reference, the conservative variables of the flow field can also be expressed as
$$ \mathbf{U}\left(t,x,y,z\right)=\mathbf{U}\left(\alpha (t),\dot{\alpha}(t),\ddot{\alpha}(t),\dots, q(t),\dot{q}(t),\ddot{q}(t),\dots, V(t),\dot{V}(t),\ddot{V}(t),\dots; x,y,z\right). $$
Eq. (4) is sufficient for deriving the unsteady sensitivity equations [21]
$$ \begin{aligned}&\frac{\partial {\mathbf{U}}_{\gamma}}{\partial t}+\frac{\partial {\mathbf{F}}_{\gamma}}{\partial x}+\frac{\partial {\mathbf{G}}_{\gamma}}{\partial y}+\frac{\partial {\mathbf{H}}_{\gamma}}{\partial z}={\mathbf{R}}_{\gamma}\\ &\frac{\partial {\mathbf{U}}_{\dot{\gamma}}}{\partial t}+\frac{\partial {\mathbf{F}}_{\dot{\gamma}}}{\partial x}+\frac{\partial {\mathbf{G}}_{\dot{\gamma}}}{\partial y}+\frac{\partial {\mathbf{H}}_{\dot{\gamma}}}{\partial z}={\mathbf{R}}_{\dot{\gamma}}-\frac{\partial \mathbf{U}}{\partial \gamma}\\ &\frac{\partial {\mathbf{U}}_{\ddot{\gamma}}}{\partial t}+\frac{\partial {\mathbf{F}}_{\ddot{\gamma}}}{\partial x}+\frac{\partial {\mathbf{G}}_{\ddot{\gamma}}}{\partial y}+\frac{\partial {\mathbf{H}}_{\ddot{\gamma}}}{\partial z}={\mathbf{R}}_{\ddot{\gamma}}-\frac{\partial \mathbf{U}}{\partial \dot{\gamma}}\\ &\cdots \end{aligned} $$
where γ is any one of α, q and V, and \( {\mathbf{U}}_{\gamma },{\mathbf{U}}_{\dot{\gamma}} \) and \( {\mathbf{U}}_{\ddot{\gamma}} \) are called the sensitivity derivatives. Eq.(5) can be solved together with Eq.(1) to predict the sensitivity derivatives.
The sensitivity equations are passive equations depending on the solution of the flow governing equations. Eq. (5) shows that the sensitivity derivatives with respect to \( \dot{\gamma} \) and \( \ddot{\gamma} \) depend on those with respect to γ and \( \dot{\gamma} \), respectively. Therefore, in practice, we first solve the flow governing equations, then solve the sensitivity equations with respect to γ to predict the static derivatives, and finally solve the sensitivity equations with respect to \( \dot{\gamma} \) (and higher order terms when necessary) to compute the dynamic derivatives. The sequence of solution procedures for predicting the longitudinal stability derivatives is shown in Fig. 1. The computational cost of the present method is closely related to the number of sensitivity equations being solved. As shown in Fig. 1, if the sensitivity equations with respect to α, q, V and \( \dot{\alpha},\dot{q},\dot{V} \) are solved, the total number of equations is 7 times that of the flow governing equations alone, so the computational effort of the present approach is large in general. In the present paper, only the sensitivity equations corresponding to the inviscid Euler equations are solved to save computational cost.
The illustration of the solution sequence for the computation of the stability derivatives
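In pseudocode, the staged sequence of Fig. 1 might be organized as below. This is a schematic sketch of ours: flow_update and sensitivity_update are assumed callables standing for the finite volume residual evaluation and implicit solve of the respective equation set, and the states are assumed to be arrays so that negation is meaningful.

```python
def advance_one_step(flow, sens, sens_dot, dt, flow_update, sensitivity_update):
    """One time step of the staged solution sequence: 1 flow solve plus
    3 + 3 sensitivity solves, i.e. 7 equation sets in total."""
    # 1. Flow governing equations, Eq. (1); independent of the sensitivities.
    flow = flow_update(flow, dt)
    # 2. First-level sensitivities U_gamma: passive equations whose
    #    source R_gamma depends on the flow solution only.
    for g in ("alpha", "q", "V"):
        sens[g] = sensitivity_update(sens[g], flow, dt, extra_source=None)
    # 3. Second-level sensitivities U_gammadot: their source contains the
    #    additional term -dU/dgamma = -U_gamma from the first level.
    for g in ("alpha", "q", "V"):
        sens_dot[g] = sensitivity_update(sens_dot[g], flow, dt,
                                         extra_source=-sens[g])
    return flow, sens, sens_dot
```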
The sensitivity equations and the flow governing equations have a similar form and can be solved using essentially the same numerical schemes. In the present paper, a finite volume solver on multi-block structured grids is used to solve both Eqs. (1) and (5). A reconstruction procedure based on minimized dispersion and controllable dissipation [24, 25] is employed to compute the left and right states of the conservative variables at the cell interface. The HLL Riemann solver [26] is used to compute the numerical flux of both the flow governing equations and the sensitivity equations. The dual time stepping LU-SGS technique [27] is used for temporal discretization. Validation test cases for this numerical procedure can be found in [25]. The boundary conditions of the sensitivity equations can be derived straightforwardly from the boundary conditions of the flow governing equations. For example, the boundary condition on an inviscid wall is
$$ {\mathbf{V}}_r\cdotp \mathbf{n}=0 $$
in the non-inertial frame of reference. The corresponding wall boundary conditions for the sensitivity equations with respect to γ and \( \dot{\gamma} \) are respectively
$$ {\left({\mathbf{V}}_r\right)}_{\gamma}\cdotp \mathbf{n}=0 $$
$$ {\left({\mathbf{V}}_r\right)}_{\dot{\gamma}}\cdotp \mathbf{n}=0. $$
The far-field boundary conditions are handled using characteristic approaches based on the Riemann invariants in the boundary normal directions. The Riemann invariants can be also differentiated with respect to γ and \( \dot{\gamma} \) to obtain the boundary conditions for the corresponding sensitivity equations.
The predicted sensitivity derivatives can be used to compute the stability derivatives with respect to \( \gamma, \dot{\gamma},\cdots \) directly. For example, knowing the definition of the moment coefficient
$$ {C}_m=\left[\underset{\Omega}{\oiint}\mathbf{r}\times p\mathbf{n} ds\right]/\left[\frac{1}{2}\rho {V}_{\infty}^2 SL\right], $$
the stability derivative Cmγ is computed by
$$ {\left({C}_m\right)}_{\gamma }=\left[\underset{\Omega}{\oiint}\mathbf{r}\times {p}_{\gamma}\mathbf{n} ds\right]/\left[\frac{1}{2}\rho {V}_{\infty}^2 SL\right], $$
where the sensitivity derivative pγ can be deduced from Uγ. \( {\left({C}_m\right)}_{\dot{\gamma}} \) can be computed in a similar way.
The present method of computing the stability derivatives by solving the sensitivity equations is an entirely new approach. Although its theory has been presented in [21], some features of this approach have not been studied in detail. Therefore, it is necessary to discuss these features further, especially for three-dimensional problems.
The first one is the aerodynamic model. In the present approach, we only need to know the abstract relation between the aerodynamic force/moment and the motion variables shown in Eq. (3). Most other CFD-based approaches require the prescription of an explicit aerodynamic model. For example, in [14], a steady reference motion is first given, and the increment of the aerodynamic force/moment is assumed to be a linear function of the increments of the state variables of a perturbative motion. The present approach, on the other hand, does not require a steady reference motion, and the stability derivatives can be computed for any maneuvering motion of the aircraft.
The second one is the dependency of the stability derivatives on the motion variables. In the traditional approach for computing the pitching stability derivatives [14], the stability derivatives are related only to the steady reference motion and the reduced frequency. As a result, the static stability derivatives are assumed to be constant. However, in the present approach, it is easy to derive from Eq. (3) that
$$ {\left({C}_m\right)}_{\alpha }=\frac{\partial {C}_m}{\partial \alpha}\left(\alpha (t),\dot{\alpha}(t),\ddot{\alpha}(t),\dots, q(t),\dot{q}(t),\ddot{q}(t),\dots, V(t),\dot{V}(t),\ddot{V}(t),\dots \right). $$
Therefore, for a general maneuvering motion of the aircraft, the static derivative (Cm)α is also time-varying. This relation reveals that the stability derivatives are affected not only by the reference motion, but also by the perturbative motion. The same conclusion can also be drawn for the other stability derivatives, including the dynamic ones.
The third one is that the present approach is capable of computing all of the static and dynamic derivatives in a single maneuver motion as long as the corresponding sensitivity equations are solved. In the case of the forced sinusoidal motion around the aircraft's center of gravity, instead of computing \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \), \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) can be computed individually. According to Eq.(7), they are computed respectively by
$$ {\left({C}_m\right)}_{\dot{\alpha}}=\left[\underset{\Omega}{\oiint}\mathbf{r}\times {p}_{\dot{\alpha}}\mathbf{n} ds\right]/\left[\frac{1}{2}\rho {V}_{\infty}^2 SL\right] $$
$$ {\left({C}_m\right)}_q=\left[\underset{\Omega}{\oiint}\mathbf{r}\times {p}_q\mathbf{n} ds\right]/\left[\frac{1}{2}\rho {V}_{\infty}^2 SL\right]. $$
In these formulations, \( {p}_{\dot{\alpha}} \) and pq are computed from the solutions of Eq. (5). The present method gives the time histories of \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) so that their dependency on the reduced frequency can be shown. With traditional methods, pitching oscillations can be used only to compute \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \); in order to compute both \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \), an additional plunging oscillation must be considered besides the pitching oscillation [28]. This approach is feasible only under the assumption that the stability derivatives are related solely to the steady reference motion. However, according to the analysis presented above in the second feature of the present method, the stability derivatives are affected not only by the reference motion, but also by the perturbative motion. Therefore, using two different perturbative motions to separate \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) from \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \) will introduce additional errors. It is reported in [29] that the approach using pitching and plunging oscillations to compute \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) showed a strong frequency dependency; to reduce it, [29] proposed computing \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) from looping and heaving motions. In the present approach, any single maneuvering motion can be used to compute \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) directly, which entirely removes the ambiguity in computing these stability derivatives.
The last feature is that the present method yields not only the stability derivatives themselves, but also the distribution of the sensitivity derivatives such as pγ and \( {p}_{\dot{\gamma}} \) appearing in Eq. (7), which provides additional information that is not available to the traditional methods. In the present paper, we propose to use this information to evaluate the local contribution of a particular element of an aircraft to the stability derivatives of the whole aircraft. To this end, the surface of an aircraft is divided into N parts with
$$ \Omega =\sum \limits_i^N{\Omega}_i. $$
On each part, the contribution to the moment derivatives can be computed by
$$ {\left({m}_y\right)}_{\gamma, i}=\underset{\Omega_i}{\oiint}\mathbf{r}\times {p}_{\gamma}\mathbf{n} ds. $$
The moment stability derivative of the aircraft (my)γ is computed as
$$ {\left({m}_y\right)}_{\gamma }=\sum \limits_i^N{\left({m}_y\right)}_{\gamma, i} $$
which is nondimensionalized to obtain \( {C}_{m_{\gamma }} \). The term (my)γ,i is called the Moment of Pressure Derivatives (MPD) in this paper. It indicates the contribution of a particular part of the body surface to the overall stability derivatives. The importance of the MPD is that it identifies the crucial locations affecting the stability of an aircraft, so that local measures can be introduced to effectively stabilize or destabilize it. In conventional CFD-based methods, since pγ is not known, it is impossible to calculate the local contribution to the stability derivatives, although it is possible to calculate the local contribution to the moment coefficient.
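On a discretized surface, the MPD can be assembled face by face; the sketch below is our own illustration (the face arrays and the choice of the pitch axis as the y component are assumptions, not the paper's code):

```python
import numpy as np

def mpd_per_part(r, p_gamma, n, ds, part_id, n_parts):
    """Per-part pitching-moment contributions (m_y)_{gamma,i} of Eq. (8).

    r       : (F, 3) face centers relative to the moment reference point
    p_gamma : (F,)   pressure sensitivity derivative on each face
    n       : (F, 3) outward unit normals; ds : (F,) face areas
    part_id : (F,)   index of the part (e.g. head/body/bottom) per face
    """
    dm = np.cross(r, p_gamma[:, None] * n) * ds[:, None]  # r x (p_gamma n) ds
    return np.array([dm[part_id == i, 1].sum() for i in range(n_parts)])

def signed_percentages(my_parts):
    """Shares of the total (m_y)_gamma; negative entries oppose the total,
    mirroring the sign convention of Table 1."""
    return 100.0 * my_parts / my_parts.sum()
```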
The test case
The stability derivatives of blunt cones undergoing forced oscillations were studied in wind-tunnel experiments in [30, 31]. The configuration and geometry parameters of the cones are shown in Fig. 2a. The ratio between the nose diameter and the base diameter dN/dB is 0.4.
The geometry and the computational domain of the blunt cone. a The configuration of the cone, dN/dB = 0.4. b The computational domain of the blunt cone without bottom region
In this section, we study this test case numerically. For the forced pitching oscillation around a fixed point, the pitch angle equals the angle of attack
$$ \alpha =\theta, $$
and the forced oscillation is in the following form [32]
$$ \alpha =\theta ={\alpha}_0+{\alpha}_1\sin \left(\frac{2{V}_{\infty } kt}{L}\right), $$
where α0 and α1 are the mean value and the amplitude of the oscillation, L is the characteristic length of the cone, V∞ is the freestream velocity, and k is the reduced frequency. The model was tested with two different rotation centers (Xcg = 0.70L and Xcg = 0.75L); the two sets of experimental conditions are named Xcg70 and Xcg75 in this paper. The computational domain is shown in Fig. 2b and contains about 1.6 million cells.
The stability derivatives of the moment coefficient with respect to \( \alpha, \dot{\alpha} \) and q (\( q=\dot{\theta} \)) are presented in this paper. It should be noted that the stability derivatives under any maneuver can be computed, although here only the derivatives of the pitch damping motion are considered.
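For reference, the prescribed motion of Eq. (10) and the associated α, α̇ and q histories can be generated as follows (a small helper of ours; angles in radians):

```python
import numpy as np

def pitching_motion(t, alpha0, alpha1, V_inf, L, k):
    """alpha(t), alpha_dot(t) and q(t) for the forced oscillation
    alpha = theta = alpha0 + alpha1 * sin(2 * V_inf * k * t / L)."""
    omega = 2.0 * V_inf * k / L                 # rad/s from reduced frequency k
    alpha = alpha0 + alpha1 * np.sin(omega * t)
    alpha_dot = alpha1 * omega * np.cos(omega * t)
    q = alpha_dot                               # pitching about a fixed point:
                                                # q = theta_dot = alpha_dot
    return alpha, alpha_dot, q
```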
The stability derivatives
The results of Xcg70 are discussed here to show the stability derivatives during the pitch damping motions. The amplitude of the angle of attack is 1°, and the Mach number is 6.85. The reduced frequency k of the experiments ranges from 0.0018 to 0.0092. In order to observe and analyze the influence of the pitching frequency, additional frequencies are simulated in this test case.
Figures 3 and 4 show the variation of the pitching moment coefficient and its derivatives versus the angle of attack for α0 = 0° and 3°. It can be observed that the stability derivatives are not constant: they change with the angle of attack. This contradicts some linear aerodynamic models in which the stability derivatives are functions of the mean angle of attack only.
Cm and its derivatives at α0 = 0∘, Xcg/L = 0.70. a Cm. b \( {C}_{m_{\alpha }} \). c \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \)
It is also clear that there is a time-lag effect in both the moment coefficient and its stability derivatives. Further study reveals an interesting phenomenon: the hysteresis effects of the moment coefficient become stronger as the reduced frequency increases, whereas the hysteresis effects of the stability derivatives are not very sensitive to the reduced frequency. Another feature is that the static derivative \( {C}_{m_{\alpha }} \) changes smoothly with the angle of attack, while there are oscillations in the dynamic damping derivative \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \). This result indicates that the higher order sensitivity derivatives are more sensitive to the flow field prediction.
A distinctive feature of the present method is that the dynamic stability derivatives \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) can be predicted separately. The results are shown in Fig. 5. It is found that \( {C}_{m_{\dot{\alpha}}} \) is negative and thus stabilizes the motion, while \( {C}_{m_q} \) is positive and tends to make the motion unstable. Using conventional methods, only \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \) can be predicted, and as a result the destabilizing effect of \( {C}_{m_q} \) cannot be revealed.
The components of the dynamic derivatives at α0 = 3∘ for Xcg/L = 0.70. a \( {C}_{m_{\dot{\alpha}}} \). b \( {C}_{m_q} \)
Grid convergence
Grid convergence is important to ensure that the numerical solutions of the stability derivatives are accurate on the given grids. In this subsection, the grid convergence of the numerical methods for solving the sensitivity equations is verified by gradually increasing the number of grid cells from 0.17 million to 1.60 million. The mean angle of attack α0 is set to 3°, and the amplitude of oscillation α1 is 1°. The rotation center is located at Xcg = 0.70L.
Figure 6 shows the mean values of the moment coefficient derivatives with respect to α, \( \dot{\alpha} \) and q on grids of different sizes. As the grid is refined, the numerical solutions show a clear tendency toward convergence. Between the two finest grids (1.02 and 1.60 million cells), the differences for \( {C}_{m_{\alpha }} \), \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) are 0.47%, 0.58% and 0.77%, respectively. In what follows, only the numerical results on the finest grid are shown.
The time-average values of \( {C}_{m_{\alpha }} \), \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) versus different numbers of cells. a \( {C}_{m_{\alpha }} \). b \( {C}_{m_{\dot{\alpha}}} \). c \( {C}_{m_q} \)
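The relative differences quoted above can be computed with a few lines (a convenience helper of ours, not part of the solver):

```python
def convergence_differences(cells, values):
    """Percentage change of a time-averaged derivative between successive
    grid levels; 'cells' and 'values' are ordered from coarse to fine."""
    return [(cells[i], cells[i + 1],
             100.0 * abs(values[i + 1] - values[i]) / abs(values[i + 1]))
            for i in range(len(values) - 1)]
```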
The comparison with experimental data
In the experimental studies [30, 31], only the mean values of the stability derivatives were measured. In the present study, the mean values are computed by time-averaging the instantaneous solutions of the stability derivatives over one period of the forced oscillation. Figure 7 shows the mean static derivatives at different mean angles of attack (≤6°) for both Xcg70 and Xcg75. The results are in reasonable agreement with the experimental data and are more accurate than the theoretical results of the embedded Newtonian theory [30, 31].
Comparison of the static derivatives from theoretical, experimental and numerical methods for Xcg/L = 0.70 and Xcg/L = 0.75. a Static derivatives of Xcg/L = 0.70. b Static derivatives of Xcg/L = 0.75
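The time-averaging used for this comparison can be sketched as follows (our helper; it assumes the sampled signal covers at least one full period T):

```python
import numpy as np

def period_mean(t, cm_inst, T):
    """Mean of an instantaneous stability derivative over the last full
    forced-oscillation period T (trapezoidal rule)."""
    mask = t >= (t[-1] - T)
    return np.trapz(cm_inst[mask], t[mask]) / T
```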
When the mean angle of attack increases further, the errors between the numerical and experimental results become larger (not shown here). To explain this phenomenon, we note that the computational domain shown in Fig. 2 does not include the bottom region of the blunt cone. When the angle of attack is large enough, the asymmetry of the flow field may have a large influence on the stability derivatives. Therefore, this test case is recomputed using the computational domain and corresponding grids shown in Fig. 8.
The refined grid considering the bottom regions. a Topology of blocks. b Near view of the grid
After considering the bottom effect, the static derivatives of Xcg75 at α0 ∈ [4°, 10°] are shown in Fig. 9. In these results, the predicted static stability derivatives are in good agreement with the experimental ones. When the angle of attack increases to an even larger value, the inviscid nature of the Euler equations may prevent an accurate prediction of the stability derivatives, since the flow separations in the bottom region are dominated by viscous effects. Using the Navier-Stokes equations to predict the sensitivity derivatives is very expensive and will be studied in future work.
Comparison of the static moment derivative between theoretical, experimental and numerical results after considering the bottom effect
Figure 10 shows the mean dynamic derivatives \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \) after considering the bottom effect. When the angle of attack is smaller than 10°, the agreement between the numerical and experimental results is reasonable, although the errors are considerably larger than for the static derivatives. When the angle of attack exceeds 10°, the errors become even larger; this is another indication that the Euler equations may not be appropriate at large angles of attack. Nevertheless, the trend of the dynamic stability derivatives with respect to the angle of attack is predicted better by the present method than by the embedded Newtonian theory.
Comparison of the dynamic moment derivatives between the theoretical, experimental and numerical results after considering the bottom effect
We note further that in the experiment only \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \) can be measured directly, whereas the present method can compute \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) separately. The mean values of these derivatives are shown in Fig. 11. It is found that \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) are usually of opposite signs and tend to cancel each other, which makes the variation of \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \) much smaller than that of its individual components. This phenomenon shows the importance of predicting \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) separately.
Components of the dynamic derivatives
Evaluation of the local contribution of the blunt cone to the stability derivatives using the sensitivity derivatives
The MPD defined in Eq. (8) is used to evaluate the contribution of local surfaces of the blunt cone to the moment stability derivatives. For the present case, the surface of the cone is divided into three parts, namely the head, body and bottom parts shown in Fig. 12. Their contributions to the stability derivatives are given in Table 1 for a mean angle of attack of 8°. In Table 1, a positive percentage denotes a stabilizing effect, while a negative percentage denotes a destabilizing effect. For the static stability derivative \( {C}_{m_{\alpha }} \), the stabilizing effect is provided by the head of the blunt cone, the body and bottom parts both destabilize the cone, and the head region plays the most important role in the static stability. For the dynamic stability derivative \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \), all three parts have a stabilizing effect, and the body part provides about 80% of the overall stability. The bottom effect can also be identified quantitatively using the MPD: it can be deduced from Table 1 that, in terms of the absolute value of the MPD, the bottom region contributes 6.4% of the total static stability and 4.9% of the total dynamic stability.
The sketch diagram of the components of the blunt cone: red – head, blue – body, green – bottom
Table 1 Contribution of head, body and bottom part to the overall stability
In this paper, the numerical method for evaluating the stability derivatives based on the unsteady sensitivity equations is extended to three-dimensional cases. This method takes the unsteady effects into consideration and can be used to predict any stability derivative by solving the unsteady flow and the corresponding sensitivity equations. There are two remarkable features of this method. One is the possibility to distinguish the components of the dynamic derivatives caused by different state parameters. The other is that the MPD can be used to quantify the contribution of each part of the aircraft to the overall stability. The supersonic blunt cone is tested to validate this method.
For the stability derivatives of the blunt cone in hypersonic flow with Mach number 6.85, the numerical results show that when the angle of attack is not very large, both static and dynamic stability derivatives can be predicted with reasonable accuracy, usually higher than that of the embedded Newtonian theory. For the static stability, the stabilizing effect is provided by the head of the blunt cone, while the body and bottom parts both destabilize the cone. For the dynamic stability, \( {C}_{m_{\dot{\alpha}}} \) and \( {C}_{m_q} \) are usually of opposite signs and tend to cancel each other, which makes the variation of \( {C}_{m_{\dot{\alpha}}}+{C}_{m_q} \) much smaller than that of its individual components. The body part provides about 80% of the overall dynamic stability.
α angle of attack, shown in Fig. 2a
θ pitching angle, shown in Fig. 2a
q pitching angular velocity
V magnitude of velocity
γ any one of α, q and V
Cm moment coefficient, defined in Eq.(6)
(Cm)γ stability derivative of Cm with respect to γ
Parts of the data and materials are available upon request.
Chambers JR, Hall RM (2004) Historical review of uncommanded lateral-directional motions at transonic conditions. J Aircr 41(3):436–447
Hall RM, Biedron RT et al (2005) Computational methods for stability and control (COMSAC): the time has come, AIAA Atmospheric Flight mechanics conf. and exhibit, AIAA, p 6121
Bryan GH (1911) Stability in aviation. Macmillan, New York
Tobak M (1954) On the use of indicial function concept in the analysis of unsteady motions of wings and wing-tail combinations, NACA Report 1188
Tobak M, Schiff LB (1976) On the formulation of the aerodynamic characteristics in aircraft dynamics, NACA TR R-456
Goman MG, Zagainov GI (1997) Application of bifurcation methods to nonlinear flight dynamics problems. Prog Aerosp Sci 33(9–10):539–586
Klein V, Noderer KD (1994) Modeling of aircraft unsteady aerodynamic characteristics, NASA Technical Memorandum 109120
Greenwell D (2004) A review of unsteady aerodynamic modelling for flight dynamics of maneuverable aircraft, AIAA, p 5276
Ghoreyshi M, Jirasek A, Cummings RM (2014) Reduced order unsteady aerodynamic modeling for stability and control analysis using computational fluid dynamics. Prog Aerosp Sci 71:167–217
Ghoreyshi M, Cummings RM (2012) Unsteady aerodynamics modeling for aircraft maneuvers: a new approach using time-dependent surrogate modeling, 30th AIAA Applied Aeroedynamics Conference, AIAA, p 3327
Mader CA, Martins JA (2011) Computation of aircraft stability derivatives using an automatic differentiation Adjoint approach. AIAA J 49(12):2737–2750
Da Ronch A, Ghoreyshi M, Badcock KJ (2011) On the generation of flight dynamics aerodynamic tables by computational fluid mechanics. Prog Aerosp Sci 47(8):597–620
Klein V, Murphy PC, Curry TJ, Brandon J (1997) Analysis of wind tunnel longitudinal static and oscillatory data of the F−16XL aircraft, NASA tm–97–206276
Da Ronch A, Vallespin D, Ghoreyshi D, Badcock K (2010) Computation of Dynamic Derivaitves Using CFD, 28th AIAA Applied Aerodynamic Conference, AIAA, p 4562
Da Ronch A, Ghoreyshi D, Badcock K, Görtz K, Widhalm S, Dwight M, Champobasso M (2010) Linear Frequency Domain and Harmonic Balance Predictions of Dynamic Derivatives, 28th AIAA Applied Aerodynamic Conf., AIAA, p 4699
Da Ronch A, Vallespin D, Ghoreyshi M, Badcock K (2012) Evaluation of dynamic derivatives using computational fluid dynamics. AIAA J 50(2):470–484
Görtz S, McDaniel DR, Morton SA (2007) Towards an efficient aircraft stability and control analysis capability using high-fidelity CFD, AIAA, p 1053
Dean JP, Morton SA, McDaniel DR et al (2008) Aircraft stability and control characteristics determined by system identification of CFD simulations, AIAA, p 6378
Godfrey AG, Cliff R (1998) Direct calculation of aerodynamic force derivatives: a sensitivity–equation approach, AIAA 98–0363
Limache A, Cliff E (2000) Aerodynamic sensitivity theory for rotary stability derivatives. J Aircr 37:676–683
Ren YX (2008) Evaluation of the stability derivatives using the sensitivity equations. AIAA J 46(4):912–917
Lei GD, Ren YX (2011) Computation of the stability derivatives via CFD and the sensitivity equations. Acta Mech 27(2):179–188
Murman SM (2007) Reduced-frequency approach for calculating dynamic derivatives. AIAA J 45(6):1161–1168
Sun ZS, Ren YX (2011) A class of finite difference schemes with low dispersion and controllable dissipation for DNS of compressible turbulence. J Comput Phys 230:4616–4635
Wang QJ, Ren YX, Sun ZS, Sun YT (2013) Low dispersion finite volume scheme based on reconstruction with minimized dispersion and controllable dissipation. Sci China Phys Mech Astron 56(2):423–431
Harten A, Lax PD, Van Leer B (1983) On upstream differencing and Godunov-type schemes for hyperbolic conservation laws. SIAM Rev 25(1):35–61
Jameson A, Turkel E (1981) Implicit scheme and LU-decompositions. Math Comput 37:385–397
Giesing JP, Rodden WP (1970) Application of oscillatory aerodynamic theory to estimation of dynamic stability derivatives. J Aircr 7(3):272–275
Gili P, Visone M, Lerro A (2015) A new approach for the estimation of longitudinal damping derivatives: CFD validation on NACA 0012. WSEAS Trans Fluid Mech 10:137–145
East RA, Qasrawi AMS, Khalid M (1978) An experimental study of the hypersonic dynamic stability of pitching blunt conical and hyper-ballistic shapes in a short running time facility, NATO AGARD CP-235
Hutt GR, East RA (1985) Optical techniques for model position measurement in dynamic wind tunnel testing. Meas Control 18:99–101
East RA, Hutt GR (1988) Comparison of predictions and experimental data for hypersonic pitching motion stability. J Spacecr 25(3):225–233
The authors would like to thank the national numerical wind tunnel project and the national key research and development program of China for their financial support.
This work is supported by national numerical wind tunnel project under contract number 2018-ZT4A07 and 2016YFA0401200 of national key research and development program of China.
Department of Engineering Mechanics, Tsinghua University, Beijing, 100084, China
Chenxi Guo & Yu-xin Ren
The methods are developed by both authors based on the idea of the corresponding author, and the coding and numerical simulations are carried out by the first author. Both authors read and approved the final manuscript.
Correspondence to Yu-xin Ren.
Guo, C., Ren, Yx. The computation of the pitch damping stability derivatives of supersonic blunt cones using unsteady sensitivity equations. Adv. Aerodyn. 1, 17 (2019). https://doi.org/10.1186/s42774-019-0018-3
Unsteady sensitivity equations
Three dimensional flows
Blunt cones
Numerical simulation
Computational Cognitive Science
Time-order error and scalar variance in a computational model of human timing: simulations and predictions
Maciej Komosinski1 &
Adam Kups2
Computational Cognitive Science volume 1, Article number: 3 (2015) Cite this article
This work introduces a computational model of the human temporal discrimination mechanism – the Clock-Counter Timing Network (CCTN). It is an artificial neural network implementation of a timing mechanism based on the informational architecture of the popular Scalar Timing Model.
The model has been simulated in a virtual environment enabling computational experiments which imitate a temporal discrimination task – the two-alternative forced choice task. The influence of key parameters of the model (including the internal pacemaker speed and the variability of memory translation) on the network accuracy and the time-order error phenomenon has been evaluated.
The results of simulations reveal how activities of different modules contribute to the overall performance of the model. While the number of significant effects is quite large, the article focuses on the relevant observations concerning the influence of the pacemaker speed and the scalar source of variance on the measured indicators of network performance.
The results of the performed experiments demonstrate the consequences of the fundamental assumptions of the clock-counter model for the outcomes of a temporal discrimination task. The results can be compared and verified in empirical experiments with human participants, especially when the modes of activity of the internal timing mechanism are changed by external conditions or impaired by some kind of neural degradation process.
Timing is one of the fundamental cognitive abilities among humans and animals. Temporal information is used by living organisms to perform many crucial tasks such as movement, planning and communication. Therefore it is of great importance to gain knowledge on how human timing mechanisms work, what are the biological (and especially neural) bases of these mechanisms, and what are their limitations.
Psychology, psychophysics and neuroscience of human timing have provided many theoretical approaches. One of the prominent approaches is the class of clock-counter models (Eisler 1981; Gibbon 1992; Gibbon et al. 1984; Ulrich et al. 2006; Wearden 1999, 2003; Wearden and Doherty 1995). This class of models revolves around the concept of an internal clock emitting impulses and a counter storing these pulses whenever a stimulus is presented. One of the most popular models of this class is the Scalar Timing Model. Another type of model that has been proposed recently is the state-dependent network model (Buonomano et al. 2009; Karmarkar and Buonomano 2007). This model relies on the idea of a network of simple computational elements, where internal time is encoded as a changing state of neurons during and after the exposition of stimuli. Yet another prominent group of models is constituted by psychophysical quantitative models (Church 1999; Getty 1975, 1976; Killeen and Weiss 1987; Rammsayer and Ulrich 2001). These models are usually represented as sets of equations describing dependencies between physical properties of a stimulus and its internal, subjective temporal representation. These psychophysical equations are sometimes closely related to the other groups of models, and may be seen as their specification. Apart from these groups of models, there exist more complex interdisciplinary approaches combining neurological, psychological and computational knowledge (Church 2003; Matell and Meck 2004; Meck 2005). More information on different classes of time perception models is provided in (Buhusi and Meck 2005; Grondin 2001; Ivry and Schlerf 2008; Zakay et al. 1999). Models that fall outside of the classification outlined above are described in (Shi et al. 2013; Staddon and Higga 1999; Yamazaki and Tanaka 2005). Overall, these are good theoretical frameworks: they provide explanations of experimental data, some of them are equipped with tools allowing to perform advanced simulations, and some of them integrate knowledge and data from different scientific disciplines. Nevertheless, a unified, commonly accepted theory of human timing is yet to be proposed.
Apart from the development of better and better explanations of human timing processes, much time and resources have also been devoted to exploring human (and animal) timing phenomena. There is a great body of research concerning interval timing, ranging from behavioral experiments conducted on animal and human subjects (Gibbon 1977; Grondin 2005; Wearden et al. 2007) to neuroimaging studies and research on patients with mental or neurodegenerative diseases (Grondin 2010; Hairston and Nagarajan 2007; Malapani et al. 1998; Riesen and Schnider 2001; Sévigny et al. 2003; Smith et al. 2007). Among experimental findings, several phenomena have been frequently reported and analyzed. One of them is the scalar property of animal and human timing – a characteristic often perceived as an equivalent of Weber's law in the domain of timing (Eisler et al. 2008; Wearden and Lejeune 2008; Wearden et al. 1997; Komosinski 2012). Depending on the type of an experiment, the scalar property may denote a constant coefficient of variation of a subject's timing judgments/measurements when perceiving stimuli of different durations, or a superposition of distributions of estimations of different time intervals, expressed on the same, relative timescale (Church 2002). This property became the core assumption of one of the most popular timing models – the Scalar Timing Model (Gibbon et al. 1984; Wearden 1999, 2003), or STM, which is part of the Scalar Expectancy Theory (SET). The STM and the SET are popular (Perbal et al. 2005; Wearden et al. 2007) despite the fact that the scalar property is not observed in some experiments (Komosinski 2012; Lewis and Miall 2009; Wearden and Lejeune 2008).
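Stated compactly, the scalar property means that the coefficient of variation σ(T̂)/E[T̂] of the internal estimate T̂ of a duration T is constant across durations. A minimal numerical illustration (ours, using a multiplicative Gaussian source of variance similar to the Scalar Variance Module described later):

```python
import numpy as np

rng = np.random.default_rng(0)
for T in (100, 400, 1600):                        # durations in ms
    estimates = T * rng.normal(1.0, 0.2, 100_000) # multiplicative noise
    cv = estimates.std() / estimates.mean()       # coefficient of variation
    print(T, round(cv, 3))                        # stays near 0.2 for every T
```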
Another frequently reported and robust phenomenon is the time-order error – TOE (Allan 1977; Hairston and Nagarajan 2007; Hellström 1985; Hellström and Rammsayer 2004; Jamieson and Petrusic 1975). The TOE is reported when duration (but also loudness, pitch, weight, etc.) of two successively presented stimuli is compared; this procedure is known as the two-alternative forced choice task (2-AFC). In the domain of human timing it is called the temporal discrimination task.
The TOE is a systematic overestimation (a positive TOE) or underestimation (a negative TOE) of the first stimulus relative to the second one. While the negative TOE is generally more common, many factors influence the magnitude and even the polarity of the TOE. For example, it is often reported that when the intensity of stimuli is low (in the context of timing research, when they have short durations), the TOE is closer to zero, and it may even become positive (Allan 1977; Hellström 2003). Another important factor influencing the TOE is the interstimulus interval (ISI); it was reported that longer ISIs cause a decrease of the magnitude of the TOE (Jamieson and Petrusic 1975). Many kinds of explanations have been proposed (Eisler et al. 2008; Hellström 1985), but there is no single explanation that would cover every property of the TOE. What is quite certain is that this is a perception-related phenomenon, not a decision-making one. The TOE is not only a matter for theoreticians of human timing to consider; it is also a methodological issue. The order of presentation of temporal stimuli may distort the response pattern of participants, increasing or decreasing the correct response rate by tens of percent (Jamieson and Petrusic 1975; Schab and Crowder 1988).
Responding to the need for a unified model of human timing, we have implemented the informational architecture of a commonly known clock-counter model, the STM, in the connectionist environment of an artificial neural network (Komosinski and Kups 2009, 2011). We call this implementation the Clock-Counter Timing Network. To study the responses of the CCTN, we developed a software platform which is able to conduct an artificial behavioral experiment: the temporal discrimination task. The CCTN consists of a number of modules, some of which are adopted directly from the STM. Furthermore, by including a few additional assumptions, the CCTN is able to manifest the TOE. A preliminary computational experiment proved that the CCTN can mimic the behavior of a human (the participant "BJ" from the study by Allan (1977)). This was possible even though those simulations did not demonstrate the scalar property which, according to various studies, does not always hold; such experiments allowed minimizing the influence of additional sources of variability in the otherwise complex system.
After establishing that the CCTN is able to successfully manifest the TOE, it is natural to ask whether the source of scalar variability in the network affects the TOE and the network's ability to mimic human behaviors. Answering this question may reveal how the two frequently reported phenomena interact. What is more, including the assumption of the scalar property in the CCTN makes this architecture more similar to the STM, which is the original theoretical foundation of the CCTN.
Results reported in this paper demonstrate that the computational representation of the STM is capable of explaining the robustness of the TOE, and it is also useful in predicting performance in the temporal discrimination task under different modes of activity. These capabilities make the CCTN a valuable contribution in the quest for explaining human timing mechanisms.
The neural model – the clock-counter timing network
To build a neural model of the timing mechanism and to perform the experiments, the Framsticks simulation environment was employed (Hapke and Komosinski 2008; Jelonek and Komosinski 2006; Komosinski and Ulatowski 2009, 2014). Apart from tools designed to build complex neural models, this software is able to efficiently perform simulation and optimization. Since simulation time is measured in simulation steps, it was assumed that one millisecond corresponds to one simulation step.
The basic processing units used in the CCTN are depicted in Figure 1. These are artificial neurons that process signals received from their inputs and transform them according to some rule or a simple function (a code sketch of several of these units follows the caption of Figure 1):
SeeLight – a receptor that outputs a value corresponding to the detected quantity of a stimulus; it can be used as a model of a light, sound, or smell sensor.
Pulse – outputs a pulse once in a few simulation steps; the number of steps between pulses is exponentially distributed and the mean can be adjusted.
Gate – this neuron has one control input and one or more standard inputs; if the signal flowing through the control input is positive, then the neuron outputs the weighted sum of inputs that have positive weights. When the signal in the control input is negative, the neuron outputs the weighted sum of inputs that have negative weights. For a zero control signal the neuron outputs zero.
Thr – a threshold neuron with a binary transfer function. The threshold value and both output values can be adjusted.
Gauss – outputs a product of the input value and the value drawn from the normal distribution of a given mean and a standard deviation.
Sum – accumulates received signals in each step and outputs the currently accumulated value.
Delay – propagates input to output with an adjustable delay of a number of simulation steps.
Neuron types used in the network shown in Figure 2. The "Gate" neuron conditionally passes input signal to output, depending on the state of the additional control input. The names of the remaining neurons indicate their function.
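To make the behavior of these units concrete, a minimal re-implementation of a few of them in plain Python is sketched below (this mirrors the descriptions above and is not the Framsticks code itself):

```python
import numpy as np

rng = np.random.default_rng()

class Pulse:
    """Emits 1.0 once in a few steps; interpulse intervals are
    exponentially distributed with an adjustable mean."""
    def __init__(self, mean_period):
        self.mean_period = mean_period
        self.countdown = rng.exponential(mean_period)
    def step(self):
        self.countdown -= 1
        if self.countdown <= 0:
            self.countdown = rng.exponential(self.mean_period)
            return 1.0
        return 0.0

class Sum:
    """Accumulates received signals and outputs the running total."""
    def __init__(self):
        self.value = 0.0
    def step(self, signal):
        self.value += signal
        return self.value

class Gauss:
    """Outputs the product of the input and a value drawn from a
    normal distribution with a given mean and standard deviation."""
    def __init__(self, mu, sigma):
        self.mu, self.sigma = mu, sigma
    def step(self, signal):
        return signal * rng.normal(self.mu, self.sigma)

def gate(control, positive_sum, negative_sum):
    """Gate neuron: passes the positively-weighted input sum for a positive
    control signal, the negatively-weighted sum for a negative one."""
    if control > 0:
        return positive_sum
    if control < 0:
        return negative_sum
    return 0.0
```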
Modules in the CCTN
The modules of the CCTN are shown in Figure 2. As described below, these modules belong to two groups: the modules present in the Scalar Timing Model informational architecture and additional modules enabling the CCTN to compare pairs of sequentially presented stimuli.
Modules present in the STM:
Pacemaker – consists of one Pulse neuron which has no input and emits pulses once in a few steps. The key parameter of this neuron is the mean interpulse interval, further referred to as the Pacemaker Period or the Pacemaker Speed.
Switch – consists of one Gate Neuron; lets the signal from the Pacemaker through whenever it receives a positive signal from the Receptor.
Accumulator – in the default state it stores the bias signal from the Accumulator Bias Module. When a stimulus is present (the Receptor is excited), the Accumulator stores the pulses from the Pacemaker.
Reference Memory – the role of this module is slightly different than in the original STM. The module stores the signal from the Scalar Variance Module after the end of the first stimulus (this information represents the duration of the stimulus). This module is also equipped with the resetting loop which starts to reset the memory after the exposition of the second stimulus. In the original STM, this module integrates lengths of conditional stimuli during the conditional/learning part of the experiment, and during the testing part, a randomly drawn sample of the duration is compared with the information stored in the working memory.
Working Memory – stores the signal from the Scalar Variance Module after the end of the second stimulus; it is equipped with a resetting loop which starts to reset it several steps after the signal is stored in memory.
Comparator – a few steps after the end of the second stimulus, it compares the values stored in the Reference and Working Memory buffers; if the signal from the Reference Memory has a greater absolute value than the signal from the Working Memory, the Comparator outputs 1.0. In the opposite case, the Comparator outputs −1.0. When the absolute difference between the two signals is lower than 0.1 (which usually happens before the end of a trial, and might happen when the difference between the two signals is really small), the Comparator outputs 0.0.
Scalar Variance Module – this module is not an explicit part of the STM, however, the function it serves is one of the ways of producing scalar variance (Komosinski 2012). The module consists of one Gauss neuron which receives signal from the Accumulator, and outputs the product of its input and a random value drawn from the normal distribution with a given mean and a standard deviation. The mean and the standard deviation are further subjected to experimental manipulations, along with other parameters of the CCTN.
Modules enabling CCTN to compare pairs of stimuli:
Stimulus Monitoring Module – consists of one Thr neuron that receives the signal from the Receptor. If the input signal is higher than the arbitrary threshold (currently set to 0.001), the module outputs 1.0, otherwise it outputs 0.0.
Accumulator Control Module – it is equipped with its own buffer (a Sum neuron). The module receives signals from the Accumulator Reset Module and the Receptor, and it outputs signals to the Accumulator Reset Module and to the Accumulator Bias Module. The role of the Accumulator Control Module is to recognize when to enable the two modules. The Accumulator Control Module enables the Reset Module after the end of the stimulus, and stops it when the signal in the Accumulator is close to a threshold value (0.1 by default); after that, the Accumulator Control Module enables the Bias Module, which stops on its own when a threshold of the bias value is reached.
Accumulator Reset Module – receives signals from the Accumulator Control Module and from the Stimulus Monitoring Module. The main part of this module is a negative feedback loop which starts several steps after the stimulus has ended. The module clears the state of the Accumulator by decreasing the accumulated value so that it may drop even below the bias threshold. The exact moment of stopping the activity of this module is determined by the Accumulator Control Module; the signal value in the Accumulator indicating this moment is called later the Accumulator Reset Lower Bound. The rate at which this module clears the signal in the Accumulator, further referred to as the Accumulator Reset Rate, is equal to the amount of signal that is deducted from the signal stored in the Accumulator.
Accumulator Bias Module – receives inputs from the Accumulator Control Module and the Stimulus Monitoring Module. After receiving a proper signal from the Control Module, it outputs a positive value until the signal stored in the Accumulator reaches a certain threshold. The amount of signal added to the signal in the Accumulator is called the Accumulator Bias Recovery Rate. The bias charging threshold is further referred to as Accumulator Bias. The processes of resetting and charging the Accumulator with the bias signal do not overlap.
Accumulator-Memory Mediator – receives signals from the Scalar Variance Module, the Reference Memory Module and the End-of-stimulus Module. This module passes the signal to the Reference Memory or to the Working Memory, depending on which stimulus of the pair has been presented.
End-of-stimulus Module – receives signals from the SeeLight receptor and the Accumulator Reset Module, and outputs control signals to the Comparator and to the Accumulator-Memory Mediator. Depending on whether the first or the second stimulus ends, this module enables the transfer of the signal from the Accumulator through the Accumulator-Memory Mediator or it enables the comparison process in the Comparator.
The CCTN artificial neural network based on the STM architecture; the network compares lengths of two stimuli. Individual modules are described in the text.
An example of the key modules of the CCTN processing signals is shown in Figure 3. The scalar property of timing is provided by the Scalar Variance Module (7). Another way to obtain this property would be to use more biologically adequate building elements which may produce the scalar property emergently, or to introduce some form of inherent noise. However, the main goal of this work is to see how the scalar property interacts with the TOE phenomenon, not to explain the scalar property itself.
Sample signal waveform in the key modules of the CCTN comparing two pairs of stimuli. Top panel: When the stimuli are presented, the Receptor feeds the 1.0 value to the network. During the presence of the stimulus, the Accumulator collects the signal from the Pacemaker module. After the exposition of each stimulus, the Accumulator Reset Module resets the signal in the Accumulator – the signal in the Accumulator gradually drops. The Scalar Variance Module located between the Accumulator and the memory modules introduces Gaussian noise to the signal sent from the Accumulator. Middle panel: After the exposition of the first/second stimulus of a pair, the Reference/Working Memory stores the signal acquired from the Scalar Variance Module. After the end of a trial, the signal in these memories is reset. Bottom panel: After the end of the second stimulus of a pair, the Comparator compares signals from the Working Memory and the Reference Memory. When the value in the Reference Memory is bigger, the Comparator sends 1.0, otherwise it sends −1.0. Note that in the example shown, the CCTN made an error when comparing the first pair of stimuli: the Comparator sent a positive value while, in fact, the second stimulus lasted longer.
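Using the unit sketches given after Figure 1, a single trial of the temporal discrimination task can be abstracted end-to-end as below. This is a deliberate simplification of ours: the bias, resetting and control dynamics of Figure 2 are omitted here and discussed with the parameter list later.

```python
def run_trial(dur1_ss, dur2_ss, pacemaker, scalar_variance):
    """Compare two stimulus durations (in simulation steps) the way the
    CCTN does: accumulate pulses, apply the Scalar Variance Module, store
    in the Reference/Working Memory, and compare."""
    def encode(duration_ss):
        accumulator = Sum()                    # fresh, bias-free accumulator
        for _ in range(duration_ss):
            accumulator.step(pacemaker.step())
        return scalar_variance.step(accumulator.value)

    reference_memory = encode(dur1_ss)         # first stimulus of the pair
    working_memory = encode(dur2_ss)           # second stimulus of the pair
    diff = reference_memory - working_memory
    if abs(diff) < 0.1:                        # comparator dead zone
        return 0                               # no decision
    return 1 if diff > 0 else -1               # 1 means "first longer"

# Example: answer = run_trial(70, 160, Pulse(10), Gauss(1.0, 0.1))
```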
Time-order error in the CCTN
In general, the TOE for two durations can be calculated as the difference between the conditional probability of the correct answer when the first stimulus lasted longer (the "LongShort" case), and the probability of the correct answer when the second stimulus lasted longer (the "ShortLong" case). In the case when the presented stimuli have the same durations, the TOE is calculated as the difference between the frequency of the answer "the first stimulus lasted longer" and 50 percent. To enable proper comparisons of the TOE values in these two different situations, the resulting value in the former case has to be halved (Jamieson and Petrusic 1975). The formulas describing these two measures are:
$$ TOE = \frac{P(\mathrm{CorrectAnswer} \mid \mathrm{LongShort}) - P(\mathrm{CorrectAnswer} \mid \mathrm{ShortLong})}{2} $$
$$ TOE = P(\mathrm{FirstReportedLonger} \mid \mathrm{BothIdentical}) - 0.5 $$
A negative TOE value means overestimation of the second stimulus relative to the first one, which can be measured as a higher frequency of the correct answer when the second stimulus of a pair lasted longer, than in the case when the stimuli were presented in the reverse order – compare with (1). If the presented stimuli are of the same duration, a negative TOE means that the answer "the second stimulus lasted longer" is more frequent than 50 percent – compare with (2). A positive TOE means that the opposite pattern of responses occurred. As mentioned before, the earlier version of the CCTN that was not equipped with the Scalar Variance Module was capable of manifesting the TOE – at least for the range of stimuli used in the experiment described by Allan (see Section "Data"), and of nearly the same magnitude as the one exhibited by the participant BJ.
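In code, estimating the TOE from raw response proportions according to (1) and (2) reduces to two one-liners (convenience helpers of ours):

```python
def toe_unequal(p_correct_long_short, p_correct_short_long):
    """Eq. (1): TOE for pairs whose durations differ; a negative value
    means the second stimulus is overestimated relative to the first."""
    return (p_correct_long_short - p_correct_short_long) / 2.0

def toe_equal(p_first_reported_longer):
    """Eq. (2): TOE for pairs of identical durations."""
    return p_first_reported_longer - 0.5
```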
A positive TOE occurs mainly due to the activity of the Accumulator Bias Module and the Accumulator Reset Module. If, after the exposition of the first stimulus of a pair, the signal drops below the default level before the second stimulus appears, the second stimulus has a smaller chance of being reported by the network as longer than the first one. Increasing the length of the interstimulus interval would reduce this effect; this phenomenon is reported in the literature on human timing (Jamieson and Petrusic 1975).
A negative TOE in the CCTN is caused by the work of the Accumulator Reset Module. If, after the exposition of the first stimulus of a pair, there are many pulses accumulated in the Accumulator and the resetting process is slow, then at the time of the arrival of the second stimulus the Accumulator still contains remnants of the pulses accumulated during the first stimulus. If the remaining value in the Accumulator is higher than the default bias, then there is a greater chance that the second stimulus will be reported as longer. Note that according to the predictions of the CCTN, this effect should be smaller for shorter stimuli and a longer interstimulus interval. Again, these phenomena are reported in the literature on human timing (Jamieson and Petrusic 1975). A sufficiently long interstimulus interval would cause the TOE to become positive, which is a prediction that is yet to be confirmed in separate empirical experiments.
The basic mechanisms responsible for the TOE that have been proposed in our earlier research remain the same in the present version of the CCTN. The assumptions underlying the manifestation of the TOE have led to the development of the model's ability to reflect human behaviors in timing tasks. In this work we investigate how including the source of scalar variability influences the patterns of neural network responses during temporal discrimination of relatively short stimuli. These stimuli are in the range of tens to less than two hundred milliseconds, although in the experiment concerning the variability of temporal representation, longer stimuli were considered as well. The source of scalar variance is unable to "swamp" the variance generated by the Poissonian pulse generator when stimuli are very short (Gibbon 1992). This behavior is also supported by empirical data demonstrating that stimuli in the range of milliseconds tend to cause higher coefficients of variation of judgements than longer stimuli, and that the magnitude of this ratio drops fast as the stimuli get longer (Lewis and Miall 2009; Wearden and Lejeune 2008).
To perform extensive analyses of the CCTN, the idea of an experiment originally performed by Allan (1977) was employed. Allan's data have been previously used in tasks that model timing mechanisms (Eisler 1981; Hellström 1985). Our experiments imitate the structure of Experiment II – more specifically, the part where participants compared short stimuli. In this part, subjects were presented with a set of short durations in each trial, and had to decide which of the two visual stimuli lasted longer. The set of stimuli consisted of ten different types of pairs of stimuli ranging from 70 to 160 ms. Apart from adapting the experimental procedure, we have also fitted the data from the simulation experiments to the results of the "BJ" participant. The data were taken from Table two in (Allan 1977), with the TOE for unequal stimuli halved to enable direct comparisons with pairs containing equally long stimuli.
Each pair of stimuli was presented to this participant approximately 150 times across 5 sessions of 3 blocks of 100 pairs. Actually, in the case of "BJ", 1–5 presentations of each pair did not take place or their results were discarded from further analysis (Eisler 1981), but because these numbers were relatively low, this artifact was not reflected in our experiments.
For further analyses, we used the proportions of the responses "first longer" – meaning that the first stimulus of the pair was reported as lasting longer. Such proportions are also provided for each subject and for each stimulus length in Allan's study.
To investigate the behavior of the CCTN and the interplay between its parameters, two kinds of simulations were performed. The aim of the first one was to study the influence of the Scalar Variance Module on the variability of signals in the Reference Memory and in the Working Memory modules. The second experiment was designed to study the TOE phenomenon and its dependence on parameter values of the network. The crucial part of the second experiment was testing how different parameters of the Scalar Variance Module influence the magnitude of the TOE, the overall accuracy of discrimination of two short stimuli, and the goodness of fit of the network to the human behavior.
The CCTN parameters
In this work, the influence of six important parameters of the CCTN on the actual outcome of the stimuli comparison is studied. These parameters are:
Pacemaker Period PP – the mean interval between consecutive pulses in the Pacemaker Module. It is calculated as \(\frac {1}{\lambda }\), where λ is the mean of the Poissonian distribution. The PP parameter is often called the internal clock speed.
The mean SVμ and the standard deviation SVσ of the normal distribution used in the Scalar Variance Module. These two parameters are further referred to as the Scalar Variance factor SV, as both values determine the (scalar) variability of stimuli representations in the Working Memory module.
The Accumulator Reset Rate ARR – the rate at which the Accumulator is cleared after the exposition of a stimulus. This parameter is largely responsible for the negative TOE, since the remnants of the signal in the Accumulator Module add to the signal related to the second stimulus and, in consequence, lead to its overestimation. The signal value in the Accumulator Module at which the resetting process stops is called the Accumulator Reset Lower Bound, ARLB; it was constant in our experiments.
The Accumulator Bias value AB – the default signal value in the Accumulator when no stimulus is presented. Increasing this value potentially favors a positive TOE, as the bias may add to the representation of the first stimulus in a pair. However, the actual time profile of a trial and the Accumulator Bias Recovery Rate determine whether an increased AB would indeed lead to the overestimation of the first stimulus.
The Accumulator Bias Recovery Rate ABRR – the rate at which the Accumulator Bias recovers after the exposition of the second stimulus in a pair. The interplay of these parameters is illustrated in the sketch below.
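To make the interplay of these parameters concrete, the following minimal Python sketch shows how a single stimulus could be encoded and transferred to memory. This is our own illustration, not the original Framsticks implementation: the function name, the Bernoulli approximation of the Poisson pulse generator, and the omission of the reset and bias-recovery dynamics are all simplifying assumptions.

```python
import random

def encode_stimulus(duration_ss, pp=10, sv_mu=1.0, sv_sigma=0.1, ab=0.5):
    """Sketch of encoding one stimulus of `duration_ss` simulation steps
    (1 ss is assumed to correspond to 1 ms).

    pp              -- Pacemaker Period, the mean interval between pulses (1/lambda)
    sv_mu, sv_sigma -- parameters of the Scalar Variance Module
    ab              -- Accumulator Bias, the default signal before the stimulus
    """
    accumulator = ab              # the Accumulator starts from its bias value
    rate = 1.0 / pp               # mean number of pulses per simulation step
    for _ in range(duration_ss):
        if random.random() < rate:   # Bernoulli approximation of Poisson pulses
            accumulator += 1.0
    # the memory transfer multiplies the accumulated signal by a normally
    # distributed factor -- this product is the source of scalar variance
    working_memory = accumulator * random.gauss(sv_mu, sv_sigma)
    return accumulator, working_memory
```

Under these update rules the standard deviation of the pulse count grows roughly with the square root of the duration, while the multiplicative memory transfer contributes variability proportional to the duration itself – which is why the scalar source dominates for long stimuli and cannot swamp the pulse-counting variance for very short ones.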
Ten different CCTN networks were considered; each of them was presented 1000 times with a set of 27 different pairs of identical stimuli. Each network was characterized by one of the five pairs of parameters of the Scalar Variance Module: \((SV_{\mu}=1.0, SV_{\sigma}=0.05)\), \((SV_{\mu}=1.0, SV_{\sigma}=0.1)\), \((SV_{\mu}=1.0, SV_{\sigma}=0.2)\), \((SV_{\mu}=2.0, SV_{\sigma}=0.2)\), and \((SV_{\mu}=2.0, SV_{\sigma}=0.4)\). In each network, the period of the Pacemaker Module was set to either \(\frac{1}{\lambda} = 10\) ss or \(\frac{1}{\lambda} = 20\) ss (ss denotes simulation steps). It was assumed that one ss reflects one millisecond. Other important parameters of the networks were adjusted so that there was no interference in the Accumulator between the signal remaining after the exposition of the first stimulus and the signal related to the second stimulus: the Accumulator Reset Rate \(ARR=0.005\), the Accumulator Reset Lower Bound \(ARLB=0.1\), the Accumulator bias (default signal) \(AB=0.5\), and the Accumulator Bias Recovery Rate \(ABRR=0.001\). The stimuli ranged from 10 ss to 10000 ss. Pairs of stimuli were presented in the ascending order of stimulus length. The presentation of each pair was preceded by 2750 ss to charge the Accumulator with the bias signal. The interstimulus interval (ISI) of each pair lasted 10000 ss. Because the main aim of this experiment was to explore the behavior of networks comparing short stimuli, the stimuli in the range 10–280 ss were sampled with the highest resolution (every 30 ss); stimuli in the range 300–900 ss were sampled every 100 ss, and stimuli in the range 1000–10000 ss were sampled every 1000 ss. During the experiment, the network responded after the exposition of the second stimulus. The absolute difference between the signals from the Reference Memory and the Working Memory had to be higher than 0.01 for the Comparator to output 1.0 (the first stimulus considered longer) or −1.0 (the second stimulus considered longer); otherwise the Comparator would output 0.0. The time between the end of the second stimulus of a pair and the start of the next trial lasted 3000 ss to let the network return to its initial state.
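As a sanity check, the sampling scheme described above indeed yields the 27 durations used in Experiment 1; the short fragment below reproduces the set (the variable names are ours):

```python
# the 27 stimulus durations of Experiment 1, in simulation steps (ss)
durations = (list(range(10, 281, 30))           # 10-280 ss every 30 ss: 10 values
             + list(range(300, 901, 100))       # 300-900 ss every 100 ss: 7 values
             + list(range(1000, 10001, 1000)))  # 1000-10000 ss every 1000 ss: 10 values
assert len(durations) == 27
pairs = [(d, d) for d in durations]             # pairs of identical stimuli
```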
A total of 270 networks were examined; each network had a different configuration of the six crucial parameters: the period of the Pacemaker Module pulse generation \(\left(\frac{1}{\lambda}\right)\), the mean and the standard deviation of the normal distribution associated with the Scalar Variance Module (\(SV_{\mu}\) and \(SV_{\sigma}\)), the Accumulator Reset Rate ARR, the Accumulator Bias value AB, and the Accumulator Bias Recovery Rate ABRR. The remaining aspects of the network settings were the same as in Experiment 1. The values of the parameters were as follows: \(\lambda \in \{1/5, 1/10, 1/20\}\); \((SV_{\mu}, SV_{\sigma}) \in \{(1.0, 0.05), (1.0, 0.1), (1.0, 0.2), (2.0, 0.2), (2.0, 0.4)\}\); \(ARR \in \{0.0016, 0.0017, 0.0018, 0.0019, 0.002\}\); \(AB \in \{0.1, 0.6, 1.1\}\); \(ABRR \in \{0.001, 0.011\}\). All the combinations of parameters in these sets yield a total of 216 network configurations. Additionally, networks with no Scalar Variance Module were examined; excluding the parameters for scalar variability gives 54 combinations of the remaining parameters, thus 270 networks in total. Each of these networks was tested 32 times.
There were ten different pairs of stimuli lengths, given in simulation steps: (70,100), (100,70), (100,130), (130,100), (130,160), (160,130), (70,70), (100,100), (130,130), and (160,160). Each pair was presented to a network 150 times, hence 1500 pairs in total were presented to each of the 270 networks. The simulation setup was arranged to be similar to the experiment conducted by Allan (see Section "Data"), with some obvious differences. Since artificial networks were tested rather than humans, the experiment was not divided into blocks and sessions; as the networks were not equipped with sophisticated sensory and decision systems, there were no warning signals at the beginnings of trials, and stimulation concerned a single, simulated receptor.
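For reference, the trial set of Experiment 2 can be restated compactly; this fragment only re-expresses the pair types and presentation counts given above (the order in which the 1500 trials were presented is not specified in the text, so none is imposed here):

```python
# the ten pair types of Experiment 2; durations in simulation steps (ss)
PAIR_TYPES = [(70, 100), (100, 70), (100, 130), (130, 100), (130, 160),
              (160, 130), (70, 70), (100, 100), (130, 130), (160, 160)]

# each pair type is presented 150 times to every network
trials = [pair for pair in PAIR_TYPES for _ in range(150)]
assert len(trials) == 1500
```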
Data collection and analyses
In the first experiment, the values of signals were collected from four key neurons of the network: the Accumulator buffer, the Reference Memory buffer, the Working Memory buffer, and the output of the Comparator module. For each network, one thousand values for each type of pair of stimuli were registered for all the modules except the Accumulator, from which we collected two thousand values – one for each of the two stimuli in a pair. For the first three key neurons, the mean value, the standard deviation, and the coefficient of variation of the signal were computed. For the Comparator module, the proportion of the "first stimulus longer" signal for each stimuli pair was calculated. Additionally, the distributions of the values from the Accumulator, the Reference Memory and the Working Memory were determined for each duration of stimuli. These values were collected during the trials: the values from the Accumulator just after the end of the stimuli, the values from the Reference and the Working Memory buffers several steps after the end of the stimuli, and the output value from the Comparator module after the end of the second stimulus.
In the second experiment, for each network and for each type of stimuli pair, the proportion of the "first stimulus longer" response was calculated from the output of the Comparator. The cases of 0.0 signals (meaning that the difference between signals from the Reference Memory and the Working Memory was too small) were randomly assigned to one of the two categories. Having these proportions, the ratios of the correct answers for each stimuli pair and the TOE values were determined. For each pair of stimuli, we have also calculated the mean squared error (MSE) between response rates of the network and the participant BJ (see Section "Data"). Statistical analyses were performed using the IBM SPSS Statistics package, version 21.0.0.1.
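For illustration, a minimal Python sketch of how these quantities could be derived from the Comparator output is given below. The tie-breaking rule mirrors the random assignment described above; the TOE formula (the halved sum over the two presentation orders, which reduces to \(p-0.5\) for pairs of equal stimuli) is our assumed operationalization and not a quotation from the source:

```python
import random
import statistics

def proportion_first_longer(comparator_outputs):
    """Proportion of 'first longer' answers; 0.0 outputs (no decision) are
    randomly assigned to one of the two categories, as described above."""
    votes = [out if out != 0.0 else random.choice([1.0, -1.0])
             for out in comparator_outputs]
    return sum(v == 1.0 for v in votes) / len(votes)

def toe(p_first_longer_ab, p_first_longer_ba):
    """Assumed TOE for a pair type presented in both orders, halved so that
    it is comparable with equal-stimuli pairs; for an equal pair both
    arguments coincide and the value reduces to p - 0.5."""
    return (p_first_longer_ab + p_first_longer_ba - 1.0) / 2.0

def mse(network_props, bj_props):
    """Mean squared error between network and participant BJ response rates."""
    return statistics.fmean((n - b) ** 2 for n, b in zip(network_props, bj_props))
```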
Figure 4 demonstrates that for each set of parameters of the Scalar Variance Module and the Pacemaker Module, the coefficient of variation of signal values in each memory buffer increased when registered stimuli were shorter than 2000 ss (the results were almost identical for the Working Memory and the Reference Memory buffers, so charts are presented only for the former buffer).
Coefficient of variation (vertical axis) of the signal related to the representation of stimuli duration in Experiment 1. Left panel: the Accumulator buffer. Right panel: the Working Memory buffer. Gray lines illustrate experiments with Pacemaker speed \(PP=10\), and black lines correspond to \(PP=20\). Three kinds of markers (dot, cross, circle) are used for different \(SV_{\sigma}/SV_{\mu}\) ratios. The key is the same for both plots.
Contrary to the Accumulator, in the memory buffers the coefficient of variation stabilized when the duration of stimuli exceeded 2000–4000 ss. In the Accumulator, the coefficient of variation seemed to drop continuously, though the magnitude of the decrease tended to get lower when longer stimuli were presented. A detailed theoretical analysis of this relationship, including its asymptotic behavior, can be found in (Komosinski 2012). Altogether, the coefficient of variation in the Accumulator was higher when the Pacemaker speed was lower (recall that only two Pacemaker speeds were tested). Although the coefficient of variation in the memory buffers was highly influenced by the activity of the Scalar Variance Module, the dependence of the coefficient of variation on the Pacemaker speed was especially visible when the presented stimuli were short. To see this dependence, compare the changes of the coefficient of variation in the networks characterized by \(PP=10\), \(SV_{\sigma}=0.1\), \(SV_{\mu}=1.0\) or \(PP=10\), \(SV_{\sigma}=0.2\), \(SV_{\mu}=2.0\) with the network with \(PP=20\), \(SV_{\sigma}=0.05\), \(SV_{\mu}=1.0\). Not surprisingly, the coefficient of variation grew together with the \(SV_{\sigma}/SV_{\mu}\) ratio in the Scalar Variance Module, which is especially visible for longer stimuli. A high relative variability added by the Scalar Variance Module, accompanied by an increased Pacemaker speed, led to a faster stabilization of the coefficient of variation (cf. the discussion of Weber's law in (Komosinski 2012)).
Apart from the signals in the Accumulator and in the memory buffers, we also examined the patterns of answers of the networks. For longer stimuli, for each network, the two responses favoring one of the stimuli were equally probable. For pairs of the shortest stimuli, however, the answer "do not know" (the Comparator output was 0.0) appeared more often: the absolute difference between the memory representations of the stimuli was below the threshold. The probability of such situations increased with a decreasing speed of the Pacemaker and with a decrease of the relative variability in the Scalar Variance Module.
The first indicator of timing performance to be measured was the overall accuracy (OA) – the mean percentage of correct answers across all trials involving differing stimuli.
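Because each of the six unequal pair types was presented the same number of times (150 per network), averaging over all such trials is equivalent to averaging the per-type rates of correct answers; in symbols (the notation is ours):

\[ \mathit{OA} = \frac{1}{6} \sum_{i=1}^{6} \frac{c_{i}}{150}, \]

where \(c_{i}\) denotes the number of correct answers for the \(i\)-th pair type with differing stimuli.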
The outcome of the Kolmogorov-Smirnov test for normality (\(D=0.007\), \(p=.200\)), as well as the visual inspection of the Q-Q plot and the histogram, showed that the residuals of the dependent variable were consistent with a normal distribution; the outcome of Levene's test (\(F=1.361\), \(p<.001\)) indicated that the variances in the groups were not homogeneous – which would render parametric analyses of the dependent variable unusable until the scale is transformed. However, after the arcsin transformation (which made the results of Levene's test non-significant: \(F=1.115\), \(p=.099\)), the output of the 5-way General Linear Model for the transformed data was very similar to the output for the non-transformed data: the set of significant effects did not change. The magnitude of the effects on the transformed scale, expressed as partial eta squared \(\eta_{p}^{2}\), was usually slightly higher, but the order of magnitude of the effects was preserved. For this reason, and to avoid unnecessary transformations, the results of the analyses are presented for the non-transformed data. All post hoc analyses were performed using the Bonferroni test\(^{a}\). The results of the analyses are presented in Table 1.
Table 1 Tests of simple and interaction effects on overall accuracy (OA)
The main effects of the Scalar Variance factor (\(SV\): \(F(4,8370)=3566\), \(p<.001\), \(\eta_{p}^{2}=.630\)), the Pacemaker period factor (\(PP\): \(F(2,8370)=38548\), \(p<.001\), \(\eta_{p}^{2}=.902\)), the Accumulator reset rate (\(ARR\): \(F(2,8370)=947\), \(p<.001\), \(\eta_{p}^{2}=.185\)) and the Accumulator bias (\(AB\): \(F(2,8370)=233\), \(p<.001\), \(\eta_{p}^{2}=.053\)) were significant. The main effect of the Accumulator bias recovery rate factor was not significant (\(ABRR\): \(F(1,8370)=0.037\), \(p=.847\), \(\eta_{p}^{2}<.001\)).
Post-hoc analyses revealed that for each main effect excluding the SV factor, each pair of levels differed significantly (all \(p<.001\)). As for the main effect of SV, all pairs of levels but one ((1.0,0.1)–(2.0,0.2): \(p=.812\)) differed significantly (\(p<.001\)). The details (see also Figure 5) are presented below:
PP: the lower the PP value, the higher the accuracy.
Significant main effects on total accuracy of unequal pairs comparison in Experiment 2. Error bars indicate 95% confidence intervals.
SV: the highest accuracy was observed for the networks which did not have the scalar source of variability; then the accuracy dropped with the increase of the relative variability produced by the SV module (the networks with the same SV-related variability did not differ significantly).
ARR: the increase of the ARR parameter value entailed growth of the OA.
AB: a similar trend was observed as in the previous case; however, the increase in accuracy was less pronounced.
Additionally, there were significant interactions: \(PP \times SV\) (\(F(8,8370)=337\), \(p<.001\), \(\eta_{p}^{2}=.244\)), \(PP \times ARR\) (\(F(4,8370)=206\), \(p<.001\), \(\eta_{p}^{2}=.090\)), \(PP \times AB\) (\(F(4,8370)=41.9\), \(p<.001\), \(\eta_{p}^{2}=.020\)), \(SV \times ARR\) (\(F(8,8370)=4.17\), \(p<.001\), \(\eta_{p}^{2}=.004\)), \(SV \times AB\) (\(F(8,8370)=7.36\), \(p<.001\), \(\eta_{p}^{2}=.007\)), \(ARR \times AB\) (\(F(4,8370)=10.6\), \(p<.001\), \(\eta_{p}^{2}=.005\)), \(AB \times ABRR\) (\(F(2,8370)=3.03\), \(p=.049\), \(\eta_{p}^{2}=.001\)), \(SV \times PP \times ARR\) (\(F(16,8370)=4.18\), \(p<.001\), \(\eta_{p}^{2}=.008\)), and \(SV \times PP \times AB\) (\(F(16,8370)=1.69\), \(p=.041\), \(\eta_{p}^{2}=.003\)). All the other interactions were not significant (all \(p\geq.109\)). Most of the interactions were ordinal.
More detailed analyses of interaction effects revealed that (see Figures 6 and 7):
\(PP \times SV\): networks with lower scalar variability, including the networks without a scalar variance source, demonstrated higher accuracy than those with a higher variability. Almost all differences between networks with different SV parameter values were significant across all levels of the PP factor, except for the ((1.0,0.05) – non-scalar) pair at \(PP=20\), where \(p=.027\); all other \(p<.001\). The only non-significant difference across all levels of PP was for the ((1.0,0.1) – (2.0,0.2)) pair: \(p\geq.990\). In general, the SV effect was more pronounced when the Pacemaker was faster; the least sensitive to the increase in the Pacemaker speed were the networks with the highest \(SV_{\sigma}/SV_{\mu}\) ratio in the Scalar Variance Module: the higher the Pacemaker speed, the lower the increase of accuracy in these networks.
Significant two-way interaction effects on total accuracy of unequal pairs comparison in Experiment 2. Error bars indicate 95% confidence intervals.
Significant three-way interaction effects on total accuracy of unequal pairs comparison in Experiment 2. Error bars indicate 95% confidence intervals.
\(PP \times ARR\): except for the lowest speed of the Pacemaker Module, pairwise comparisons revealed significant differences (\(p<.001\)) between all groups of networks across different Accumulator reset rates. For the lowest Pacemaker speed, the only non-significant difference was between the networks with the highest and the medium reset rate (\(p=.063\)); all other pairwise differences were significant (\(p<.001\)). The OA increased together with the ARR factor values across different levels of the PP factor. The magnitude of the difference changed across the levels of the period factor, showing that the difference was larger when the Pacemaker was faster.
\(PP \times AB\): here the pattern was similar to the previous interaction, except that for the lowest Pacemaker speed, the only significant difference was between the two peripheral values of the bias parameter (\(p=.046\)). Other differences at this level of the PP factor were not significant (\(p\geq.113\)).
\(SV \times ARR\): all differences between values of the ARR across all levels of the SV factor were significant (\(p<.001\)), demonstrating that networks with a higher Accumulator reset rate were more accurate than those with lower ARR values. The interaction seemed to be driven by slight changes in the magnitude of the differences between networks with different ARR levels across lower levels of the SV (including the lack of the scalar variance source) and networks with higher levels of the SV.
\(SV \times AB\): this interaction seemed to be caused by a decreasing discrepancy in accuracy between networks with different Accumulator bias levels (a higher bias level increased accuracy) for growing scalar variability levels (starting from the lack of a scalar variability source). Moreover, for the group with the highest relative variability caused by the SV module, the difference between the 0.6 and 1.1 levels of the AB factor was non-significant (\(p=.794\)).
\(ARR \times AB\): pairwise comparisons revealed that across all levels of the ARR factor, each pair of groups with different values of the AB parameter differed significantly (\(p<.001\)). However, these differences tended to be somewhat smaller among the groups with higher values of the ARR parameter. Nevertheless, when the value of the ARR parameter was fixed, accuracy increased as the value of the AB parameter increased.
\(AB \times ABRR\): this interaction barely crossed the threshold of statistical significance both before and after the arcsin transformation of the data (\(p=.049\) in both cases), so these results should be treated with caution. The pairwise comparisons revealed (all \(p<.001\)) that the discrepancy in accuracy between \(AB=0.1\) and \(AB=1.1\), and between \(AB=0.6\) and \(AB=1.1\), increased slightly when the value of the ABRR factor decreased.
\(SV \times PP \times ARR\): the \(PP \times ARR\) interaction was modulated by the SV factor – i.e., the difference between networks with different values of the ARR parameter within the group of the fastest networks was larger when the relative variability was lower (although in these cases all \(p<.001\)). Across all levels of the SV factor, when the PP was lower than 20, the networks with a higher Accumulator reset rate were more accurate than those with a lower reset rate. Interestingly, as revealed by the F-tests, the effect of the ARR factor was not significant when \(SV=(2.0,0.2)\) and \(PP=20\) (\(p=.375\)), contrary to all other cases, where \(p\leq.003\). There was also a minor non-linearity in the ARR effect size in the medium Pacemaker speed groups across the levels of the SV factor; overall, the performance dropped as the scalar variability increased.
\(SV \times PP \times AB\): particularly within the group of networks with a lower scalar variability produced by the SV module (or the lack thereof), it was observed that the higher the Pacemaker rate, the larger the differences between networks with distinct values of the AB parameter. As revealed by the F-tests, the effect of the AB factor among the networks with the slowest Pacemaker was not significant (\(p\geq.076\)) across almost all levels of the SV factor; only the networks with the lowest scalar variability ((1.0,0.05)) differed significantly in this group (\(p=.016\)). Apart from that, across all levels of the SV factor, when the PP level was fixed, networks with a higher Accumulator bias were more accurate – however, the overall performance decreased as the scalar variability increased.
To aggregate the results concerning the TOE for different classes of pairs of stimuli, the arithmetic mean of the TOE values was calculated across all types of pairs (both with unequal and equal stimuli within a pair). Obviously, this aggregate measure is insufficient to reveal the exact patterns of changes of the TOE across different pairs of stimuli, which are interesting and complex by themselves. However, since all pairwise correlations between the TOEs related to different types of pairs were significant, positive and relatively high (all \(p<.001\) and Pearson's \(r(8640)\geq.717\)), employing such a measure was justified. To additionally ensure that the mean reflected the values of its arguments, only those effects were interpreted that were significant for the majority of the TOEs related to individual types of pairs. This time both the assumption of normality of residuals (Kolmogorov-Smirnov test: \(D=0.007\), \(p=.200\)) and that of homogeneity of variances (Levene's test: \(F=1.070\), \(p=.208\)) were met, so all analyses were performed directly using the GLM and post hoc analyses (the Bonferroni test) on the mean TOE (denoted as mT below). The results of these analyses are presented in Table 2.
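A compact sketch of this aggregation step is shown below; the per-pair TOE values are hypothetical and serve only to make the fragment runnable (statistics.correlation requires Python 3.10+):

```python
import statistics

def mean_toe(toe_by_pair_type):
    """Arithmetic mean of the TOE across all ten pair types (mT)."""
    return statistics.fmean(toe_by_pair_type)

# the aggregation is justified when pair-specific TOEs move together;
# a quick check on two hypothetical per-network TOE series:
toe_pair_a = [-0.21, -0.15, -0.30, -0.08]
toe_pair_b = [-0.19, -0.12, -0.27, -0.10]
print(statistics.correlation(toe_pair_a, toe_pair_b))  # in the study, all r >= .717
```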
Table 2 Tests of simple and interaction effects on TOE (mT)
All the main effects were significant (\(PP\): \(F(2,8370)=61228\), \(p<.001\), \(\eta_{p}^{2}=.936\); \(SV\): \(F(4,8370)=1216\), \(p<.001\), \(\eta_{p}^{2}=.368\); \(ARR\): \(F(2,8370)=17763\), \(p<.001\), \(\eta_{p}^{2}=.809\); \(AB\): \(F(2,8370)=22313\), \(p<.001\), \(\eta_{p}^{2}=.842\); \(ABRR\): \(F(1,8370)=37.1\), \(p<.001\), \(\eta_{p}^{2}=.004\)). All of these effects were significant for the majority of the pair-specific TOEs.
Post-hoc analyses revealed that for each main effect excluding the SV factor, each pair of levels differed significantly (all \(p<.001\)). As for the main effect of the SV factor, all pairs of levels but one ((1.0,0.1)–(2.0,0.2): \(p=1.0\)) differed significantly (\(p<.001\)). The mTs averaged for each group of networks were below zero. Details (see Figure 8) are presented below:
PP: the higher the PP value, the lower the mT (recall that "lower" means more distant from 0.0 and closer to −0.5 – the maximal negative TOE).
Significant main effects on the mean TOE for all types of comparisons of pairs in Experiment 2. Error bars indicate 95% confidence intervals.
SV: the lowest mT was observed for networks which did not have the scalar source of variability; then the mT increased with the growth of the relative scalar variability in the SV module. Networks with the same relative variability of the SV did not differ significantly in the mT.
ARR: the increase of the ARR value entailed growth of the mT.
AB: the growth of the AB parameter yielded increase of the mT.
ABRR: the higher value of the ABRR entailed only a slightly higher mT than the lower value.
The significant interactions were: \(PP \times SV\) (\(F(8,8370)=339\), \(p<.001\), \(\eta_{p}^{2}=.245\)), \(PP \times ARR\) (\(F(4,8370)=291\), \(p<.001\), \(\eta_{p}^{2}=.122\)), \(PP \times AB\) (\(F(4,8370)=1490\), \(p<.001\), \(\eta_{p}^{2}=.416\)), \(SV \times ARR\) (\(F(8,8370)=40.4\), \(p<.001\), \(\eta_{p}^{2}=.037\)), \(SV \times AB\) (\(F(8,8370)=13.6\), \(p<.001\), \(\eta_{p}^{2}=.013\)), \(ARR \times AB\) (\(F(4,8370)=15.5\), \(p<.001\), \(\eta_{p}^{2}=.007\)), \(PP \times SV \times ARR\) (\(F(16,8370)=3.03\), \(p<.001\), \(\eta_{p}^{2}=.006\)), \(PP \times SV \times AB\) (\(F(16,8370)=2.66\), \(p<.001\), \(\eta_{p}^{2}=.005\)), \(SV \times ARR \times AB\) (\(F(16,8370)=4.42\), \(p<.001\), \(\eta_{p}^{2}=.008\)), and \(PP \times SV \times AB \times ABRR\) (\(F(16,8370)=1.77\), \(p=.029\), \(\eta_{p}^{2}=.003\)). All the other interactions were not significant (all \(p\geq.063\)). Not all of these interactions were confirmed by the pair-specific TOE analyses (see below). Most interactions were ordinal.
More detailed analyses of interaction effects revealed that (see Figures 9 and 10):
\(PP \times SV\): most of the time, when the PP level was fixed, the mT increased with the scalar variability level (starting from the situation when there was no SV module at all). For \(PP=5\) and \(PP=10\), the only non-significant differences were between networks with the same relative variability produced by the SV module (all \(p=1.0\) in these cases; in all other cases, \(p\leq.017\)). The discrepancy between the mT values within a group of networks with a different relative variability grew as the Pacemaker speed increased, which was inter alia caused by a drastic slowdown of the rate of decrease of the mT for the networks with the highest scalar variability. Consistently with this trend, within the networks with \(PP=20\), the only significant differences were between the networks with the highest relative variability and the remaining levels of SV (all \(p\leq.002\); in all other cases \(p\geq.055\)).
Significant two-way interaction effects on the mean TOE for all types of comparisons of pairs in Experiment 2. Error bars indicate 95% confidence intervals.
Significant three-way interaction effect on the mean TOE for all types of comparisons of pairs in Experiment 2. Error bars indicate 95% confidence intervals.
\(PP \times ARR\): all pairwise comparisons within the ARR factor across all levels of the PP factor yielded significant differences (all \(p<.001\)), where the mT increased with the Accumulator reset rate. The magnitude of the differences between the ARR levels increased with the growing speed of the Pacemaker.
\(PP \times AB\): all pairwise comparisons within the AB factor across all levels of the PP factor yielded significant differences (all \(p<.001\)), where the mT increased with the growing value of the Accumulator bias parameter. The magnitude of the differences between the AB levels decreased with the growing speed of the Pacemaker.
\(SV \times ARR\): all pairwise comparisons within the ARR factor across all levels of the SV factor yielded significant differences (all \(p<.001\)), where the mT increased together with the growth of the ARR parameter value. This interaction seemed to be carried out mainly by the visible decrease of the ARR effect when the scalar variance was the highest.
\(SV \times AB\): here the pattern was similar to the previous case; however, the strength of the effect was slightly lower.
\(ARR \times AB\): all pairwise comparisons within the AB factor across all levels of the ARR factor yielded significant differences (all \(p<.001\)) – the mT decreased as the value of the AB parameter decreased. The differences in the mT between different values of the AB parameter seemed similar, yet a slight drop of the mT was visible for decreasing ARR at the extreme values of the AB.
\(SV \times PP \times ARR\): this interaction was not significant for the TOEs observed for most pairs of stimuli.
\(PP \times SV \times AB\): this interaction was not significant for the TOEs observed for most pairs of stimuli.
\(ARR \times SV \times AB\): the interaction between these three factors is easier to interpret when considering the ARR as the main modulating factor. As revealed by the F-tests, the effect of the AB factor was significant across all levels of the SV factor and across all levels of the ARR factor (all \(p<.001\)). The increasing Accumulator reset rate seemed to strengthen the \(SV \times AB\) interaction. When the Accumulator reset rate was low, the changes of the mT across the SV levels were quite similar at different levels of the AB factor (though, similarly as for the other levels of the ARR, the mT was negative all the time and increased with an increase in the bias value). When the Accumulator reset rate was high, the increase of accuracy related to the growth of the relative variability caused by the Scalar Variance Module tended to get noticeably smaller as the Accumulator bias value increased.
\(SV \times PP \times AB \times ABRR\): this interaction was not significant for the TOEs observed for any of the pairs of stimuli.
As the number of significant effects is quite large, the following discussion focuses on the relevant observations concerning the influence of the Pacemaker speed and the scalar source of variance on the measured indicators of network performance. These parameters are of the highest importance because the internal clock speed and the memory transfer process are of particular interest in research on human timing. They are also highly influential on the response pattern in the presented version of the 2-AFC task. Other factors are more difficult to operationalize directly in empirical experiments with human participants without additional low-level empirical research. Nevertheless, the complex interactions between the two main factors and the rest of the parameters provide predictions concerning the accuracy of the networks' answers and the TOE in the temporal discrimination task. The results obtained from the simulations show the relationship between the properties of the timing mechanism and the exact response patterns in a specific experimental situation. The easiest way to verify the predictions of the CCTN without resorting to neuropharmacological manipulations or neuroimaging techniques would be to perform exhaustive timing experiments using a rich set of stimuli and ISIs, and to observe the patterns of changes in performance across different types of trials. Such experiments are the next step of our investigation.
The fundamental indicator of performance of networks is the mean proportion of correct answers (OA) across 6 types of trials concerning differing stimuli.
Pacemaker speed
The OA increased with the decreasing pulse generation period of the Pacemaker. The influence of this parameter was modulated by the Accumulator Reset Rate – as the reset rate increased, so did both the value of the OA and the increase of the OA caused by the decreasing PP. Similarly, the growth of the Accumulator Bias strengthened the influence of the PP on the OA. The effect of the PP was not modulated by the Accumulator Bias Recovery Rate. The influence of the PP was also modulated by the values of the Scalar Variance Module parameters – starting from the condition in which there was no such module, the higher the relative variability produced by the Scalar Variance Module, the slower the growth of the OA with the decreasing PP. The same \(SV_{\sigma}/SV_{\mu}\) ratio of the Scalar Variance Module yielded similar results.
Scalar variance module
Most of the interactions including the SV factor revealed the same pattern: increasing variability related to the Scalar Variance Module diminished the positive influence of the other factors on accuracy. The levels of the factors that increased accuracy more than others (e.g., the levels of PP and AB) often suffered a greater loss of the OA. Most of the time, values of the SV parameters yielding the same \(SV_{\sigma}/SV_{\mu}\) ratio influenced accuracy in the same way.
As the interaction \(SV \times PP \times ARR\) was significant, the strength of the \(PP \times ARR\) interaction (recall that this interaction means that the joint growth of the generator speed and the Accumulator reset rate resulted in the greatest increase of accuracy) was diminished by the increase of the relative variability caused by the Scalar Variance Module. The other significant three-way interaction (which was on the verge of significance), \(SV \times PP \times AB\), was due to a weakening difference in the increase of accuracy between networks differing in Accumulator biases, with a growing PP and with an increasing relative variability of the Scalar Variance Module. This means that the interaction between the PP and the AB parameters was, again, reduced by the growth of the relative variability in the scalar module.
One conclusion from these analyses is that the overall accuracy depends strongly on the Pacemaker speed. Although the influence of this parameter was modulated by the other parameters, its direction was always the same. This result is important – it means that the increase in the Pacemaker speed yielded an overall increase in accuracy. This is despite the fact that the growing Pacemaker speed should have generally favored the correct answers in the ShortLong order of presentation, and should have led to a decrease of accuracy when the stimuli were presented in the LongShort order. A closer look at the accuracy in the LongShort and the ShortLong orders of presentation revealed that the ShortLong accuracy was higher than the LongShort accuracy, and that the discrepancy between them increased as the Pacemaker speed increased (Figure 11). The LongShort accuracy increased slightly when the Pacemaker was the fastest – this is probably a stimuli-range dependent effect. Additionally, changes of accuracy for the LongShort order were modulated by the Accumulator reset rate and by the ABRR and AB factors. Thus, in the investigated range of stimuli, an increase in the Pacemaker speed yielded an interesting effect of an increase of the overall accuracy with a simultaneous increase of the negative TOE.
Mean accuracy presented separately for Short-Long and Long-Short pairs across all levels of the PP factor. Error bars indicate 95% confidence intervals.
The high variability introduced by the Scalar Variance Module was able to dominate the activity of other modules. This, in turn, resulted in the decrease of the overall accuracy in the considered range of stimuli.
The mean TOE decreased with the growth of the Pacemaker speed. This pattern of changes was modulated by all the other factors except for the Accumulator Bias Recovery Rate, ABRR. The two mechanisms of which one is closely related to the positive TOE (the AB factor) and the other is more associated with the establishment of the negative TOE (the ARR factor) yielded inverse patterns of interaction with the PP: the effect of the AB factor was more pronounced when the Pacemaker speed was lower, while the effect of the ARR factor was stronger when the Pacemaker speed was higher. This is consistent with our predictions of the CCTN behavior: when the Pacemaker speed is high, a higher signal value is accumulated in the Accumulator buffer, which means that the proportional contribution of the Accumulator bias drops. At the same time, the rate of the resetting mechanism plays an important role, as there is always the same, limited time (the ISI) to clean the Accumulator. When the Pacemaker speed is low, the situation is reversed. The influence of the PP on the mT was also modulated by the SV factor, as in the case of accuracy: the greater the scalar variance, the less steep the growth of the negative mT with the increasing Pacemaker speed. Groups of networks with the same \(SV_{\sigma}/SV_{\mu}\) ratio yielded almost identical patterns.
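A rough back-of-the-envelope illustration with the Experiment 2 values, assuming the bias simply adds to the expected pulse count: for a 100 ss stimulus, a network with \(PP=5\) accumulates on average 20 pulses, so a bias of \(AB=0.6\) constitutes only about 3% of the total signal (\(0.6/20.6\)), whereas with \(PP=20\) the expected count is 5 pulses and the same bias amounts to about 11% (\(0.6/5.6\)).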
Scalar variance
Increasing scalar variance (starting from networks not equipped with the SV module) tended to diminish the effects of the other factors that decreased the mT. The two groups of networks equipped with Scalar Variance Modules with the same \(SV_{\sigma}/SV_{\mu}\) ratio usually yielded similar results. The interaction effects show that the networks which produced the lowest mT were the most sensitive to changes in the scalar variability produced by the SV module (the mT usually increased in these cases).
The only complex interaction that was significant for the majority of the pair-specific TOEs, \(ARR \times SV \times AB\), reveals that increasing the Accumulator reset rate leads to the growth of the strength of \(SV \times AB\). At the highest Accumulator reset rate, networks with the highest Accumulator bias value tended to differ less across different levels of the scalar variance. This is probably because when both responses occur equally frequently, increasing variance does not change much. Conversely, when one type of response is prevalent, increasing scalar variability equalizes the proportions of both types of responses. This explains why this interaction may be present when the stimuli in trials are of equal duration. Regarding the processing of differing stimuli pairs, as was shown for accuracy (see Section "Scalar variance module"), the correct response rate was usually higher in the ShortLong than in the LongShort order of presentation. This is not surprising given that the negative mean TOE was prevalent in the results. What is more, on average, the correct response rate for the LongShort order was higher than 50% (\(M=62.7\%\), \(SD=5.1\%\)). This means that increasing the Accumulator reset rate and increasing the Accumulator bias value should have boosted the accuracy for the LongShort order above 50%, at the same time decreasing the ShortLong accuracy. Therefore, increasing the Accumulator Bias AB was not only responsible for the increase of the mean TOE; within the group of networks with the highest Accumulator Reset Rate, it also increased the similarity between the two orders of presentation of stimuli in the way accuracy changed with increasing scalar variability.
Summarizing these remarks, the CCTN model produced results consistent with predictions regarding the examined stimuli range. Increasing the Pacemaker speed acted in favor of producing the negative TOE, and two out of the remaining three TOE generating parameters – ARR and AB – influenced the TOE inversely. The weak influence of the ABRR parameter may mean that the Accumulator bias recovery process rarely occurred. This is in agreement with the observation that a negative mT was prevalent in the collected data. However, as the main effect of the ABRR factor was significant, the predictions concerning the ABRR were confirmed: a higher Accumulator bias charging rate resulted in a slightly increased mT. As for the SV parameter, similarly as in the case of the measured accuracy, an increase in scalar variance resulted in a decrease of other effects. This is caused by the fact that a high scalar variance fosters similarity of memory representations of the first and the second stimulus. The three-way interaction revealed that the increasing contribution of the SV module is not simply additive with effects of other mechanisms. This contribution enables convergence of the accuracy of answers to 50% in both orders of presentation.
Accuracy vs. TOE
The presented results suggest that there is an interesting relation between the TOE and the total accuracy established by the CCTN. The results of the analyses revealed that increasing the Pacemaker speed resulted in a lower TOE and in a higher accuracy. On the other hand, increasing the scalar variability caused a decrease in both accuracy and the TOE. To investigate the nature of the relation between the TOE and the OA, an additional correlation analysis was performed. This time, the mean TOE for trials consisting of unequal stimuli (mTu) was calculated. The results of this analysis confirmed that there is a negative correlation between the mTu and the OA (\(p<.001\), Pearson's \(r(8640)=-.513\)). The reason for this may be a stronger relation between the mTu and the OA for the ShortLong pairs than for the LongShort pairs. Further analyses revealed that although the correlations between the mTu and the OA were of opposite signs in the two groups, in the ShortLong group the correlation was stronger (\(p<.001\), Pearson's \(r(8640)=-.881\)) than in the LongShort group (\(p<.001\), Pearson's \(r(8640)=.556\)). This once again shows that modifying the parameters of the timing mechanism does not influence the processing of temporal stimuli symmetrically in these two orders of presentation. More importantly, the consequence of this asymmetry is a beneficial impact of the mechanisms responsible for the TOE on the overall quality of temporal judgements.
The results of the performed experiments demonstrate the consequences of the fundamental assumptions of the clock–counter model for the results in a temporal discrimination task. We showed that the CCTN is able to model internal timing processes including the scalar variance property. We examined the behavior of a number of neural networks during the temporal discrimination task. The outcome of this study is a set of predictions which can be verified straightforwardly in empirical research. The timing literature shows that a lot of effort is devoted to testing how changes in the modes of activity of the timing mechanism can change the behavior of humans in timing tasks (Meck 2005; Meck and Benson 2002; Wiener et al. 2010). This influence is inter alia tested in experiments that concern Parkinson's Disease (Artieda et al. 1992; Hellström et al. 1997; Koch et al. 2008; Merchant et al. 2008; Malapani et al. 1998; Malapani et al. 2002; Rammsayer and Classen 1997; Smith et al. 2007), mental disorders (Penney et al. 2005; Sévigny et al. 2003), the influence of drug application (Lustig and Meck 2005; Meck 1983; 1996; Rammsayer 1999), and the presentation of stimuli in different modalities (Melgire et al. 2005; Penney et al. 2000; Ulrich et al. 2006). Some of these works suggest that the anatomical correlates of the internal clock are present in the basal ganglia and other brain structures connected to this important part of the dopaminergic system (Coull et al. 2008; Coull et al. 2010; Macar et al. 1999; Meck 2006; Perbal et al. 2005).
Such research emphasizes the need for a model which is able to integrate experimental data, to explain obtained results, and finally, to predict patterns of responses in situations that have not been tested empirically. The CCTN model and the simulation results presented in this work demonstrate how the clock speed and other timing mechanism manipulation may influence accuracy and time-order error in the temporal discrimination task. Since it is possible to manipulate the Pacemaker speed (and it is also possible to find patients with impaired Pacemaker speed), the predictions of our model concerning this property of the timing mechanism can be fully verified in empirical experiments. What is more, we investigated a range of parameters of one of the possible sources of the scalar variance proposed in (Gibbon et al. 1984) – the source that is responsible for the memory transfer from the Accumulator module. There is evidence suggesting that memory storage may be unsettled in people with Parkinson's Disease (Malapani et al. 1998; Malapani et al. 2002). One of the several interesting predictions stemming from our research is that the slowdown of the memory transfer, when it is accompanied by the increase in the relative variability, may reduce the overall accuracy and the magnitude of the time-order error in the temporal discrimination task; this can be verified in PD patients.
Apart from the quantitative and qualitative analyses of changes of the performance indicators, data fitting analyses have been performed. We are well aware that the proposed model has many free parameters; however, it was still important to verify whether it had the potential to reproduce human behavior. The results of the BJ participant (Allan 1977) were used as an example of human characteristics. As the CCTN demonstrated a similar pattern and for some parameters closely resembled the experimental data (Figure 12), this is an indication that the model has the potential to be a good explanatory platform for human timing mechanisms. Interestingly, specific analyses revealed that the mean MSE dropped with the increase of the relative scalar variance produced by the SV module (though the differences in the MSE were small, see Figure 13), which further emphasizes the importance of the scalar property in timing.
TOE values provided by the CCTN neural networks (270 gray lines, each line is averaged from 32 simulations of a network with a single set of parameters) and provided by a single experiment with a human subject, BJ. The dashed line illustrates a hypothetical situation of zero TOE.
The main effect of the scalar variability factor on the mean squared error (MSE) in Experiment 2. Error bars indicate 95% confidence intervals.
Each of the 270 gray lines shown in Figure 12 shows an average across 32 runs with the same set of parameter values. Since the BJ participant took part in the experiment only once, a direct comparison between the human and the simulation data is limited. It is still worth noting that the trend of changes in the TOE is consistent for all the averaged runs, thus the model can and does simulate empirical reality.
While the CCTN is able to explain more effects than just the influence of stimuli duration on the TOE, our experiments were designed to avoid effects other than those related to stimuli duration. Modeling across-trials effects could be achieved easily, but it would complicate the behavior of the model, while the goal of this work was to isolate and study stimuli duration effects exclusively.
Developing a complete model of human timing is a difficult task; this work demonstrated how to represent a clock-counter timing model in a connectionist architecture. Simulation results prove that the model is a suitable tool to analyze the influence of the scalar sources of variability on temporal judgements, which makes it a descendant of the Scalar Timing Model. Simulations concerning the time-order error phenomenon demonstrated that our model is not only able to manifest it, but also that the manifestation of the TOE may resemble the behavior of a human. This justified investigating interactions between the scalar variance and the TOE – the two hardwired properties of human timing.
There are some analogies between the CCTN and the Sensation Weighting Model (Hellström 2003): in both models, the magnitude of the first or the second stimulus is "strengthened" depending on the context. Our further work will concern the analysis of the recently discovered Type B TOE phenomenon (Dyjas and Ulrich 2014; Ulrich and Vorberg 2009), the exploration of the ISI effect, and the presentation of the results in terms of Just Noticeable Differences instead of raw frequencies of answers. Another issue is the employment of more sophisticated methods of analysis of temporal behaviors, such as adaptive psychophysical procedures – i.e., one of the versions of the up-down procedure (Kaernbach 1991; Leek 2001). To gain more specific knowledge about the stimuli comparison process, simulations will be performed in order to establish determinants of discrimination thresholds or psychometric functions (Wichmann and Hill 2001) related to perception of durations.
Since there are many free parameters in the CCTN, we are going to perform extensive fitting of the proposed model to human data. We would also like to transform the model into the representation consisting only of the integrate-and-fire neurons. Such a representation may cause some of the inherent properties of timing to emerge spontaneously (Buhusi and Oprisan 2013), and it would be a good compromise between the classical clock-counter models and the neural models that have been gaining more and more attention recently. This would meet the need for a connectionist model of timing, fitted to both behavioral and neurobiological data, but still allowing one to comprehend the activity of the network – thus preserving the explanatory power of classical models of timing and at the same time maintaining biological adequacy. This will open a path for further exploration of the patterns of temporal judgements in a wide range of experimental situations.
\(^{a}\)As this work concerns simulations with many free parameters and the number of their values is arbitrary, we use a conservative post hoc test to show that the significance of the observed effects is not incidental.
Allan, LG (1977). The time-order error in judgments of duration. Canadian Journal of Psychology, 31(1), 24–31.
Artieda, J, Pastor, MA, Lacruz, F, Obeso, JA (1992). Temporal discrimination is abnormal in Parkinson's disease. Brain, 115(1), 199–210.
Buhusi, CV, & Oprisan, SA (2013). Time-scale invariance as an emergent property in a perceptron with realistic, noisy neurons. Behavioural Processes, 95, 60–70. doi:10.1016/j.beproc.2013.02.015.
Buhusi, CV, & Meck, WH (2005). What makes us tick? Functional and neural mechanisms of interval timing. Nature Reviews Neuroscience, 6(10), 755–765.
Buonomano, DV, Bramen, J, Khodadadifar, M (2009). Influence of the interstimulus interval on temporal processing and learning: testing the state-dependent network model. Philosophical Transactions of the Royal Society B, 364, 1865–1873.
Church, RM (1999). Evaluation of quantitative theories of timing. Journal of the Experimental Analysis of Behavior, 71(2), 253–256.
Church, RM (2002). A tribute to John Gibbon. Behavioural Processes, 57, 261–274.
Church, RM (2003). A concise introduction to scalar timing theory. In: Meck, WH (Ed.) In Functional and Neural Mechanisms of Interval Timing. CRC Press, Boca Raton, Florida, (pp. 3–22).
Coull, JT, Nazarian, B, Vidal, F (2008). Timing, storage, and comparison of stimulus duration engage discrete anatomical components of a perceptual timing network. Journal of Cognitive Neuroscience, 20(12), 2185–2197.
Coull, JT, Cheng, R-K, Meck, WH (2010). Neuroanatomical and neurochemical substrates of timing. Neuropsychopharmacology, 36(1), 3–25.
Dyjas, O, & Ulrich, R (2014). Effects of stimulus order on discrimination processes in comparative and equality judgements: Data and models. The Quarterly Journal of Experimental Psychology, 67(6), 1121–1150.
Eisler, H (1981). Applicability of the parallel-clock model to duration discrimination. Attention, Perception, & Psychophysics, 29(3), 225–233.
Eisler, H, Eisler, AD, Hellström, Å (2008). Psychophysical issues in the study of time perception. In: Grondin, S (Ed.) In Psychology of Time. Emerald Group Publishing Ltd, (pp. 75–110).
Getty, DJ (1975). Discrimination of short temporal intervals: A comparison of two models. Perception & Psychophysics, 18(1), 1–8.
Getty, DJ (1976). Counting processes in human timing. Attention, Perception, & Psychophysics, 20, 191–197. ISSN 1943-3921. http://dx.doi.org/10.3758/BF03198600.
Gibbon, J (1977). Scalar expectancy theory and Weber's law in animal timing. Psychological Review, 84(3), 279–325.
Gibbon, J (1992). Ubiquity of scalar timing with Poisson clock. Journal of Mathematical Psychology, 35, 283–293.
Gibbon, J, Church, RM, Meck, WH (1984). Scalar Timing in Memory. Annals of the New York Academy of Sciences, 423(1), 52–77.
Grondin, S (2001). From physical time to the first and second moments of psychological time. Psychological Bulletin, 127(1), 22–44.
Grondin, S (2005). Overloading temporal memory. Journal of Experimental Psychology: Human Perception and Performance, 31(5), 869–879.
Grondin, S (2010). Timing and time perception: A review of recent behavioral and neuroscience findings and theoretical directions. Attention, Perception, & Psychophysics, 72(3), 561–582.
Hairston, IS, & Nagarajan, SS (2007). Neural mechanisms of the time-order error: An MEG study. Journal of Cognitive Neuroscience, 19(7), 1163–1174.
Hapke, M, & Komosinski, M (2008). Evolutionary Design of Interpretable Fuzzy Controllers. Foundations of Computing and Decision Sciences, 33(4), 351–367. http://www.framsticks.com/files/common/Komosinski_EvolveInterpretableFuzzy.pdf.
Hellström, Å, Lang, H, Portin, R, Rinne, J (1997). Tone duration discrimination in Parkinson's disease. Neuropsychologia, 35(5), 737–740.
Hellström, Å (1985). The time-order error and its relatives: Mirrors of cognitive processes in comparing. Psychological Bulletin, 97(1), 35–61.
Hellström, Å (2003). Comparison is not just subtraction: Effects of time- and space-order on subjective stimulus difference. Perception & Psychophysics, 65(7), 1161–1177.
Hellström, Å, & Rammsayer, TH (2004). Effects of time-order, interstimulus interval, and feedback in duration discrimination of noise bursts in the 50- and 1000-ms ranges. Acta Psychologica, 116, 1–20.
Ivry, RB, & Schlerf, JE (2008). Dedicated and intrinsic models of time perception. Trends in Cognitive Sciences, 12(7), 273–280.
Jamieson, DG, & Petrusic, WM (1975). The dependence of time-order error direction on stimulus range. Canadian Journal of Psychology, 29(3), 175–182.
Jelonek, J, & Komosinski, M (2006). Biologically-inspired Visual-motor Coordination Model in a Navigation Problem. In: Gabrys, B, Howlett, RJ, Jain, LC (Eds.) In Knowledge-Based Intelligent Information and Engineering Systems, volume 4253 of Lecture Notes in Computer Science. http://www.framsticks.com/files/common/BiologicallyInspiredVisualMotorCoordinationModel.pdf. Springer, Berlin/Heidelberg, (pp. 341–348).
Kaernbach, C (1991). Simple adaptive testing with the weighted up-down method. Perception & Psychophysics, 49, 227–229.
Karmarkar, UR, & Buonomano, DV (2007). Telling time in the absence of clocks. Neuron, 53(3), 427.
Killeen, PR, & Weiss, NA (1987). Optimal timing and the Weber function. Psychological Review, 94(4), 455–468.
Koch, G, Costa, A, Brusa, L, Peppe, A, Gatto, I, Torriero, S, Gerfo, EL, Salerno, S, Oliveri, M, Carlesimo, GA (2008). Impaired reproduction of second but not millisecond time intervals in Parkinson's disease. Neuropsychologia, 46(5), 1305–1313.
Komosinski, M (2012). Measuring quantities using oscillators and pulse generators. Theory in Biosciences, 131(2), 103–116. http://dx.doi.org/10.1007/s12064-012-0153-4.
Komosinski, M, & Kups, A (2009). Models and implementations of timing processes using Artificial Life techniques. Technical Report RA-05/09, Poznan University of Technology, Institute of Computing Science.
Komosinski, M, & Kups, A (2011). Implementation and Simulation of the Scalar Timing Model. Bio-Algorithms and Med-Systems, 7(4), 41–52.
Komosinski, M, & Ulatowski, S (2009). Framsticks: Creating and Understanding Complexity of Life. In: Komosinski, M, & Adamatzky, A (Eds.) In Artificial Life Models in Software, chapter 5, second edition. Springer, London, (pp. 107–148).
Komosinski, M, & Ulatowski, S (2014). Framsticks Web Site. http://www.framsticks.com.
Leek, MR (2001). Adaptive procedures in psychophysical research. Perception & Psychophysics, 63(8), 1279–1292.
Lewis, PA, & Miall, RC (2009). The precision of temporal judgement: milliseconds, many minutes and beyond. Philosophical Transactions of the Royal Society B, 364(2), 1897–1905.
Lustig, C, & Meck, WH (2005). Chronic treatment with haloperidol induces deficits in working memory and feedback effects of interval timing. Brain and Cognition, 58(1), 9–16.
Macar, F, Vidal, F, Casini, L (1999). The supplementary motor area in motor and sensory timing: evidence from slow brain potential changes. Experimental Brain Research, 125(3), 271–280.
Acknowledgements. This work has been supported by the Polish National Science Centre, grant no. N N519 441939. Computations were performed on the equipment funded by the Polish Ministry of Science and Higher Education, grant no. 6168/IA/128/2012.
Author affiliations. Maciej Komosinski: Poznan University of Technology, Institute of Computing Science, Piotrowo 2, Poznan, 60-965, Poland. Adam Kups: Adam Mickiewicz University, Institute of Psychology, Szamarzewskiego 89a, Poznan, 60-568, Poland.
Correspondence to Maciej Komosinski.
Authors' contributions. MK developed the simulation environment and neural simulation scripts. He provided ideas on modeling and simulation of the time-order error, performed theoretical analyses, and guided the development and simulation of the artificial neural network model. AK was responsible for the design of the research; he carried out the simulation studies, data acquisition, statistical analyses and interpretation of the results. Apart from that, he drafted the manuscript. Both authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Komosinski, M., Kups, A. Time-order error and scalar variance in a computational model of human timing: simulations and predictions. Comput Cogn Sci 1, 3 (2015). https://doi.org/10.1186/s40469-015-0002-0
Keywords: Temporal discrimination; Internal clock; Time-order error
July 2008, Volume 20, Issue 3
Elliptic PDE's in probability and geometry: Symmetry and regularity of solutions
Xavier Cabré
We describe several topics within the theory of linear and nonlinear second order elliptic Partial Differential Equations. Through elementary approaches, we first explain how elliptic and parabolic PDEs are related to central issues in Probability and Geometry. This leads to several concrete equations. We classify them and describe their regularity theories. After this, most of the paper focuses on the ABP technique and its applications to the classical isoperimetric problem for which we present a new original proof, the symmetry result of Gidas-Ni-Nirenberg, and the regularity theory for fully nonlinear elliptic equations.
Xavier Cabré. Elliptic PDE's in probability and geometry: Symmetry and regularity of solutions. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 425-457. doi: 10.3934/dcds.2008.20.425.
Long-term dynamics of semilinear wave equation with nonlinear localized interior damping and a source term of critical exponent
Igor Chueshov, Irena Lasiecka and Daniel Toundykov
This article addresses long-term behavior of solutions to a semilinear damped wave equation with a critical source term. A distinctive feature of the model is the geometrically constrained dissipation: it only affects a small subset of the domain adjacent to a connected portion of the boundary. The main result of the paper provides an affirmative answer to the open question whether global attractors for a wave equation with critical source and geometrically constrained damping are smooth and finite-dimensional. A positive answer to the same question in the case of subcritical sources was given in [9]. However, critical exponent of the source term combined with weak geometrically restricted dissipation constitutes the major new difficulty of the problem. To overcome this issue we develop a new version of Carleman's estimates and apply them in the context of recent results [12] on fractal dimension of global attractors.
Igor Chueshov, Irena Lasiecka, Daniel Toundykov. Long-term dynamics of semilinear wave equation with nonlinear localized interior damping and a source term of critical exponent. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 459-509. doi: 10.3934/dcds.2008.20.459.
Minimal dynamics for tree maps
Lluís Alsedà, David Juher and Pere Mumbrú
We prove that, given a tree pattern $\mathcal{P}$, the set of periods of a minimal representative $f: T\rightarrow T$ of $\mathcal{P}$ is contained in the set of periods of any other representative. This statement is an immediate corollary of the following stronger result: there is a period-preserving injection from the set of periodic points of $f$ into that of any other representative of $\mathcal{P}$. We prove this result by extending the main theorem of [6] to negative cycles.
Lluís Alsedà, David Juher, Pere Mumbrú. Minimal dynamics for tree maps. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 511-541. doi: 10.3934/dcds.2008.20.511.
Multiscale asymptotic expansion for second order parabolic equations with rapidly oscillating coefficients
Walter Allegretto, Liqun Cao and Yanping Lin
In this paper we discuss initial-boundary problems for second order parabolic equations with rapidly oscillating coefficients in a bounded convex domain. The asymptotic expansions of the solutions for problems with multiple spatial and temporal scales are presented in four different cases. Higher order corrector methods are constructed and associated explicit convergence rates obtained.
Walter Allegretto, Liqun Cao, Yanping Lin. Multiscale asymptotic expansion for second order parabolic equations with rapidly oscillating coefficients. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 543-576. doi: 10.3934/dcds.2008.20.543.
Hypercyclicity and chaoticity spaces of $C_0$ semigroups
Jacek Banasiak and Marcin Moszyński
In [10] the author provided a generalization of the classical Desch-Schappacher-Webb sufficient criterion which ensures hypercyclicity of linear semigroups. In this paper we simplify assumptions of [10], obtaining new criteria for hypercyclicity of a $C_0$ semigroup in a subspace (sub-hypercyclicity), and also for its sub-chaoticity. Moreover, we provide full characterization of chaoticity and hypercyclicity spaces of semigroups satisfying the assumptions of these new criteria. We also present examples showing that, in general, these assumptions cannot be weakened.
Jacek Banasiak, Marcin Moszyński. Hypercyclicity and chaoticity spaces of $C_0$ semigroups. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 577-587. doi: 10.3934/dcds.2008.20.577.
Super-exponential growth of the number of periodic orbits inside homoclinic classes
Christian Bonatti, Lorenzo J. Díaz and Todd Fisher
We show that there is a residual subset $\mathcal{S}(M)$ of Diff$^1(M)$ such that, for every $f\in \mathcal{S}(M)$, any homoclinic class of $f$ containing periodic saddles $p$ and $q$ of indices $\alpha$ and $\beta$ respectively, where $\alpha< \beta$, has super-exponential growth of the number of periodic points inside the homoclinic class. Furthermore, it is shown that the super-exponential growth occurs for hyperbolic periodic points of index $\gamma$ inside the homoclinic class for every $\gamma\in[\alpha,\beta]$.
Christian Bonatti, Lorenzo J. Díaz, Todd Fisher. Super-exponential growth of the number of periodic orbits inside homoclinic classes. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 589-604. doi: 10.3934/dcds.2008.20.589.
Local well-posedness for a nonlinear Dirac equation in spaces of almost critical dimension
Nikolaos Bournaveas
We study a nonlinear Dirac system in one space dimension with a quadratic nonlinearity which exhibits null structure in the sense of Klainerman. Using an $L^{p}$ variant of the $L^2$ restriction method of Bourgain and Klainerman-Machedon, we prove local well-posedness for initial data in a Sobolev-like space $\hat{H}^{s,p}(\mathbb{R})$ whose scaling dimension is arbitrarily close to the critical scaling dimension.
Nikolaos Bournaveas. Local well-posedness for a nonlinear Dirac equation in spaces of almost critical dimension. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 605-616. doi: 10.3934/dcds.2008.20.605.
$W^{1,p}$ regularity for the conormal derivative problem with parabolic BMO nonlinearity in Reifenberg domains
Sun-Sig Byun and Lihe Wang
We obtain an optimal $W^{1,p}$, $2 \leq p < \infty$, regularity theory on the conormal derivative problem for a nonlinear parabolic equation in divergence form with small BMO nonlinearity in a $\delta$-Reifenberg flat domain.
Sun-Sig Byun, Lihe Wang. $W^{1,p}$ regularity for the conormal derivative problem with parabolic BMO nonlinearity in Reifenberg domains. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 617-637. doi: 10.3934/dcds.2008.20.617.
The thermodynamic formalism for sub-additive potentials
Yongluo Cao, De-Jun Feng and Wen Huang
The topological pressure is defined for sub-additive potentials via separated sets and open covers in general compact dynamical systems. A variational principle for the topological pressure is set up without any additional assumptions. The relations between different approaches in defining the topological pressure are discussed. The result will have some potential applications in the multifractal analysis of iterated function systems with overlaps, the distribution of Lyapunov exponents and the dimension theory in dynamical systems.
Yongluo Cao, De-Jun Feng, Wen Huang. The thermodynamic formalism for sub-additive potentials. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 639-657. doi: 10.3934/dcds.2008.20.639.
The complete classification on a model of two species competition with an inhibitor
Jifa Jiang and Fensidi Tang
Hetzer and Shen [3] considered a system of a two-species Lotka-Volterra competition model with an inhibitor, investigated its long-term behavior and proposed two open questions: one is whether the system has a nontrivial periodic solution; the other is whether one of two positive equilibria is non-hyperbolic in the case that the system has exactly two positive equilibria. The goal of this paper is first to give these questions clear answers, then to present a complete classification for its dynamics in terms of coefficients. As a result, all solutions are convergent as $t$ goes to infinity.
Jifa Jiang, Fensidi Tang. The complete classification on a model of two species competition with an inhibitor. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 659-672. doi: 10.3934/dcds.2008.20.659.
On the entropy of Japanese continued fractions
Laura Luzzi and Stefano Marmi
We consider a one-parameter family of expanding interval maps $\{T_{\alpha}\}_{\alpha \in [0,1]}$ (Japanese continued fractions) which include the Gauss map ($\alpha=1$) and the nearest integer and by-excess continued fraction maps ($\alpha=\frac{1}{2},\,\alpha=0$). We prove that the Kolmogorov-Sinai entropy $h(\alpha)$ of these maps depends continuously on the parameter and that $h(\alpha) \to 0$ as $\alpha \to 0$. Numerical results suggest that this convergence is not monotone and that the entropy function has infinitely many phase transitions and a self-similar structure. Finally, we find the natural extension and the invariant densities of the maps $T_{\alpha}$ for $\alpha=\frac{1}{n}$.
Laura Luzzi, Stefano Marmi. On the entropy of Japanese continued fractions. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 673-711. doi: 10.3934/dcds.2008.20.673.
A continuous Bowen-Mañé type phenomenon
Esteban Muñoz-Young, Andrés Navas, Enrique Pujals and Carlos H. Vásquez
In this work we exhibit a one-parameter family of $C^1$-diffeomorphisms $F_\alpha$ of the 2-sphere, where $\alpha>1$, such that the equator $\mathbb{S}^1$ is an attracting set for every $F_\alpha$ and $F_\alpha|_{\mathbb{S}^1}$ is the identity. For $\alpha>2$ the Lebesgue measure on the equator is a non-ergodic physical measure having uncountably many ergodic components. On the other hand, for $1<\alpha\leq 2$ there is no physical measure for $F_\alpha$. If $\alpha<2$ this follows directly from the fact that the $\omega$-limit of almost every point is a single point on the equator (and the basin of each of these points has zero Lebesgue measure). This is no longer true for $\alpha=2$, and the non-existence of a physical measure in this critical case is a more subtle issue.
Esteban Muñoz-Young, Andrés Navas, Enrique Pujals, Carlos H. Vásquez. A continuous Bowen-Mañé type phenomenon. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 713-724. doi: 10.3934/dcds.2008.20.713.
Symbolic dynamics on free groups
Steven T. Piantadosi
We study nearest-neighbor shifts of finite type (NNSOFT) on a free group $\mathbb{G}$. We determine when a NNSOFT on $\mathbb{G}$ admits a periodic coloring and give an example of a NNSOFT that does not allow a periodic coloring. Then, we find an expression for the entropy of the golden mean shift on $\mathbb{G}$. In doing so, we study a new generalization of Fibonacci numbers and analyze their asymptotics with a one-dimensional iterated map that is related to generalized continued fractions.
Steven T. Piantadosi. Symbolic dynamics on free groups. Discrete & Continuous Dynamical Systems - A, 2008, 20(3): 725-738. doi: 10.3934/dcds.2008.20.725.
2.2 Momentum (ESCJ7)
Momentum is a physical quantity which is closely related to forces. Momentum is a property of moving objects; in fact, it is mass in motion. If something has mass and it is moving, then it has momentum.
The linear momentum of a particle (object) is a vector quantity equal to the product of the mass of the particle (object) and its velocity.
The momentum (symbol \(\vec{p}\)) of an object of mass \(m\) moving at velocity \(\vec{v}\) is:
\(\vec{p}=m\vec{v}\)
Momentum is directly proportional to both the mass and velocity of an object. A small car travelling at the same velocity as a big truck will have a smaller momentum than the truck. The smaller the mass, the smaller the momentum for a fixed velocity. If the mass is constant, then the greater the velocity, the greater the momentum. The momentum will always be in the same direction as the velocity, because mass is a scalar, not a vector.
Vector nature of momentum (ESCJ8)
A car travelling at \(\text{120}\) \(\text{km·hr$^{-1}$}\) will have a larger momentum than the same car travelling at \(\text{60}\) \(\text{km·hr$^{-1}$}\). Momentum is also related to velocity; the smaller the velocity, the smaller the momentum.
Different objects can also have the same momentum, for example a car travelling slowly can have the same momentum as a motorcycle travelling relatively fast. We can easily demonstrate this.
Consider a car of mass \(\text{1 000}\) \(\text{kg}\) with a velocity of \(\text{8}\) \(\text{m·s$^{-1}$}\) (about \(\text{30}\) \(\text{km·hr$^{-1}$}\)) East. The momentum of the car is therefore:
\begin{align*} \vec{p}& = m\vec{v} \\ & = \left(\text{1 000}\right)\left(8\right) \\ & = \text{8 000}\text{ kg·m·s$^{-1}$}~\text{East} \end{align*}
Now consider a motorcycle, also travelling East, of mass \(\text{250}\) \(\text{kg}\) travelling at \(\text{32}\) \(\text{m·s$^{-1}$}\) (about \(\text{115}\) \(\text{km·hr$^{-1}$}\)). The momentum of the motorcycle is:
\begin{align*} \vec{p} &= m\vec{v} \\ & = \left(250\right)\left(32\right) \\ & = \text{8 000}\text{ kg·m·s$^{-1}$}~\text{East} \end{align*}
Even though the motorcycle is considerably lighter than the car, the fact that the motorcycle is travelling much faster than the car means that the momentum of both vehicles is the same.
From the calculations above, you are able to derive the unit for momentum as \(\text{kg·m·s$^{-1}$}\).
Momentum is also a vector quantity, because it is the product of a scalar (\(m\)) with a vector \((\vec{v})\).
A vector multiplied by a scalar has the same direction as the original vector but a magnitude that is scaled by the multiplicative factor.
This means that whenever we calculate the momentum of an object, we should include the direction of the momentum.
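Since the scalar-times-vector rule above only scales the magnitude, the numbers are easy to check. The following minimal Python sketch (ours, not part of the original text) treats East as the positive direction on a one-dimensional axis and reproduces the car and motorcycle momenta computed above:

```python
def momentum(mass_kg, velocity):
    """Return p = m * v; in this 1-D sketch the sign of `velocity`
    carries the direction (positive means East)."""
    return mass_kg * velocity

print(momentum(1000, 8))   # car:        8000 kg m/s East
print(momentum(250, 32))   # motorcycle: 8000 kg m/s East
```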
Worked example 1: Momentum of a soccer ball
A soccer ball of mass \(\text{420}\) \(\text{g}\) is kicked at \(\text{20}\) \(\text{m·s$^{-1}$}\) towards the goal post. Calculate the momentum of the ball.
The question explicitly gives:
the mass of the ball, and
the velocity of the ball.
The mass of the ball must be converted to SI units.
\(\text{420}\text{ g}=\text{0,42}\text{ kg}\)
We are asked to calculate the momentum of the ball. From the definition of momentum, \(\vec{p}=m\vec{v}\) we see that we need the mass and velocity of the ball, which we are given.
Do the calculation
We calculate the magnitude of the momentum of the ball,
\begin{align*} \vec{p}& = m\vec{v} \\ & = \left(\text{0,42}\right)\left(20\right) \\ & = \text{8,40}\text{ kg·m·s$^{-1}$} \end{align*}
We quote the answer with the direction of motion included, \(\vec{p}\) = \(\text{8,40}\) \(\text{kg·m·s$^{-1}$}\) in the direction of the goal post.
Worked example 2: Momentum of a cricket ball
A cricket ball of mass \(\text{160}\) \(\text{g}\) is bowled at \(\text{40}\) \(\text{m·s$^{-1}$}\) towards a batsman. Calculate the momentum of the cricket ball.
the mass of the ball (m = \(\text{160}\) \(\text{g}\) = \(\text{0,16}\) \(\text{kg}\)), and
the velocity of the ball \((\vec{v}\) = \(\text{40}\) \(\text{m·s$^{-1}$}\) towards the batsman)
To calculate the momentum we will use
\(\vec{p}=m\vec{v}.\)
\begin{align*} \vec{p}& = m\vec{v} \\ & = \left(\text{0,16}\right)\left(40\right) \\ & = \text{6,4}\text{ kg·m·s$^{-1}$} \\ & = \text{6,4}\text{ kg·m·s$^{-1}$} \text{in the direction of the batsman} \end{align*}
The momentum of the cricket ball is \(\text{6,4}\) \(\text{kg·m·s$^{-1}$}\) in the direction of the batsman.
Worked example 3: Momentum of the Moon
The centre of the Moon is approximately \(\text{384 400}\) \(\text{km}\) away from the centre of the Earth and orbits the Earth in \(\text{27,3}\) days. If the Moon has a mass of \(\text{7,35} \times \text{10}^{\text{22}}\) \(\text{kg}\), what is the magnitude of its momentum (using the definition given in this chapter) if we assume a circular orbit? The actual momentum of the Moon is more complex but we do not cover that in this chapter.
the mass of the Moon (\(m\) = \(\text{7,35} \times \text{10}^{\text{22}}\) \(\text{kg}\))
the distance to the Moon (\(\text{384 400}\) \(\text{km}\) = \(\text{384 400 000}\) \(\text{m}\) = \(\text{3,844} \times \text{10}^{\text{8}}\) \(\text{m}\))
the time for one orbit of the Moon (\(\text{27,3} \text{ days} = \text{27,3} \times 24 \times 60 \times 60 = \text{2,36} \times \text{10}^{\text{6}}\text{ s}\))
We are asked to calculate only the magnitude of the momentum of the Moon (i.e. we do not need to specify a direction). In order to do this we require the mass and the magnitude of the velocity of the Moon, since \(\vec{p}=m\vec{v}\).
Find the magnitude of the velocity of the Moon
The magnitude of the average velocity is the same as the speed. Therefore:
\(v=\frac{\Delta x}{\Delta t}\)
We are given the time the Moon takes for one orbit but not how far it travels in that time. However, we can work this out from the distance to the Moon and the fact that the Moon has a circular orbit. Using the equation for the circumference, C, of a circle in terms of its radius, we can determine the distance travelled by the Moon in one orbit:
\begin{align*} C& = 2\pi r \\ & = 2\pi \left(\text{3,844} \times \text{10}^{\text{8}}\right) \\ & = \text{2,42} \times \text{10}^{\text{9}}\text{ m} \end{align*}
Combining the distance travelled by the Moon in an orbit and the time taken by the Moon to complete one orbit, we can determine the magnitude of the Moon's velocity or speed,
\begin{align*} v& = \frac{\Delta x}{\Delta t} \\ & = \frac{C}{T} \\ & = \frac{\text{2,42} \times \text{10}^{\text{9}}\text{ m}}{\text{2,36} \times \text{10}^{\text{6}}\text{ s}} \\ & = \text{1,02} \times \text{10}^{\text{3}}\text{ m·s$^{-1}$} \end{align*}
Finally calculate the momentum and quote the answer
The magnitude of the Moon's momentum is:
\begin{align*} \vec{p}& = m\vec{v} \\ {p}& = m{v} \\ & = \left(\text{7,35} \times \text{10}^{\text{22}}\right)\left(\text{1,02} \times \text{10}^{\text{3}}\right) \\ & = \text{7,50} \times \text{10}^{\text{25}}\text{ kg·m·s$^{-1}$} \end{align*}
The magnitude of the momentum of the Moon is \(\text{7,50} \times \text{10}^{\text{25}}\) \(\text{kg·m·s$^{-1}$}\).
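As a quick numerical cross-check of this worked example, the Python sketch below recomputes the orbital speed and the momentum magnitude from the values given in the text (the variable names are ours):

```python
import math

m = 7.35e22            # kg, mass of the Moon
r = 3.844e8            # m, radius of the assumed circular orbit
T = 27.3 * 24 * 3600   # s, orbital period (27.3 days)

v = 2 * math.pi * r / T   # orbital speed, about 1.02e3 m/s
p = m * v                 # momentum magnitude, about 7.50e25 kg m/s
print(f"v = {v:.3g} m/s, p = {p:.3g} kg m/s")
```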
As we have said, momentum is a vector quantity. Since momentum is a vector, the techniques of vector addition discussed in Vectors and scalars in Grade 10 must be used when dealing with momentum.
Change in momentum (ESCJ9)
Particles or objects can collide with other particles or objects; we know that this will often change their velocity (and maybe their mass), so their momentum is likely to change as well. We will deal with collisions in detail a little bit later, but we are going to start by looking at the details of the change in momentum for a single particle or object.
Case 1: Object bouncing off a wall
Let's start with a simple picture: a ball of mass, \(m\), moving with initial velocity, \(\vec{v}_i\), to the right towards a wall. It will have momentum \(\vec{p}_i=m\vec{v}_i\) to the right as shown in this picture:
The ball bounces off the wall. It will now be moving to the left, with the same mass, but a different velocity, \(\vec{v}_f\) and therefore, a different momentum, \(\vec{p}_f=m\vec{v}_f\), as shown in this picture:
We know that the final momentum vector must be the sum of the initial momentum vector and the change in momentum vector, \(\Delta \vec{p}=m\Delta \vec{v}\). This means that, using tail-to-head vector addition, \(\Delta \vec{p}\), must be the vector that starts at the head of \(\vec{p}_i\) and ends on the head of \(\vec{p}_f\) as shown in this picture:
We also know from algebraic addition of vectors that: \begin{align*} \vec{p}_f &=\vec{p}_i + \Delta \vec{p} \\ \vec{p}_f - \vec{p}_i &= \Delta \vec{p} \\ \Delta \vec{p} &= \vec{p}_f - \vec{p}_i \end{align*} If we put this all together we can show the sequence and the change in momentum in one diagram:
We have just shown the case for a rebounding object. There are a few other cases we can use to illustrate the basic features but they are all built up in the same way.
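As a numerical illustration of the rebound case, the short Python sketch below (with made-up mass and velocities) computes \(\Delta \vec{p} = \vec{p}_f - \vec{p}_i\) on a one-dimensional axis where rightward is positive:

```python
def delta_p(m, v_i, v_f):
    """Change in momentum for 1-D motion: rightward is positive."""
    return m * v_f - m * v_i

# a 0.5 kg ball hits the wall at +4 m/s and rebounds at -3 m/s (assumed values)
print(delta_p(0.5, 4, -3))   # -3.5 kg m/s, i.e. directed away from the wall
```

The negative sign confirms that the change in momentum points away from the wall, exactly as the diagram argues.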
Case 2: Object stops
In some scenarios the object may come to a standstill (rest). An example of such a case is a tennis ball hitting the net. The net stops the ball but doesn't cause it to bounce back. At the instant before it falls to the ground its velocity is zero. This scenario is described in this image:
Case 3: Object continues more slowly
In this case, the object continues in the same direction but more slowly. To give this some context, this could happen when a ball hits a glass window and goes through it, or when an object sliding on a frictionless surface encounters a small rough patch before carrying on along the frictionless surface.
Important: note that even though the momentum remains in the same direction, the change in momentum is in the opposite direction, because the magnitude of the final momentum is less than the magnitude of the initial momentum.
Case 4: Object gets a boost
In this case the object interacts with something that increases the velocity it has without changing its direction. For example, in squash the ball can bounce off a back wall towards the front wall and a player can hit it with a racquet in the same direction, increasing its velocity.
If we analyse this scenario in the same way as the first 3 cases, it will look like this:
Case 5: Vertical bounce
For this explanation we are ignoring any effect of gravity. This isn't accurate, but we will learn more about the role of gravity in this scenario in the next chapter.
All of the examples that we've shown so far have been in the horizontal direction. That is just a coincidence; this approach applies to vertical as well as horizontal cases. In fact, it applies to any scenario where the initial and final vectors fall on the same line, that is, any one-dimensional (1D) problem. We will only deal with 1D scenarios in this chapter. For example, a stationary basketball player bouncing a ball.
To illustrate the point, here is what the analysis would look like for a ball bouncing off the floor:
The fastest recorded delivery for a cricket ball is \(\text{161,3}\) \(\text{km·hr$^{-1}$}\), bowled by Shoaib Akhtar of Pakistan during a match against England in the 2003 Cricket World Cup, held in South Africa. Calculate the ball's momentum if it has a mass of \(\text{160}\) \(\text{g}\).
\[p = mv\]
\(v = \text{161,3}\text{ km·hr$^{-1}$}\) and \(m = \text{160}\text{ g}\).
Converting the velocity to the correct SI units:
\begin{align*} v & = \text{161,3}\text{ km·hr$^{-1}$} \times \frac{\text{1 000}\text{ m}}{\text{3 600}\text{ s}} \\ & = \text{44,81}\text{ m·s$^{-1}$} \end{align*}
Converting the mass to the correct SI units:
\begin{align*} m & = \text{160}\text{ g} \times \frac{\text{1}\text{ kg}}{\text{1 000}\text{ g}} \\ & = \text{0,16}\text{ kg} \end{align*}
Therefore, computing the momentum:
\begin{align*} p & = mv \\ & = (\text{0,16})(\text{44,81}) \\ & = \text{7,17}\text{ kg·m·s$^{-1}$} \end{align*}
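Every part of this exercise repeats the same two unit conversions, so a small helper function (our own sketch, not part of the exercise) makes the arithmetic easy to verify:

```python
def momentum_si(mass_g, speed_kmh):
    """Convert grams to kg and km/h to m/s, then return p = m * v."""
    return (mass_g / 1000.0) * (speed_kmh * 1000.0 / 3600.0)

for name, g, kmh in [("Shoaib Akhtar", 160, 161.3),
                     ("Andy Roddick", 58, 246.2),
                     ("Venus Williams", 58, 205.0)]:
    print(f"{name}: {momentum_si(g, kmh):.2f} kg m/s")
# Shoaib Akhtar: 7.17, Andy Roddick: 3.97, Venus Williams: 3.30
```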
The fastest tennis service by a man is \(\text{246,2}\) \(\text{km·hr$^{-1}$}\) by Andy Roddick of the United States of America during a match in London in 2004. Calculate the ball's momentum if it has a mass of \(\text{58}\) \(\text{g}\).
Converting to SI units: \(v = \text{246,2}\text{ km·hr$^{-1}$} = \text{68,39}\text{ m·s$^{-1}$}\) and \(m = \text{58}\text{ g} = \text{0,058}\text{ kg}\). Therefore, computing the momentum:
\begin{align*} p & = mv \\ & = (\text{0,058})(\text{68,39}) \\ & = \text{3,97}\text{ kg·m·s$^{-1}$} \end{align*}
The fastest server in the women's game is Venus Williams of the United States of America, who recorded a serve of \(\text{205}\) \(\text{km·hr$^{-1}$}\) during a match in Switzerland in 1998. Calculate the ball's momentum if it has a mass of \(\text{58}\) \(\text{g}\).
\(v = \text{205}\text{ km·hr$^{-1}$}\) and \(m = \text{58}\text{ g}\).
\begin{align*} v & = \text{205}\text{ km·hr$^{-1}$} \times \frac{\text{1 000}\text{ m}}{\text{3 600}\text{ s}} \\ & = \text{56,94}\text{ m·s$^{-1}$} \end{align*}
\begin{align*} m & = \text{58}\text{ g} \times \frac{\text{1}\text{ kg}}{\text{1 000}\text{ g}} \\ & = \text{0,058}\text{ kg} \end{align*}
\begin{align*} p & = mv \\ & = (\text{0,058})(\text{56,94}) \\ & = \text{3,30}\text{ kg·m·s$^{-1}$} \end{align*}
If you had a choice of facing Shoaib, Andy or Venus and didn't want to get hurt, who would you choose based on the momentum of each ball?
The ball with the smallest momentum gives you the least chance of being hurt and so you would choose to face Venus.
Estimating node connectedness in spatial network under stochastic link disconnection based on efficient sampling
Takayasu Fushimi ORCID: orcid.org/0000-0003-3448-81821,
Kazumi Saito2,3,
Tetsuo Ikeda4 &
Kazuhiro Kazama5
Many networks, including spatial networks, social networks, and web networks, are not deterministic but probabilistic due to the uncertainty of link existence. From networks
with such uncertainty, to extract densely connected nodes, we propose connectedness centrality and its extended version, group connectedness centrality, where the connectedness of each node is defined as the expected size of its connected component over all possible graphs produced by an uncertain graph. In a large-scale network, however, since the number of combinations of possible graphs is enormous, it is difficult to strictly calculate the expected value. Therefore, we also propose an efficient estimation method based on Monte Carlo sampling. When applying our method to road networks, the extracted nodes can be regarded as candidate sites of evacuation facilities that many residents can reach even in the situation where roads are stochastically blocked by natural disasters. In our experimental evaluations using actual road networks, we show the following promising characteristics: our proposed method 1) works stably with respect to the number of simulations; 2) extracts node sets reachable from more nodes even in a situation where many links are deleted; and 3) is computationally much more efficient than existing centrality measures and community extraction methods.
In many real-life graph structures, relationships among nodes are not permanent and sometimes break. For example, in infrastructure networks such as road networks or power grids, links can be broken due to reconstruction or disaster; similarly, in Social Networking Service (SNS) communication networks, communication among users is not always maintained and sometimes breaks. These graphs are considered uncertain graphs with a connection probability for each link. In an uncertain graph, connections among nodes are stochastically determined, so the number of possible instances is very large (see Fig. 1). In this study, we aim to estimate the node connectedness and extract expected connected subgraphs under stochastic link disconnections. Assuming an uncertain graph, where link disconnection occurs stochastically-called edge-uncertainty-we have proposed a new centrality measure focusing on the degree of connectedness with neighboring nodes and an efficient sampling algorithm based on a time-evolving graph (Fushimi et al. 2018). Although our method can be applied to general networks in principle, we target mainly spatial networks because urban road structures can be naturally regarded as uncertain graphs and few existing studies focus on such networks. In our previous study (Fushimi et al. 2018), our method-connectedness centrality-defines the connectedness of each node as the expectation of the number of reachable nodes and attempts to extract nodes with high connectedness even when the graph is separated into several connected components by a link disconnection. In order to extract multiple nodes with high connectedness, we enhanced this method to group connectedness centrality, which selects nodes so as to maximize our objective function in a greedy manner. For a road network, the group connectedness centrality can be used to estimate installation sites for evacuation facilities, as these must be accessible to neighboring residents even when the roads are blocked due to floods, landslides, or the collapse of houses and telegraph pillars.
Fig. 1: Uncertain graph and possible worlds
In this paper, we substantially extended our previous study (Fushimi et al. 2018) by adding new content as follows:
We added research on uncertain graphs (Jin et al. 2011; Ceccarello et al. 2017; Potamias et al. 2010; Pfeiffer and Neville 2011) and facility locations (Alp et al. 2003; McKendall and Shang 2006; Levanova and Loresh 2004; Tabata et al. 2017; Agra et al. 2017; Kaveh et al. 2018; Puerto et al. 2014) to the references and discuss these related studies in "Related work" section. Through that discussion, we further clarify the originality of our work in the field.
We added four figures (Fig. 1: Uncertain graph and possible worlds; Fig. 2: Sampling algorithm; Fig. 3: Proposed sampling algorithm; Fig. 4: Counting of reachable nodes) and related discussions to improve the understandability of our manuscript.
We reformatted our proposed method and provide pseudocode as Algorithms 1 and 2 to improve its understandability and readability.
We prove that our proposed measures are unbiased estimators.
We provide additional experimental results in "Results of connectedness centrality: cnc3(v)" section and demonstrate how the proposed centrality is quite different from traditional centrality measures, specifically, closeness, betweenness, and eigenvector centrality, by comparing the top 1000 nodes identified in each centrality.
We discuss how our proposed algorithm deals with non-uniform connection probabilities in "Extension: case of non-uniform connection probabilities" section and with other types of networks in "Discussion" section.
We also revised and extended our Introduction and Conclusion according to the above-mentioned additions.
The rest of our paper is organized as follows. In "Related work" section, we overview some related work and, in "Proposed measure" section, we explain in detail the proposed centrality measure and proposed algorithm. In "Experimental settings", "Results of connectedness centrality: cnc3(v)" and "Results of group connectedness centrality: \(cnc_{3}(\mathcal {R})\)" sections, we set forth and discuss the experimental settings and results. Furthermore, we discuss how our proposed algorithm deals with non-uniform connection probabilities and other types of networks in "Extension: case of non-uniform connection probabilities" section and "Discussion" section, and, finally, we summarize our paper and propose future work in "Conclusion" section.
In this section, related work is organized from the viewpoint of centrality measure, community extraction, uncertain graphs, and facility location problems.
Centrality measure
In our method, each node is ranked by its connectedness score with neighbor nodes, which can be treated as a centrality measure. Some centrality measures for nodes have been proposed in sociology and web science, including degree, closeness, harmonic, betweenness, eigenvector, Katz, Bonacich, HITS, and PageRank (Freeman 1979; Katz 1953; Bonacich 1987; Brin and Page 1998; Kleinberg 1999). Since the degree distribution does not follow a power-law distribution and the maximum degree of nodes is relatively small due to geographical restrictions in road networks, degree centrality does not make sense. Closeness centrality and betweenness centrality take the shortest path between nodes into account, so these measures work well even in urban traffic networks, as reported in some studies (Crucitti et al. 2006; Park and Yilmaz 2010). Furthermore, eigenvector measures can extract subgraphs where high-degree nodes connect to each other. As a result, in a road network, it will be possible to extract urban districts where intersections with relatively high degrees are connected.
Our aim is to extract nodes with high connectedness scores, which can then be applied to candidate sites for evacuation facilities. When extracting such nodes, accessibility to these nodes can be an important factor. Closeness centrality quantifies accessibility based on distance, but does not take into consideration road blockages. Therefore, even if an extracted node is close to other nodes, if the node is located near a river, it is not a viable candidate location for an evacuation facility. Taken together, since some of the existing measures could extract important nodes in each of the notions for road networks, we experimentally compare the characteristics of related measures and ours.
Community extraction
Our method extracts representative nodes and divides the remaining nodes into clusters based on connectedness with the representative nodes, and thus can be treated as a community extraction method. In recent years, many methods for community extraction have been proposed (Seidman 1983; Girvan and Newman 2002; Clauset et al. 2004; Palla et al. 2005; von Luxburg 2007; Blondel et al. 2008; Chen and Hero 2015). However, these methods cannot be straightforwardly applied to road networks, where the degrees of nodes roughly obey uniform distributions and there is little difference between the numbers of inter-community and intra-community links. Furthermore, although spectral clustering (von Luxburg 2007) and deep community detection (Chen and Hero 2015) have a similar flavor to our method in terms of link cutting, since the eigen-gaps of the Laplacian matrix and differences between the successive eigenvalues of spatial networks are quite small, unlike social networks, it is difficult to calculate the Fiedler vector of the Laplacian matrix with stability. This fact corresponds to the existence of some non-dominant communities and a few links that, when cut, isolate the spatial network. The Girvan-Newman (GN) method also cuts some links according to the edge-betweenness centrality and treats connected components of the remaining graph as communities (Girvan and Newman 2002). The GN method is based on a similar framework to our method in treating the connected components as communities by cutting edges; however, it is difficult to apply to large-scale networks due to its large computational complexity. Therefore, we compare our method with the CNM (Clauset, Newman, and Moore) method (Clauset et al. 2004), which directly optimizes the modularity function to accelerate the calculation and produces similar results to the GN method.
Uncertain graph
The uncertain graph has been studied within the broader context such as network reliability, querying, and mining. Jin et al. proposed two methods to efficiently and accurately estimate the probability that the distance between a given node pair of an uncertain graph is smaller than the designated value (Jin et al. 2011). In (Jin et al. 2011), the authors generalized the simple reachability problem to the distance-constraint reachability problem, which considers both distance and reachability in an uncertain graph. Therefore these methods can be useful in the context of evacuation activity. However, probability must be assigned to each link. Our method, on the other hand, integrates out all possibilities so that it does not need to preliminarily know the probability of each link. Our method adopts the deterministic recursive computational procedure in order to minimize the variance of the estimator and unequal probabilistic sampling over the enumeration tree in order to accelerate the sampling process.
Ceccarello et al. developed a node clustering method for an uncertain graph and reduced the treated problem to the k-center and k-median problems (Ceccarello et al. 2017). In this method, distances between nodes are defined by the inverse of the connection probability among them, which is efficiently and accurately estimated by the Monte Carlo sampling method. Potamias et al. introduced distance measures and identified the k-Nearest Neighbor nodes from an uncertain graph by calculating the probability of the distances between the arbitrary node pair based on the Monte Carlo sampling method with efficient pruning techniques in order to reduce the search space (Potamias et al. 2010). Pfeiffer et al. extended some structural indices on discrete graphs to probabilistic graphs by computing the expected values of sampling graph indices (Pfeiffer and Neville 2011).
Some research on stochastic graphs addresses another kind of uncertainty. A stochastic graph is a fixed-structure graph with randomly changing edge weights. The distribution of its change probability is unknown, but stationary. Misra et al. proposed a method that uses Learning Automata (LA) and Frigioni's algorithm to find the statistical shortest path tree in an average graph topology for the dynamic single source shortest path problem (DSSSP) (Misra and Oommen 2005). Rezvanian et al. proposed generalization of some network measures for stochastic graphs using six LA-based algorithms to calculate these measures (Rezvanian and Meybodi 2016). Vahidipour et al. proposed an efficient LA-based algorithm that speeds up the process of finding the shortest path in a stochastic graph using parallelism for DSSSP (Vahidipour et al. 2017).
In contrast to these existing studies, which assume the connection strength as a probability value for each link, our method assumes a probability distribution for each link. Furthermore, our method also conducts Monte Carlo sampling and, as seen above, the efficiency and accuracy of these methods depend on the quality of the sampling techniques. Unlike the above-mentioned sampling methods, our proposed sampling algorithm is based on a time-evolving graph for which no link exists in a graph at the initial state and links are added to the graph one by one.
Facility location on graph
The most famous facility location problem is the k-median problem. The objective of the k-median problem is to minimize the sum of distances between citizens and their nearest facility, and many approximation algorithms have been reviewed (Alp et al. 2003; McKendall and Shang 2006; Levanova and Loresh 2004). As a facility location problem over graphs, a closeness-centrality-based method has been proposed (Tabata et al. 2017). Closeness centrality focuses on the graph distance and extracts the most central node with a minimum distance to the others. Although the method can quickly solve the location of a single facility, it cannot handle multiple facilities. Agra et al. (2017) provided a k-median problem algorithm for a graph that is divided into several connected components like archipelagos. Since the approach assumes that the division into connected components is known beforehand, it cannot deal with graph disruption caused by the stochastic occurrence of link breakages.
For locating evacuation facilities, the method proposed in (Kaveh et al. 2018) introduced a weight for each node that represents a risk factor like topographic conditions to the k-median problem. This method mainly considers the failure of nodes (facilities), but not the stochastic occurrence of link disconnection. The method proposed by Puerto et al. considered the disruption possibility of an edge (Puerto et al. 2014), but it has high computational costs; in fact, efficient algorithms are known in the literature only for the cases k=1,2. Unlike these methods, our method attempts to extract multiple nodes as evacuation facility candidate sites based on connectedness with neighbor nodes calculated by an efficient and accurate sampling.
Proposed measure
In order to estimate node connectedness under a stochastic link disconnection, we propose a node ranking measure, called connectedness centrality, and its efficient sampling algorithm. To this end, we explain three versions of connectedness centrality measures. More specifically, we present the first centrality, called cnc1, as a general theoretical framework and then derive the second, called cnc2, as a computable measure by discretizing its prior probability distribution. We then propose a third, called cnc3, as a practical measure equipped with its efficient estimation algorithm. This can be naturally explained as a special case of cnc2, assuming that each link connection probability is the same, although this equal probability assumption can be easily relaxed, as shown in "Extension: case of non-uniform connection probabilities" section. Furthermore, to select multiple nodes, we propose group connectedness centrality by extending the target of connectedness centrality from each node to node groups.
Connectedness centrality: c n c 1
Let \(G = ({\mathcal {V}}, {\mathcal {E}})\) be the graph structure of a given spatial network. For each link \(e \in {\mathcal {E}}\), we consider a link connection probability p(e;s) that is determined according to some model, such as a road blockage model, based on geographical properties, where s is a parameter, such as the inverse of the magnitude of an earthquake, that controls the probability p(e;s). We set s in the range of 0≤s≤1 for our convenience. Figure 1 depicts an uncertain graph introducing connection probabilities to a given spatial network. For each link \(e \in {\mathcal {E}}\), let x(e) be a random variable expressing the link connectivity, i.e., x(e)=1 if link e is connected; otherwise x(e)=0, where p(x(e)=1;s)=p(e;s). Then, by suitably arranging these random variables and setting \(\Omega = \{0, 1\}^{|{\mathcal {E}}|}\), we can construct an indicator vector expressed as x=(⋯,x(e),⋯)∈Ω, whose total number of possible instantiations (possible worlds) amounts to \(|\Omega | = |\{0, 1\}|^{|{\mathcal {E}}|} = 2^{|{\mathcal {E}}|}\). For each instance of the indicator vector x, we can obtain the corresponding graph \(G_{\textbf {x}} = ({\mathcal {V}}, {\mathcal {E}}_{\textbf {x}})\), where \({\mathcal {E}}_{\textbf {x}}= \{e~|~e \in {\mathcal {E}}, x(e)=1\}\). In this paper, assuming a basic model based on independent Bernoulli trials for all links, with respect to each graph Gx obtained from x, we can compute its occurrence probability as follows:
$$ q(\mathbf{x}; s) = \prod_{e \in {\mathcal{E}}} p(e; s)^{x(e)} (1-p(e; s))^{1-x(e)} = \prod_{e \in {\mathcal{E}}_{\mathbf{x}}} p(e; s) \prod_{e \in {\mathcal{E}} \setminus {\mathcal{E}}_{\mathbf{x}}} (1-p(e; s)), $$
where ·∖· stands for a set difference operator. Here, we should emphasize that, unlike most studies on uncertain graphs, where each link connection probability is designated as a value, our approach specifies each as a stochastic model of link connection p(e;s) controlled by parameter s.
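To make the possible-world semantics concrete, the Python sketch below draws one instance Gx by an independent Bernoulli trial per link, as in Eq. (1); the link-connection model p(e, s) is a caller-supplied function here, and all names are ours rather than the paper's:

```python
import random

def sample_possible_world(edges, p, s, rng=random):
    """Keep each link e independently with probability p(e, s)."""
    return [e for e in edges if rng.random() < p(e, s)]

# toy usage with the uniform connection model p(e, s) = s
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
kept = sample_possible_world(edges, lambda e, s: s, s=0.5)
```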
After decomposing Gx into connected components, we compute the size of each connected component as the number of nodes belonging to the component and let c(v;Gx) be the set of nodes belonging to the connected component in which node \(v \in {\mathcal {V}}\) is included, where c(u;Gx)=c(v;Gx) if the nodes u and v belong to the same connected components. In this study, under a given stochastic model of link connection, we define our connectedness centrality of node \(v \in {\mathcal {V}}\) by the expected size of the connected component where v is included. More specifically, for each node \(v \in {\mathcal {V}}\), we quantify our first version of connectedness centrality by the following expectation:
$$ \phi_{1}(v) = cnc_{1}(v) = \int_{0}^{1} \sum_{\mathbf{x} \in \Omega} |c(v ;G_{\mathbf{x}})| q(\mathbf{x}; s) r_{1}(s) ds, $$
where r1(s) stands for a prior probability distribution with respect to parameter s. For instance, it can be used to express the fact that small earthquakes occur frequently, but huge ones are quite rare.
Next, we consider computing the integration of s by the summation of H+1 equal interval points. Note that, for the h-th point (0≤h≤H), the link connection probability is set to p(e;h/H). Under this quantization, for each node \(v \in {\mathcal {V}}\), we can quantify our second version of connectedness centrality by the following expectation:
$$ \phi_{2}(v) = \sum_{h=0}^{H} \sum_{\mathbf{x} \in \Omega} |c(v ;G_{\mathbf{x}})| q(\mathbf{x}; h) r_{2}(h), $$
where \(r_{2}(h) = r_{1}(h/H)/\sum _{h'=0}^{H} r_{1}(h'/H)\).
Below, we propose computing the summation of \(2^{|{\mathcal {E}}|}\) times by J Monte Carlo simulations. Let \(G_{(h, j)} = ({\mathcal {V}}, {\mathcal {E}}_{(h, j)})\) be a graph obtained by the j-th simulation (1≤j≤J) at the h-th point (See Fig. 2); then, we can estimate our connectedness centrality ϕ2(v) defined in Eq. (3) by the following:
$$ cnc_{2}(v) = \frac{1}{J} \sum_{h=0}^{H} \sum_{j=1}^{J} |c(v ;G_{(h, j)})| r_{2}(h). $$
Now, by considering the following expectation value of |c(v;Gx)| denoted by 〈|c(v;Gx)|〉Ω, with respect to our simulation based on q(x;h/H),
$$ \langle |c(v ;G_{\mathbf{x}})| \rangle_{\Omega} = \sum_{\mathbf{x} \in \Omega} |c(v ;G_{\mathbf{x}})| q(\mathbf{x}; h/H), $$
we can see that cnc2(v) is an unbiased estimator of ϕ2(v), i.e.,
$$ \langle cnc_{2}(v) \rangle = \frac{1}{J} \sum_{h=0}^{H} \sum_{j=1}^{J} \langle |c(v ;G_{\mathbf{x}})| \rangle_{\Omega} r_{2}(h) = \phi_{2}(v). $$
Thus, by setting both H and J to sufficiently large values, we can naturally expect that cnc2(v) defined in Eq. (4) becomes a reasonably accurate estimate of cnc1(v) defined in Eq. (2). However, when straightforwardly computing cnc2(v) for every \(v \in {\mathcal {V}}\) for a large H and J, the computational load becomes large because the computational complexity is O(HJ(N+L)), where \(N = |{\mathcal {V}}|\) and \(L = |{\mathcal {E}}|\) respectively stand for the numbers of nodes and links for a given network. Note that the computational complexity of decomposing a graph into its connected components is O(N+L) and, during this process, we can simultaneously compute their sizes.
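To illustrate why this direct route is costly, the sketch below implements the estimator of Eq. (4) naively, using networkx for the O(N+L) connected-component decomposition; the function and argument names are our own:

```python
import random
import networkx as nx

def cnc2_estimate(G, p, H, J, r2, rng=random):
    """Naive Monte Carlo estimate of cnc2 (Eq. (4)).
    G: networkx graph; p(e, s): link-connection model;
    r2(h): discretized prior weight; J simulations per grid point h."""
    score = {v: 0.0 for v in G.nodes}
    for h in range(H + 1):                   # grid points s = h / H
        for _ in range(J):
            kept = [e for e in G.edges if rng.random() < p(e, h / H)]
            Gx = nx.Graph()
            Gx.add_nodes_from(G.nodes)
            Gx.add_edges_from(kept)
            for comp in nx.connected_components(Gx):
                w = len(comp) * r2(h) / J    # |c(v; G_(h,j))| r2(h) / J
                for v in comp:
                    score[v] += w
    return score
```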
Below, we propose another reasonably accurate estimate, referred to as cnc3(v), instead of cnc2(v) together with an effective algorithm whose computational complexity becomes O(J(L+N logN)), rather than O(HJ(N+L)). We assume that each link connection probability is the same, i.e., p(e;h/H)=p(h/H)=h/H, and define the set of graphs whose number of links is h, expressed as \(\Omega (h) = \{ \mathbf {x}~|~\sum _{e \in {\mathcal {E}}} x(e) = h \}\). This definition corresponds to employing a setting of H=L. Under this uniform probability setting, for each node \(v \in {\mathcal {V}}\), we can quantify our third version of connectedness centrality by the following expectation:
$$ \phi_{3}(v) = \sum_{h=0}^{H} \frac{1}{|\Omega(h)|} \sum_{\mathbf{x} \in \Omega(h)} |c(v ;G_{\mathbf{x}})| r_{2}(h). $$
Below, we estimate ϕ3(v) by J Monte Carlo simulations.
In our proposed algorithm, from the initial state that all links are disconnected and thus all nodes are isolated in the setting p(0)=0, we repeatedly add a randomly selected link one by one until the final state where all original links are connected in the setting p(1)=1 (See Fig. 3). During this process, we attempt to efficiently compute the expected size of the connected component for each node \(v \in {\mathcal {V}}\) by focusing on the difference between the graphs caused by adding only one link. More specifically, for the j-th simulation, we assign a random order to each link \(e \in {\mathcal {E}}\), denoted by e(h,j), where we also use h∈{1,⋯,H} to express the order that the link becomes connected. By considering a graph defined by \(G^{(h, j)} = ({\mathcal {V}}, {\mathcal {E}}^{(h, j)})\), where \({\mathcal {E}}^{(h, j)} = \left \{ e^{(h', j)} \in {\mathcal {E}}~\left |~h' \leq h\right. \right \}\), we can estimate our connectedness centrality ϕ3(v) defined in Eq. (7) by the following:
$$ cnc_{3}(v) = \frac{1}{J} \sum_{j=1}^{J} \sum_{h=1}^{H} \left|c\left(v ;G^{(h, j)}\right)\right| r_{2}(h). $$
By considering the following expectation value of |c(v;Gx)|, denoted by 〈|c(v;Gx)|〉Ω(h), with respect to our simulation based on 1/|Ω(h)|,
$$ \langle |c(v ;G_{\mathbf{x}})| \rangle_{\Omega(h)} = \frac{1}{|\Omega(h)|} \sum_{\mathbf{x} \in \Omega(h)} |c(v ;G_{\mathbf{x}})|, $$
we can see that \(cnc_{3}(v)\) is an unbiased estimator of \(\phi_{3}(v)\), i.e.,
$$ \langle cnc_{3}(v) \rangle = \frac{1}{J} \sum_{h=1}^{H} \sum_{j=1}^{J} \langle |c(v ;G_{\mathbf{x}})| \rangle_{\Omega(h)} r_{2}(h) = \phi_{3}(v). $$
Thus, for uniform probability settings, by setting both H and J to sufficiently large values, we can naturally expect that cnc3(v) defined in Eq. (8) can be a reasonably accurate estimate of cnc1(v) defined in Eq. (2).
Solution algorithm of c n c 3
Below, we provide details of our proposed algorithm together with its computational complexity. In the initial state with no link, we set that every node belongs to an individually different component by assigning a unique component number n(v)∈{1,⋯,N} to each node \(v \in {\mathcal {V}}\). When a new link (represented by a red link in Fig. 3) denoted by e(h,j)=(x,y)(h,j) is added, we can proceed to the next link if nodes x and y belong to the same connected component; otherwise, we need to change the component number of nodes belonging to one component.
More specifically, by assuming |c(x;G(h,j))|≥|c(y;G(h,j))| without loss of generality, we propose that the component number with a smaller size is changed to a larger one by setting n(z)←n(x) for each z∈c(y;G(h,j)). Evidently, for each link addition, the number of nodes whose component number is changed never exceeds N/2. Thus, during all link additions, the computational complexity of these renumbering processes becomes O(N logN).
Let \(cnc_{3}^{(h, j)}(v)\) be the partial summation of \(\left |c\left (v ;G^{(h', j)}\right)\right |\phantom {\dot {i}\!}\) until h′=h for the j-th simulation defined by
$$ cnc_{3}^{(h, j)}(v) = \sum_{h'=1}^{h} \left|c\left(v ;G^{(h', j)}\right)\right| r_{2}(h'). $$
Now, suppose that when a new link e(h,j)=(x,y)(h,j) was added at the h-th step, nodes x and y switch to belong to the same connected component for the first time. For arbitrary h′≥h, since \(\phantom {\dot {i}\!}c\left (x ;G^{(h', j)}\right) = c\left (y ;G^{(h', j)}\right)\), we can obtain the following relation:
$$ cnc_{3}^{(h', j)}(x) - cnc_{3}^{(h', j)}(y) = cnc_{3}^{(h-1, j)}(x) - cnc_{3}^{(h-1, j)}(y). $$
Thus, by maintaining the partial summation \(cnc_{3}^{(h', j)}(x)\) for a head node x of each connected component and keeping the difference values such as \(cnc_{3}^{(h-1, j)}(x) - cnc_{3}^{(h-1, j)}(y)\) for the other nodes in the component, we can obtain the final summation values, such as \(cnc_{3}^{(H, j)}(y)\), by using Eq. (12). Note that the computational complexity of obtaining \(cnc_{3}^{(h, j)}(v)\) for every \(v \in {\mathcal {V}}\) is O(N) and that of updating these difference values is O(N logN) because these updates can be done together with the above node renumbering processes. Therefore, since we need to shuffle and examine all of the links at the j-th simulation, the total computational complexity of our proposed algorithm becomes O(J(L+N logN)). Algorithm 1 and Fig. 4 show the details of the algorithm of connectedness centrality. In Algorithm 1, delta has two meanings: for the head node s of a connected component at step h, s.delta indicates the partial sum of reachable nodes \(cnc_{3}^{(h-1,j)}(s)\); for the other appearing node x, x.delta indicates the difference value of the partial summation of the reachable nodes between node x and its head node s, \(cnc_{3}^{(h,j)}(x)-cnc_{3}^{(h,j)}(s)\).
Group connectedness centrality
Although we can extract high-connectedness nodes using our connectedness centrality, these nodes gather unevenly in some parts of the network because the measure focuses only on whether or not a node belongs to a large connected component. Actually, as shown in "Results of connectedness centrality: cnc3(v)" section, the top 1000 nodes of the connectedness centrality ranking are located near each other. This tendency is impractical for the purpose of estimating evacuation facility locations. To overcome this shortcoming, we enhance the notion of our connectedness centrality to group connectedness centrality.
In group connectedness centrality, connectedness of the node set \({\mathcal {R}}\) is defined as:
$$ cnc_{1}({\mathcal{R}}) = \int_{0}^{1} \sum_{\mathbf{x} \in \{0, 1\}^{|{\mathcal{E}}|}} |c({\mathcal{R}} ;G_{\mathbf{x}})| q(\mathbf{x}; s) r_{1}(s) ds, $$
where \(c({\mathcal {R}} ;G_{\mathbf {x}}) = \bigcup _{r \in {\mathcal {R}}}c(r ;G_{\mathbf {x}})\) stands for the number of reachable nodes from whichever of \(r \in {\mathcal {R}}\).
Similarly to connectedness centrality, we compute the integration of s by the summation of H+1 equal interval points and set \(r_{1}(s)\) to be a uniform distribution.
$$ cnc_{3}({\mathcal{R}}) = \frac{1}{J} \sum_{j=1}^{J} \sum_{h=1}^{H} \left|c\left({\mathcal{R}} ;G^{(h, j)}\right)\right| r_{2}(h). $$
In order to select the set of K nodes, \({\mathcal {R}}\), that maximizes the objective function defined in Eq. (14), we utilize a greedy algorithm. Hereafter, we refer to the selected nodes as representative nodes. When selecting the k-th representative node \({\hat r}_{k}\), the greedy algorithm fixes the k−1 already selected nodes \({\mathcal {R}}_{k-1}\) and selects the node with the highest marginal gain, MG, defined by
$$\begin{array}{@{}rcl@{}} MG(v; {\mathcal{R}}_{k-1}) &=& cnc_{3}({\mathcal{R}}_{k-1} \cup \{v\}) - cnc_{3}({\mathcal{R}}_{k-1}) \\ &=& \frac{1}{J} \sum_{j=1}^{J} \sum_{h=1}^{H} mg(v;{\mathcal{R}}_{k-1})^{(h, j)} r_{2}(h), \end{array} $$
where \(mg(v;{\mathcal {R}}_{k-1})^{(h, j)} = \left |c\left ({\mathcal {R}}_{k-1} \cup \{v\} ;G^{(h, j)}\right) \setminus c\left ({\mathcal {R}}_{k-1} ;G^{(h, j)}\right)\right |\) stands for the increment in the number of reachable nodes when node v, a candidate for the k-th representative node, is added to \({\mathcal {R}}_{k-1}\). The total computational complexity of group connectedness centrality becomes O(KJ(L+N logN)). Let \({\mathcal {Q}}\) be a subset of \({\mathcal {R}}\), i.e., \({\mathcal {Q}} \subset {\mathcal {R}}\). Then we obtain \(mg(v;{\mathcal {Q}})^{(h,j)} \geq mg(v;{\mathcal {R}})^{(h,j)}\), which directly yields \(MG(v;{\mathcal {Q}}) \geq MG(v;{\mathcal {R}})\) from the definition of \(MG(v;{\mathcal {R}})\) shown in Eq. (15). Therefore, \(cnc_{3}({\mathcal {R}})\) is a monotone submodular function, and thus the greedy solution is guaranteed to be of reasonably high quality even in the worst case (within a factor 1−1/e of the optimum, by the standard result for greedy maximization of monotone submodular functions).
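To illustrate the selection loop only, here is a naive greedy sketch that treats cnc3 as a black-box oracle `score`; this is our simplification, since the actual algorithm evaluates the marginal gains mg(v;R)^(h,j) inside the simulations instead of recomputing the estimator for every candidate, which is what yields the O(KJ(L+N logN)) bound.

def greedy_select(candidates, K, score):
    # Greedy maximisation of a monotone submodular set function `score`
    # (here, an oracle for cnc3(R)); names and structure are illustrative.
    R, current = [], 0.0
    for _ in range(K):
        best_v, best_gain = None, float("-inf")
        for v in candidates:
            if v in R:
                continue
            gain = score(R + [v]) - current   # marginal gain MG(v; R_{k-1})
            if gain > best_gain:
                best_v, best_gain = v, gain
        R.append(best_v)
        current += best_gain
    return R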
After selecting the K representative nodes, each of the remaining nodes is assigned to the community of the representative node with which it has the highest degree of connectedness. Suppose that, when a new link is added at the h-th step of the j-th simulation, node v comes to belong to the same connected component as representative node r for the first time. The degree of connectedness of nodes v and r is then defined as f(v,r)(j)=1−h/H, and the degree of connectedness over all J simulations is \(F(v, r) = J^{-1}\sum _{j = 1}^{J} f(v, r)^{(j)}\). Each remaining node is assigned to the community of the representative node with the highest connectedness as follows:
$${\mathcal{V}}^{(k)} = \{v \in {\mathcal{V}}; r_{k} = \text{arg~max}_{r \in {\mathcal{R}}} F(v,r) \}. $$
In the final stage of a simulation, most representative nodes belong to the same connected component, so the degree of connectedness between a remaining node v and each representative node can be equal; in that case, node v is assigned to the community of the closest representative node in terms of graph distance. Hereafter, we refer to this method as CNC and summarize it in Algorithm 2.
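A small sketch of this assignment rule, assuming the lookups F[v][r] (the averaged 1−h/H values) and dist[v][r] (shortest-path distances on the full graph) have been precomputed; the names are ours:

def assign_community(v, reps, F, dist):
    # Pick the representative with the highest connectedness F(v, r);
    # ties (frequent when all representatives end up in one giant
    # component) are broken by graph distance, as described above.
    best = max(F[v][r] for r in reps)
    tied = [r for r in reps if F[v][r] == best]
    return min(tied, key=lambda r: dist[v][r])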
In the context of the evacuation facility location problem, the representative node corresponds to a candidate site for an evacuation facility.
Experimental settings
To reveal the characteristics of our method, we conducted several experiments using actual datasets to compare our method with some existing methods.
In our experiments, we employed the road networks of the following four prefectures extracted from Digital Road Map (DRM) data: Tokyo, Kanagawa, Shizuoka, and Ibaraki. We extracted all intersections and roads from the DRM data of each prefecture. We then constructed a spatial network with the intersections as the nodes and the roads between the intersections as the links, following a standard formulation of road networks such as the one presented by SNAP (Stanford Large Network Dataset Collection; see Footnote 1). Namely, we deleted the nodes used for the curve segments of roads by directly connecting the intersections at both ends of each curve segment; curve-segment nodes are the points representing polylines between intersections in DRM, used to approximate road shapes. As a result, the four networks have 340919, 295151, 110925, and 172892 nodes, and 485858, 402576, 162322, and 263075 links, respectively.
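The contraction of curve-segment nodes can be sketched as follows with networkx, under the simplifying assumption that every degree-2 node is a polyline point (the actual DRM preprocessing identifies curve-segment points by their attributes rather than by degree):

import networkx as nx

def contract_curve_segments(g: nx.Graph) -> nx.Graph:
    # Delete degree-2 nodes by directly connecting their two neighbours;
    # repeated passes are needed because each removal can create a new
    # degree-2 node along the same polyline.
    g = g.copy()
    changed = True
    while changed:
        changed = False
        for v in list(g.nodes):
            if g.degree(v) != 2:
                continue
            nbrs = list(g.neighbors(v))
            if len(nbrs) != 2:        # self-loop corner case: skip
                continue
            u, w = nbrs
            g.remove_node(v)
            g.add_edge(u, w)          # nx.Graph silently merges duplicates
            changed = True
    return g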
Existing methods used for comparison
We used the following three centrality measures and two clustering (community extraction) methods for comparison. We begin by briefly describing the centrality measures:
Closeness centrality
Closeness centrality is calculated from the shortest path lengths d(u,v) between pairs of nodes. In this paper, we employed the harmonic version, which uses the inverse of the distance:
$$clc(v) = \sum_{u \in {\mathcal{V}} \setminus \{v\}} d(v,u)^{-1} $$
Betweenness centrality
Betweenness centrality is calculated based on the number of shortest paths between all pairs of nodes:
$$bwc(v) = \sum_{s \in {\mathcal{V}} \setminus \{v\}} \sum_{t \in {\mathcal{V}} \setminus \{s,v\}} \frac{\sigma_{s,t}(v)}{\sigma_{s,t}}, $$
where σs,t is the number of shortest paths between s and t, and σs,t(v) is the number of those paths that pass through v.
Eigenvector centrality
Eigenvector centrality is based on the dominant eigenvector of the adjacency matrix and is calculated by the power iteration method:
$$eig_{t+1}(v) = \sum_{u \in \Gamma(v)} eig_{t}(u),~~eig_{t+1}(v) \gets \frac{eig_{t+1}(v)}{\sqrt{\sum_{u \in {\mathcal{V}}} eig_{t+1}^{2}(u)}}, $$
where Γ(v) is the set of nodes adjacent to v. We used the converged value eigt(v) as the centrality score eig(v).
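For reference, all three baseline measures are available off the shelf in networkx; the snippet below is only meant to pin down the variants used (harmonic closeness, unnormalized betweenness, power-iteration eigenvector), run here on a toy grid rather than a road network:

import networkx as nx

g = nx.grid_2d_graph(10, 10)                        # stand-in graph
clc = nx.harmonic_centrality(g)                     # sum of 1/d(v,u)
bwc = nx.betweenness_centrality(g, normalized=False)
eig = nx.eigenvector_centrality(g, max_iter=1000)   # power iteration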
Next, we outline the community extraction methods:
Distance-based method
We extend the closeness centrality to group closeness centrality similarly to our group connectedness centrality.
$$ clc({\mathcal{R}}) = \sum_{v \in {\mathcal{V}}} \max_{r \in {\mathcal{R}}} d(v,r)^{-1}. $$
To maximize the objective function in Eq. (16), we employ a greedy algorithm, as in our group connectedness centrality. Extracting a node set \({\mathcal {R}}\) of K representatives and assigning each of the remaining nodes to a community based on the shortest path length to its representative node is equivalent to K-medoids clustering based on the graph distance. Hereafter, we refer to this method as CLC.
Density-based method
In this study, we employed a well-known community extraction method, the CNM method (Clauset et al. 2004), which greedily optimizes the modularity function with data structures designed to accelerate the calculation. The CNM method divides all nodes into K communities without extracting representative nodes; we therefore use it just for reference, as a general community extraction method.
Results of connectedness centrality: cnc3(v)
How to determine parameter J?
We first experimentally examine the quality of the connectedness centrality calculations. The value of cnc3(v) depends on the number of simulations J. As mentioned above, as the number of simulations J increases, the expected value of cnc3(v) approaches the true value because cnc3(v) is an unbiased estimator. To confirm the variance of cnc3(v) with respect to J, we conducted M calculations and introduced the coefficient of variation, \(CV(v) = \sigma (v)/\overline {cnc}_{3}(v)\), where \(\overline {cnc}_{3}(v) = M^{-1}\sum _{m=1}^{M} cnc_{3}(v;m)\) stands for the arithmetic mean of the values cnc3(v;m), 1≤m≤M, obtained by the m-th calculation, and \(\sigma (v) = \sqrt{M^{-1}\sum _{m=1}^{M} (cnc_{3}(v;m)-\overline {cnc}_{3}(v))^{2}}\) is the standard deviation. Figure 5 shows the coefficient of variation for M=100 calculations, where the horizontal axis is the connectedness centrality score cnc3(v). In each of the calculations, we set the number of simulations to J=10000.
From Fig. 5, we can confirm that, for all networks, the value of CV(v) is quite small, especially for the nodes with high scores. Moreover, CV(v) decreases in proportion to \(1/\sqrt {J}\) as J increases. Based on this verification, in the remainder of this paper we report the results for J=10000 unless otherwise noted.
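For concreteness, the statistic can be computed as below; `pstdev` matches the M^{-1}-normalized standard deviation defined above (this helper is our illustration, not the paper's code):

import statistics

def coefficient_of_variation(samples):
    # CV(v) = sigma(v) / mean(v) over M repeated cnc3 calculations of
    # the same node v, e.g. samples = [cnc3(v; m) for m in range(M)].
    return statistics.pstdev(samples) / statistics.fmean(samples)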
Comparison with other centrality measures
We now compare the top-ranked nodes of the proposed connectedness centrality with those of existing centrality measures that can be naturally applied to road networks. Let CENT(r) and PROP(r) be the sets of nodes ranked within the top r by an existing centrality and by the proposed centrality, respectively; we quantitatively investigate the ranking similarity using the F-measure (Rijsbergen 1979), defined as follows:
$$F(r) = \frac{2 \cdot |PROP(r) \cap CENT(r)|}{|PROP(r)|+|CENT(r)|}. $$
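In code, the overlap measure is simply the following, assuming each ranking is given as a list of node ids sorted by score:

def f_measure(prop_ranking, cent_ranking, r):
    # Overlap F-measure between the top-r node sets of two rankings.
    P, C = set(prop_ranking[:r]), set(cent_ranking[:r])
    return 2 * len(P & C) / (len(P) + len(C))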
Figure 6 shows the F-measure with respect to the ranking position r. In all four networks, the F-measure is almost 0 up to the top 1000, so the rankings hardly match; that is, each centrality extracts different nodes as important.
Similarity of centrality rankings
Figure 7 plots the top 1000 nodes in the proposed and existing centrality rankings on the Shizuoka network. The highly ranked nodes of the connectedness centrality are distributed over wide plain areas, especially residential areas. In a residential area on the plain, there are many routes to other nodes, so even if some routes are blocked, alternative routes remain available; thus, the connectedness of these nodes with their neighbors is high. Nodes on important roads such as expressways are selected as highly ranked nodes by the closeness and betweenness centralities: high-closeness nodes are chosen from the central region of the network, while high-betweenness nodes are chosen from across the entire network. Highly ranked nodes of the eigenvector centrality are selected from the downtown area around a station; although not shown, the ranking based on the second eigenvector extracts the nodes of the downtown area around another station. In this way, the various centrality measures all extract important nodes from the road network, but the meanings of these nodes differ significantly.
Top 1000 nodes of centrality rankings (Shizuoka network). a Connectedness centrality. b Closeness centrality. c Betweenness centrality. d Eigenvector centrality
Results of group connectedness centrality: \(cnc_{3}(\mathcal {R})\)
In this section, we evaluate the group connectedness centrality from the viewpoints of stability with respect to the number of simulations, reachability of the extracted representative nodes, and computation time. In our experimental scenario, we set the number of representatives (evacuation facilities) K to relatively small values, K=5, 10, 15, 20, because, in practical terms, installation costs must be considered and the number of facilities that can be installed in each municipality is limited.
Stability with respect to the number of simulations
In this subsection, we show the stability of group connectedness centrality with respect to the number of simulations J. In this experiment, we ran the CNC calculation 10 times for each of J=10^1, 10^2, 10^3, 10^4 simulations. We regard the result with J=100,000 as converged and compare the results in terms of the similarities of representative nodes and communities. Figure 8a depicts the Minimum Matching Distance (MMD) between the representatives extracted by each CNC computation and those of the converged run, calculated as follows:
$$MMD(J) = \frac{1}{K} \sum_{k=1}^{K} \min_{1 \leq h \leq K} e(r(k), r(h;J)) +\frac{1}{K} \sum_{k=1}^{K} \min_{1 \leq h \leq K} e(r(k;J), r(h)), $$
where r(k) and r(k;J) respectively stand for the k-th representative node extracted by a CNC computation with 100,000 and with J simulations, and e(a,b) is the Euclidean distance between the locations of two representatives a and b. In Fig. 8a, the red solid line is the mean of MMD(J) over the 10 runs, with the number of simulations J on the horizontal axis, and the black lines show the average Euclidean distances between the node pairs of \(\Gamma _{d} = \{(u, v) \in {\mathcal {V}} \times {\mathcal {V}}; d(u,v) = d\},~~d=1,5,10\). As illustrated by Fig. 8a, the average MMD decreases as the number of simulations J increases and, at J=10^4, is about the same as or smaller than the average distance of Γ5. This means that almost the same representatives are extracted.
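A direct transcription of this distance, assuming `coords` maps a node id to its planar (x, y) location:

import math

def mmd(reps_ref, reps_j, coords):
    # Minimum Matching Distance between the reference representatives
    # (100,000 simulations) and those of a run with J simulations.
    def e(a, b):
        return math.dist(coords[a], coords[b])
    k = len(reps_ref)
    fwd = sum(min(e(a, b) for b in reps_j) for a in reps_ref) / k
    bwd = sum(min(e(b, a) for a in reps_ref) for b in reps_j) / k
    return fwd + bwd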
Stability w.r.t. the number of simulations (K=20). a Similarity between representatives. b Similarity between communities
Figure 8b depicts the Normalized Mutual Information (NMI) between the communities assigned to the nodes by each CNC computation. From Fig. 8b, the average NMI takes a substantially large value, NMI(J)≃0.9, which means that almost the same communities are extracted. These results confirm that stable results can be obtained with a number of simulations far smaller than the number of possible worlds, J≪2^L. Although we show the results for K=20, the largest value in our settings, similar results were obtained for K=5, 10, and 15.
Visualization of representatives and their communities
Next, we qualitatively evaluate our method, CNC, against the two existing methods, CLC and CNM, by visualizing the extracted communities, shown in Fig. 9b, c, and d. Due to space limitations, we show only the results for the Shizuoka network with K=20, the largest setting in our experiments; similar results were obtained for the other networks and for K=5, 10, and 15. In Fig. 9b and c, the representative nodes are drawn as star nodes and the colors of the other nodes indicate their assigned communities. Figure 9a depicts the natural environment in and around the Shizuoka network, such as mountains and rivers, which can act as constraints during evacuations. Figure 9b shows that our CNC method extracts representatives (star nodes) that avoid mountainous areas and divides the nodes into communities roughly according to the natural environment. Some representatives (surrounded by circles in Fig. 9b) are located in lakeside, streamside, and peninsula areas that are easily isolated, yet many nodes exist around them; evacuation facilities are therefore needed in these areas. By contrast, in the results of the CLC and CNM methods, several communities (surrounded by squares in Fig. 9c and d) range across rivers and mountains. In Fig. 9c, the two circled representative nodes are located in mountainous areas, so access for the residents of these communities may be difficult during disasters.
Visualization of Shizuoka network (K=20). a Landmarks. b CNC. c CLC. d CNM
Reachability under link cutting
In this subsection, we quantitatively evaluate the reachability of the extracted representative nodes under link disconnections, which model road blockages. In this experiment, we remove a certain ratio of the links, selected either by edge-betweenness centrality or uniformly at random. We then examine whether the representative nodes can be reached from the non-representative nodes along the remaining links.
First, we count the number of reachable representative nodes from each non-representative node when a certain ratio of high edge-betweenness links is removed. Figure 10 shows the average number of reachable representatives with respect to the cutting ratio taken as the horizontal axis. From Fig. 10, for all networks and all numbers of representatives K, the average number of reachable representatives extracted by our CNC method is substantially larger than that of the CLC method. In particular, even when 10% of the links are removed, at least one representative node can be reached from each non-representative node for any number of representatives. Although the difference between the results of the two methods tends to shrink gradually as the number of representative nodes increases, the proposed method needs fewer representative nodes to achieve the same degree of reachability as the representatives extracted by CLC. For example, when setting up K=5 evacuation facilities for the Tokyo network, each resident can reach two to five of the proposed evacuation sites on average (red line in the top-left image of Fig. 10a); to achieve the same degree of reachability with CLC, 15 evacuation facilities are required (green line in the bottom-left image of Fig. 10a).
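The cutting experiment can be sketched as follows with networkx; this naive version documents the procedure rather than an efficient implementation, and the function name and signature are our own:

import networkx as nx

def reachable_reps_after_cut(g, reps, cut_ratio):
    # Remove the top cut_ratio fraction of links by edge betweenness,
    # then count, for each non-representative node, how many
    # representatives remain reachable in the damaged graph.
    ebc = nx.edge_betweenness_centrality(g)
    k = int(cut_ratio * g.number_of_edges())
    cut = sorted(ebc, key=ebc.get, reverse=True)[:k]
    h = g.copy()
    h.remove_edges_from(cut)
    reps = set(reps)
    counts = {}
    for component in nx.connected_components(h):
        n_reps = len(reps & component)
        for v in component:
            if v not in reps:
                counts[v] = n_reps
    return counts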
Reachability of the representatives under high-betweenness-link cutting. a Tokyo. b Kanagawa. c Shizuoka. d Ibaraki
Next, Fig. 11 shows the average number of nodes that can reach a representative within a certain distance d, where the dotted and solid lines indicate d=10 and d=20, respectively. In almost all the graphs of Fig. 11, we see that 1) when the cutting ratio is 0, that is, when the graph is still connected, more nodes can reach the representatives extracted by CLC than those extracted by CNC; and 2) as the cutting ratio increases, more nodes can reach the representatives extracted by CNC than those extracted by CLC. Although the CLC method extracts the nodes with the smallest sum of distances while the graph remains connected, many nodes can no longer reach them once the graph becomes disconnected.
Distance to the representatives under high-betweenness-link cutting. a Tokyo. b Kanagawa. c Shizuoka. d Ibaraki
Similarly to the experiment of Fig. 10, we then removed a certain ratio of links selected uniformly at random and examined whether the representative nodes can be reached from the non-representative nodes along the remaining links (Fig. 12a). In this experiment, we executed the link removal trials 10 times and calculated the average number of reachable nodes. Unlike the results in Fig. 10, the difference between the two methods is quite small for every network. Similarly to Fig. 11, Fig. 12b shows the average number of nodes that can reach a representative within a certain distance d. Unlike the results in Fig. 11, more nodes can reach the CLC representatives; however, the number of nodes that can reach the CNC representatives is almost the same as under high-betweenness-link cutting. Therefore, the CNC method stably extracts promising representative nodes that are robust to both types of link cutting. Although not shown, similar results were obtained for the other networks.
Reachability and distance of the representatives under random-link cutting (Tokyo). a Reachability. b Distance
From these results, in the context of the evacuation facility location problem, residents can be expected to reach the evacuation facilities extracted by our method even when high-betweenness roads, such as a bridge between cities, are blocked.
Computation time
Finally, we evaluate our method from the viewpoint of computation time. Figure 13a and b show the computation time with respect to the number of simulations J and the number of communities K, respectively. As might be expected, Fig. 13a shows that the computation time increases with the number of simulations J; however, even at J=10000, CNC is faster than CLC. Moreover, Fig. 13b shows that the difference between the computation times of the CNC and CLC methods widens as the number of communities K grows. Therefore, our method can compute a number of representative nodes and their communities efficiently even for large-scale networks.
Computation time. a w.r.t. #simulations J (K=10). b w.r.t. #communities K (J=10000)
Extension: case of non-uniform connection probabilities
Although our proposed cnc3 algorithm was derived under the assumption of a uniform connection probability for all links, it should be mentioned that, by adequately transforming a given simple graph into a multigraph and/or by adequately introducing some virtual nodes and links, our current algorithm can easily handle non-uniform probabilities. Our algorithm is straightforwardly applicable to a multigraph. More specifically, denote the multiplicity of a link \(e \in {\mathcal {E}}\) by m(e), and let e and f be links whose multiplicities are m(e)=1 and m(f)=2, respectively, meaning that the multiset {e,f,f} is a subset of \({\mathcal {E}}\), that is, \(\{e, f, f\} \subset {\mathcal {E}}\). Then, for a list of links produced by randomly arranging the elements of \({\mathcal {E}}\), the probability that at least one copy of f appears prior to e is twice the probability that e appears prior to both copies. This indicates that we can naturally implement the non-uniform probability p(f;s)=2p(e;s), since the second occurrence of f is simply ignored in terms of connectedness. Hence, by adequately transforming a given simple graph into a multigraph, we can easily deal with non-uniform probabilities.
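A sketch of this multigraph expansion, assuming the relative probabilities are given as small integer multiplicities (the example above uses m(e)=1 and m(f)=2); the expanded list can be fed directly to the shuffling step of the simulation:

def expand_multiplicities(multiplicity):
    # Replicate each link according to its multiplicity, e.g.
    # {("u", "v"): 1, ("w", "x"): 2}; later copies of the same link are
    # harmless because they connect already-connected endpoints.
    links = []
    for edge, m in multiplicity.items():
        links.extend([edge] * m)
    return links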
As another way to deal with non-uniform probabilities, we can consider adding virtual nodes and links. More specifically, let (u,v) and (w,x) be links in a simple graph \(G({\mathcal {V}}, {\mathcal {E}})\). After removing (u,v), we add the two links (u,y) and (y,v) by introducing a completely new node \(y \not \in {\mathcal {V}}\), which produces a new graph \(G'({\mathcal {V}}', {\mathcal {E}}')\) where \({\mathcal {V}}' = {\mathcal {V}} \cup \{y\}\) and \({\mathcal {E}}' = ({\mathcal {E}}\setminus \{(u,v)\}) \cup \{(u,y), (y,v)\}\). From our uniform setting, we obtain p((u,y);s)=p((y,v);s)=p((w,x);s) for G′. Then, for a list of links produced by randomly arranging the elements of \({\mathcal {E}}'\), the probability that both (u,y) and (y,v) appear prior to (w,x) is half the probability that (w,x) appears prior to at least one of them, indicating that we can also naturally implement the non-uniform probability p((u,v);s)=0.5p((w,x);s) over the original graph G. This suggests that, by adequately introducing some virtual nodes and links, we can easily deal with non-uniform probabilities. Although our algorithm can thus handle non-uniform probabilities over a multigraph, in this paper we focus only on the case of a uniform probability over a simple graph in order to evaluate the basic performance of the proposed algorithm.
Our problem setting is closely related to percolation problems, and introducing the percolation states of individual nodes into our problem formulation is an interesting research direction, as done in the percolation centrality proposed by Piraveenan et al. (2013). In addition, since our proposed algorithm is quite efficient and scales linearly with the problem size for each simulation, we expect that, as another research direction, our method can solve some percolation problems more efficiently.
On the other hand, our proposed algorithm can contribute to some types of dynamic network analyses. In fact, for dynamic networks that evolve incrementally by the addition of one link at a time, we can directly apply our algorithm to efficiently compute the average number of reachable nodes for every node during a given period. Moreover, since it is straightforward to also cope with the deletion of a link, our algorithm is expected to serve as a basic tool for this type of reachability analysis, although we need to confirm the validity of this claim through further experiments in the future.
In addition to the above future directions, our immediate future work includes evaluating both the effectiveness of our connectedness centrality and the efficiency of our algorithm on other types of networks, such as social networks with non-uniform connection probabilities. For this purpose, in order to clarify the basic characteristics of our centrality and algorithm, we plan to utilize representative synthetic networks produced by the Erdős–Rényi, Barabási–Albert, and stochastic block models. Furthermore, our method, in particular the objective function and its marginal gain in the CNC method, could quantitatively evaluate existing facilities from the viewpoints of reachability to each facility, the contribution of each facility, and its degree of duplication, which supports planning not only of new construction but also of decommissioning. Therefore, more effective evacuation facility installation could be realized by objectively quantifying, with our objective function, the degree of contribution of candidate sites devised by domain experts.
In this paper, to extract high-connectedness nodes from large-scale networks, we proposed connectedness centrality and its extended version, group connectedness centrality, together with an efficient sampling method based on a time-evolving graph. The proposed method can be regarded as a generalization of connected component decomposition to a connected graph. In experiments using actual road networks, we confirmed that 1) connectedness centrality can quantify the degree of connectedness with neighboring nodes and extract high-connectedness nodes; and 2) group connectedness centrality can extract adequate representative nodes from the viewpoints of their locations, community members, and reachability. We further confirmed that our sampling-based approximation is efficient and effective in terms of computation time and stability with respect to the number of simulations.
As future work, we plan to develop an extended version of connectedness centrality that takes non-uniform link connection probabilities into account and to confirm whether our method applies to more varied types of networks.
The raw datasets used and analysed during the current study are available from an Open Street Map (OSM) site, https://mapzen.com/data/metro-extracts, and Digital Road Map (DRM) data, http://www.drm.jp/english/drm/e_index.htm.
http://snap.stanford.edu/data/index.html
Agra, A, Cerdeira JO, Requejo C (2017) A decomposition approach for the p-median problem on disconnected graphs. Comput Oper Res 86:79–85.
Alp, O, Erkut E, Drezner Z (2003) An efficient genetic algorithm for the p-median problem. Ann Oper Res 122:21–42.
Blondel, VD, Guillaume JL, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech Theory Exp 2008(10):P10008.
Bonacich, P (1987) Power and Centrality: A Family of Measures. Am J Sociol 92(5):1170–1182. https://doi.org/10.2307/2780000.
Brin, S, Page L (1998) The anatomy of a large-scale hypertextual web search engine. Comput Netw ISDN Syst 30:107–117.
Ceccarello, M, Fantozzi C, Pietracaprina A, Pucci G, Vandin F (2017) Clustering uncertain graphs. Proc VLDB Endowment 11(4):472–484.
Chen, PY, Hero AO (2015) Deep community detection. IEEE Trans Signal Process 63(21):5706–5719.
Clauset, A, Newman MEJ, Moore C (2004) Finding community structure in very large networks. Phys Rev E 70(6):066111. https://doi.org/10.1103/PhysRevE.70.066111.
Crucitti, P, Latora V, Porta S (2006) Centrality Measures in Spatial Networks of Urban Streets. Phys Rev E 73(3):036125.
Freeman, L (1979) Centrality in social networks: Conceptual clarification. Soc Netw 1(3):215–239. https://doi.org/10.1016/0378-8733(78)90021-7.
Fushimi, T, Saito K, Ikeda T, Kazama K (2018) A New Group Centrality Measure for Maximizing the Connectedness of Network Under Uncertain Connectivity In: Proceedings of the 7th International Conference on Complex Networks and Their Applications, 3–14. Springer.
Girvan, M, Newman MEJ (2002) Community structure in social and biological networks. Proc Natl Acad Sci 99(12):7821–7826. https://doi.org/10.1073/pnas.122653799.
Jin, R, Liu L, Ding B, Wang H (2011) Distance-constraint reachability computation in uncertain graphs. Proc VLDB Endowment 4(9):551–562.
Katz, L (1953) A new status index derived from sociometric analysis. Psychometrika 18:39–43.
Kaveh, A, Beitollahi A, Mahdavi V (2018) Locating emergency facilities using the weighted k-median problem: A graph-metaheuristic approach. Period Polytech Civ Eng 62(1):200–205.
Kleinberg, JM (1999) Authoritative sources in a hyperlinked environment. J ACM 46:604–632.
Levanova, TV, Loresh MA (2004) Algorithms of ant system and simulated annealing for the p-median problem. Autom Remote Control 65(3):431–438.
von Luxburg, U (2007) A tutorial on spectral clustering. Stat Comput 17(4):395–416.
McKendall, AR, Shang J (2006) Hybrid ant systems for the dynamic facility layout problem. Comput Oper Res 33(3):790–803.
Misra, S, Oommen BJ (2005) Dynamic algorithms for the shortest path routing problem: learning automata-based solutions. IEEE Trans Syst Man Cybern Part B (Cybern) 35(6):1179–1192.
Palla, G, Derényi I, Farkas I, Vicsek T (2005) Uncovering the Overlapping Community Structure of Complex Networks in Nature and Society. Nature 435:814–818.
Park, K, Yilmaz A (2010) A Social Network Analysis Approach to Analyze Road Networks In: Proceedings of the ASPRS Annual Conference 2010.
Pfeiffer, JJ, Neville J (2011) Methods to determine node centrality and clustering in graphs with uncertain structure In: Proceedings of the Fifth International Conference on Weblogs and Social Media, 590–593. The AAAI Press.
Piraveenan, M, Prokopenko M, Hossain L (2013) Percolation centrality: Quantifying graph-theoretic impact of nodes during percolation in networks. PLoS ONE 8(1):e53095. https://doi.org/10.1371/journal.pone.0053095.
Potamias, M, Bonchi F, Gionis A, Kollios G (2010) K-nearest neighbors in uncertain graphs. Proc VLDB Endowment 3(1-2):997–1008.
Puerto, J, Ricca F, Scozzari A (2014) Unreliable point facility location problems on networks. Discret Appl Math 166:188–203.
Rezvanian, A, Meybodi MR (2016) Stochastic graph as a model for social networks. Comput Hum Behav 64(C):621–640.
Rijsbergen, CJV (1979) Information Retrieval, 2nd edn. Butterworth-Heinemann, Newton.
Seidman, SB (1983) Network structure and minimum degree. Soc Netw 5(3):269–287.
Tabata, K, Nakamura A, Kudo M (2017) An efficient approximate algorithm for the 1-median problem on a graph. IEICE Trans Inf Syst E100.D(5):994–1002. https://doi.org/10.1587/transinf.2016EDP7398.
Vahidipour, SM, Meybodi MR, Esnaashari M (2017) Finding the shortest path in stochastic graphs using learning automata and adaptive stochastic petri nets. Int J Uncertain Fuzziness Knowl-Based Syst 25(3):427–455.
We thank Prof. Seiya Okubo of the University of Shizuoka, Shizuoka, Japan, for supporting computation environments.
All authors are grateful for the financial support from JSPS Grant-in-Aid for Scientific Research (No.17H01826).
School of Computer Science, Tokyo University of Technology, 1404-1 Katakuramachi, Hachioji city, Tokyo, 192-0982, Japan
Takayasu Fushimi
Faculty of Science, Kanagawa University, 2946 Tsuchiya, Hiratsuka city, Kanagawa, 259-1293, Japan
Kazumi Saito
Center for Advanced Intelligence Project, RIKEN, 1-4-1 Nihonbashi, Chuo-ku, Tokyo, 103-0027, Japan
School of Management and Information, University of Shizuoka, 52-1 Yada, Suruga-ku, Shizuoka city, Shizuoka, 422-8526, Japan
Tetsuo Ikeda
Faculty of Systems Engineering, Wakayama University, 930 Sakaedani, Wakayama city, Wakayama, 640-8510, Japan
Kazuhiro Kazama
TF performed the research and wrote the article. KS contributed to designing the proposed method. TI contributed to the preparation of the experimental data and to part of the experimental evaluations. KK contributed to the survey of related work and to part of the experimental evaluations. All authors read and approved the final manuscript.
Correspondence to Takayasu Fushimi.
All authors declare no financial and non-financial competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Fushimi, T., Saito, K., Ikeda, T. et al. Estimating node connectedness in spatial network under stochastic link disconnection based on efficient sampling. Appl Netw Sci 4, 66 (2019). https://doi.org/10.1007/s41109-019-0187-3
Spatial network
Facility location problem
Connected component decomposition
Graph sampling
Supersymmetric world from a conservative viewpoint
IQ in different fields
Steve Hsu has found a very interesting table with the average GRE scores computed for various concentrations. He has also defined a linear map translating the average V-Q-A scores into a more familiar IQ scale. This convention looks natural to me and I will follow his scale, although it is not guaranteed to be calibrated consistently with other IQ measurements.
Disclaimer: these cold numbers expressing typical IQ for different occupations must be interpreted very carefully. They don't necessarily imply anything. The outcome depends on the character of the question, discrimination, etc. Despite different numbers, all of us are equal. Blah blah blah. And so on.
The results are:
130.0 Physics
129.0 Mathematics
128.5 Computer Science
128.0 Economics
127.5 Chemical engineering
127.0 Material science
126.0 Electrical engineering
125.5 Mechanical engineering
125.0 Philosophy
124.0 Chemistry
123.0 Earth sciences
122.0 Industrial engineering
122.0 Civil engineering
121.5 Biology
120.1 English/literature
120.0 Religion/theology
119.8 Political science
119.7 History
118.0 Art history
117.7 Anthropology/archeology
116.5 Architecture
116.0 Business
115.0 Sociology
114.0 Psychology
114.0 Medicine
112.0 Communication
109.0 Education
106.0 Public administration
If you trust these numbers, one of the conclusions is that the economists are the brightest among the social scientists (rank 4), followed only by the philosophers (rank 9). The philosophers are still brighter than political scientists (rank 17), who are smarter than the sociologists (rank 23). This list may confirm virtually all of your preconceptions about all these fields; at least it did in my case. The only exception was medicine, which I expected to appear in the upper half.
Let me remind the dear reader that the IQ is normalized in such a way that all the people in the world are distributed along a Gaussian (normal) distribution with the mean value IQ=100 and the standard deviation ΔIQ=15. So 34.1% of the people should be between 85 and 100, 34.1% of them should be between 100 and 115, while e.g. only 2.5% of the folks should be above 130, including those 0.15% of folks above 145, and 2.5% of them should be below 70, including 0.15% of folks below 55: the latter groups are called, in the technical terminology, morons, idiots, and imbeciles.
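For instance, these shares can be checked with a couple of lines of Python (using scipy; the 2.5% and 0.15% quoted above are the usual rounded two- and three-sigma values):

from scipy.stats import norm

iq = norm(loc=100, scale=15)
print(iq.cdf(100) - iq.cdf(85))   # ~0.341, share between 85 and 100
print(iq.sf(130))                 # ~0.023, share above 130
print(iq.sf(145))                 # ~0.0013, share above 145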
You may also want to study the table of average IQs in individual nations. As a Czech – a guy from a nation whose average IQ is just 98 – I may still use my personal IQ of 187 to safely predict that you are Japanese, i.e. a member of one of the 5 smartest nations whose average IQ is 105, and even play Kimigayo for you. ;-)
There is a whole IQ category on this blog with 30 articles which are dedicated to intelligence, especially the IQ differences between the two sexes.
The IQ of a field seems to be slightly negatively correlated with its hotness. Click the graph for a source with some extra information.
Luboš Motl v 2:19 AM
Atlanin Jul 11, 2006, 8:03:00 AM
Dear Luboš,
re: In "IQ in different fields" you made the following statement: "The only exception was medicine that I expected to appear in the upper half ( of the list of Mean IQs )." Average IQ was 114 for medicine fourth from the bottom only Communication, Education and Public administration were lower.
I am a physician and entered medical school (Ivy League) in the early 1960s. At the time I took the GREs (score 960 in Biology out of a 970 top score, and 800 verbal, and 800 math). The stellar GREs did not help me get into medical school because I had already been accepted, before taking the GREs, during the 2nd semester of my last year of college. There was no GRE exam for Medicine then or now to my knowledge. Prospective medical students in the 60s took the Medical Aptitude Test some time before applying to medical school, and that is still being done as far as I know. The Medical Aptitude Test has been dumbed down since I took it, as have the SAT exams. The Medical Aptitude Test was a darn hard test at that time. During my time in medical school (1960s) and the year after I graduated, there were three tests given nationally to medical students (and I suspect currently): 1) at the end of the second year, a basic science exam; 2) at the end of the fourth year, a clinical science exam; and 3) at the end of one's internship (or equivalent 1st-year residency), an applied clinical science exam. No exact grades were reported back to the student, only one's percentile rank, which made a great difference later in getting into a top residency program and was used to get a medical license to practice medicine in most states of the USA.
My hunch is that the Medicine GRE must be in reference to Nursing, Audiology and possibly Physical Therapy, not true MD students.
Check out this URL http://members.shaw.ca/delajara/GREIQ.html .
Here is Rodrigo de la Jara's information on "How to estimate your IQ based on your GRE or SAT scores." Linked on that page is http://members.shaw.ca/delajara/Occupations.html "Modern IQ ranges for various occupations." The MDs were the brightest on average of occupations but Natural Science included physical, life and math. There is much to mine relating to IQ on de la Jara's site.
Now another topic. A nephew of mine, who has been followed by the Johns Hopkins high-IQ study as he grew up, did not get into Harvard's incoming freshman class; he is on the waiting list but will never attend if eventually accepted. He will attend Colgate University, and it will be Harvard's loss IMHO. His SATs were 800 verbal, 800 math, and an A+ essay. He knows of two females and two blacks with much lower SATs who were accepted to Harvard and who have nowhere near his academic and extracurricular achievements, such as being high school newspaper editor in his junior year--it was a competitive position not restricted to seniors. This perceived insult by Harvard will bother him for years because of its injustice. I urge you to read the following essay: Down, Down, Down--Reflections On The Boy Crisis http://fredoneverything.net/FOE_Frame_Column.htm .
chevo Feb 4, 2007, 11:24:00 PM
Hmmm, I don't trust these numbers. I looked at the ETS stats for the GRE, and according to my calculations, physics is #1 but philosophy is #2 and mathematics comes in at #3.
I think Hsu's numbers are the sums of the scaled scores from the three subtests. Wouldn't it be more accurate to use deviational scores, since the three subtests of the GRE have different averages and SDs? Simply adding the 3 subtests gives the quantitatively strong disciplines a decided advantage in the rankings, because the average scaled score for the quantitative section is much higher than it is for the verbal. The verbal section also has a much smaller SD.
Philosophy majors have the highest verbal scores, the second highest quantitative scores among humanities and social science majors (just behind economics majors) and the 3rd highest analytical scores overall, behind physics and math majors.
On the newer GRE test with the analytical writing section, philosophy majors have the highest overall deviational GRE score of any major. This subsection gives phil majors an advantage and so probably should not be included in rankings. From just the verbal and quantitative sections, phil majors are still right behind physics majors.
Lumo Feb 6, 2007, 6:18:00 PM
Dear Chevo, trust whatever you want but these numbers are directly taken from this page. Philosophy is at rank 9, with 585,597,621 in V,Q,A GRE tests. I didn't invent the ranks, just translated them to expected IQs. Best, Lubos
PTET Sep 6, 2008, 11:19:00 AM
I'm amused to see lawyers don't appear on the table. Presumably their IQ's are too high for the scale ;>
OZ Sep 24, 2008, 4:37:00 AM
I suspect the engineers have their scaled scores biased downward due to their verbal skills. I know a few engineers with fantastic analytical/quantitative skills, but are lacking (somewhat) in written communication...
Anonymous Aug 17, 2009, 5:11:00 AM
Heh, that's true. I don't know where this data is coming from (frankly, I only want to prove you wrong, heh). The guy probably lumped all healthcare professionals, from medical assistants to doctors, into one category. I trust this paper more: http://www.ssc.wisc.edu/cde/cdewp/98-07.pdf (pay attention to figs 8, 10, 12)
johns2317 Oct 15, 2010, 6:58:00 PM
Enjoyed reading through comments and your post and found it very interesting. I was rather amused by all the comments from md's trying to convince themselves they had to be higher in ranking though they seem to be rather logical. Dr's must do very well in school and pre-med in a broad spectrum of mostly mundane studies. They learn and are great note takers and excellent at studying and repeating what they are taught. There's little or no room for deviating from the norms of what is known and expected. You would not expect someone who had a lot of insight to be that much of an automaton. All said professional schools are not created for the highly gifted with all their positive and negative qualities if you choose to refer to very high IQ's as gifted.
SKlussmann Oct 24, 2011, 12:46:00 AM
I often notice that a lot of people completely overlook one pretty obvious reason why students of certain disciplines, especially those called "hard sciences", often land at the top on those standardized tests on average: students of economics, physics, computer science etc. are more or less "training" for the math part of the GRE every day, just by doing their daily learning. What is required for the math part is basically their "bread and butter"; to advance in their field they have to train the math to succeed. Most of the social sciences, except for economics, just require you to learn the basics of statistics. So it's really not a surprise that certain fields do better than others, and that doesn't necessarily have anything to do with their IQ (btw, it's just a psychometric test, and is in fact changeable -> neuroplasticity). It would be interesting to investigate whether students of certain fields perform differently on certain standardized tests like the GRE math part before and after their undergraduate studies. Of course you have to take things like graduation rate into account. I am convinced that you would see a certain divergence of the math score when you compare social vs. hard sciences, which would be partly due to the effect of training. So what about the verbal score? I think that just those studying English or languages will benefit from their studies. And by the way, psychologists have found that the scope of your vocabulary is the best proxy of your "intelligence" or "IQ" (which, keep in mind, is changeable after all).
Kind regards and many greetings from Berlin, Germany
Stephen Luttrell Mar 23, 2012, 5:49:00 PM
Hmmm. I see that
FindRoot[Probability[x>=iq,x\[Distributed]NormalDistribution[100,15]]==1/CountryData["UnitedStates", "Population"],{iq,150,100,200}]
{iq -> 187.059}
Luboš Motl Mar 23, 2012, 6:34:00 PM
excellent calculation. Most other people know the number 187 as the IQ of Sheldon Cooper, but it was cleverly chosen to emulate a plausible IQ for the smartest American.
froginblender Apr 1, 2012, 9:31:00 PM
With an IQ of 187, Sheldon would handily qualify for membership not only in Mensa but also the Prometheus and even Mega Societies.
But what of people possessing even greater IQs? Supposedly the scale breaks down once you get past Sheldon's 187 and break the 200 points mark.
This is actually not true as I show below: If
$ \left(\sum_{k=1}^n a_k b_k \right)^2 $
$$ \frac{1}{\Bigl(\sqrt{\phi \sqrt{5}}-\phi\Bigr) e^{\frac{2}{5} \pi}} = 1+\frac{e^{-2\pi}}{1+\frac{e^{-4\pi}}{1+\frac{e^{-6\pi}}{1+\frac{e^{-8\pi}}{1+\ldots}}}} $$
and then by simple chromularinversion we find that all IQ > 200 wrap around the back and then continue, starting from n=1, a phenomenon known as Doubly Speshul Intelligence (DSI), which means --
What's that? Gibberish? Ah yes, dear Lubos, you would say that of course, as the benighted possessor of a singly special intelligence.
Well, don't trouble yourself too much and go back to playing with your twistors. In the meantime I will solve cancer and the P ≠ NP problem. Should be done by dinnertime tomorrow.
Signed, Eugene S
IQ 74 [DSI]
Member, DENSA (Doublyspeshul Education-Negative Slacker's Association)
P.S.: I have a simple C++ program that interactively demonstrates the above concept but the margin of this comment box is too narrow to hold all eight pages of it.
itsnobody Sep 11, 2012, 9:46:00 AM
What's the SD for the Czech Republic if your IQ is 187 and the average is 98? How could it be 15? Seems more like 50...
Does that mean there are only like 10 people in Czech Republic with an IQ above 172?
Luboš Motl Sep 11, 2012, 9:57:00 AM
Dear itsnobody, let's remove the personal dimension from this question.
Now, when it's removed, I must say that your reasoning is statistically faulty because it doesn't take the so-called look-elsewhere effect into account. If you find a subgroup - nation in this case - which contains an element with unreasonably extreme characteristics, it does not mean that you have demonstrated a contradiction or implausibility of the parameters of the distribution because there may exist many other subgroups that don't contain such an extreme element.
So it makes no sense to constrain your attention to the Czech Republic unless you have information about the IQ-187 people in other, comparable nations.
Just try an extreme version of your "localization" to see why it's wrong. In this apartment house, there may be 50 people, so even a person at IQ=145 is "almost impossible" to be found here - at a 90% confidence level or so. That doesn't mean that there's no person with IQ=145 or greater here.
itsnobody Oct 27, 2012, 11:38:00 PM
You didn't really answer the question, what's the SD for Czech Republic? Do you know what it is?
I didn't argue a contradiction; I simply asked whether that would mean there are only around 10 people in the Czech Republic with IQs above 172, since the Czech Republic's total population is 10,546,000.
Your apartment house analogy is flawed because it doesn't take into account the average IQ of the 50 people in the apartment house.
A better analogy would be 50 people in an apartment house, the average IQ is 98, and 1 of them has an IQ of 145, what's the SD within this sample of 50 people?
The SD for each nation cannot obviously be the same.
For those who don't understand how important knowing what the Standard Deviation is consider this:
An average IQ of 105 with an SD of 3 would mean there would be virtually no genius IQ people in this group (but the average IQ appears high)
An average IQ of 97 with an SD of 50 would mean there would be lots of geniuses within this group (even though the average is lower)
The Standard Deviation tells us how much variation there is from the average value.
So an average value without the SD wouldn't tell you much if you were trying to find out the percentage of geniuses in the group.
As for East Asians, their population size is above 1500 million, their average IQ is also high, but I don't know what the SD for their group is.
Some other questions:
- What type of IQ tests are used, are they all very similar in type all around the world (if not this would give inaccurate data)?
- What's the sampling error and standard deviation?
It's just like the saying goes "lies, damned lies, and statistics"
Luboš Motl Oct 28, 2012, 6:05:00 AM
Dear its nobody,
the IQ standard deviation in the Czech Republic is 12 IQ points, so the Czech nation is clearly more egalitarian than mankind as a whole, whose standard deviation is by definition 15. 187 is formally 7.42 standard deviations above the average IQ of 98.
But that's irrelevant information for questions about the tails, because once we admit that Czechia's IQ distribution deviates from the global distribution, we must also admit that it deviates from any normal distribution, so you can't calculate the probabilities just from the mean value and the standard deviation.
Even if you could, and a person with an IQ of 187 turned out to be very unlikely, it doesn't mean that it's impossible. The Czech Republic could very well have been "cherry-picked" as the country that just happens to harbor a person with an IQ equal to 187. A convincing "proof" that something doesn't seem right could only be obtained if you took the most inclusive population you may get - the whole mankind - and still calculated that the probability is insanely low. That would be an argument for the assertion that some of the assumptions are flawed.
Your comments about the calculation for apartments are clearly bogus - that's the fallacy I mentioned above, which you brought into ad absurdum dimensions, except that you apparently think your reasoning is valid.
If there is an apartment with 2 or 5 or even 10 people, it's clear that the number of folks in the apartment with IQ above some high value can't be accurately obtained just from some mean value and the standard deviation because the distribution obtained from 2, 5, or 10 people is highly non-Gaussian - it's the sum of 2, 5, or 10 delta-functions, in fact.
Rusty Longwood Sep 18, 2013, 4:03:00 AM
One big problem- doctors didn't take the GRE, they took the MCAT. So who's taking the GRE in medicine? Answer: slackers who couldn't do well enough on the MCAT to make it into a med school and are looking at lesser positions in nursing, physical therapy, lab technicians, etc.
David Bandel Sep 19, 2013, 6:08:00 AM
To say that the Czech Republic has a different standard deviation than the one used globally for IQ norming is illogical and, I believe, expresses on your part a lack of understanding of the psychometrics involved.
The standard deviation, like the mean, is arbitrary. It is defined to be 15. That is a normalization constant. If you are making the claim that your IQ is 187 on a s.d.=12/mean=100 scale, then you are claiming that your IQ is based on a sample from another planet where the population is roughly 50 times that of Earth's.
If you were to modify your claim to the statement that your IQ was 187 on a s.d.=15/mean=100 scale, then you are still making the claim that you are among the dozen or so highest IQs on the planet, which is absurd because it carries with it the implication that the test(s) involved have covered enough cases across a wide enough range to be statistically valid at the +/- 6 s.d. threshold.
I don't doubt your IQ based on the way you speak, though someone less knowledgeable on these matters would most likely immediately come to the conclusion that based on your flawed, illogical, and inefficient use of language, your IQ couldn't possibly be that high.
No. I doubt it on the basis of its statistical unlikelihood.
Others might dispute your claim on the basis of your failure to understand the statistics of a concept as simple as a standard deviation. And they would have a stronger argument than those pointing out your childish grasp of language. Yet still, it would not be a strong enough argument to disprove your claim outright.
But actually, the unlikeliness of it being true is enough.
Dear David, your comments reveal a profound misunderstanding of statistics.
The standard deviation for IQ is defined to be 15 but it's the standard deviation for the mankind. Within groups that are less variable and whose members are more alike, the standard deviation will obviously be smaller.
For example, among women, the standard deviation of IQ is about 10% smaller. This is actually by far the main reason for women's underrepresentation in elite mathematics and related fields. By your confusion about these basic facts, you look like you have just fallen from another galaxy.
The Ultimate Philosopher Sep 20, 2013, 1:38:00 AM
The usual division is into the physical sciences, the social sciences, and the humanities. Philosophers come out on top in the last third (of course).
Sherif Karama Sep 20, 2013, 3:19:00 AM
I believe that the medicine/IQ association provided here is misleading as it may appear to be in reference to medical doctors. It probably not only includes medical doctors but also individuals in medical-related fields. The mean IQ of medical doctors is about 125 if I am not mistaken.
The number of 125 provided above for medical doctors is under the assumption of population sd of 16, not 15 (it was an old study).
Socrates Nov 21, 2013, 8:07:00 PM
Philosophers are pretty smart for not being mathematically literate.
Smack Dat Class Gurl Mar 2, 2014, 10:51:00 PM
I knew my IQ was higher than 102. I am flying through biology with no problems understanding nature's basic processes (so far). Other students complained it was too hard lol. Turns out I am twice exceptional and my IQ cannot be quantified accurately, at least by that particular IQ test. So I am probably at least 120 to 130 (according to psychologists). Hooray!
Richard Nixon Mar 28, 2014, 10:36:00 PM
That doesn't mean you have an IQ higher than 102.
Pewpmaster69 Apr 8, 2014, 9:51:00 PM
Oh...my...god. Another guy on the internet one upping himself from the general population with benign and linear banter that reveals in and of itself the bitter grayness of incompetence. You primitive little insect, was the world unfair to you? Are you that far gone that you must attempt to fend us off with the blunt club of faulty analysis? I hope not. For as long as you struggle, the farther you are distanced from yourself courtesy of your feisty little ego. Tsk tsk tsk. Find a hobby. Join us. We don't bite. Embrace promises your emotional insurance not be taxed, and our wants not be leached. You are more than this. Leave your computer, and prove it. We all love you.
Anonymous Dec 3, 2014, 5:43:00 PM
IQ is a funny thing. Just because you have a higher than average intelligence, doesn't mean you pick a hard major. I have a friend who is smart who went into accounting, as he liked numbers but didn't want a hard occupation. He could have been a doctor, lawyer, engineer, or mathematician, but chose accounting. I knew engineering students in school who went into actuarial sciences and physical sciences (chemistry, etc) due to desire.
For me I chose engineering. I was one of the gifted ones in electrical engineering, and was in the top 3 in my class. I too, could have chosen any field based on my aptitude. I chose engineering due to maximum pay for a 4 year degree at an in-state school. I didn't desire medical due to my lack of interest in learning archaic terms for medical minutia. I also wanted to be done in 4 years, so lawyer was out.
I read the article, and know for a fact that IQ varies on 5-7 metrics, with few who are able to max out on all. People who are logical often lack communication skills. People who are socially skilled and have a great bedside manner can't do the memorization and recall required to be a doctor. Street con artists have IQs for social and personal skills that are rivaled by social climbers and politicians, but are difficult to measure by a test.
In the end, you rarely get a car mechanic, surgeon, lawyer, doctor, engineer, social butterfly, counselor/physicist. Why? It's due to forced differentiation by society and schools. It would take a lifetime to learn all things and master all things, but it is maximally rewarding to pick something that you find rewarding, then use it to make your way in life.
IQ is meaningless if you aren't happy. For that reason, geniuses will be teachers at just above minimum wage, and the not-so-smart who are pretty and witty will climb social ladders and be managers and politicians. Some will choose power, others wealth, others happiness.
C.5 Musashi-1 is a master regulator of aberrant translation in MYC-amplified Group 3 medulloblastoma
MM Kameda-Smith, H Zhu, E Luo, C Venugopal, K Brown, BA Yee, S Xing, F Tan, D Bakhshinyan, AA Adile, M Subapanditha, D Picard, J Moffat, A Fleming, K Hope, J Provias, M Remke, Y Lu, J Reimand, R Wechsler-Reya, G Yeo, SK Singh
Journal: Canadian Journal of Neurological Sciences / Volume 48 / Issue s3 / November 2021
Published online by Cambridge University Press: 05 January 2022, p. S19
Background: Medulloblastoma (MB) is the most common solid malignant pediatric brain neoplasm. Group 3 (G3) MB, particularly MYC amplified G3 MB, is the most aggressive subgroup with the highest frequency of children presenting with metastatic disease, and is associated with a poor prognosis. To further our understanding of the role of MSI1 in MYC amplified G3 MB, we performed an unbiased integrative analysis of eCLIP binding sites, with changes observed at the transcriptome, the translatome, and the proteome after shMSI1 inhibition. Methods: Primary human pediatric MBs, SU_MB002 and HD-MB03 were kind gifts from Dr. Yoon-Jae Cho (Harvard, MS) and Dr. Till Milde (Heidelberg) and cultured for in vitro and in vivo experiments. eCLIP, RNA-seq, Polysome-seq, and TMT-MS were completed as previously described. Results: MSI1 is overexpressed in G3 MB. shRNA Msi1 interference resulted in a reduction in tumour burden conferring a survival advantage to mice injected with shMSI1 G3MB cells. Robust ranked multiomic analysis (RRA) identified an unconventional gene set directly perturbed by MSI1 in G3 MB. Conclusions: Our robust unbiased integrative analysis revealed a distinct role for MSI1 in the maintenance of the stem cell state in G3 MB through post-transcriptional modification of multiple pathways including identification of unconventional targets such as HIPK1.
The Effect of Self-Paced Exercise Intensity and Cardiorespiratory Fitness on Frontal Grey Matter Volume in Cognitively Normal Older Adults: A Randomised Controlled Trial
Natalie J. Frost, Michael Weinborn, Gilles E. Gignac, Ying Xia, Vincent Doré, Stephanie R. Rainey-Smith, Shaun Markovic, Nicole Gordon, Hamid R. Sohrabi, Simon M. Laws, Ralph N. Martins, Jeremiah J. Peiffer, Belinda M. Brown
Journal: Journal of the International Neuropsychological Society , First View
Published online by Cambridge University Press: 22 September 2021, pp. 1-14
Exercise has been found to be important in maintaining neurocognitive health. However, the effect of exercise intensity level remains relatively underexplored. Thus, to test the hypothesis that self-paced high-intensity exercise and cardiorespiratory fitness (peak aerobic capacity; VO2peak) increase grey matter (GM) volume, we examined the effect of a 6-month exercise intervention on frontal lobe GM regions that support the executive functions in older adults.
Ninety-eight cognitively normal participants (age = 69.06 ± 5.2 years; n = 54 female) were randomised into either a self-paced high- or moderate-intensity cycle-based exercise intervention group, or a no-intervention control group. Participants underwent magnetic resonance imaging and fitness assessment pre-intervention, immediately post-intervention, and 12-months post-intervention.
The intervention was found to increase fitness in the exercise groups, as compared with the control group (F = 9.88, p < 0.001). Changes in pre-to-post-intervention fitness were associated with increased volume in the right frontal lobe (β = 0.29, p = 0.036, r = 0.27), right supplementary motor area (β = 0.30, p = 0.031, r = 0.29), and both right (β = 0.32, p = 0.034, r = 0.30) and left gyrus rectus (β = 0.30, p = 0.037, r = 0.29) for intervention participants, but not controls. No differences in volume were observed across groups.
At an aggregate level, six months of self-paced high- or moderate-intensity exercise did not increase frontal GM volume. However, experimentally induced changes in individual cardiorespiratory fitness were positively associated with frontal GM volume in our sample of older adults. These results provide evidence of individual variability in the effect of exercise-induced fitness on brain structure.
The GLEAM 200-MHz local radio luminosity function for AGN and star-forming galaxies
T. M. O. Franzen, N. Seymour, E. M. Sadler, T. Mauch, S. V. White, C. A. Jackson, R. Chhetri, B. Quici, M. E. Bell, J. R. Callingham, K. S. Dwarakanath, B. For, B. M. Gaensler, P. J. Hancock, L. Hindson, N. Hurley-Walker, M. Johnston-Hollitt, A. D. Kapińska, E. Lenc, B. McKinley, J. Morgan, A. R. Offringa, P. Procopio, L. Staveley-Smith, R. B. Wayth, C. Wu, Q. Zheng
Published online by Cambridge University Press: 06 September 2021, e041
The GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) is a radio continuum survey at 76–227 MHz of the entire southern sky (Declination $<\!{+}30^{\circ}$) with an angular resolution of ${\approx}2$ arcmin. In this paper, we combine GLEAM data with optical spectroscopy from the 6dF Galaxy Survey to construct a sample of 1 590 local (median $z \approx 0.064$) radio sources with $S_{200\,\mathrm{MHz}} > 55$ mJy across an area of ${\approx}16\,700\,\mathrm{deg}^{2}$. From the optical spectra, we identify the dominant physical process responsible for the radio emission from each galaxy: 73% are fuelled by an active galactic nucleus (AGN) and 27% by star formation. We present the local radio luminosity function for AGN and star-forming (SF) galaxies at 200 MHz and characterise the typical radio spectra of these two populations between 76 MHz and ${\sim}1$ GHz. For the AGN, the median spectral index between 200 MHz and ${\sim}1$ GHz, $\alpha_{\mathrm{high}}$, is $-0.600 \pm 0.010$ (where $S \propto \nu^{\alpha}$) and the median spectral index within the GLEAM band, $\alpha_{\mathrm{low}}$, is $-0.704 \pm 0.011$. For the SF galaxies, the median value of $\alpha_{\mathrm{high}}$ is $-0.650 \pm 0.010$ and the median value of $\alpha_{\mathrm{low}}$ is $-0.596 \pm 0.015$. Among the AGN population, flat-spectrum sources are more common at lower radio luminosity, suggesting the existence of a significant population of weak radio AGN that remain core-dominated even at low frequencies. However, around 4% of local radio AGN have ultra-steep radio spectra at low frequencies ($\alpha_{\mathrm{low}} < -1.2$). These ultra-steep-spectrum sources span a wide range in radio luminosity, and further work is needed to clarify their nature.
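The spectral-index convention used above, $S \propto \nu^{\alpha}$, is easy to make concrete. The snippet below is a hypothetical worked example (the flux values are invented, not GLEAM measurements): given flux densities at two frequencies, it recovers $\alpha$.

```python
import numpy as np

# Invented flux densities at 200 MHz and ~1 GHz (mJy); not GLEAM data.
S_200, S_1000 = 55.0, 21.0
# From S ∝ ν^α:  α = ln(S2/S1) / ln(ν2/ν1)
alpha = np.log(S_1000 / S_200) / np.log(1000.0 / 200.0)
print(f"alpha = {alpha:.3f}")  # ≈ -0.598, close to the AGN median quoted above
```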
Cardiac echocardiogram findings of severe acute respiratory syndrome coronavirus-2-associated multi-system inflammatory syndrome in children – CORRIGENDUM
Ashraf S. Harahsheh, Anita Krishnan, Roberta L. DeBiasi, Laura J. Olivieri, Christopher Spurney, Mary T. Donofrio, Russell R. Cross, Matthew P. Sharron, Lowell H. Frank, Charles I. Berul, Adam Christopher, Niti Dham, Hemalatha Srinivasalu, Tova Ronis, Karen L. Smith, Jaclyn N. Kline, Kavita Parikh, David Wessel, James E. Bost, Sarah Litt, Ashley Austin, Jing Zhang, Craig A. Sable
Journal: Cardiology in the Young , First View
Published online by Cambridge University Press: 31 August 2021, p. 1
Characterisation of age and polarity at onset in bipolar disorder
Janos L. Kalman, Loes M. Olde Loohuis, Annabel Vreeker, Andrew McQuillin, Eli A. Stahl, Douglas Ruderfer, Maria Grigoroiu-Serbanescu, Georgia Panagiotaropoulou, Stephan Ripke, Tim B. Bigdeli, Frederike Stein, Tina Meller, Susanne Meinert, Helena Pelin, Fabian Streit, Sergi Papiol, Mark J. Adams, Rolf Adolfsson, Kristina Adorjan, Ingrid Agartz, Sofie R. Aminoff, Heike Anderson-Schmidt, Ole A. Andreassen, Raffaella Ardau, Jean-Michel Aubry, Ceylan Balaban, Nicholas Bass, Bernhard T. Baune, Frank Bellivier, Antoni Benabarre, Susanne Bengesser, Wade H Berrettini, Marco P. Boks, Evelyn J. Bromet, Katharina Brosch, Monika Budde, William Byerley, Pablo Cervantes, Catina Chillotti, Sven Cichon, Scott R. Clark, Ashley L. Comes, Aiden Corvin, William Coryell, Nick Craddock, David W. Craig, Paul E. Croarkin, Cristiana Cruceanu, Piotr M. Czerski, Nina Dalkner, Udo Dannlowski, Franziska Degenhardt, Maria Del Zompo, J. Raymond DePaulo, Srdjan Djurovic, Howard J. Edenberg, Mariam Al Eissa, Torbjørn Elvsåshagen, Bruno Etain, Ayman H. Fanous, Frederike Fellendorf, Alessia Fiorentino, Andreas J. Forstner, Mark A. Frye, Janice M. Fullerton, Katrin Gade, Julie Garnham, Elliot Gershon, Michael Gill, Fernando S. Goes, Katherine Gordon-Smith, Paul Grof, Jose Guzman-Parra, Tim Hahn, Roland Hasler, Maria Heilbronner, Urs Heilbronner, Stephane Jamain, Esther Jimenez, Ian Jones, Lisa Jones, Lina Jonsson, Rene S. Kahn, John R. Kelsoe, James L. Kennedy, Tilo Kircher, George Kirov, Sarah Kittel-Schneider, Farah Klöhn-Saghatolislam, James A. Knowles, Thorsten M. Kranz, Trine Vik Lagerberg, Mikael Landen, William B. Lawson, Marion Leboyer, Qingqin S. Li, Mario Maj, Dolores Malaspina, Mirko Manchia, Fermin Mayoral, Susan L. McElroy, Melvin G. McInnis, Andrew M. McIntosh, Helena Medeiros, Ingrid Melle, Vihra Milanova, Philip B. Mitchell, Palmiero Monteleone, Alessio Maria Monteleone, Markus M. Nöthen, Tomas Novak, John I. Nurnberger, Niamh O'Brien, Kevin S. O'Connell, Claire O'Donovan, Michael C. O'Donovan, Nils Opel, Abigail Ortiz, Michael J. Owen, Erik Pålsson, Carlos Pato, Michele T. Pato, Joanna Pawlak, Julia-Katharina Pfarr, Claudia Pisanu, James B. Potash, Mark H Rapaport, Daniela Reich-Erkelenz, Andreas Reif, Eva Reininghaus, Jonathan Repple, Hélène Richard-Lepouriel, Marcella Rietschel, Kai Ringwald, Gloria Roberts, Guy Rouleau, Sabrina Schaupp, William A Scheftner, Simon Schmitt, Peter R. Schofield, K. Oliver Schubert, Eva C. Schulte, Barbara Schweizer, Fanny Senner, Giovanni Severino, Sally Sharp, Claire Slaney, Olav B. Smeland, Janet L. Sobell, Alessio Squassina, Pavla Stopkova, John Strauss, Alfonso Tortorella, Gustavo Turecki, Joanna Twarowska-Hauser, Marin Veldic, Eduard Vieta, John B. Vincent, Wei Xu, Clement C. Zai, Peter P. Zandi, Psychiatric Genomics Consortium (PGC) Bipolar Disorder Working Group, International Consortium on Lithium Genetics (ConLiGen), Colombia-US Cross Disorder Collaboration in Psychiatric Genetics, Arianna Di Florio, Jordan W. Smoller, Joanna M. Biernacka, Francis J. McMahon, Martin Alda, Bertram Müller-Myhsok, Nikolaos Koutsouleris, Peter Falkai, Nelson B. Freimer, Till F.M. Andlauer, Thomas G. Schulze, Roel A. Ophoff
Journal: The British Journal of Psychiatry / Volume 219 / Issue 6 / December 2021
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
Cardiac echocardiogram findings of severe acute respiratory syndrome coronavirus-2-associated multi-system inflammatory syndrome in children
Published online by Cambridge University Press: 05 August 2021, pp. 1-9
A novel paediatric disease, multi-system inflammatory syndrome in children, has emerged during the 2019 coronavirus disease pandemic.
To describe the short-term evolution of cardiac complications and associated risk factors in patients with multi-system inflammatory syndrome in children.
Retrospective single-centre study of confirmed multi-system inflammatory syndrome in children treated from 29 March, 2020 to 1 September, 2020. Cardiac complications during the acute phase were defined as decreased systolic function, coronary artery abnormalities, pericardial effusion, or mitral and/or tricuspid valve regurgitation. Patients with or without cardiac complications were compared with chi-square, Fisher's exact, and Wilcoxon rank sum.
Thirty-nine children with median (interquartile range) age 7.8 (3.6–12.7) years were included. Nineteen (49%) patients developed cardiac complications including systolic dysfunction (33%), valvular regurgitation (31%), coronary artery abnormalities (18%), and pericardial effusion (5%). At the time of the most recent follow-up, at a median (interquartile range) of 49 (26–61) days, cardiac complications resolved in 16/19 (84%) patients. Two patients had persistent mild systolic dysfunction and one patient had persistent coronary artery abnormality. Children with cardiac complications were more likely to have higher N-terminal B-type natriuretic peptide (p = 0.01), higher white blood cell count (p = 0.01), higher neutrophil count (p = 0.02), severe lymphopenia (p = 0.05), use of milrinone (p = 0.03), and intensive care requirement (p = 0.04).
Patients with multi-system inflammatory syndrome in children had a high rate of cardiac complications in the acute phase, with associated inflammatory markers. Although cardiac complications resolved in 84% of patients, further long-term studies are needed to assess if the cardiac abnormalities (transient or persistent) are associated with major cardiac events.
Systematic and other reviews: criteria and complexities
Robert T Sataloff, Matthew L Bush, Rakesh Chandra, Douglas Chepeha, Brian Rotenberg, Edward W Fisher, David Goldenberg, Ehab Y Hanna, Joseph E Kerschner, Dennis H Kraus, John H Krouse, Daqing Li, Michael Link, Lawrence R Lustig, Samuel H Selesnick, Raj Sindwani, Richard J Smith, James Tysome, Peter C Weber, D Bradley Welling
Journal: The Journal of Laryngology & Otology / Volume 135 / Issue 7 / July 2021
Print publication: July 2021
Canadian Stroke Best Practice Recommendations: Secondary Prevention of Stroke Update 2020
David J. Gladstone, M. Patrice Lindsay, James Douketis, Eric E. Smith, Dar Dowlatshahi, Theodore Wein, Aline Bourgoin, Jafna Cox, John B. Falconer, Brett R. Graham, Marilyn Labrie, Lena McDonald, Jennifer Mandzia, Daniel Ngui, Paul Pageau, Amanda Rodgerson, William Semchuk, Tammy Tebbutt, Carmen Tuchak, Stephen van Gaal, Karina Villaluna, Norine Foley, Shelagh Coutts, Anita Mountain, Gord Gubitz, Jacob A Udell, Rebecca McGuff, Alexandre Y. Poppe,
Journal: Canadian Journal of Neurological Sciences , First View
Published online by Cambridge University Press: 18 June 2021, pp. 1-23
The 2020 update of the Canadian Stroke Best Practice Recommendations (CSBPR) for the Secondary Prevention of Stroke includes current evidence-based recommendations and expert opinions intended for use by clinicians across a broad range of settings. They provide guidance for the prevention of ischemic stroke recurrence through the identification and management of modifiable vascular risk factors. Recommendations address triage, diagnostic testing, lifestyle behaviors, vaping, hypertension, hyperlipidemia, diabetes, atrial fibrillation, other cardiac conditions, antiplatelet and anticoagulant therapies, and carotid and vertebral artery disease. This update of the previous 2017 guideline contains several new or revised recommendations. Recommendations regarding triage and initial assessment of acute transient ischemic attack (TIA) and minor stroke have been simplified, and selected aspects of the etiological stroke workup are revised. Updated treatment recommendations based on new evidence have been made for dual antiplatelet therapy for TIA and minor stroke; anticoagulant therapy for atrial fibrillation; embolic strokes of undetermined source; low-density lipoprotein lowering; hypertriglyceridemia; diabetes treatment; and patent foramen ovale management. A new section has been added to provide practical guidance regarding temporary interruption of antithrombotic therapy for surgical procedures. Cancer-associated ischemic stroke is addressed. A section on virtual care delivery of secondary stroke prevention services is included to highlight a shifting paradigm of care delivery made more urgent by the global pandemic. In addition, where appropriate, sex differences as they pertain to treatments have been addressed. The CSBPR include supporting materials such as implementation resources to facilitate the adoption of evidence into practice and performance measures to enable monitoring of uptake and effectiveness of recommendations.
Australian square kilometre array pathfinder: I. system description
A. W. Hotan, J. D. Bunton, A. P. Chippendale, M. Whiting, J. Tuthill, V. A. Moss, D. McConnell, S. W. Amy, M. T. Huynh, J. R. Allison, C. S. Anderson, K. W. Bannister, E. Bastholm, R. Beresford, D. C.-J. Bock, R. Bolton, J. M. Chapman, K. Chow, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, I. J. Feain, T. M. O. Franzen, D. George, N. Gupta, G. A. Hampson, L. Harvey-Smith, D. B. Hayman, I. Heywood, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, S. Johnston, M. Kesteven, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, E. Lenc, E. S. Lensson, S. Mackay, E. K. Mahony, N. M. McClure-Griffiths, R. McConigley, P. Mirtschin, A. K. Ng, R. P. Norris, S. E. Pearce, C. Phillips, M. A. Pilawa, W. Raja, J. E. Reynolds, P. Roberts, D. N. Roxby, E. M. Sadler, M. Shields, A. E. T. Schinckel, P. Serra, R. D. Shaw, T. Sweetnam, E. R. Troup, A. Tzioumis, M. A. Voronkov, T. Westmeier
Published online by Cambridge University Press: 05 March 2021, e009
In this paper, we describe the system design and capabilities of the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope at the conclusion of its construction project and commencement of science operations. ASKAP is one of the first radio telescopes to deploy phased array feed (PAF) technology on a large scale, giving it an instantaneous field of view that covers $31\,\textrm{deg}^{2}$ at $800\,\textrm{MHz}$. As a two-dimensional array of 36 $\times$ 12 m antennas, with baselines ranging from 22 m to 6 km, ASKAP also has excellent snapshot imaging capability and 10 arcsec resolution. This, combined with 288 MHz of instantaneous bandwidth and a unique third axis of rotation on each antenna, gives ASKAP the capability to create high dynamic range images of large sky areas very quickly. It is an excellent telescope for surveys between 700 and $1800\,\textrm{MHz}$ and is expected to facilitate great advances in our understanding of galaxy formation, cosmology, and radio transients while opening new parameter space for discovery of the unknown.
The Qualitative Transparency Deliberations: Insights and Implications
Alan M. Jacobs, Tim Büthe, Ana Arjona, Leonardo R. Arriola, Eva Bellin, Andrew Bennett, Lisa Björkman, Erik Bleich, Zachary Elkins, Tasha Fairfield, Nikhar Gaikwad, Sheena Chestnut Greitens, Mary Hawkesworth, Veronica Herrera, Yoshiko M. Herrera, Kimberley S. Johnson, Ekrem Karakoç, Kendra Koivu, Marcus Kreuzer, Milli Lake, Timothy W. Luke, Lauren M. MacLean, Samantha Majic, Rahsaan Maxwell, Zachariah Mampilly, Robert Mickey, Kimberly J. Morgan, Sarah E. Parkinson, Craig Parsons, Wendy Pearlman, Mark A. Pollack, Elliot Posner, Rachel Beatty Riedl, Edward Schatz, Carsten Q. Schneider, Jillian Schwedler, Anastasia Shesterinina, Erica S. Simmons, Diane Singerman, Hillel David Soifer, Nicholas Rush Smith, Scott Spitzer, Jonas Tallberg, Susan Thomson, Antonio Y. Vázquez-Arroyo, Barbara Vis, Lisa Wedeen, Juliet A. Williams, Elisabeth Jean Wood, Deborah J. Yashar
Journal: Perspectives on Politics / Volume 19 / Issue 1 / March 2021
Published online by Cambridge University Press: 06 January 2021, pp. 171-208
Print publication: March 2021
In recent years, a variety of efforts have been made in political science to enable, encourage, or require scholars to be more open and explicit about the bases of their empirical claims and, in turn, make those claims more readily evaluable by others. While qualitative scholars have long taken an interest in making their research open, reflexive, and systematic, the recent push for overarching transparency norms and requirements has provoked serious concern within qualitative research communities and raised fundamental questions about the meaning, value, costs, and intellectual relevance of transparency for qualitative inquiry. In this Perspectives Reflection, we crystallize the central findings of a three-year deliberative process—the Qualitative Transparency Deliberations (QTD)—involving hundreds of political scientists in a broad discussion of these issues. Following an overview of the process and the key insights that emerged, we present summaries of the QTD Working Groups' final reports. Drawing on a series of public, online conversations that unfolded at www.qualtd.net, the reports unpack transparency's promise, practicalities, risks, and limitations in relation to different qualitative methodologies, forms of evidence, and research contexts. Taken as a whole, these reports—the full versions of which can be found in the Supplementary Materials—offer practical guidance to scholars designing and implementing qualitative research, and to editors, reviewers, and funders seeking to develop criteria of evaluation that are appropriate—as understood by relevant research communities—to the forms of inquiry being assessed. We dedicate this Reflection to the memory of our coauthor and QTD working group leader Kendra Koivu.
A history of high-power laser research and development in the United Kingdom
60th Celebration of First Laser
Colin N. Danson, Malcolm White, John R. M. Barr, Thomas Bett, Peter Blyth, David Bowley, Ceri Brenner, Robert J. Collins, Neal Croxford, A. E. Bucker Dangor, Laurence Devereux, Peter E. Dyer, Anthony Dymoke-Bradshaw, Christopher B. Edwards, Paul Ewart, Allister I. Ferguson, John M. Girkin, Denis R. Hall, David C. Hanna, Wayne Harris, David I. Hillier, Christopher J. Hooker, Simon M. Hooker, Nicholas Hopps, Janet Hull, David Hunt, Dino A. Jaroszynski, Mark Kempenaars, Helmut Kessler, Sir Peter L. Knight, Steve Knight, Adrian Knowles, Ciaran L. S. Lewis, Ken S. Lipton, Abby Littlechild, John Littlechild, Peter Maggs, Graeme P. A. Malcolm, OBE, Stuart P. D. Mangles, William Martin, Paul McKenna, Richard O. Moore, Clive Morrison, Zulfikar Najmudin, David Neely, Geoff H. C. New, Michael J. Norman, Ted Paine, Anthony W. Parker, Rory R. Penman, Geoff J. Pert, Chris Pietraszewski, Andrew Randewich, Nadeem H. Rizvi, Nigel Seddon, MBE, Zheng-Ming Sheng, David Slater, Roland A. Smith, Christopher Spindloe, Roy Taylor, Gary Thomas, John W. G. Tisch, Justin S. Wark, Colin Webb, S. Mark Wiggins, Dave Willford, Trevor Winstone
Published online by Cambridge University Press: 27 April 2021, e18
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
MWA tied-array processing III: Microsecond time resolution via a polyphase synthesis filter
S. J. McSweeney, S. M. Ord, D. Kaur, N. D. R. Bhat, B. W. Meyers, S. E. Tremblay, J. Jones, B. Crosse, K. R. Smith
Published online by Cambridge University Press: 24 August 2020, e034
A new high time resolution observing mode for the Murchison Widefield Array (MWA) is described, enabling full polarimetric observations with up to $30.72$ MHz of bandwidth and a time resolution of ${\sim}0.8\,\upmu$s. This mode makes use of a polyphase synthesis filter to 'undo' the polyphase analysis filter stage of the standard MWA's Voltage Capture System observing mode. Sources of potential error in the reconstruction of the high time resolution data are identified and quantified, with the $S/N$ loss induced by the back-to-back system not exceeding $-0.65$ dB for typical noise-dominated samples. The system is further verified by observing three pulsars with known structure on microsecond timescales.
A period of 10 weeks of increased protein consumption does not alter faecal microbiota or volatile metabolites in healthy older men: a randomised controlled trial
S. M. Mitchell, E. J. McKenzie, C. J. Mitchell, A. M. Milan, N. Zeng, R. F. D'Souza, F. Ramzan, P. Sharma, E. Rettedal, S. O. Knowles, N. C. Roy, A. Sjödin, K.-H. Wagner, J. M. O'Sullivan, D. Cameron-Smith
Journal: Journal of Nutritional Science / Volume 9 / 2020
Published online by Cambridge University Press: 03 July 2020, e25
Diet has a major influence on the composition and metabolic output of the gut microbiome. Higher-protein diets are often recommended for older consumers; however, the effect of high-protein diets on the gut microbiota and faecal volatile organic compounds (VOC) of elderly participants is unknown. The purpose of the study was to establish if the faecal microbiota composition and VOC in older men are different after a diet containing the recommended dietary intake (RDA) of protein compared with a diet containing twice the RDA (2RDA). Healthy males (74⋅2 (sd 3⋅6) years; n 28) were randomised to consume the RDA of protein (0⋅8 g protein/kg body weight per d) or 2RDA, for 10 weeks. Dietary protein was provided via whole foods rather than supplementation or fortification. The diets were matched for dietary fibre from fruit and vegetables. Faecal samples were collected pre- and post-intervention for microbiota profiling by 16S ribosomal RNA amplicon sequencing and VOC analysis by head space/solid-phase microextraction/GC-MS. After correcting for multiple comparisons, no significant differences in the abundance of faecal microbiota or VOC associated with protein fermentation were evident between the RDA and 2RDA diets. Therefore, in the present study, a twofold difference in dietary protein intake did not alter gut microbiota or VOC indicative of altered protein fermentation.
Examining pathways between genetic liability for schizophrenia and patterns of tobacco and cannabis use in adolescence
Hannah J. Jones, Gemma Hammerton, Tayla McCloud, Lindsey A. Hines, Caroline Wright, Suzanne H. Gage, Peter Holmans, Peter B Jones, George Davey Smith, David E. J. Linden, Michael C. O'Donovan, Michael J. Owen, James T. Walters, Marcus R. Munafò, Jon Heron, Stanley Zammit
Journal: Psychological Medicine / Volume 52 / Issue 1 / January 2022
Published online by Cambridge University Press: 09 June 2020, pp. 132-139
Print publication: January 2022
It is not clear to what extent associations between schizophrenia, cannabis use and cigarette use are due to a shared genetic etiology. We, therefore, examined whether schizophrenia genetic risk associates with longitudinal patterns of cigarette and cannabis use in adolescence and mediating pathways for any association to inform potential reduction strategies.
Associations between schizophrenia polygenic scores and longitudinal latent classes of cigarette and cannabis use from ages 14 to 19 years were investigated in up to 3925 individuals in the Avon Longitudinal Study of Parents and Children. Mediation models were estimated to assess the potential mediating effects of a range of cognitive, emotional, and behavioral phenotypes.
The schizophrenia polygenic score, based on single nucleotide polymorphisms meeting a training-set p threshold of 0.05, was associated with late-onset cannabis use (OR = 1.23; 95% CI = 1.08,1.41), but not with cigarette or early-onset cannabis use classes. This association was not mediated through lower IQ, victimization, emotional difficulties, antisocial behavior, impulsivity, or poorer social relationships during childhood. Sensitivity analyses adjusting for genetic liability to cannabis or cigarette use, using polygenic scores excluding the CHRNA5-A3-B4 gene cluster, or basing scores on a 0.5 training-set p threshold, provided results consistent with our main analyses.
Our study provides evidence that genetic risk for schizophrenia is associated with patterns of cannabis use during adolescence. Investigation of pathways other than the cognitive, emotional, and behavioral phenotypes examined here is required to identify modifiable targets to reduce the public health burden of cannabis use in the population.
The DemTect®: a very sensitive screening instrument for mild dementia
E. Kalbe, J. Kessler, R. Smith, R. Bullock, L. Fischer, P. Calabrese
Journal: European Psychiatry / Volume 17 / Issue S1 / May 2002
Published online by Cambridge University Press: 16 April 2020, p. 131s
The impact study - motivating a change in health behaviour
S. Smith, K. Greenwood, Z. Atakan, P. Sood, R. Ohlsen, E. Papanastasiou, A. Featherman, G. Todd, J. Eberhard, K. Ismail, R. Murray, F. Gaughran
Journal: European Psychiatry / Volume 26 / Issue S2 / March 2011
Published online by Cambridge University Press: 16 April 2020, p. 2151
IMPaCT is a five-year project funded by the Department of Health, UK. Running in the UK and now Sweden, the IMPaCT project aims to target the poor physical health and excessive substance use seen in people with severe mental illness (SMI). There is evidence that behavioural interventions may be associated with an improvement in physical health and substance use in this population.
IMPaCT is a randomised controlled trial of a health promotion intervention which consists of a manualised modular approach to working with people with severe mental illness to empower them to improve their physical health and substance use habits. It consists of The Manual, The Reference Guide and The Better Health Handbook which make up a therapy package to support clients to become healthier.
The therapy is provided by care coordinators (mental health practitioners) over a 6–9 month period and combines Cognitive Behavioural Therapy (CBT) with Motivational Interviewing (MI) principles. The aim is to work with clients to help them identify their own problem health behaviours, e.g. smoking, diet, exercise, drug and alcohol use. Realistic goals are set and revised with the client, and individual and group sessions are used to develop personal motivation to change. Information, workbooks and diaries are provided to record progress and give helpful hints, while meaningful alternative activities are introduced to replace problem health behaviours.
An artificial intelligence algorithm that identifies middle turbinate pneumatisation (concha bullosa) on sinus computed tomography scans
P Parmar, A-R Habib, D Mendis, A Daniel, M Duvnjak, J Ho, M Smith, D Roshan, E Wong, N Singh
Journal: The Journal of Laryngology & Otology / Volume 134 / Issue 4 / April 2020
Print publication: April 2020
Convolutional neural networks are a subclass of deep learning or artificial intelligence that are predominantly used for image analysis and classification. This proof-of-concept study attempts to train a convolutional neural network algorithm that can reliably determine if the middle turbinate is pneumatised (concha bullosa) on coronal sinus computed tomography images.
Consecutive high-resolution computed tomography scans of the paranasal sinuses were retrospectively collected between January 2016 and December 2018 at a tertiary rhinology hospital in Australia. The classification layer of Inception-V3 was retrained in Python using a transfer learning method to interpret the computed tomography images. Segmentation analysis was also performed in an attempt to increase diagnostic accuracy.
The trained convolutional neural network was found to have diagnostic accuracy of 81 per cent (95 per cent confidence interval: 73.0–89.0 per cent) with an area under the curve of 0.93.
A trained convolutional neural network algorithm appears to successfully identify pneumatisation of the middle turbinate with high accuracy. Further studies can be pursued to test its ability in other clinically important anatomical variants in otolaryngology and rhinology.
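To make the method summary above more tangible, here is a minimal sketch of the kind of transfer-learning setup described: retraining only the classification head of a pretrained Inception-V3 for a binary (concha bullosa versus not) classifier of CT slices. The input size, head architecture, and training-data handling are our assumptions, not details reported in the study.

```python
import tensorflow as tf

# Pretrained feature extractor; only the new classification head is trained.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze ImageNet weights (transfer learning)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # concha bullosa yes/no
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# train_ds / val_ds would be labelled datasets of coronal CT slices, e.g.
# built with tf.keras.utils.image_dataset_from_directory(...):
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```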
The effect of topical xylometazoline on Eustachian tube function
K S Joshi, V W Q Ho, M E Smith, J R Tysome
Journal: The Journal of Laryngology & Otology / Volume 134 / Issue 1 / January 2020
Topical nasal decongestants are frequently used as part of the medical management of symptoms related to Eustachian tube dysfunction.
This study aimed to assess the effect of topical xylometazoline hydrochloride sprayed in the anterior part of the nose on Eustachian tube active and passive opening in healthy ears.
Active and passive Eustachian tube function was assessed in healthy subjects before and after intranasal administration of xylometazoline spray, using tympanometry, video otoscopy, sonotubometry, tubo-tympano-aerodynamic-graphy and tubomanometry.
Resting middle-ear pressures were not significantly different following decongestant application. Eustachian tube opening rate was not significantly different following the intervention, as measured by all function tests used. Sonotubometry data showed a significant increase in the duration of Eustachian tube opening following decongestant application.
There remains little or no evidence that topical nasal decongestants improve Eustachian tube function. Sonotubometry findings do suggest that further investigation with an obstructive Eustachian tube dysfunction patient cohort is warranted.
Managing uncertainty: Principles for improved decision making
P. H. Kaye, A. D. Smith, M. J. Strudwick, M. White, C. E. L. Bird, G. Aggarwal, T. Durkin, T. A. G. Marcuson, T. R. Masters, N. Regan, S. Restrepo, J. R. Toller, S. White, R. Wilkinson
Journal: British Actuarial Journal / Volume 25 / 2020
Published online by Cambridge University Press: 22 June 2020, e15
Effective management of uncertainty can lead to better, more informed decisions. However, many decision makers and their advisers do not always face up to uncertainty, in part because there is little constructive guidance or tools available to help. This paper outlines six Uncertainty Principles to manage uncertainty.
Face up to uncertainty
Deconstruct the problem
Don't be fooled (un/intentional biases)
Models can be helpful, but also dangerous
Think about adaptability and resilience
Bring people with you
These were arrived at following extensive discussions and literature reviews over a 5-year period. While this is an important topic for actuaries, the intended audience is any decision maker or advisor in any sector (public or private).
Working memory training in typically developing children: A multilevel meta-analysis
Theoretical Review
Giovanni Sala (ORCID: orcid.org/0000-0002-1589-3759) & Fernand Gobet
Psychonomic Bulletin & Review, volume 27, pages 423–434 (2020)
Working memory (WM) training in typically developing (TD) children aims to enhance not only performance in memory tasks but also other domain-general cognitive skills, such as fluid intelligence. These benefits are then believed to positively affect academic achievement. Despite the numerous studies carried out, researchers still disagree over the real benefits of WM training. With this meta-analysis (m = 41, k = 393, N = 2,375), we intended to resolve the discrepancies by focusing on the potential sources of within-study and between-study true heterogeneity. Small to medium effects were observed in memory tasks (i.e., near transfer). The size of these effects was proportional to the similarity between the training task and the outcome measure. By contrast, far-transfer measures of cognitive ability (e.g., intelligence) and academic achievement (mathematics and language ability) were essentially unaffected by the training programs, especially when the studies implemented active controls (\( \overline{g} \) = 0.001, SE = 0.055, p = .982, τ2 = 0.000). Crucially, all the models exhibited a null or low amount of true heterogeneity, which was wholly explained by the type of controls (nonactive vs. active) and by statistical artifacts, in contrast to the claim that this field has produced mixed results. Since the empirical evidence shows the absence of both generalized effects and true heterogeneity, we conclude that there is no reason to keep investing resources in WM training research with TD children.
It is widely acknowledged that general cognitive ability is a major predictor of academic achievement and job performance (Detterman, 2014; Gobet, 2016; Schmidt, 2017; Wai, Brown, & Chabris, 2018). Finding a way to enhance people's general cognitive ability would thus have a huge societal impact. That is why the idea that engaging in cognitive-training programs can boost one's domain-general cognitive skills has been evaluated in numerous experimental trials over the last two decades (for reviews, see Sala, Aksayli, Tatlidil, Tatsumi, et al., 2019b; Simons et al., 2016). The most influential of such programs has been working memory (WM) training.
WM is the ability to store and manipulate the information needed to perform complex cognitive tasks (Baddeley, 1992, 2000). The concept of WM thus goes beyond that of short-term memory (STM): Whereas the latter focuses on how much information can be passively stored in one's cognitive system, the former involves an active manipulation of the information, as well (Cowan, 2017; Daneman & Carpenter, 1980).
The importance of WM in cognitive development is well-known. WM capacity—that is, the maximum amount of information that WM can store and manipulate—steadily increases throughout infancy and childhood up to adolescence (Cowan, 2016; Gathercole, Pickering, Ambridge, & Wearing, 2004), due to both maturation and an increase in knowledge (Cowan, 2016; Jones, Gobet, & Pine, 2007). WM capacity is positively correlated with essential cognitive functions such as fluid intelligence and attentional processes (Engle, 2018; Kane, Hambrick, & Conway, 2005; Süß, Oberauer, Wittmann, Wilhelm, & Schulze, 2002). WM capacity is also a significant predictor of academic achievement (Peng et al., 2018). Furthermore, low WM capacity is comorbid with learning disabilities such as dyslexia and attention-deficit hyperactivity disorder (ADHD; Westerberg, Hirvikoski, Forssberg, & Klingberg, 2004). It is thus reasonable to believe that if WM skills could be improved by training, the benefits would spread across many other cognitive and real-life skills.
Three mechanisms, which are not necessarily mutually exclusive, have been hypothesized to explain why WM training might induce generalized cognitive benefits. First, WM and fluid intelligence may share a common capacity constraint (Halford, Cowan, & Andrews, 2007); that is, performance on fluid intelligence tasks is constrained by the amount of information that can be handled by WM. If WM capacity were augmented, then one's fluid intelligence would be expected to improve (Jaeggi, Buschkuehl, Jonides, & Perrig, 2008). In turn, individuals with boosted fluid intelligence are expected to improve their real-life skills, such as academic achievement and job performance, of which general intelligence is a major predictor. The second explanation focuses on the role played by attentional processes in both WM and fluid intelligence tasks (Engle, 2018; Gray, Chabris, & Braver, 2003). Cognitively demanding activities such as WM training may foster people's attentional control, which is, once again, a predictor of other cognitive skills and of academic achievement (for a detailed review, see Strobach & Karbach, 2016). Finally, Taatgen (2013, 2016) has claimed that enhancement in domain-general cognitive skills may be a by-product of the acquisition of domain-specific skills. That is, training in a given task (e.g., the n-back task) may enable individuals to acquire not only domain-specific skills (i.e., how to correctly perform the trained task) but also elements of more abstract production rules. These elements are assumed to be small enough not to encompass any domain-specific content and, therefore, can be transferred across different cognitive tasks.
Typically developing (TD) children engaging in WM training represent an ideal group on which to test these hypothesized mechanisms, for several reasons. Most obviously, the population of TD children is larger than the population of children with learning disabilities, who suffer from different disorders (e.g., ADHD, dyslexia, and language impairment). Moreover, the distribution of WM skills in TD children encompasses a larger range (which reduces the biases related to range restriction), and it is more homogeneous across studies. The latter features make studies involving TD children easier to meta-analyze than studies including patients with different learning disabilities. The results concerning TD children are thus more generalizable than those obtained from more specific populations. Also, unlike studies examining adult populations, studies involving TD children often include transfer measures of both cognitive skills (e.g., WM capacity and fluid intelligence) and academic achievement (e.g., mathematics and language skills). This feature allows us to directly test the hypothesis that WM training induces near-transfer and far-transfer effects that generalize into benefits in important real-life skills. Finally, and probably most importantly, TD children represent a population in which cognitive skills are still developing and in which brain plasticity is at its peak. In other words, TD children are the most likely to benefit from cognitive-training interventions. Therefore, a null result in this group would cast serious doubts on the possibility to obtain generalized effects in other populations, as well (e.g., healthy adults).
The meta-analytic evidence
To date, scholars have disagreed about the effectiveness of WM training programs, and several meta-analytic reviews have been carried out to resolve this issue. The most recent and comprehensive ones—including studies on children, adults, and older adults—are Melby-Lervåg, Redick, and Hulme (2016; number of studies: m = 87) and Sala, Aksayli, Tatlidil, Tatsumi, et al. (2019b; m = 119). Both meta-analyses reached the conclusion that although WM training exerts a medium effect on memory-task performance (near transfer), no other cognitive or academic skills (far transfer) seem to be affected, regardless of the population examined; in particular, no effects have been observed when active controls are implemented, so as to rule out placebo effects (for a comprehensive list of meta-analyses about WM training, see Sala, Aksayli, Tatlidil, Gondo, & Gobet, 2019a).
Two meta-analyses have focused on children, with results similar to those described above. With TD children (ages 3 to 16), Sala and Gobet (2017) found a medium effect (\( \overline{g} \) = 0.46) with near transfer and a modest effect (\( \overline{g} \) = 0.12) with far transfer, with the qualification that the better the quality of the design (in terms of use of an active control group), the smaller the effect sizes. With children with learning disabilities, Sala, Aksayli, Tatlidil, Tatsumi, et al. (2019b) reanalyzed a subsample of the studies from Melby-Lervåg et al. (2016) and found effect sizes of \( \overline{g} \) = 0.37 for near transfer and \( \overline{g} \) = 0.02 for far transfer. Similar results were obtained with Cogmed, a commercial WM training program that has been subjected to a considerable amount of research, especially with children with learning disabilities (Aksayli, Sala, & Gobet, 2019).
Critique of the meta-analytic evidence
Some researchers have questioned the conclusions of meta-analytic syntheses concerning WM training. According to Pergher et al. (2019), the diversity of features in the training tasks (e.g., single vs. dual tasks) and the transfer tasks (e.g., numerical vs. verbal tasks) may make any meta-analytic synthesis on the topic essentially meaningless. Exact replications of studies have been rare (where there are any), and the moderators (independent variables in a meta-regression) that should be added in order to account for all the differences across studies are too numerous to avoid power-related issues in meta-regression models. Therefore, it is not possible to reach strong conclusions from research into WM training. In simple words, this is nothing but the well-known apples-and-oranges argument against meta-analysis (Eysenck, 1994).
It is true that meta-analytic syntheses usually include just a few moderators examining only the most macroscopic study features. Nonetheless, meta-analysis also provides the tools to estimate the amount of variability across different findings in a particular field of research. The total variance observed in any dataset is the sum of sampling error variance and true variance. Sampling error variance is just noise, and therefore does not require any further explanation. By contrast, true variance, also referred to as true heterogeneity, is supposed to be accounted for by one or more moderating variables (Schmidt, 2010). In a meta-analysis, it is possible to estimate both within-study and between-study true heterogeneity in order to evaluate whether specific moderating variables are affecting the effect sizes at the level of the single study (e.g., different outcome measures) or across studies (e.g., different types of training or populations involved). Simply put, although it is nearly impossible to test every single potential moderator, it is easy to estimate how big the impact of unknown moderators is on the overall results.
Interestingly, several meta-analyses have estimated within- and between-study true heterogeneity in WM training to be null or low, for both near-transfer and far-transfer effects. When it is present at all, true heterogeneity is accounted for by the type of control group used (active or nonactive), by statistical artifacts such as pre–posttest regression to the mean, due to baseline differences between the experimental and control groups, and, to a lesser extent, by a few extreme effect sizes. This is the case with meta-analyses on younger and older adults (Sala, Aksayli, Tatlidil, Gondo, & Gobet, 2019a) and children with learning disabilities (Aksayli et al., 2019; Melby-Lervåg et al., 2016; Sala, Aksayli, Tatlidil, Tatsumi, et al., 2019b). In brief, despite the many design-related differences across WM training studies, consideration of true heterogeneity has indicated that there are no real differences between the effects produced by such diverse training programs.
The first aim of the present study was to update the previous meta-analytic synthesis about WM training in TD children (Sala & Gobet, 2017), which included studies only until 2016. Because considerable efforts have been devoted to this field of research, it is important to update this study in order to establish whether the same conclusions obtain. The second aim was to test, with a population of TD children, Pergher et al.'s (2019) claim that the broad variety of features of the training and transfer tasks used in WM training research has led to differential outcomes. Specifically, they hypothesized that some features encourage transfer, while others do not. Thus, resolving Pergher et al.'s claim is tantamount to predicting within-study and between-study true heterogeneity. To estimate both within-study and between-study true heterogeneity, we used multilevel modeling, and more especially robust variance estimation with hierarchical weights (Hedges, Tipton, & Johnson, 2010; Tanner-Smith, Tipton, & Polanin, 2016).
More specifically, we here tested the following study features. First, we examined the role played by the abovementioned design qualities (types of controls) and statistical artifacts (baseline differences and extreme effect sizes). As can be seen, these features have been found to be significant moderators in previous meta-analyses. Therefore, it will be worthwhile to test whether these findings can be replicated. Second, we checked whether transfer effects are influenced by the participants' age. Since WM capacity steadily develops throughout childhood, it is advisable to investigate whether WM training is more effective in TD children in a specific age range. Third, we checked whether such training is more effective for specific far-transfer outcome measures. Fourth, we tested whether the size of near-transfer effects is a function of transfer distance (i.e., the similarity between the training task and the outcome measures). Finally, we examined the effectiveness of different training programs. WM training tasks can be classified according to the type of primary manipulation required in order to perform the training tasks (e.g., Redick & Lindsey, 2013). In fact, whereas a number of WM training experiments have employed only one type of training task (e.g., n-back; Jaeggi, Buschkuehl, Jonides, & Shah, 2011), other scholars have suggested that including different kinds of WM tasks could maximize the chances to obtain transfer effects (Byrne, Gilbert, Kievit, & Holmes, 2019).
A systematic search strategy was employed to find relevant studies (PRISMA statement; Moher, Liberati, Tetzlaff, & Altman, 2009). The following Boolean string was used: ("working memory training" OR "WM training" OR "cognitive training"). We searched through the MEDLINE, PsycINFO, Science Direct, and ProQuest Dissertation & Theses databases to identify all potentially relevant studies. We retrieved 3,080 records. Also, the references in earlier meta-analytic and narrative reviews (Aksayli et al., 2019; Melby-Lervåg et al., 2016; Sala, Aksayli, Tatlidil, Tatsumi, et al., 2019b; Sala & Gobet, 2017; Simons et al., 2016) were searched through.
The studies were included according to the following seven criteria:
The study included children (maximum mean age = 16 years old) not diagnosed with any learning disability or clinical condition;
The study included a WM training condition;
The study included at least one control group not engaged in any adaptive WM-training program;
At least one objective cognitive/academic task was administered. Self-reported measures were excluded. Also, when the active control group was trained in activities closely related to one of the outcome measures (e.g., controls involved in a reading course), the relevant effect sizes were excluded (e.g., tests of reading comprehension);
The study implemented a pre–posttest design;
The participants were not self-selected;
The data were sufficient to compute an effect size.
We searched for eligible published and unpublished articles through July 21, 2019. When the necessary data to calculate the effect sizes were not reported in the original publications, we contacted the researchers by e-mail (n = 3). We received one positive reply. In total, we found 41 studies, conducted from 2007 to 2019, that met all the inclusion criteria (see Appendix A in the supplemental materials). These studies included 393 effect sizes and a total of 2,375 participants. The previous most comprehensive meta-analysis concerning WM training in TD children had included 25 studies (conducted between 2007 and 2016), 134 effect sizes, and 1,601 participants (Sala & Gobet, 2017). The present meta-analysis, therefore, adds a significant amount of new data. The procedure is described in Fig. 1.
Fig. 1 Flow diagram of the search strategy. TD = typically developing; WM = working memory.
Meta-analytic models
Each effect size was considered either near-transfer or far-transfer. The near-transfer effect sizes consisted of memory tasks referring to the Gsm construct, as defined by the Cattell–Horn–Carroll model (CHC model; McGrew, 2009). Far-transfer effect sizes referred to all the other cognitive measures. The two authors coded each effect size independently and reached 100% agreement.
We evaluated four potential moderators for all studies, based on previous meta-analyses, as well as one moderator apiece that applied only to the far-transfer or the near-transfer models:
Baseline difference (continuous variable): The corrected standardized mean difference (i.e., Hedges's g) between the experimental and control groups at pretest. This moderator was included to assess the amount of true heterogeneity accounted for by regression to the mean.
Control group (active or nonactive; dichotomous variable): Whether the WM training group was compared to another cognitively demanding activity (e.g., nonadaptive training); no-contact groups and business-as-usual groups were considered "nonactive." Also, in line with Simons et al.'s (2016) criteria, those control groups involved in activities that were not cognitively demanding were labeled as "nonactive." The interrater agreement was 98%; here and elsewhere, the two raters resolved every discrepancy by discussion.
Age (continuous variable): The mean age of the participants. A few primary studies did not provide the participants' mean age. In these cases, the participants' mean age was extracted from the median (when the range was reported) or the school grade.
Type of training task (categorical variable): The type of training task used in the study. This moderator included updating tasks (n-back tasks and running tasks; Gathercole, Dunning, Holmes, & Norris, 2019); span tasks (e.g., reverse digit span task, Corsi task, odd one out, etc.; Shipstead, Hicks, & Engle, 2012a); and a mix of updating and span tasks (labeled as mixed). A few training tasks did not fall into any of these categories and were labeled as others. Cohen's kappa was κ = 1.00.
Outcome measure (categorical variable): This moderator, which was analyzed only in the far-transfer models, included measures of fluid intelligence (Gf; McGrew, 2009), processing speed (Gs), mathematical ability, and language ability. The authors coded each effect size for moderator variables independently. Cohen's kappa was κ = .98.
Type of near transfer (categorical variable): Whether the task was the same as or similar to the WM training tasks (nearest transfer)—that is, referred to the same narrow memory skill—or was a different memory task (less near transfer)—that is, referred to different skills in the same broad construct (i.e., Gsm; McGrew, 2009). This categorization was the same as that proposed by Noack, Lövdén, Schmiedek, and Lindenberger (2009). This moderator was added only in the near-transfer models. The authors coded each effect size for moderator variables independently, and the interrater agreement was 97%.
Effect size calculation
The effect sizes were calculated for each comparison in the primary studies that met the inclusion criteria. Redundant comparisons (e.g., rate of correct responses and incorrect responses) were excluded.
The effect size (Hedges's g) was calculated with the following formula:
$$ g=\frac{\left(M_{e\_post}-M_{e\_pre}\right)-\left(M_{c\_post}-M_{c\_pre}\right)}{SD_{pooled\_pre}}\times \left(1-\frac{3}{4N-9}\right) $$
where Me_post and Me_pre are the mean performance of the experimental group at posttest and pretest, respectively, Mc_post and Mc_pre are the mean performance of the control group at posttest and pretest, respectively, SDpooled_pre is the pooled pretest SDs in the experimental group and the control group, and N is the total sample size.
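As an illustration only (this is not the authors' code), the formula for g translates directly into R; we assume the usual two-group pooled-SD expression for SDpooled_pre, which the paper does not spell out:

```r
# Minimal sketch of the Hedges's g formula above.
# Argument names are illustrative; the pooling of the pretest SDs
# uses the standard two-group formula (our assumption).
hedges_g <- function(m_e_pre, m_e_post, m_c_pre, m_c_post,
                     sd_e_pre, sd_c_pre, n_e, n_c) {
  n <- n_e + n_c
  sd_pooled_pre <- sqrt(((n_e - 1) * sd_e_pre^2 + (n_c - 1) * sd_c_pre^2) /
                          (n_e + n_c - 2))
  raw <- ((m_e_post - m_e_pre) - (m_c_post - m_c_pre)) / sd_pooled_pre
  raw * (1 - 3 / (4 * n - 9))  # small-sample correction factor
}
```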
The formula used to calculate the sampling error variances was
$$ Va{r}_g=\left(\frac{N_e-1}{N_e-3}\times \left(\frac{2\times \left(1-r\right)}{r_{xx}}+\frac{d_e^2}{2}\times \frac{N_e}{N_e-1}\right)\times \frac{1}{N_e}+\frac{N_c-1}{N_c-3}\times \left(\frac{2\times \left(1-r\right)}{r_{xx}}+\frac{d_c^2}{2}\times \frac{N_c}{N_c-1}\right)\times \frac{1}{N_c}\right)\times {\left(1-\frac{3}{\left(4\times N\right)-9}\right)}^2 $$
where rxx is the test–retest reliability of the measure, Ne and Nc are the sizes of the experimental group and the control group, de and dc are the within-group standardized mean differences of the experimental group and the control group, respectively, and r is the pre–posttest correlation (Schmidt & Hunter, 2015, pp. 343–355). The pre–posttest correlations and test–retest coefficients were rarely provided in the primary studies. Therefore, we assumed the reliability coefficient (rxx) to be equal to the pre–posttest correlation (i.e., no treatment-by-subject interaction was postulated; Schmidt & Hunter, 2015, pp. 350–351), and we set the pre–posttest correlation to rxx = r = .700. (We replicated the analyses using other correlation values ranging between .500 and .800. No significant differences were observed.)
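The sampling-variance formula can be sketched in the same way; the names are again ours, with r and rxx fixed at .700 as in the text:

```r
# Sketch of the sampling error variance of g (Schmidt & Hunter, 2015).
# d_e, d_c: within-group standardized mean differences;
# r: pre-posttest correlation; r_xx: reliability (here r = r_xx = .700).
var_g <- function(d_e, d_c, n_e, n_c, r = 0.700, r_xx = 0.700) {
  n <- n_e + n_c
  group_term <- function(d, ng) {
    (ng - 1) / (ng - 3) *
      (2 * (1 - r) / r_xx + d^2 / 2 * ng / (ng - 1)) / ng
  }
  (group_term(d_e, n_e) + group_term(d_c, n_c)) * (1 - 3 / (4 * n - 9))^2
}
```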
Some of the studies reported follow-up effects. In these cases, the effect sizes were calculated by replacing the posttest means in the formula for g above with the follow-up means of the two groups.
Modeling approach
Robust variance estimation (RVE) with hierarchical weights was used to perform the intercept and meta-regression models (Hedges et al., 2010; Tanner-Smith & Tipton, 2014; Tanner-Smith et al., 2016). RVE allowed us to model nested effect sizes (i.e., extracted from the same study). Importantly, we used RVE to estimate both within-cluster (ω2) and between-cluster (τ2) true heterogeneity—that is, the amount of heterogeneity that was not due to sampling error. The effect sizes extracted from one study were thus grouped into the same cluster. These analyses were performed with the Robumeta R package (Fisher, Tipton, & Zhipeng, 2017).
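A hedged sketch of how such an RVE intercept model is typically fit with robumeta follows; the data frame and column names (dat, g, v_g, study_id) are placeholders rather than the authors' actual script:

```r
library(robumeta)

# RVE intercept model with hierarchical weights; effect sizes are
# clustered by study via 'studynum'. Column names are hypothetical.
rve_fit <- robu(formula = g ~ 1,
                data = dat,
                studynum = study_id,
                var.eff.size = v_g,
                modelweights = "HIER",  # hierarchical-effects weights
                small = TRUE)           # small-sample corrections
print(rve_fit)
```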
A set of additional analyses was run in order to test the robustness of the results. The Metafor R package (Viechtbauer, 2010) was used. We first merged all the statistically dependent effect sizes using Cheung and Chan's (2014; for more details, see Appendix B in the supplemental materials) weighted-sample-wise correction and ran a random-effect model. This analysis was implemented to check whether the results were sensitive to the way the statistically dependent effect sizes were handled.
Second, we performed Viechtbauer and Cheung's (2010) influential case analysis. This analysis evaluated whether some effect sizes exerted an unusually strong influence on the model's parameters, such as the meta-analytic mean (\( \overline{g} \)) and amount of between-effect true heterogeneity (τ2). The RVE models were then rerun without the detected influential effect sizes.
Third, we ran publication bias analyses. We removed those influential effect sizes that increased true heterogeneity in order to rule out heterogeneity-related biases in the publication-bias-corrected estimates (Schmidt & Hunter, 2015). We then merged all the statistically dependent effect sizes and ran a trim-and-fill analysis (Duval & Tweedie, 2000). Trim-and-fill analysis estimates whether some smaller-than-average effects have been systematically suppressed and calculates a corrected overall effect size. We used the L0 and R0 estimators described by Duval and Tweedie. Finally, we employed Vevea and Woods's (2005) selection method. This technique estimates the amount of publication bias by assigning different weights to ranges of p values. As was suggested by Pustejovsky and Rodgers (2019), the weights employed in the publication bias analysis were not a function of the effect sizes (for more details, see Appendix C in the supplemental materials).
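The metafor side of this sensitivity pipeline might look as follows; the object and column names are illustrative, and we assume the Cheung–Chan merging step has already reduced the data to one effect size per study:

```r
library(metafor)

# Random-effects model on the merged (one-per-study) effect sizes.
re_fit <- rma(yi = g_merged, vi = v_merged, data = dat_merged,
              method = "REML")

# Influential case diagnostics (Viechtbauer & Cheung, 2010);
# influential effect sizes are flagged in the printed output.
print(influence(re_fit))

# Trim-and-fill with the two estimators used in the text.
summary(trimfill(re_fit, estimator = "L0"))
summary(trimfill(re_fit, estimator = "R0"))
```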
Descriptive statistics
The mean age of the samples included in the present meta-analysis was 8.63 years. The median age was 8.69, the first and third quartiles were 6.00 and 9.85, and the mean age range was 4.27–15.40. The mean baseline difference was 0.037, the median was 0.031, the first and third quartiles were – 0.183 and 0.216, and the range was – 0.912 to 1.274. The descriptive statistics of the categorical/dichotomous moderators are summarized in Tables 1 and 2.
Table 1 Numbers of studies and posttest effect sizes, by categorical moderators
Table 2 Numbers of studies and follow-up effect sizes, by categorical moderators
Far transfer
In this section, we examine the effects of WM training on TD children's ability to perform non-memory-related cognitive and academic tasks. The tasks did not share any features with the trained tasks.
Immediate posttest
The overall effect size of the RVE intercept model was \( \overline{g} \) = 0.092, SE = 0.033, 95% CI [0.021; 0.163], m = 34, k = 146, df = 14.8, p = .015, ω2 = 0.000, τ2 = 0.000. The random-effect (RE) model (with Cheung & Chan's, 2014, correction) yielded very similar estimates: \( \overline{g} \) = 0.105, SE = 0.040, p = .013, τ2 = 0.005 (p = .291). Baseline was a statistically significant moderator (b = – 0.376, SE = 0.065, p < .001), whereas age was not (p = .117). Regarding the categorical moderators, the control group was the only statistically significant moderator (p = .030). No significant differences were found across different outcome measures (p = 1.000 in all pairwise comparisons; Holm's correction) or type of training task (all ps ≥ .563).
Analysis of the control group moderator
Since the control group moderator was statistically significant, we performed the sensitivity analysis on the subsamples separately. When nonactive controls were used, the overall effect size was \( \overline{g} \) = 0.139, SE = 0.045, 95% CI [0.034; 0.243], m = 21, k = 75, df = 8.2, p = .015, ω2 = 0.000, τ2 = 0.005. The RE model yielded very similar results, \( \overline{g} \) = 0.177, SE = 0.056, p = .005, τ2 = 0.012 (p = .176). Five influential cases were found. Excluding these effects did not meaningfully affect the results, \( \overline{g} \) = 0.150, SE = 0.050, 95% CI [0.040; 0.261], m = 20, k = 70, df = 9.9, p = .013, ω2 = 0.000, τ2 = 0.000. The two influential cases inflating heterogeneity were excluded for the following analyses. The trim-and-fill analysis retrieved four missing studies with the L0 estimator, and the corrected estimate was \( \overline{g} \) = 0.116, 95% CI [0.020; 0.211]. No missing study was retrieved with the R0 estimator. Vevea and Woods's (2005) selection model calculated a similar estimate (\( \overline{g} \) = 0.097).
When active controls were used, the overall effect size was \( \overline{g} \) = 0.032, SE = 0.049, 95% CI [– 0.073; 0.138], m = 18, k = 71, df = 12.3, p = .517, ω2 = 0.000, τ2 = 0.000. The RE model yielded very similar results, \( \overline{g} \) = 0.001, SE = 0.055, p = .982, τ2 = 0.000. One influential case was found. Excluding this effect did not meaningfully affect the results, \( \overline{g} \) = 0.046, SE = 0.047, 95% CI [– 0.055; 0.148], m = 17, k = 70, df = 12.0, p = .339, ω2 = 0.000, τ2 = 0.000. No missing study was retrieved with either the L0 or R0 estimator. The selection model estimate was \( \overline{g} \) = – 0.002.
Follow-up
The overall effect size of the RVE intercept model was \( \overline{g} \) = 0.006, SE = 0.022, 95% CI [– 0.048; 0.059], m = 13, k = 66, df = 6.2, p = .809, ω2 = 0.002, τ2 = 0.000. The RE model provided very similar estimates: \( \overline{g} \) = 0.014, SE = 0.056, p = .809, τ2 = 0.000. Due to the limited number of studies included in this model, no further analysis was conducted.
Near transfer
In this section, we examine the effects of WM training on TD children's ability to perform memory tasks.
The RVE model included all the effect sizes related to near-transfer measures. The overall effect size was \( \overline{g} \) = 0.389, SE = 0.056, 95% CI [0.271; 0.507], m = 29, k = 123, df = 18.8, p < .001, ω2 = 0.006, τ2 = 0.059. The RE model yielded very similar estimates: \( \overline{g} \) = 0.365, SE = 0.056, p < .001, τ2 = 0.036 (p = .002). The meta-regression showed that neither baseline nor age was a significant moderator (p = .154 and p = .914, respectively). The type of control group and type of training were not significant moderators, either (p = .845 and ps ≥ .477, respectively). By contrast, type of near transfer (i.e., nearest vs. less near) was a significant moderator (p = .005).
Type of near transfer
Since the type of near transfer moderator was statistically significant, we performed the sensitivity analysis on these two subsamples separately. With regard to nearest-transfer effects, the meta-analytic mean was \( \overline{g} \) = 0.468, SE = 0.072, 95% CI [0.310; 0.626], m = 20, k = 76, df = 11.9, p < .001, ω2 = 0.011, τ2 = 0.054. The RE model yielded very similar results, \( \overline{g} \) = 0.457, SE = 0.064, p < .001, τ2 = 0.022 (p = .090). One influential case was found. Excluding this effect did not meaningfully affect the results, \( \overline{g} \) = 0.451, SE = 0.071, 95% CI [0.297; 0.605], m = 20, k = 75, df = 11.8, p < .001, ω2 = 0.000, τ2 = 0.052. Merging the effects after excluding the influential case lowered the between-study true heterogeneity to a nonsignificant amount (τ2 = 0.015, p = .158). The trim-and-fill analysis retrieved seven missing studies with the L0 and R0 estimators, and the corrected estimate was \( \overline{g} \) = 0.356, 95% CI [0.221; 0.492]. The selection model estimate was \( \overline{g} \) = 0.391.
The less-near-transfer overall effect size was \( \overline{g} \) = 0.261, SE = 0.092, 95% CI [0.060; 0.462], m = 20, k = 47, df = 12.0, p = .015, ω2 = 0.000, τ2 = 0.051. The RE model yielded similar results, \( \overline{g} \) = 0.292, SE = 0.070, p < .001, τ2 = 0.030 (p = .086). One influential case was found. Excluding this effect did not meaningfully affect the results, \( \overline{g} \) = 0.284, SE = 0.089, 95% CI [0.090; 0.477], m = 20, k = 46, df = 12.2, p = .008, ω2 = 0.000, τ2 = 0.039. Excluding the influential effect and merging the statistically dependent effects lowered the between-study true heterogeneity to a nonsignificant amount (τ2 = 0.010, p = .234). No missing study was retrieved with either the L0 or R0 estimator. Finally, the selection model estimated some publication bias (\( \overline{g} \) = 0.196).
Follow-up
The overall effect size of the RVE intercept model was \( \overline{g} \) = 0.239, SE = 0.103, 95% CI [– 0.012; 0.489], m = 12, k = 58, df = 6.1, p = .059, ω2 = 0.000, τ2 = 0.045. The results with the RE model were \( \overline{g} \) = 0.276, SE = 0.084, p = .007, τ2 = 0.031 (p = .080). Due to the limited number of studies included in this model, no further analysis was conducted.
Discussion
In this article we have analyzed the impact of WM training on TD children's cognitive skills and academic achievement. The findings were clear: whereas WM training fosters performance on memory tasks, small (with nonactive controls) to null (with active controls) far-transfer effects are observed. Therefore, the impact of training on far-transfer measures does not go beyond placebo effects. The follow-up overall effects are consistent with this pattern of results. These results are also in line with Sala and Gobet (2017; a reanalysis with RVE of the data used in that study yielded similar results; for the details, see the supplemental materials) and, more broadly, with the conclusions of previous meta-analytic syntheses concerning WM training in the general population (Aksayli et al., 2019; Melby-Lervåg et al., 2016; Sala, Aksayli, Tatlidil, Tatsumi, et al., 2019b). The findings are summarized in Table 3.
Table 3 Overall effects in the two meta-analyses, sorted by significant moderators
The examination of true heterogeneity revealed that the meta-analytic models exhibit high internal consistency. No appreciable within-study true heterogeneity was observed (ω2 ≈ 0.000 in all the models). This result supports the validity of Noack et al.'s (2009) taxonomy of transfer distance, which was used here. If near-transfer tasks had incorrectly been classified as far-transfer tasks (or vice versa), some within-study true heterogeneity would have been present. In addition, this result suggests that the memory tests (near transfer) used in the primary studies are correlated with each other and can be averaged by study to get more precise measures. Analogously, as we reported in the meta-regression analysis, there is no significant variability across diverse far-transfer measures. The important implication is that WM training fails to induce far transfer in every type of outcome measure (e.g., fluid intelligence, mathematics, etc.).
The models report some between-study true heterogeneity (τ2 > 0.000). Regarding far transfer, this heterogeneity is very low and is accounted for by the type of control group, baseline differences, and a few influential cases. The near-transfer models show slightly higher between-study true heterogeneity, which is partly explained by the type of near transfer (nearest vs. less near). The remaining true heterogeneity almost completely disappears when the statistically dependent (i.e., belonging to the same study) effects are averaged into more precise measures of memory skills. This corroborates the idea that most of the observed between-study heterogeneity is a statistical artifact related to measurement error in memory tasks. Otherwise, between-study true heterogeneity would occur even after averaging the effect sizes within the same study.
Finally, no significant amount of true heterogeneity appears to be accounted for by either the participants' mean age or the type of training task. The various training programs seem equally (in)effective in eliciting transfer effects. This outcome is in line with the findings of Melby-Lervåg et al. (2016) and corroborates the idea that transfer is a function of the distance between the training task and the target task, rather than the features of the training program per se (e.g., Byrne et al., 2019; Pergher et al., 2019). Analogously, since age exerts no appreciable impact on the amount of transfer, we can conclude that the stage of WM development in TD children does not play any role in making training programs more (or less) effective. That being said, it is worth noting that most of the primary studies investigated the effects of WM training in preschool and primary school TD children (see the Descriptive Statistics section). Only a fraction of the primary studies included adolescent samples, which makes our findings somewhat less generalizable to students of middle/high school age (e.g., 12–16 years of age).
Overall, Pergher et al.'s (2019) claim that the outcomes of WM training might be mediated by specific characteristics of the training and transfer tasks is not supported by our analyses: The estimated true heterogeneity, when present at all, was explained by a few moderators (distance of transfer and type of control group) and statistical artifacts (baseline differences and a few extreme effects). Therefore, searching for other potential moderators (e.g., duration of the intervention) seems pointless, and could even be perceived as a questionable research practice (i.e., capitalizing on sampling error; Schmidt & Hunter, 2015). In other words, even though, just as in pretty much any field of research in the behavioral sciences, there are a number of design-related differences across the primary studies (as was correctly observed by Pergher and colleagues), almost none of these differences exert any influence on the ability of WM training to induce near- or far-transfer effects. In fact, without quantitative evidence for within- and between-study true heterogeneity, appealing to generic differences across studies risks ending up being just a smokescreen behind which anybody can question the conclusions of meta-analytic syntheses and justify the need to carry out further research (Schmidt, 2017; Schmidt & Hunter, 2015).
Moreover, it is unlikely that WM training exerts positive far-transfer effects on subgroups of individuals (e.g., underachievers at baseline assessment; Jaeggi et al., 2011). Assuming so would necessarily lead to implausible conclusions. Since the meta-analytic far-transfer mean is null when placebo effects are ruled out, postulating nonartifactual between-individual differences would imply that, whereas WM training enhances cognitive/academic skills in some children (positive effect), other individuals have their skills damaged by the training (negative effect). However, there is neither a theoretical reason nor empirical evidence to believe that WM training exerts a detrimental effect on one's cognition. Instead, the reported between-study and between-individual differences are simply statistical fluctuations (e.g., sampling error and regression to the mean).
Therefore, given the circumstances, it is possible to apply Occam's razor (Schmidt, 2010), and conclude that WM training does not produce any generalized (far-transfer) effect in TD children. Furthermore, because the same pattern of results has been found in adults, older adults, and children with learning disabilities (Aksayli et al., 2019; Melby-Lervåg et al., 2016; Sala, Aksayli, Tatlidil, Tatsumi, et al., 2019b), the most parsimonious and plausible conclusion is that WM training does not lead to far transfer. Thus, on the basis of the available scientific evidence, the rational decision should be to redirect research efforts and resources to other means of fostering cognitive and academic skills, most likely using domain-specific methods (Gobet, 2016; Gobet & Simon, 1996).
Practical and theoretical implications
The practical implications of our results are the most obvious ones to highlight. Given the absence of appreciable far-transfer effects, especially in those studies implementing active controls, WM training should not be recommended as an educational tool. Although there seems to be no reason to believe that WM training negatively affects children's cognitive skills or academic achievement, implementing such programs would represent a waste of financial and time resources.
Given that positive effects were observed in our meta-analyses with respect to near transfer, one might nonetheless wonder whether WM training is worth the effort. In our opinion, it is not. First, nearest-transfer effects do not constitute robust evidence for cognitive enhancement. Rather, they are clearly a measure of children's boosted ability to perform the training task or one of its variants. This fact reflects the well-known psychometric principle according to which cognitive tests are not reliable proxies for the cognitive constructs of interest if the participant has the opportunity to carry out the task multiple times. Second, less-near-transfer effects are not evidence of improved domain-general memory skills either. As was noted by Shipstead, Redick, and Engle (2012b), even though some less-near-transfer memory tasks (e.g., odd-one-out task) are not part of the training programs, they still share some overlap with some training tasks (e.g., simple-span tasks). Simply put, individuals engaging in WM training do not expand their WM capacity. Rather, they most likely acquire the ability to perform some memory tasks somewhat better than controls, which explains the small effect sizes reported in less-near-transfer measures, and the absence of far transfer.
Two main theoretical implications stem from our findings. First, on the behavioral level, we observe that the amount of transfer is a function of the similarity between the training task and the outcome task. This pattern of results has been replicated in many different domains and appears to be a constant in human cognition (for a review, see Sala & Gobet, 2019). Second, and most important, our findings support recent empirical evidence showing that WM and fluid intelligence do not share the same neural mechanisms, as was previously hypothesized (e.g., Halford et al., 2007; Jaeggi et al., 2008; Strobach & Karbach, 2016; Taatgen, 2013, 2016). Brain-imaging data suggest that WM performance is associated with increased network segregation, whereas the opposite pattern occurs when participants are asked to solve fluid intelligence tasks (Lebedev, Nilsson, & Lövdén, 2018). In the same vein, Burgoyne, Hambrick, and Altman (2019) have recently failed to find any evidence of a causal link between WM capacity and fluid intelligence. In fact, this study shows that the correlation between performance in WM tasks and fluid intelligence tasks is not a function of the capacity demands of the items of fluid intelligence tasks. This finding is in direct contradiction to the predictions of the common-capacity-constraint hypothesis. Thus, WM and fluid intelligence do not appear isomorphic, or even causally related, which would explain why WM training fails to induce any far-transfer effect, despite the well-known correlation between measures of WM capacity, fluid intelligence, and academic achievement.
Pessimism about the possibility to stimulate cognitive enhancement through WM training has thus been upheld by a robust corpus of evidence that goes beyond our meta-analytic results. Such convergent findings at different levels of empirical evidence (experimental, correlational, and neural) provide a successful example of triangulation that does not leave much room for further debate (Campbell & Fiske, 1959; Munafò & Smith, 2018). Indeed, it is our conviction that the data collected so far should lead researchers involved in WM training to entirely reconsider the theoretical bases of the field, or even to dismiss this branch of research.
Conclusion
In this meta-analysis we examined the impact of WM training on TD children's performance on cognitive and academic tasks, using a multilevel approach. The results significantly extend and corroborate the conclusions reached in a previous meta-analysis (Sala & Gobet, 2017): First, training programs exert an appreciable effect on memory task performance. The size of this effect is a function of the similarity between the training task and the outcome task. By contrast, small to null effects are found on far-transfer measures (i.e., fluid intelligence, attention, language, and mathematics). The magnitude of these effects equals zero in studies implementing active controls, suggesting that the small benefits reported in some studies have been the product of placebo effects. Finally, the meta-analytic models exhibit a low to null amount of true heterogeneity that is entirely explained by transfer distance, type of control group, baseline between-group differences, and a few extreme effect sizes. The lack of residual true heterogeneity means that there is no variance left to explain and implies that systematically comparing the features of training tasks and far-transfer outcome measures in order to identify successful WM training regimens, as was suggested by Pergher et al. (2019), is bound to fail.
References
Aksayli, N. D., Sala, G., & Gobet, F. (2019). The cognitive and academic benefits of Cogmed: A meta-analysis. Educational Research Review, 29, 229–243. doi:https://doi.org/10.1016/j.edurev.2019.04.003
Baddeley, A. (1992). Working memory. Science, 255, 556–559. doi:https://doi.org/10.1126/science.1736359
Baddeley, A. (2000). The episodic buffer: A new component of working memory? Trends in Cognitive Sciences, 4, 417–423. doi:https://doi.org/10.1016/S1364-6613(00)01538-2
Burgoyne, A. P., Hambrick, D. Z., & Altman, E. M. (2019). Is working memory capacity a causal factor in fluid intelligence? Psychonomic Bulletin & Review, 26, 1333–1339. doi:https://doi.org/10.3758/s13423-019-01606-9
Byrne, E. M., Gilbert, R. A., Kievit, R., & Holmes, J. (2019, April 16). Evidence for separate backward recall and n-back working memory factors: A large-scale latent variable analysis. doi:https://doi.org/10.31234/osf.io/bkja7
Campbell, D., & Fiske, D. (1959). Convergent and discriminant validation by the multitrait–multimethod matrix. Psychological Bulletin, 56, 81–105. doi:https://doi.org/10.1037/h0046016
Cheung, S. F., & Chan, D. K. (2014). Meta-analyzing dependent correlations: An SPSS macro and an R script. Behavioral Research Methods, 46, 331–345. doi:https://doi.org/10.3758/s13428-013-0386-2
Cowan, N. (2016). Working memory maturation: Can we get at the essence of cognitive growth? Perspective on Psychological Science, 11, 239–264. doi:https://doi.org/10.1177/1745691615621279
Cowan, N. (2017). The many faces of working memory and short-term storage. Psychonomic Bulletin & Review, 24, 1158–1170. doi:https://doi.org/10.3758/s13423-016-1191-6
Daneman, M., & Carpenter, P. A. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior, 19, 450–466. doi:https://doi.org/10.1016/S0022-5371(80)90312-6
Detterman, D. K. (2014). Introduction to the intelligence special issue on the development of expertise: Is ability necessary? Intelligence, 45, 1–5. doi:https://doi.org/10.1016/j.intell.2014.02.004
Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel plot based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56, 276–284. doi:https://doi.org/10.1111/j.0006-341X.2000.00455.x
Engle, R. W. (2018). Working memory and executive attention: A revisit. Perspectives on Psychological Science, 13, 190–193. doi:https://doi.org/10.1177/1745691617720478
Eysenck, H. J. (1994). Systematic reviews: Meta-analysis and its problems. BMJ, 309, 789. doi:https://doi.org/10.1136/bmj.309.6957.789
Fisher, Z., Tipton, E., & Zhipeng, H. (2017). Package "robumeta." Retrieved from https://cran.r-project.org/web/packages/robumeta/robumeta.pdf
Gathercole, S. E., Dunning, D. L., Holmes, J., & Norris, D. (2019). Working memory training involves learning new skills. Journal of Memory and Language, 105, 19–42. doi:https://doi.org/10.1016/j.jml.2018.10.003
Gathercole, S. E., Pickering, S. J., Ambridge, B., & Wearing, H. (2004). The structure of working memory from 4 to 15 years of age. Developmental Psychology, 40, 177–190. doi:https://doi.org/10.1037/0012-1649.40.2.177
Gobet, F. (2016). Understanding expertise: A multi-disciplinary approach. London, UK: Palgrave/Macmillan.
Gobet, F., & Simon, H. A. (1996). Templates in chess memory: A mechanism for recalling several boards. Cognitive Psychology, 31, 1–40. doi:https://doi.org/10.1006/cogp.1996.0011
Gray, J. R., Chabris, C. F., & Braver, T. S. (2003). Neural mechanisms of general fluid intelligence. Nature Neuroscience, 6, 316–322. doi:https://doi.org/10.1038/nn1014
Halford, G. S., Cowan, N., & Andrews, G. (2007). Separating cognitive capacity from knowledge: A new hypothesis. Trends in Cognitive Sciences, 11, 236–242. doi:https://doi.org/10.1016/j.tics.2007.04.001
Hedges, L. V., Tipton, E., & Johnson, M. C. (2010). Robust variance estimation in meta-regression with dependent effect size estimates. Research Synthesis Methods, 1, 39–65. doi:https://doi.org/10.1002/jrsm.5
Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Perrig, W. J. (2008). Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences, 105, 6829–6833. doi:https://doi.org/10.1073/pnas.0801268105
Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Shah, P. (2011). Short- and long-term benefits of cognitive training. Proceedings of the National Academy of Sciences, 108, 10081–10086. doi:https://doi.org/10.1073/pnas.1103228108
Jones, G., Gobet, F., & Pine, J. M. (2007). Linking working memory and long-term memory: A computational model of the learning of new words. Developmental Science, 10, 853–873. doi:https://doi.org/10.1111/j.1467-7687.2007.00638.x
Kane, M. J., Hambrick, D. Z., & Conway, A. R. A. (2005). Working memory capacity and fluid intelligence are strongly related constructs: Comment on Ackerman, Beier, and Boyle (2005). Psychological Bulletin, 131, 66–71. doi:https://doi.org/10.1037/0033-2909.131.1.66
Lebedev, A. V., Nilsson, J., & Lövdén, M. (2018). Working memory and reasoning benefit from different modes of large-scale brain dynamics in healthy older adults. Journal of Cognitive Neuroscience, 30, 1033–1046. doi:https://doi.org/10.1162/jocn_a_01260
McGrew, K. S. (2009). CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence, 37, 1–10. doi:https://doi.org/10.1016/j.intell.2008.08.004
Melby-Lervåg, M., Redick, T. S., & Hulme, C. (2016). Working memory training does not improve performance on measures of intelligence or other measures of far-transfer: Evidence from a meta-analytic review. Perspective on Psychological Science, 11, 512–534. doi:https://doi.org/10.1177/1745691616635612
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Annals of Internal Medicine, 151, 264–269. doi:https://doi.org/10.7326/0003-4819-151-4-200908180-00135
Munafò, M. R., & Smith, G. D. (2018). Robust research needs many lines of evidence. Nature, 553, 399–401. doi:https://doi.org/10.1038/d41586-018-01023-3
Noack, H., Lövdén, M., Schmiedek, F., & Lindenberger, U. (2009). Cognitive plasticity in adulthood and old age: Gauging the generality of cognitive intervention effects. Restorative Neurology and Neuroscience, 27, 435–453. doi:https://doi.org/10.3233/RNN-2009-0496
Peng, P., Barnes, M., Wang, C., Wang, W., Li, S., Swanson, H. L., . . . Tao, S. (2018). A meta-analysis on the relation between reading and working memory. Psychological Bulletin, 144, 48–76. doi:https://doi.org/10.1037/bul0000124
Pergher, V., Shalchy, M. A., Pahor, A., Van Hulle, M. M, Jaeggi, S. M., & Seitz, A. R. (2019). Divergent research methods limit understanding of working memory training. Journal of Cognitive Enhancement. Advance online publication. doi:https://doi.org/10.1007/s41465-019-00134-7
Pustejovsky, J. E., & Rodgers, M. A. (2019). Testing for funnel plot asymmetry of standardized mean differences. Research Synthesis Methods, 10, 57–71. doi:https://doi.org/10.1002/jrsm.1332
Redick, T. S., & Lindsey, D. R. B. (2013). Complex span and n-back measures of working memory: A meta-analysis. Psychonomic Bulletin & Review, 20, 1102–1113. doi:https://doi.org/10.3758/s13423-013-0453-9
Sala, G., Aksayli, N. D., Tatlidil, K. S., Gondo, Y., & Gobet, F. (2019a). Working memory training does not enhance older adults' cognitive skills: A comprehensive meta-analysis. Intelligence, 77, 101386. doi:https://doi.org/10.1016/j.intell.2019.101386
Sala, G., Aksayli, N. D., Tatlidil, K. S., Tatsumi, T., Gondo, Y., & Gobet, F. (2019b). Near and far transfer in cognitive training: A second-order meta-analysis. Collabra: Psychology, 5, 18. doi:https://doi.org/10.1525/collabra.203
Sala, G., & Gobet, F. (2017). Working memory training in typically developing children: A meta-analysis of the available evidence. Developmental Psychology, 53, 671–685. doi:https://doi.org/10.1037/dev0000265
Sala, G., & Gobet, F. (2019). Cognitive training does not enhance general cognition. Trends in Cognitive Sciences, 23, 9–20. doi:https://doi.org/10.1016/j.tics.2018.10.004
Schmidt, F. L. (2010). Detecting and correcting the lies that data tell. Perspectives on Psychological Science, 5, 233–242. doi:https://doi.org/10.1177/1745691610369339
Schmidt, F. L. (2017). Beyond questionable research methods: The role of omitted relevant research in the credibility of research. Archives of Scientific Psychology, 5, 32–41. doi:https://doi.org/10.1037/arc0000033
Schmidt, F. L., & Hunter, J. E. (2015). Methods of meta-analysis: Correcting error and bias in research findings (3rd ed.). Newbury Park, CA: Sage.
Shipstead, Z., Hicks, K. L., & Engle, R. W. (2012a). Cogmed working memory training: Does the evidence support the claims? Journal of Applied Research in Memory and Cognition, 1, 185–193. doi:https://doi.org/10.1016/j.jarmac.2012.06.003
Shipstead, Z., Redick, T. S., & Engle, R. W. (2012b). Is working memory training effective? Psychological Bulletin, 138, 628–654. doi:https://doi.org/10.1037/a0027473
Simons, D. J., Boot, W. R., Charness, N., Gathercole, S.E., Chabris, C. F., Hambrick, D. Z., & Stine-Morrow, E. A. L. (2016). Do "brain-training" programs work? Psychological Science in the Public Interest, 17, 103–186. doi:https://doi.org/10.1177/1529100616661983
Strobach, T., & Karbach, J. (Eds.). (2016). Cognitive training: An overview of features and applications. New York, NY: Springer.
Süß, H. M., Oberauer, K., Wittmann, W. W., Wilhelm, O., & Schulze, R. (2002). Working-memory capacity explains reasoning ability—and a little bit more. Intelligence, 30, 261–288. doi:https://doi.org/10.1016/S0160-2896(01)00100-3
Taatgen, N. A. (2013). The nature and transfer of cognitive skills. Psychological Review, 120, 439–471. doi:https://doi.org/10.1037/a0033138
Taatgen, N. A. (2016). Theoretical models of training and transfer effects. In T. Strobach & J. Karbach (Eds.), Cognitive training: An overview of features and applications (pp. 19–29). Cham, Switzerland: Springer.
Tanner-Smith, E. E., & Tipton, E. (2014). Robust variance estimation with dependent effect sizes: Practical considerations including a software tutorial in Stata and SPSS. Research Synthesis Methods, 5, 13–30. doi:https://doi.org/10.1002/jrsm.1091
Tanner-Smith, E. E., Tipton, E., & Polanin, J. R. (2016). Handling complex meta-analytic data structures using robust variance estimates: A tutorial in R. Journal of Developmental and Life-Course Criminology, 2, 85–112. doi:https://doi.org/10.1007/s40865-016-0026-5
Vevea, J. L., & Woods, C. M. (2005). Publication bias in research synthesis: Sensitivity analysis using a priori weight functions. Psychological Methods, 10, 428–443. doi:https://doi.org/10.1037/1082-989X.10.4.428
Viechtbauer, W. (2010). Conducting meta-analysis in R with the metafor package. Journal of Statistical Software, 36, 1–48. Retrieved from http://brieger.esalq.usp.br/CRAN/web/packages/metafor/vignettes/metafor.pdf
Viechtbauer, W., & Cheung, M. W. L. (2010). Outlier and influence diagnostics for meta-analysis. Research Synthesis Methods, 1, 112–125. doi:https://doi.org/10.1002/jrsm.11
Wai, J., Brown, M. I., & Chabris, C. F. (2018). Using standardized test scores to include general cognitive ability in education research and policy. Journal of Intelligence, 6, 37. doi:https://doi.org/10.3390/jintelligence6030037
Westerberg, H., Hirvikoski, T., Forssberg, H., & Klingberg, T. (2004). Visuo-spatial working memory span: A sensitive measure of cognitive deficits in children with ADHD. Child Neuropsychology, 10, 155–161. doi:https://doi.org/10.1080/09297040490911014
The support of the Japan Society for the Promotion of Science [to G.S.; Grant No. 17F17313] is gratefully acknowledged.
The data supporting the findings of this study are openly available at the Open Science Foundation, at doi:10.17605/OSF.IO/BW8PG.
Fujita Health University, Toyoake, Japan
Giovanni Sala
London School of Economics and Political Science, London, UK
Fernand Gobet
Correspondence to Giovanni Sala.
Sala, G., Gobet, F. Working memory training in typically developing children: A multilevel meta-analysis. Psychon Bull Rev 27, 423–434 (2020). https://doi.org/10.3758/s13423-019-01681-y
Cognitive enhancement
Cognitive training
Working memory training
|
CommonCrawl
|
Is Kirchhoff's law valid at all pressures?
$$\Large H_{T_\mathrm{f}, p}=H_{T_\mathrm{i},p}+\int_{T_\mathrm{i}}^{T_\mathrm{f}}c_p(T)\,\mathrm{d}T$$ This can be derived by integrating: $$C_v =\left(\frac{\mathrm{d}U}{\mathrm{d}T}\right)_V$$ Applying this to both reactants and products in a reaction, and subtracting reactants from products, we arrive at
$$\Delta H_{T_\mathrm{f},p}=\Delta H_{T_\mathrm{i},p}+\int_{T_{\mathrm{i}}}^{T_{\mathrm{f}}}\Delta c_p(T)\,\mathrm{d}T \tag{1}\label{a}$$ where $$\Delta c_p(T)=\sum_i\nu_\mathrm{i}c_{p,\mathrm{i}}(T)$$ This is essentially Kirchhoff's law: how the reaction enthalpy varies with temperature. However, whenever I see the equation $\ref{a}$, it always deals with standard enthalpy changes, i.e.: $$\Delta H_{T_\mathrm{f}}^{⊖}=\Delta H_{T_\mathrm{i}}^{⊖}+\int_{T_{\mathrm{i}}}^{T_{\mathrm{f}}}\Delta c_{p}^{⊖}(T)\,\mathrm{d}T$$ Is this law not valid for all reactions? If so, what is the mistake in the above derivation?
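For concreteness, the temperature correction in equation $(1)$ is just a quadrature over $\Delta c_p(T)$. A minimal sketch in R, with a made-up quadratic for $\Delta c_p$ and an arbitrary initial enthalpy (all values purely illustrative):

```r
# Shift a reaction enthalpy from T_i to T_f via Kirchhoff's law.
delta_cp <- function(T) -10 + 0.02 * T - 1e-5 * T^2  # J/(mol K), made up

dH_Ti <- -92.0e3   # reaction enthalpy at T_i in J/mol, made up
Ti <- 298.15       # K
Tf <- 500.0        # K

dH_Tf <- dH_Ti + integrate(delta_cp, lower = Ti, upper = Tf)$value
dH_Tf
```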
thermodynamics enthalpy
Gaurang Tandon
Adroit
$\begingroup$ Enthalpy and heat capacity are functions of both temperature and pressure. But, at the low pressures of the standard state, they can be treated as exclusively functions of temperature. Why do you feel that it would not be valid for all reactions? $\endgroup$ – Chet Miller Apr 16 '18 at 20:59
$\begingroup$ @ChesterMiller Personally I am not convinced, but everywhere it is referenced (e.g. Atkins' Physical Chemistry, 10th edition, our lecture notes, etc.) it is only stated for standard pressure. It states the top equation (how it varies for a single species) as true, but when it is applied to a reaction, it is only given for standard pressure. $\endgroup$ – Adroit Apr 17 '18 at 6:42
I would say that you are making three separate mistakes:
The standard state pressure is any arbitrary pressure, not necessarily 1 bar, so there is no implication from the equation specifying standard state that it is only true at one particular pressure.
The important specification of being in the standard state is not the value of arbitrary pressure chosen, but that the substances are not mixed. They are pure or in solvent as the single solute. Also, for gases and solutions the states are fictitious. In your derivation, you are not accounting for changes due to mixing and any changes due to being in real, rather than fictitious, states.
I don't see why you say to integrate Cv rather than Cp.
DavePhD
$\begingroup$ Very helpful, thank you. The non-mixing thing is something that was mentioned in the very lecture where we learned about this topic. So, just to be clear: the standard state (as indicated by the symbol in the quoted equation) implies that no mixing is occurring? $\endgroup$ – Adroit Apr 17 '18 at 21:20
$\begingroup$ Also, the Cv instead of Cp was simply a typo; I copied the maths language from a related thread but forgot to edit it. $\endgroup$ – Adroit Apr 17 '18 at 21:21
$\begingroup$ @Adroit yes, it means each reactant or product existing separately. $\endgroup$ – DavePhD Apr 17 '18 at 22:22
$\begingroup$ Please provide a reference for item 1. above. In every source I have seen, the standard state pressure for gases is taken as 1 bar. $\endgroup$ – Chet Miller Apr 18 '18 at 11:01
$\begingroup$ @ChesterMiller The statement is already hyperlinked to the IUPAC definition which says "a well defined but arbitrarily chosen standard pressure". But if that is insufficient see "The standard state with respect to pressure or concentration for each state of aggregation is arbitrarily chosen as some value that can be conveniently measured" books.google.com/… $\endgroup$ – DavePhD Apr 18 '18 at 11:27
If the question is, "Is the heat of reaction a function of temperature and pressure?" the answer is Yes. If the question is, "Is the standard heat of reaction a function of temperature and pressure?," the answer is No; the standard heat of reaction applies only for reactants and products at 1 bar. If the question is, "Are the heat capacities of the pure reactants and products functions of temperature and pressure?", the answer is Yes. So are you asking, "How do I determine the heat of reaction at a specified temperature and an arbitrary pressure higher than 1 bar (without knowing details on how the heat capacities vary with pressure)?"
Chet Miller
|
CommonCrawl
|
Let particle P be characterized by its enthalpy $H$ and the work $W$ required to bring together its component quarks. Definition: The rest mass of P is given by
$\begin{align} m \equiv \frac{\sqrt{ H^{2} - W^{2} }}{ c^{2} } \end{align}$
where $c$ is a constant. This definition distinguishes several types of particles by their mass. If $m$ is positive, then P is a material particle: an ordinary particle of matter, like a coin or a bullet. There is an important special case of material particles for which the work required to make them is negligible compared to their enthalpy; we then say they are heavy particles. If the mass is zero, then P is ethereal. Finally, if $m^{2} \! < \! 0$, then P has an imaginary mass.1 These cases are summarized in the following table.

Particle Type Definition
heavy $W^{2} \, \ll \, H^{2}$
material $W^{2} \, < \, H^{2}$
ethereal $W^{2} \, = \, H^{2}$
imaginary $W^{2} \, > \, H^{2}$

Roughly speaking, the rest mass describes how much internal energy is left over after the work of assembling a particle has been completed. We may use the mass to describe the hardness or density of a particle. Recall that $\left\| \, \overline{\rho} \, \right\|$ is the norm of the radius vector of P.

Definition: The density of P is

$\begin{align} \varrho \equiv \frac{ m c^{2} }{ \left\| \, \overline{\rho} \, \right\| } \end{align}$
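The classification in the table can be stated as a short function; this is only an illustration of the definitions, and the numerical cutoff standing in for "much less than" is our own arbitrary choice:

```r
# Classify a particle from its enthalpy H and assembly work W.
# The 'heavy' cutoff W^2/H^2 < 0.01 is an illustrative convention.
classify_particle <- function(H, W) {
  if (W^2 > H^2)             "imaginary"  # m^2 < 0
  else if (W^2 == H^2)       "ethereal"   # m = 0
  else if (W^2 / H^2 < 0.01) "heavy"      # W^2 << H^2
  else                       "material"   # m > 0
}
classify_particle(H = 5, W = 3)  # "material"
```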
Theorem: Particles and anti-particles have the same mass as each other. We have already seen how $H ( {\sf{P}} ) = - H ( \overline{{\sf{P}}} )$ and $W ( {\sf{P}} ) = W ( {\sf{\overline{P}}} )$ when conjugate symmetry is assumed. But the mass depends on these quantities squared. So
$m ({\sf{P}} ) = m ( {\sf{\overline{P}}} )$
Theorem: Photons are ethereal because they are mostly phase anti-symmetric. Their radius $\overline{\rho} ( \gamma)$ is null, and so no work is required to assemble the quarks in a photon; $W (\gamma)=0$. Phase anti-symmetry also means that the net number of quarks must be nil for most quarks. Substituting this $\Delta n = 0$ condition into the definition of enthalpy shows that $H ( \gamma) =0$ as well. Then the definition of mass given above implies that
$m (\gamma) =0$
Sensory Interpretation: Enthalpy characterizes the magnitude of all classes of sensation, whereas the work represents just somatic and visual sensations. The mass is established by their difference, which is mostly due to thermal sensation. So for heavy particles, thermal perceptions are more important than visual sensations. And for particles with an imaginary mass, audio-visual sensations dominate awareness.
Mass $\begin{align} m \equiv \frac{1}{ c^{2} } \sqrt{ H^{2} - W^{2} } \end{align}$ 8-2
Density $\begin{align} \varrho \equiv \frac{ m c^{2} }{ \left\| \, \overline{\rho} \, \right\| } \end{align}$ 8-3
The term imaginary is used here with its mathematical meaning. Particles with an imaginary mass are no more fictitious than any other sort of nuclear particle. They carry momentum and transmit forces like other particles. The main consequence of having an imaginary mass is that it puts a particle in a logical category different from that of Newtonian particles, so they are not necessarily required to follow Newtonian laws of motion.
|
CommonCrawl
|
May 2013, 12(3): 1221-1235. doi: 10.3934/cpaa.2013.12.1221
Phragmén-Lindelöf alternative for an exact heat conduction equation with delay
M. Carme Leseduarte 1, and Ramon Quintanilla 2,
Departament de Matemàtica Aplicada 2, ETSEIAT–UPC, C. Colom 11, 08222 Terrassa, Barcelona, Spain
Matemática Aplicada 2, E.T.S.E.I.T.-U.P.C., Colom 11, 08222 Terrassa, Barcelona, Spain
Received November 2011 Revised May 2012 Published September 2012
In this paper we investigate the spatial behavior of the solutions of a theory of heat conduction with one delay term. We obtain a Phragmén-Lindelöf type alternative: the solutions either decay exponentially or blow up exponentially at infinity. We also show how to obtain an upper bound for the amplitude term. We then point out how to extend the results to a thermoelastic problem. We finish the paper by considering the equation obtained by the Taylor approximation of the delay term; a Phragmén-Lindelöf type alternative is obtained for the forward and backward in time equations.
Keywords: equations with delay, spatial estimates, energy arguments, heat conduction, Phragmén-Lindelöf alternative.
Mathematics Subject Classification: Primary: 35Q79; Secondary: 35B40, 35B35, 80A2.
Citation: M. Carme Leseduarte, Ramon Quintanilla. Phragmén-Lindelöf alternative for an exact heat conduction equation with delay. Communications on Pure & Applied Analysis, 2013, 12 (3) : 1221-1235. doi: 10.3934/cpaa.2013.12.1221
|
CommonCrawl
|
Changes in ceftriaxone pharmacokinetics/pharmacodynamics during the early phase of sepsis: a prospective, experimental study in the rat
Valentina Selmi1,
Beatrice Loriga ORCID: orcid.org/0000-0002-4890-08981,
Luca Vitali1,
Martina Carlucci1,
Alessandro Di Filippo1,
Giulio Carta1,
Eleonora Sgambati2,
Lorenzo Tofani4,
Angelo Raffaele De Gaudio1,
Andrea Novelli3 &
Chiara Adembri1
Background
Sepsis is characterized by the loss of the perm-selectivity properties of the glomerular filtration barrier (GFB), with consequent albuminuria. We examined whether the pharmacokinetics/pharmacodynamics (PK/PD) of ceftriaxone (CTX), an extensively protein-bound 3rd generation cephalosporin, is altered during early sepsis and whether an increase in the urinary loss of bound CTX, due to GFB alteration, can occur in this condition.
Methods
A prospective, experimental, randomized study was carried out in adult male Sprague–Dawley rats. Sepsis was induced by cecal ligation and puncture (CLP). Rats were divided into two groups: sham-operated and CLP. CTX (100 mg i.p., equivalent to a 1 g dose in humans) was administered, and plasma and lung CTX concentrations were measured at several time points: baseline and 1, 2, 4 and 6 h after administration. CTX was measured by high-performance liquid chromatography (HPLC). The morphological status of the sialic components of the GFB was assessed by lectin histochemistry. Monte Carlo simulation was performed to calculate the probability of target attainment (PTA > 90%) for Tfree > minimum inhibitory concentration (MIC) over 80% and 100% of the dosing interval.
Measurements and main results
After CLP, sepsis developed in rats as documented by the growth of polymicrobial flora in the peritoneal fluid (≤1 × 10¹ CFU in sham rats vs 5 × 10⁴–1 × 10⁵ CFU in CLP rats). CTX plasma concentrations were higher in CLP than in sham rats at 2 and 4 h after administration (difference at 2 h: 47.3 mg/L, p = 0.012; difference at 4 h: 24.94 mg/L, p = 0.004), while lung penetration tended to be lower. An increased urinary elimination of protein-bound CTX occurred (553 ± 689 vs 149 ± 128 mg/L, p < 0.05; % of bound/total CTX 22 ± 6 in septic rats vs 11 ± 4 in sham rats, p < 0.01) and it was associated with loss of the GFB sialic components. According to the Monte Carlo simulation, a PTA > 90% for 100% of the dosing interval was reached neither for sham nor for CLP rats using MIC = 1 mg/L, the clinical breakpoint for Enterobacteriaceae.
Sepsis causes changes in the PK of CTX and an alteration in the sialic components of the GFB, with consequent loss of protein-bound CTX. Among factors that can affect drug pharmacokinetics during the early phases of sepsis, urinary loss of both free and albumin–bound antimicrobials should be considered.
The effects of sepsis on the pharmacokinetics-pharmacodynamics (PK/PD) of antimicrobials have been extensively examined in recent years [1]. A major contribution to sepsis-induced changes in PK/PD is due to modifications in the volume of distribution (Vd) and renal clearance. The latter [1–3] is frequently impaired, which may reduce drug elimination. However, in up to 20% of intensive care unit (ICU) patients, an increase in creatinine clearance (CrCL) occurs in the initial phases of sepsis and, as a consequence, an increase in drug elimination can ensue, with a reduction in pathogen exposure to antimicrobials [4]. Theoretically, the increase in CrCL should concern only the "free" antimicrobial, which is physiologically capable of crossing the glomerular filtration barrier (GFB), since the passage of ligand proteins such as albumin is normally prevented by the integrity of the GFB itself. A major contribution to the perm-selective properties of the GFB is provided by the glycocalyx, a network of glycoproteins, proteoglycans and soluble components which lines the extracellular surface of all cells, including the glomerular endothelial cells and podocytes [5]. However, it has been shown that sepsis impairs GFB properties, since albuminuria is an early marker of postoperative patients developing sepsis [6]. It is therefore likely that during sepsis even the quantity of antimicrobials bound to albumin is eliminated in the urine, which is consistent with the fact that the molecular weight of antimicrobials is usually negligible in comparison to that of albumin and other ligand proteins (<1000 vs 69,000 Da, respectively) [7]. Among the antimicrobials indicated for treating abdominal or uro-sepsis in ICU patients, ceftriaxone (CTX), a highly protein-bound (>95%), third-generation cephalosporin [8], maintains a role when susceptible pathogens are involved. For CTX, a time period during which free concentrations remain higher than the MIC (Tfree > MIC) for at least 80% of the dosing interval is considered the optimal PK/PD target [9]. In critically ill patients, a fast (within 24–48 h) achievement of antimicrobial target concentrations is also of special importance in order to improve the outcome [1, 10].
Studies in ICU septic patients have shown very high variability in the main PK parameters of CTX, with changes in renal clearance representing the main cause [11]. No data have been reported concerning increased elimination of CTX due to loss of GFB perm-selectivity.
In the present study, we have therefore aimed at evaluating, in an experimental model of sepsis, whether (1) changes in the PK of CTX occur from the beginning of sepsis, (2) these changes are associated with changes in the renal clearance of bound/unbound CTX and in the GFB perm-selectivity, and (3) these changes have consequences on the probability of reaching the PK/PD target (calculated using Monte Carlo simulation).
Adult male Sprague–Dawley rats (Harlan, Udine, Italy) weighing 320–330 g were used for this study. The experimental protocol was approved by the Committee for Animal Experimentation of the Ministry of Health, Rome, Italy. Animals were treated according to Italian and European Guidelines for Animal Care and Experimentation, DL 116/92, in agreement with the European Communities Council Directive guidelines (86/609/EEC).
The SD rats were initially housed three per cage in a temperature- (18–22 °C) and humidity-controlled room under a constant 12-h light/dark cycle and fed standard rat chow and water ad libitum. After acclimatization, animals were numbered 1–36 and then assigned to one of the two study groups by a blinded operator using Microsoft Excel software (Lombardia, Italy). Eighteen rats underwent sepsis induced by cecal ligation and puncture ("CLP group"), and 18 rats, which received laparotomy only, served as controls ("Sham group"). At the end of the surgical procedure each rat received the same amount of CTX intra-peritoneally (i.p.) (see below). A schematic representation of the experimental design is shown in Fig. 1.
Experimental time course
The CLP procedure was performed as previously detailed [12–14]. Briefly, rats were anesthetized with sodium pentobarbital (65 mg/kg, i.p.), positioned on a homeothermic heating pad to maintain body temperature between 36.5 and 37.5 °C, and received a 3 cm midline laparotomy on the anterior abdomen. The cecum was exposed and ligated distally to the ileocecal valve, without causing intestinal obstruction, and was punctured on the anti-mesenteric border with a 16 G (1.65 mm diameter) needle. The cecum was then squeezed to extrude its fecal content and replaced into the peritoneal cavity; finally, a 20 G cannula was positioned in the peritoneal cavity and secured to the abdominal wall in order to subsequently inject CTX and/or collect peritoneal samples. The anterior peritoneal wall and the skin were closed with 3-0 silk sutures. "Sham" animals underwent laparotomy and peritoneal cannula positioning only. After surgery, all the rats were housed individually for urine collection in metabolic cages under the same environmental conditions.
To document the development of sepsis, the clinical appearance, mortality during the experimental time (6 h) and micro-organism growth in the peritoneal cavity were evaluated. For micro-organism detection and counting, samples of peritoneal fluid were collected at the end of the experimental time (6 h) and cultured for the growth of Gram-positive and Gram-negative isolates. Samples were incubated on Mannitol Salt Agar for 24 h at 37 °C for Gram-positive strains and layered on MacConkey Agar III for 24 h at 37 °C for Gram-negative culture. Species identification was performed with the API system (BioMérieux, Inc., Hazelwood, MO), using a Gram-positive (GP) panel card. The effects of sepsis on the status of the GFB and on CTX elimination were also examined (see below).
The CTX was purchased from Sigma-Aldrich Co. (Saint Louis, Missouri, USA). The antimicrobial solution was prepared daily by dissolving the powder in sterile saline. CTX (100 mg/kg) was injected intra-peritoneally through the 20 G abdominal cannula in a volume of 1 mL; the cannula was flushed immediately with 1 mL of saline. The dose of CTX was chosen according to previous experimental protocols in order to mimic the human dose of 1 g [15].
Plasma and lung concentrations were evaluated for the first 6 h after CTX administration. Serial blood samples (drawn from the tail vein) were collected at the following times: before the administration of the antibiotic (baseline sample) and after 1, 2, 4 and 6 h (n ≥ 4 samples per time point). The blood was immediately centrifuged at 4000 rpm for 15 min at 4 °C; plasma was collected, divided into aliquots and stored at −80 °C until assayed. Lung specimens were also collected at 2, 4 and 6 h after antibiotic treatment to evaluate antibiotic penetration into the lung. Finally, urinary samples were collected from the metabolic cage after the 6 h interval in order to measure CTX elimination.
The CTX concentrations were determined in triplicate by a validated large-plate agar diffusion technique, according to Good Laboratory Practice (GLP) standards, using Mueller–Hinton Agar (Oxoid, UK) as the culture medium and Escherichia coli K12 as the test organism, with a lower limit of sensitivity of 0.125 mg/L. Standard concentrations were prepared daily in pooled plasma for plasma samples and in saline for the urine and lung samples. The test organism (1 × 10⁶ CFU/mL) was added using the surface layer technique. After homogeneous distribution of the culture, the excess liquid was removed with a pipette. The plates were incubated at 37 °C in air overnight. Best-fit standard curves were obtained by linear regression analysis. The correlation coefficient was no less than 0.99. For all CTX samples, intra-assay precision ranged from 1.5 to 6.8% and inter-assay precision at a level of 0.75 mg/L ranged from 4.6 to 5.6%. Amicon® 10 K filters (Sigma-Aldrich, Milan, Italy) were used for the determination of the free drug concentration (filter cut-off: 10,000 nominal molecular weight limit). In short, 0.5 mL of plasma was layered on the filters and centrifuged at 4000g for 30 min. The filtrate was collected and analyzed with the same bioassay described above.
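As a concrete illustration of the standard-curve step, the following is a minimal sketch in Python of fitting a log-linear calibration by least squares and back-calculating sample concentrations from measured inhibition-zone diameters. The calibration numbers are invented placeholders, not the laboratory's data; the sketch assumes, as is typical for agar diffusion assays, that the zone diameter is approximately linear in the logarithm of concentration.

```python
import numpy as np

# Hypothetical calibration data: CTX standards (mg/L) and measured
# inhibition-zone diameters (mm).
std_conc = np.array([0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
std_zone = np.array([14.2, 16.1, 18.0, 19.8, 21.9, 23.8, 25.7])

# Least-squares fit: zone = slope * log10(conc) + intercept
slope, intercept = np.polyfit(np.log10(std_conc), std_zone, 1)

# Correlation coefficient of the calibration (acceptance: r >= 0.99)
r = np.corrcoef(np.log10(std_conc), std_zone)[0, 1]
print(f"r = {r:.4f}")

# Back-calculate unknown sample concentrations from their zone diameters
sample_zone = np.array([17.3, 22.5])
sample_conc = 10 ** ((sample_zone - intercept) / slope)
print(sample_conc)  # estimated CTX concentrations, mg/L
```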
Urine was collected from each rat (at least four rats per group). The total urine volume over the 0–6 h interval was recorded. Urine was stored at −20 °C until examination.
Changes in the GFB-associated glycocalyx, in particular in its sialic components, were assessed by lectin histo-chemistry. At the end of the experimental time, after the pentobarbital overdose, a midline incision was made in the abdomen, and kidney specimens were fixed in Carnoy's fluid and routinely processed to obtain 6 μm-thick paraffin sections. Maackia amurensis agglutinin (MAA) and Sambucus nigra agglutinin (SNA) digoxigenin (DIG)-labeled lectins (Roche Diagnostics, Mannheim, Germany) were used to identify sialic acids linked α2–3 and α2–6 to galactose or galactosamine, respectively. Lectin histo-chemistry was performed as previously described [16, 17]. In short, sections were treated with 20% acetic acid to inhibit endogenous alkaline phosphatase, then treated with 10% blocking reagent in Tris-buffered saline (TBS) to reduce background labelling. Afterwards, sections were washed in TBS and rinsed in Buffer 1, then incubated in DIG-labelled lectins diluted in Buffer 1 (1 and 5 μL/mL for SNA and MAA, respectively) for 1 h at room temperature. Sections were then rinsed in TBS, incubated with anti-digoxigenin conjugated with alkaline phosphatase diluted in TBS and washed in TBS. Labelling of the sites containing bound lectin–digoxigenin was obtained by incubating slides with Buffer 2 containing nitroblue tetrazolium (NBT)/X-phosphate (Roche Diagnostics, Mannheim, Germany) [16, 18].
Controls for lectin specificity included pre-incubation of the lectins with the corresponding hapten sugars [16, 18]. For evaluation of the lectin-stained locations, ten random fields (40× ocular) in each section (five sections per specimen) were examined using light microscopy. A densitometric analysis of lectin reactivity intensity was also carried out by measuring the average optical density (OD) of regions of interest (ROI, 40 µm² area), using ImageJ software (National Institutes of Health, Bethesda, MD, USA). Measured values were normalized to background [(OD − ODbkg)/ODbkg]. At least eight regions of interest in ten different optical fields were analyzed in each experiment.
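For instance, the background normalization is a one-line computation; the sketch below applies it to invented OD values, not data from this study.

```python
import numpy as np

od = np.array([0.52, 0.48, 0.55, 0.50])  # mean OD of lectin-stained ROIs (placeholders)
od_bkg = 0.30                            # background OD (placeholder)

# Normalization used in the text: (OD - OD_bkg) / OD_bkg
od_norm = (od - od_bkg) / od_bkg
print(od_norm.mean())
```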
For the PK/PD analysis, the CTX plasma concentration data were used to develop a population model by nonlinear mixed-effects modelling (using SAS 9.3 PROC NLMIXED).
We considered the following first-order one-compartment model for these data:
$$C_{it} = \frac{D\,k_{ei}\,k_{ai}}{Cl_{i}\,(k_{ai} - k_{ei})}\left(e^{-k_{ei} t} - e^{-k_{ai} t}\right) + e_{it}$$
where $C_{it}$ is the observed concentration of the $i$-th subject at time $t$, $D$ is the dose of ceftriaxone, $k_{ei}$ is the elimination rate constant for subject $i$, $k_{ai}$ is the absorption rate constant for subject $i$, $Cl_{i}$ is the clearance for subject $i$, and $e_{it}$ are normal errors. To allow for random variability between subjects, we assumed:
$$Cl_{i} = e^{{\beta_{1} + b_{i1} }}$$
$$k_{ei} = e^{{\beta_{2} + b_{i2} }}$$
$$k_{ai} = e^{{\beta_{3} }}$$
where the $\beta$s denote fixed-effects parameters and the $b_{i}$s denote random-effects parameters with an unknown covariance matrix.
Fixed-effect estimates and the joint distribution of the random effects were obtained using the dual quasi-Newton algorithm to achieve convergence.
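A minimal numerical sketch of this model (in Python, not SAS): individual concentration-time profiles are simulated by drawing per-subject clearance and elimination rate constants from lognormal distributions, which is what the exponential random-effects structure above implies. All parameter values are illustrative placeholders rather than the fitted estimates, and the residual error $e_{it}$ is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 100.0  # dose (mg/kg); illustrative
# Placeholder fixed effects (beta) and random-effect SDs, not fitted values
beta_cl, beta_ke, beta_ka = np.log(0.25), np.log(0.5), np.log(2.0)
sd_cl, sd_ke = 0.3, 0.3
t = np.linspace(0.0, 6.0, 61)  # hours

def profile(cl, ke, ka, t):
    """One-compartment model with first-order absorption and elimination."""
    return D * ke * ka / (cl * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

n = 5
cl = np.exp(beta_cl + sd_cl * rng.standard_normal(n))  # Cl_i = exp(beta1 + b_i1)
ke = np.exp(beta_ke + sd_ke * rng.standard_normal(n))  # k_ei = exp(beta2 + b_i2)
ka = np.exp(beta_ka)                                   # k_ai = exp(beta3), no random effect

for i in range(n):
    c = profile(cl[i], ke[i], ka, t)
    print(f"subject {i}: Cmax = {c.max():.1f} mg/L at t = {t[c.argmax()]:.1f} h")
```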
Two dosing regimens (100 and 200 mg/kg, corresponding to 1 and 2 g in humans, respectively [15]) were simulated using Monte Carlo simulation based on the PK model. Each simulation generated concentration–time profiles from 10,000 rats per dosage regimen. Based on these data, Tfree > MIC and the probability of target attainment (PTA) for different multiples of the MIC were estimated graphically. MIC values according to the EUCAST breakpoints were considered, in which CTX susceptibility for Enterobacteriaceae was ≤1 mg/L.
The free concentrations ($C_{free}$) were estimated from the total concentrations ($C_{tot}$) using in vivo binding parameters of CTX and the following equation described by Kodama et al. [19]:
$$C_{free} = \frac{1}{2}\left[ -\left( nP + \frac{1}{K_{aff}} - C_{tot} \right) + \sqrt{\left( nP + \frac{1}{K_{aff}} - C_{tot} \right)^{2} + \frac{4\,C_{tot}}{K_{aff}}} \right]$$
The following binding parameters of CTX were used: a total concentration of protein binding sites ($nP$) of 517 μmol L⁻¹ and a binding affinity constant ($K_{aff}$) of 0.0367 L μmol⁻¹.
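To make the workflow concrete, here is a minimal Python sketch that combines the pieces: total concentrations from a simulated population are converted to free concentrations with the Kodama equation and the binding parameters above, and the PTA is the fraction of simulated subjects with Tfree > MIC for at least 80% (or 100%) of the interval. The PK parameters and their variability are placeholders, not the values fitted in this study; the conversion from mg/L to μmol/L assumes a CTX molar mass of about 554.6 g/mol (free acid).

```python
import numpy as np

rng = np.random.default_rng(1)

# Binding parameters from Kodama et al. [19]
nP, Kaff = 517.0, 0.0367  # μmol/L and L/μmol
MW = 554.6                # g/mol, CTX free acid (approximate)

def c_free(c_tot_mgL):
    """Unbound concentration (mg/L) via the Kodama equation."""
    c = c_tot_mgL * 1000.0 / MW  # mg/L -> μmol/L
    a = nP + 1.0 / Kaff - c
    free = 0.5 * (-a + np.sqrt(a**2 + 4.0 * c / Kaff))
    return free * MW / 1000.0    # μmol/L -> mg/L

# Placeholder one-compartment population PK (see the model above)
D, ka = 100.0, 2.0
n, MIC, tau = 10_000, 1.0, 6.0  # subjects, MIC (mg/L), dosing interval (h)
cl = np.exp(np.log(0.25) + 0.3 * rng.standard_normal(n))
ke = np.exp(np.log(0.5) + 0.3 * rng.standard_normal(n))

t = np.linspace(0.0, tau, 121)
conc = (D * ke * ka / (cl * (ka - ke)))[:, None] * (
    np.exp(-np.outer(ke, t)) - np.exp(-ka * t))

frac_above = (c_free(conc) > MIC).mean(axis=1)  # fraction of tau with Cfree > MIC
print(f"PTA (>=80% of interval): {(frac_above >= 0.8).mean():.1%}")
print(f"PTA (100% of interval): {(frac_above >= 1.0).mean():.1%}")
```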
In order to assess differences in continuous variables between the sham and CLP groups, either the t test or the Mann–Whitney test was used; the choice followed the result of the Shapiro–Wilk test for normality of the variable in each group.
The difference between the sham and CLP groups in PK parameters was assessed by Welch's t test. Statistical significance was set at a p value lower than 0.05.
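A minimal sketch of this test-selection logic, using SciPy and invented placeholder data: Shapiro–Wilk is applied to each group, and the Mann–Whitney test replaces the t test whenever normality is rejected in either group.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sham = rng.normal(11, 4, size=8)  # e.g. % bound/total CTX (placeholder data)
clp = rng.normal(22, 6, size=8)

# Shapiro–Wilk normality check in each group
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (sham, clp))

# Parametric test if both groups look normal, nonparametric otherwise
res = stats.ttest_ind(sham, clp) if normal else stats.mannwhitneyu(sham, clp)
print(type(res).__name__, f"p = {res.pvalue:.4f}")
```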
Sepsis was induced by CLP, as documented by clinical signs (lethargy, pilo-erection, tachypnea) and growth of microorganisms in the peritoneal cavity. No spontaneous deaths occurred during the 6 h period. Both Gram-positive and Gram-negative strains (Escherichia coli 70%, Enterococcus faecalis 40%, Streptococcus viridans 15%, and coagulase-negative Staphylococci 70%) were collected at 6 h following surgery. Colony-forming units/mm³ ranged from ≤1 × 10¹ in sham-operated rats to 5 × 10⁴–1 × 10⁵ in CLP rats.
The kinetic plasma profile of CTX in both groups is reported in Fig. 2. Peak plasma values were 132.48 and 120.29 mg/L in the CLP and sham-operated control groups, respectively. Plasma levels remained higher in septic than in sham-operated rats, with statistical significance between the 2nd and 4th hour (difference at 2 h: 47.3 mg/L, p = 0.012; difference at 4 h: 24.94 mg/L, p = 0.004). Clearance was significantly lower in CLP rats (0.21 vs 0.32 mL/min, p = 0.009). Despite the higher plasma levels, free/unbound CTX penetration into lung tissue tended to be lower in septic rats than in controls (lung/plasma ratio: at 2 h, p = 0.2; at 4 h, p = 0.05; at 6 h, p = 0.6) (see Table 1). In septic rats, the calculated AUC lung/AUC free plasma concentration ratio was 0.33, while in sham rats it was 0.51.
Total CTX plasma concentration (mg/L) time curve in septic (CLP, ●) and control (sham-operated, ×) rats (n ≥ 4 rats per point)
Table 1 Free Ctx Lung Concentrations (mean ± SD) in sham and CLP rats
The concentration of bound CTX in the urine was significantly higher in septic CLP-treated rats than in sham rats (553 ± 689 vs 149 ± 128 mg/L, p < 0.05; % of bound/total CTX 22 ± 6 vs 11 ± 4, p < 0.01, in CLP and sham rats respectively). However, the total amount of CTX eliminated in the urine (calculated as urinary volume × CTX urinary concentration) was higher in sham than in CLP rats (3.34 ± 0.55 vs 1.85 ± 0.8 mg, p < 0.05), because the urinary volume in the 0–6 h interval was higher in sham than in septic rats (2.25 ± 0.53 vs 1.24 ± 0.72 mL, p < 0.05). For details see Fig. 3a, b.
a CTX urinary concentration (mg/L) at the end of the experimental time (6 h). A significantly increased quantity of bound CTX was present in the urine of septic rats (553.53 vs 149.27 mg/L; *p < 0.05 CLP vs SHAM). b Total urinary loss of CTX (mg) at the end of the experimental time (6 h); the quantity is the product of the 6 h urinary volume and the CTX concentration. Due to a reduced urinary output, total CTX loss was lower in septic rats (1.85 vs 3.34 mg; *p < 0.05)
The integrity of the GFB-associated glycocalyx was assessed by lectin histo-chemistry analysis. MAA and SNA reactivity (showing the presence of sialic acids linked α-2,3 and α-2,6 to galactose/galactosamine, respectively) was significantly lower in the glomerular barrier in the CLP rats with respect to the sham-operated rats (MAA 0.38 ± 0.06 vs 0.97 ± 0.07, p < 0.01; SNA 0.41 ± 0.05 vs 1.09 ± 0.09, p < 0.01, Fig. 4).
Representative light microphotographs of Maackia amurensis agglutinin (MAA) (a, b) and Sambucus nigra agglutinin (SNA) (c, d) reactivity at 6 h in sham-operated and CLP-treated rats. Lectin reactivity is present in the glomerular barrier of both study groups; however, reactivity in the glomeruli of CLP rats (b, d) appears weaker than in sham-operated ones (a, c). e Quantitative analysis of SNA and MAA content showed that MAA and SNA reactivity intensity is significantly lower in CLP than in sham-operated samples (MAA: 0.28 vs 0.97; SNA: 0.41 vs 1.09; *p < 0.01 CLP vs SHAM). Scale bar = 25 mm. Data are mean OD values at 6 h for both sham-operated and CLP rats
Monte Carlo simulation was performed to evaluate the probability of attaining Tfree > MIC for at least 80% and for 100% of the dosing interval.
The PTA for the 80% dosing interval was 0% for MIC = 1 mg/L with the 1 g dose, and 99% with the 2 g dose, in the sham rats. In the CLP rats, the PTA for the 80% dosing interval was 75% for MIC = 1 mg/L with the 1 g dose and 95% with the 2 g dose (Fig. 5a, b).
Probability of target attainment (PTA) for the two dosing regimens, 100 (blue line) and 200 (red line) mg/kg (corresponding to 1 and 2 g in humans, respectively), calculated using Monte Carlo simulation based on the PK model. The time period during which free concentrations remained higher than the MIC (Tfree > MIC) and the PTA for different multiples of the MIC were estimated graphically. MIC values according to the EUCAST breakpoints were considered, with CTX susceptibility for Enterobacteriaceae being ≤1 mg/L [34]. a, b PTA for 80% of the dosing interval in sham and CLP rats; c, d PTA for 100% of the dosing interval, which was 0% for MIC = 1 mg/L with both the 1 g and 2 g doses, in sham (c) and CLP (d) rats
The PTA for the 100% dosing interval was 0% for MIC = 1 mg/L for both the 1 g and 2 g doses in both the sham rats (Fig. 5c) and the CLP rats (Fig. 5d).
Our data show that the PK of CTX is modified from the very initial phases of sepsis, in the sense that plasma concentrations are higher but lung penetration is generally lower compared with non-septic sham controls. Another important finding of our study is that, due to sepsis-induced changes in GFB perm-selectivity, the urinary loss of bound CTX is higher in septic than in sham rats. Finally, according to the Monte Carlo simulation, despite the higher plasma levels of total CTX in septic rats, the administered dose, equivalent to 1 g in humans, is insufficient to reach a PTA > 90% for either 80 or 100% of the dosing interval.
It is well known that optimal antimicrobial therapy is characterized by a PK/PD- and time-targeted protocol in order to improve patients' outcome. We therefore studied sepsis in its early phase to assess its effects on the initial optimization of antimicrobial therapy. We used the same model as in our previous work [12, 16], since it is suitable for studying the first hours of sepsis: cytokine activation, microorganism growth, and impairment of the integrity of the sialic components of the GFB with consequent albuminuria have already occurred within this period of time [6, 18]. In the present study we have expanded our observations on early sepsis by examining whether modifications of highly bound molecules like CTX were present and whether they were associated with increased renal loss of CTX.
We chose to study CTX because, although it is an "old" cephalosporin, it is commonly prescribed, being the second most used in acute care hospitals in the USA [20, 21]. In some countries it is also available in combination with the beta-lactamase inhibitor sulbactam, and in this case it maintains a role in the treatment of ESBL-positive strains [22]. Moreover, it is interesting to evaluate the effects of sepsis and of increased GFB permeability on a drug which is extensively bound (>95%) and whose renal elimination occurs almost exclusively through glomerular filtration [23].
For all cephalosporins, the Tfree > MIC of the dosing interval is the PK/PD parameter which best predicts their efficacy, and the Tfree > MIC of 80–100% of the dosing interval for CTX has been considered the optimal target [9].
In a study on the PK/PD of CTX in 54 critically ill septic patients [24], the CTX PK parameters were extremely variable: half-life ranged between 0.8 and 28 h, and clearance from 4 to 33 mL/min (=0.26–1.98 L/h), owing to the different types of patients and/or organ failure. Despite this variability and the absence of stratification, the authors concluded that with a 2 g dose CTX plasma levels are sufficient to exert maximal antimicrobial activity against all the most common susceptible abdominal pathogens. Garot et al. [8], who performed a Monte Carlo simulation on similar patients, showed that even with a 1 g dose a PTA > 90% for 100% of the dosing interval can be achieved for MICs ≤1 mg/L and for Cl ≤120 mL/min. On the contrary, on the basis of the Monte Carlo simulation in our experimental model, even with the 2 g dose the optimal PK/PD target could not be guaranteed in the early phases of sepsis.
Acute kidney injury (AKI) is frequently observed in ICU patients, but in the early phases of sepsis an increase in creatinine clearance has been described as well. A GFR above 130 mL/min/1.73 m² has been found to be associated with sub-therapeutic trough concentrations of beta-lactams [25]. However, so far the permeability properties of the GFB have not received attention [18]. Permeability of the glomerulus is indeed a highly specialized function which limits the passage of proteins and larger molecules but allows water and small molecules to cross. This "perm-selectivity" is guaranteed only when the structural and ultra-structural integrity of the GFB is maintained, including that of its endothelium-associated glycocalyx, which is composed of membrane-bound glycoproteins and proteoglycans [26, 27].
In our experimental conditions, we found that the loss of the sialic component of the glycocalyx was associated with a higher urinary loss of bound CTX in septic rats, thus supporting the hypothesis of a loss of perm-selectivity [16].
Antibiotic penetration into the lung is critical in order to obtain good clinical outcomes in ICU patients with pneumonia. Lung penetration of CTX should theoretically track the free fraction of the drug, and it should therefore have been higher in septic rats than in controls. However, tissue penetration of beta-lactams depends not only on plasma concentrations [28] but also on other factors, such as their physico-chemical properties, changes in membrane permeability, re-arrangement of the pulmonary circulation [29] and tissue alteration due to free-radical oxygen production [30]. Therefore, not surprisingly, we observed lower penetration in septic than in control lungs, in accordance with other studies [29, 31]. The higher plasma levels we found in septic rats may have several causes, such as higher intestinal absorption due to intestinal inflammatory vaso-dilation and reduced penetration into peripheral tissues. Finally, total renal elimination of CTX depends not only on the CTX concentration but also on urinary volume, and in our study septic rats produced less urine.
Limits of the study
This was a PK/PD study carried out in the rat; therefore, its extrapolation to humans can only be hypothetical. However, the dose of 100 mg/kg we used gives peak plasma levels similar to those observed in humans after 1 g iv administration, and the experimental time of 6 h may be equivalent to the 24 h interval in humans according to trough values [15]. Importantly, the level of protein binding is similar in humans and rats [32], which supports the validity of our results and underlines the importance of taking into consideration the loss of bound antimicrobials through urine during sepsis. Other limitations of our study are: (1) we measured CTX lung penetration in the homogenate of lung tissue and not in the epithelial lining fluid; (2) we do not know whether the administration of fluids would have increased urinary volume and thus the total loss of CTX. Finally, a 24 h circadian rhythm in CTX clearance has been shown, which might contribute to daily variations in CTX levels; this is especially important for drugs administered once a day [33]. However, all the experiments were conducted in the same time frame (starting in the morning at 9 am).
A rapid attainment of therapeutic targets of antimicrobials is a crucial element of appropriate antimicrobial therapy in critically ill patients [10]. Our study confirms that sepsis impacts the PK of CTX from its initial phase. Moreover, it shows for the first time that, among the factors that can affect drug pharmacokinetics during the early phases of sepsis, the urinary loss of both free and albumin-bound antimicrobials should be considered.
In view of the difficulty of predicting sepsis-associated changes in CTX PK, and in order to preserve the activity of this "old" antimicrobial and reduce the risk of resistance, the best way to adjust the dosage of CTX in critically ill patients appears to be therapeutic drug monitoring. Further studies are necessary to confirm these results in humans.
GFB: glomerular filtration barrier
PK/PD: pharmacokinetics–pharmacodynamics
CTX: ceftriaxone
CLP: cecal ligation and puncture
ICU: intensive care unit
CrCL: creatinine clearance
MAA: Maackia amurensis agglutinin
SNA: Sambucus nigra agglutinin
DIG: digoxigenin
PTA: probability of target attainment
Cfree: free concentration
Ctot: total concentration
MIC: minimum inhibitory concentration
AKI: acute kidney injury
CFU: colony forming unit
HPLC: high performance liquid chromatography
GLP: good laboratory practice
TBS: Tris-buffered saline
ESBL: extended-spectrum beta-lactamase
i.p.: intra-peritoneally
Blot SI, Pea F, Lipman J. The effect of pathophysiology on pharmacokinetics in the critically ill patient–concepts appraised by the example of antimicrobial agents. Adv Drug Deliv Rev. 2014;77:3–11.
Sime FB, Udy AA, Roberts JA. Augmented renal clearance in critically ill patients: etiology, definition and implications for beta-lactam dose optimization. Curr Opin Pharmacol. 2015;24:1–6.
Pea F. Plasma pharmacokinetics of antimicrobial agents in critically ill patients. Curr Clin Pharmacol. 2013;8:5–12.
Udy AA, Roberts JA, De Waele JJ, Paterson DL, Lipman J. What's behind the failure of emerging antibiotics in the critically ill? Understanding the impact of altered pharmacokinetics and augmented renal clearance. Int J Antimicrob Agents. 2012;39:455–7.
Chelazzi C, Villa G, Mancinelli P, De Gaudio AR, Adembri C. Glycocalyx and sepsis-induced alterations in vascular permeability. Crit Care. 2015;19:26.
De Gaudio AR, Adembri C, Grechi S, Novelli GP. Microalbuminuria as an early index of impairment of glomerular permeability in postoperative septic patients. Intensive Care Med. 2000;26:1364–8.
Goodman LS, Gilman AG. Goodman and Gilman's the pharmacological basis of therapeutics. 12th ed. Boston: McGraw Hill Professional; 2011.
Garot D, Respaud R, Lanotte P, Simon N, Mercier E, Ehrmann S, et al. Population pharmacokinetics of ceftriaxone in critically ill septic patients: a reappraisal. Br J Clin Pharmacol. 2011;72:758–67.
Perry TR, Schentag JJ. Clinical use of ceftriaxone: a pharmacokinetic–pharmacodynamic perspective on the impact of minimum inhibitory concentration and serum protein binding. Clin Pharmacokinet. 2001;40:685–94.
Kollef MH, Sherman G, Ward S, Fraser VJ. Inadequate antimicrobial treatment of infections: a risk factor for hospital mortality among critically ill patients. Chest. 1999;115:462–74.
Schleibinger M, Steinbach CL, Töpper C, Kratzer A, Liebchen U, Kees F, et al. Protein binding characteristics and pharmacokinetics of ceftriaxone in intensive care unit patients. Br J Clin Pharmacol. 2015;80:525–33.
Venturi L, Miranda M, Selmi V, Vitali L, Tani A, Margheri M, et al. Systemic sepsis exacerbates mild post-traumatic brain injury in the rat. J Neurotrauma. 2009;26:1547–56.
Hubbard WJ, Choudhry M, Schwacha MG, Kerby JD, Rue LW, Bland KI, et al. Cecal ligation and puncture. Shock. 2005;24:52–7.
Dejager L, Pinheiro I, Dejonckheere E, Libert C. Cecal ligation and puncture: the gold standard model for polymicrobial sepsis? Trends Microbiol. 2011;19:198–208.
B. Braun Medical Inc. Ceftriaxone and dextrose. 2015. http://www.drugs.com/pro/ceftriaxone-and-dextrose.html. Accessed 14 July 2016.
Adembri C, Selmi V, Vitali L, Tani A, Margheri M, Loriga B, et al. Minocycline but not tigecycline is neuroprotective and reduces the neuroinflammatory response induced by the superimposition of sepsis upon traumatic brain injury. Crit Care Med. 2014;42:e570–82.
Marini M, Ambrosini S, Sarchielli E, Thyrion GDZ, Bonaccini L, Vannelli GB, et al. Expression of sialic acids in human adult skeletal muscle tissue. Acta Histochem. 2014;116:926–35.
Adembri C, Sgambati E, Vitali L, Selmi V, Margheri M, Tani A, et al. Sepsis induces albuminuria and alterations in the glomerular filtration barrier: a morphofunctional study in the rat. Crit Care. 2011;15:R277.
Kodama Y, Kuranari M, Tsutsumi K, Kimoto H, Fujii I, Takeyama M. Prediction of unbound serum valproic acid concentration by using in vivo binding parameters. Ther Drug Monit. 1992;14:349–53.
Dumartin C, L'Hériteau F, Péfau M, Bertrand X, Jarno P, Boussat S, et al. Antibiotic use in 530 French hospitals: results from a surveillance network at hospital and ward levels in 2007. J Antimicrob Chemother. 2010;65:2028–36.
Magill SS, Edwards JR, Beldavs ZG, Dumyati G, Janelle SJ, Kainer M. Prevalence of antimicrobial use in US acute care hospitals, May–September 2011. JAMA. 2014;312:1438–46.
Sharma VD, Singla A, Chaudhary M, Taneja M. Population pharmacokinetics of fixed dose combination of ceftriaxone and sulbactam in healthy and infected subjects. AAPS Pharm Sci Tech. 2015;1:1–2.
Patel IH, Chen S, Parsonnet M, Hackman MR, Brooks MA, Konikoff J, et al. Pharmacokinetics of ceftriaxone in humans. Antimicrob Agents Chemother. 1981;20:634–41.
Joynt GM, Lipman J, Gomersall CD, Young RJ, Wong EL, Gin T. The pharmacokinetics of once-daily dosing of ceftriaxone in critically ill patients. J Antimicrob Chemother. 2001;47:421–9.
Udy AA, Varghese JM, Altukroni M, Briscoe S, McWhinney BC, Ungerer JP, et al. Subtherapeutic initial β-lactam concentrations in select critically ill patients: association between augmented renal clearance and low trough drug concentrations. Chest. 2012;142(1):30–9.
Fu BM, Tarbell JM. Mechano-sensing and transduction by endothelial surface glycocalyx: composition, structure, and function. Wiley Interdiscip Rev Syst Biol Med. 2013;5:381–90.
Arkill KP, Neal CR, Mantell JM, Michel CC, Qvortrup K, Rostgaard J, et al. 3D reconstruction of the glycocalyx structure in mammalian capillaries using electron tomography. Microcirculation. 2012;19:343–51.
Lodise TP, Butterfield J. Use of pharmacodynamic principles to inform β-lactam dosing: "S" does not always mean success. J Hosp Med. 2011;6(1):S16–23.
Hutschala D, Kinstner C, Skhirtladze K, Mayer-Helm BX, Zeitlinger M, Wisser W, et al. The impact of perioperative atelectasis on antibiotic penetration into lung tissue: an in vivo microdialysis study. Intensive Care Med. 2008;34:1827–34.
Galvão AM, Wanderley MSO, Silva RA, Filho CAM, Melo-Junior MR, Silva LA, et al. Intratracheal co-administration of antioxidants and ceftriaxone reduces pulmonary injury and mortality rate in an experimental model of sepsis. Respirology. 2014;19:1080–7.
Lodise TP. Penetration of meropenem into epithelial lining fluid of patients with ventilator-associated pneumonia. Antimicrob Agents Chemother. 2011;55(4):1606–10.
Craig WA. Protein-binding and the antimicrobial effects: methods for the determination of protein-binding. In: Lorian V, editor. Antibiotics in laboratory medicine. Baltimore: Williams and Walkins; 1991. p. 367–402.
Rebuelto M, Ambros L, Rubio M. Daily variations in ceftriaxone pharmacokinetics in rats. Antimicrob Agents Chemother. 2003;47:809–12.
European Committee on Antimicrobial Susceptibility Testing (EUCAST). Breakpoint tables for interpretation of MICs and zone diameters, version 5.0. 2015. http://www.eucast.org/fileadmin/src/media/PDFs/EUCAST_files/Breakpoint_tables/v_5.0_Breakpoint_Table_01.pdf. Accessed 14 July 2016.
VS, LV and CA and AN participated in study conception and design. VS, BL, LV, MC, ADF, ES and GC were involved in acquisition of data. LT contributed to analysis and interpretation of data. CA and BL drafted the manuscript. ARDG and AN were involved in critical revision of the manuscript for important intellectual content. All authors read and approved the final manuscript.
Please contact author for data requests.
Dr. Adembri received grant support from Cassa Risparmio Firenze, Firenze; served as board member for Merck Sharp & Dohme (MSD); and received support for article research from Ministero della Ricerca, Italy. Prof. De Gaudio served as board member for MSD and received support for article research from Ministero della Ricerca (#20087SM5HM_003). His institution received grant support from Ministero della Ricerca.
The experimental protocol was approved by the Committee for Animal Experimentation of the Ministry of Health, Rome, Italy. Animals were treated according to Italian and European Guidelines for Animal Care and Experimentation, DL 116/92, in agreement with the European Communities Council Directive guidelines (86/609/EEC). N. Protocol: 46847 issued on 21/12/2012.
This research was partly supported by a Universitary grant to CA.
Department of Health Sciences, Section of Anesthesiology and Intensive Care, University of Florence, Azienda Ospedaliero-Universitaria Careggi, Largo Brambilla 3, 50134, Florence, Italy
Valentina Selmi, Beatrice Loriga, Luca Vitali, Martina Carlucci, Alessandro Di Filippo, Giulio Carta, Angelo Raffaele De Gaudio & Chiara Adembri
Department of Biosciences and Territory, University of Molise, Contrada Fonte Lappone, 86090, Pesche, IS, Italy
Eleonora Sgambati
Department of Health Sciences, Section of Clinical Pharmacology and Oncology, University of Florence, Viale Pieraccini 6, 50139, Florence, Italy
Andrea Novelli
Department of Neurosciences, Psychology, Drug Research and Child Health, University of Florence, Florence, Italy
Lorenzo Tofani
Correspondence to Beatrice Loriga.
Selmi, V., Loriga, B., Vitali, L. et al. Changes in ceftriaxone pharmacokinetics/pharmacodynamics during the early phase of sepsis: a prospective, experimental study in the rat. J Transl Med 14, 316 (2016) doi:10.1186/s12967-016-1072-9
Pharmacokinetics/pharmacodynamics
Glycocalyx
Sialic acids
Ineffective Theory
Henry V's Fallacy
Westmoreland: Oh, that we now had here but one ten thousand of those men in England that do no work today.
Henry: … No, my fair cousin. If we are marked to die, we are enough to do our country loss; and if to live, the fewer men, the greater share of honor… I pray thee wish not one man more.
The King's statement here is not merely rhetoric (although his entire speech may be the best fictional motivational speech in the English language). It's a quantifiable claim of fact. Henry is claiming that the expected utility $\langle u\rangle$ of the coming battle, as measured by an individual soldier in his army, would be lowered by increasing the number of soldiers $N$. $$ \frac{\mathrm d \langle u \rangle}{\mathrm d N} < 0$$
He presents a simple argument. The battle may be won or lost, depending chiefly on many unknown factors beyond the control or knowledge of the English. The expected utility can be written as an integral over all possible values $x$ of those unknown factors (leaving the measure implicit for brevity). The value of $x$ describes everything about the world, except of course the number of English soldiers $N$. $$ \langle u \rangle = \int \mathrm d x \; u(x) $$
The utility depends primarily on whether the battle is won or lost. As a slight approximation, let's say the utility of defeat is uniformly $u_d$, and that of victory is uniformly $u_v$. The set of unknown possibilities can be split (assuming a deterministic universe) into those that result in victory and those that result in defeat. $$ \langle u \rangle = \int_{\mathrm{loss}} \mathrm d x \; u_d + \int_{\mathrm{win}} \mathrm d x\; u_v$$
So far, so good. Next Henry notes (correctly, if we ignore traitors) that, for a fixed outcome, the average soldier will be no sadder if fewer soldiers fought in the battle. The analysis is simple: either the outcome is a loss, in which case the soldier does not particularly care, or the outcome is a victory, and a victory is sweeter for having been fought against greater odds. $$\frac{\mathrm d u_{v,d}}{\mathrm d N} \le 0$$
The King concludes, Solomon-like, that the derivative of the expected value must be negative. $$ \frac{\mathrm d \langle u \rangle}{\mathrm d N} = \frac{\mathrm d}{\mathrm d N}\int_{\mathrm{loss}} \mathrm d x \; u_d + \frac{\mathrm d}{\mathrm d N} \int_{\mathrm{win}} \mathrm d x\; u_v = \int_{\mathrm{loss}} \mathrm d x \; \frac{\mathrm d u_d}{\mathrm d N} + \int_{\mathrm{win}} \mathrm d x\; \frac{\mathrm d u_v}{\mathrm d N} \le 0 $$
The King has made a classic, perhaps un-Solomon-like mistake: he neglected to account for the dependence of the domain of the integrals on $N$. By changing the number of soldiers and holding everything else fixed, the battle could be taken from a loss to a win — presumably a dramatic improvement in utility!
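Done carefully, the differentiation picks up exactly that neglected piece. A sketch (writing the domain dependence as a win probability $P_{\mathrm{win}}(N)$, so this is just the boundary term of the Leibniz integral rule): $$ \frac{\mathrm d \langle u \rangle}{\mathrm d N} = \int_{\mathrm{loss}} \mathrm d x \; \frac{\mathrm d u_d}{\mathrm d N} + \int_{\mathrm{win}} \mathrm d x\; \frac{\mathrm d u_v}{\mathrm d N} + \left( u_v - u_d \right) \frac{\mathrm d P_{\mathrm{win}}}{\mathrm d N} $$ The first two terms are Henry's, and are non-positive; the third is positive whenever more men make victory more likely, and nothing in the argument forces it to be small.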
As far as I know, the mistake of "neglecting the dependence of the domain of integration" has no specific name; it amounts to dropping the boundary term of the Leibniz integral rule. It's certainly rhetorically useful, though.
You see how much better textual analysis in high school English could have been? For my next post, I'll be sanctimoniously and anachronistically problematizing the Earl of Westmoreland's cavalier attitude towards the unemployment rate in his country.
Browsing Mathematics by Authors
2^n Bordered Constructions of Self-Dual codes from Group Rings
Dougherty, Steven; Gildea, Joe; Kaya, Abidin; University of Scranton; University of Chester; Sampoerna Academy (Elsevier, 2020-08-04)
Self-dual codes, which are codes that are equal to their orthogonal, are a widely studied family of codes. Various techniques involving circulant matrices and matrices from group rings have been used to construct such codes. Moreover, families of rings have been used, together with a Gray map, to construct binary self-dual codes. In this paper, we introduce a new bordered construction over group rings for self-dual codes by combining many of the previously used techniques. The purpose of this is to construct self-dual codes that were missed using classical construction techniques, by constructing self-dual codes with different automorphism groups. We apply the technique to codes over finite commutative Frobenius rings of characteristic 2 and several group rings and use these to construct interesting binary self-dual codes. In particular, we construct some extremal self-dual codes of length 64 and 68, including 30 new extremal self-dual codes of length 68.
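As a toy illustration of the circulant technique mentioned above (not the bordered group-ring construction introduced in this paper), the following Python sketch builds a binary generator matrix of pure double-circulant form [I | A] and verifies self-duality, which for this form reduces to A·Aᵀ = I over F₂. The parameters are chosen only to keep the example tiny.

```python
import numpy as np
from itertools import product

def circulant(row):
    """Binary circulant matrix whose i-th row is `row` shifted cyclically by i."""
    return np.array([np.roll(row, i) for i in range(len(row))])

n = 4
for bits in product([0, 1], repeat=n):
    A = circulant(np.array(bits))
    if np.array_equal((A @ A.T) % 2, np.eye(n, dtype=int)):
        G = np.hstack([np.eye(n, dtype=int), A])  # generator of a [2n, n] code
        assert not ((G @ G.T) % 2).any()          # G orthogonal to itself => self-dual
        print("first row:", bits)
        print(G)
        break
```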
Bordered Constructions of Self-Dual Codes from Group Rings and New Extremal Binary Self-Dual Codes
Dougherty, Steven; Gildea, Joe; Kaya, Abidin; Korban, Adrian; Tylyshchak, Alexander; Yildiz, Bahattin; University of Scranton; University of Chester; Sampoerna Academy; Uzhgorod State University; Northern Arizona University (Elsevier, 2019-02-22)
We introduce a bordered construction over group rings for self-dual codes. We apply the constructions over the binary field and the ring $\mathbb{F}_2+u\mathbb{F}_2$, using groups of orders 9, 15, 21, 25, 27, 33 and 35 to find extremal binary self-dual codes of lengths 20, 32, 40, 44, 52, 56, 64, 68, 88 and best known binary self-dual codes of length 72. In particular we obtain 41 new binary extremal self-dual codes of length 68 from groups of orders 15 and 33 using neighboring and extensions. All the numerical results are tabulated throughout the paper.
Composite Constructions of Self-Dual Codes from Group Rings and New Extremal Self-Dual Binary Codes of Length 68
Dougherty, Steven; Gildea, Joe; Kaya, Abidin; Korban, Adrian; University of Scranton; University of Chester; Sampoerna University ; University of Chester (American Institute of Mathematical Sciences, 2019-11-30)
We describe eight composite constructions from group rings where the orders of the groups are 4 and 8, which are then applied to find self-dual codes of length 16 over F4. These codes have binary images with parameters [32, 16, 8] or [32, 16, 6]. These are lifted to codes over F4 + uF4, to obtain codes with Gray images extremal self-dual binary codes of length 64. Finally, we use a building-up method over F2 + uF2 to obtain new extremal binary self-dual codes of length 68. We construct 11 new codes via the building-up method and 2 new codes by considering possible neighbors.
Composite Matrices from Group Rings, Composite G-Codes and Constructions of Self-Dual Codes
Dougherty, Steven; Gildea, Joe; Korban, Adrian; Kaya, Abidin; University of Scranton; University of Chester; Harmony School of Technology (Springer, 2021-05-19)
In this work, we define composite matrices which are derived from group rings. We extend the idea of G-codes to composite G-codes. We show that these codes are ideals in a group ring, where the ring is a finite commutative Frobenius ring and G is an arbitrary finite group. We prove that the dual of a composite G-code is also a composite G-code. We also define quasi-composite G-codes. Additionally, we study generator matrices which consist of the identity matrices and the composite matrices. Together with the generator matrices, the well-known extension method, the neighbour method and its generalization, we find extremal binary self-dual codes of length 68 with new weight enumerators for the rare parameters $\gamma = 7, 8$ and $9$. In particular, we find 49 new such codes. Moreover, we show that the codes we find are inaccessible from other constructions.
Extending an Established Isomorphism between Group Rings and a Subring of the n × n Matrices
Dougherty, Steven; Gildea, Joe; Korban, Adrian; University of Scranton; University of Chester
In this work, we extend an established isomorphism between group rings and a subring of the n × n matrices. This extension allows us to construct more complex matrices over the ring R. We present many interesting examples of complex matrices constructed directly from our extension. We also show that some of the matrices used in the literature before can be obtained by a direct application of our extended isomorphism.
G-codes over Formal Power Series Rings and Finite Chain Rings
Dougherty, Steven; Gildea, Joe; Korban, Adrian; University of Scranton; University of Chester (2020-02-29)
In this work, we define $G$-codes over the infinite ring $R_\infty$ as ideals in the group ring $R_\infty G$. We show that the dual of a $G$-code is again a $G$-code in this setting. We study the projections and lifts of $G$-codes over the finite chain rings and over the formal power series rings respectively. We extend known results of constructing $\gamma$-adic codes over $R_\infty$ to $\gamma$-adic $G$-codes over the same ring. We also study $G$-codes over principal ideal rings.
G-Codes, self-dual G-Codes and reversible G-Codes over the Ring Bj,k
Dougherty, Steven; Gildea, Joe; Korban, Adrian; Sahinkaya, Serap; Tarsus University; University of Chester (Springer, 2021-05-03)
In this work, we study a new family of rings, $B_{j,k}$, whose base field is the finite field $\mathbb{F}_{p^r}$. We study the structure of this family of rings and show that each member of the family is a commutative Frobenius ring. We define a Gray map for the new family of rings, study G-codes, self-dual G-codes, and reversible G-codes over this family. In particular, we show that the projection of a G-code over $B_{j,k}$ to a code over $B_{l,m}$ is also a G-code and the image under the Gray map of a self-dual G-code is also a self-dual G-code when the characteristic of the base field is 2. Moreover, we show that the image of a reversible G-code under the Gray map is also a reversible $G^{2^{j+k}}$-code. The Gray images of these codes are shown to have a rich automorphism group which arises from the algebraic structure of the rings and the groups. Finally, we show that quasi-G codes, which are the images of G-codes under the Gray map, are also $G^s$-codes for some $s$.
Group Rings, G-Codes and Constructions of Self-Dual and Formally Self-Dual Codes
Dougherty, Steven; Gildea, Joe; Taylor, Rhian; Tylyshchak, Alexander; University of Scranton; University of Chester; Uzhgorod State University (Springer, 2017-11-15)
We describe G-codes, which are codes that are ideals in a group ring, where the ring is a finite commutative Frobenius ring and G is an arbitrary finite group. We prove that the dual of a G-code is also a G-code. We give constructions of self-dual and formally self-dual codes in this setting and we improve the existing construction given in [13] by showing that one of the conditions given in the theorem is unnecessary and, moreover, it restricts the number of self-dual codes obtained by the construction. We show that several of the standard constructions of self-dual codes are found within our general framework. We prove that our constructed codes must have an automorphism group that contains G as a subgroup. We also prove that a common construction technique for producing self-dual codes cannot produce the putative [72, 36, 16] Type II code. Additionally, we show precisely which groups can be used to construct the extremal Type II codes over length 24 and 48. We define quasi-G codes and give a construction of these codes.
New Extremal Self-Dual Binary Codes of Length 68 via Composite Construction, F2 + uF2 Lifts, Extensions and Neighbors
Dougherty, Steven; Gildea, Joe; Korban, Adrian; Kaya, Abidin; University of Scranton; University of Chester; University of Chester; Sampoerna Academy; (Inderscience, 2020-02-29)
We describe a composite construction from group rings where the groups have orders 16 and 8. This construction is then applied to find the extremal binary self-dual codes with parameters [32, 16, 8] or [32, 16, 6]. We also extend this composite construction by expanding the search field which enables us to find more extremal binary self-dual codes with the above parameters and with different orders of automorphism groups. These codes are then lifted to F2 + uF2, to obtain extremal binary images of codes of length 64. Finally, we use the extension method and neighbor construction to obtain new extremal binary self-dual codes of length 68. As a result, we obtain 28 new codes of length 68 which were not known in the literature before.
New Self-Dual and Formally Self-Dual Codes from Group Ring Constructions
Dougherty, Steven; Gildea, Joe; Kaya, Abidin; Yildiz, Bahattin; University of Scranton; University of Chester; Sampoerna Academy; University of Chester; Northern Arizona University (American Institute of Mathematical Sciences, 2019-08-31)
In this work, we study construction methods for self-dual and formally self-dual codes from group rings, arising from the cyclic group, the dihedral group, the dicyclic group and the semi-dihedral group. Using these constructions over the rings $\mathbb{F}_2+u\mathbb{F}_2$ and $\mathbb{F}_4 + u\mathbb{F}_4$, we obtain 9 new extremal binary self-dual codes of length 68 and 25 even formally self-dual codes with parameters [72,36,14].
Quadruple Bordered Constructions of Self-Dual Codes from Group Rings
Dougherty, Steven; Gildea, Joe; Kaya, Abidin; University of Scranton; University of Chester; Sampoerna University (Springer Verlag, 2019-07-05)
In this paper, we introduce a new bordered construction for self-dual codes using group rings. We consider constructions over the binary field, the family of rings Rk and the ring F4 + uF4. We use groups of order 4, 12 and 20. We construct some extremal self-dual codes and non-extremal self-dual codes of length 16, 32, 48, 64 and 68. In particular, we construct 33 new extremal self-dual codes of length 68.
Mechanical response of cardiovascular stents under vascular dynamic bending
Jiang Xu1,
Jie Yang1,
Nan Huang2,
Christopher Uhl3,
Yihua Zhou4 &
Yaling Liu1,3,4
Background
Currently, the effect of vascular dynamic bending (VDB) has not been fully considered when studying the long-term mechanical properties of cardiovascular stents, as previous studies of stent mechanical properties have mostly focused on the effect of vascular pulsation (VP). A growing number of clinical reports suggest that VDB has a significant impact on stents.
In this paper, an explicit-implicit coupling simulation method was applied to analyze the mechanical responses of cardiovascular stents considering the effect of VDB. The effect of VP on stent mechanical properties was also studied and compared to the effect of VDB.
The results showed that dynamic bending deformation occurred in stents due to the effect of VDB. The effects of VDB and VP both resulted in alternating stress states in the stent, but the alternating stresses induced in the stent by VDB were almost three times larger than those induced by VP. Under VDB, the stress concentration mainly occurred in the bridging struts, and the maximal stress was located in the middle loops of the stent; under VP, by contrast, the stress was distributed uniformly in the stent. Stent fracture occurred more frequently as a result of VDB, with the predicted fracture position located in the bridging struts of the stent. These results are consistent with data reported in the clinical literature. The stress in the vessel under VDB was also higher than that caused by VP.
The results showed that the effect of VDB has a significant impact on the stent's stress distribution, its fatigue performance and the overall stress on the vessel, and thus it must be considered when analyzing the long-term mechanical properties of stents. Meanwhile, the results showed that explicit-implicit coupling simulation can be applied to analyze stent mechanical properties.
Vascular stenting, with its advantages of minimal trauma and effective treatment, has been widely used in the clinic [1]. The mechanical behavior of a coronary stent in the human body affects both the short-term and long-term therapeutic effects of vascular stenting [2, 3]. Thus, analyzing stent mechanical properties is of great importance for further improvement of stent design and effective treatment.
Currently, the finite element method (FEM) is widely used to study the mechanical properties of stents. FEM studies of stent solid mechanics fall mainly into two types: the first investigates the mechanical properties of the stent during free expansion without any external constraints; the other studies the mechanical properties of stents surrounded by blood vessels. The first type is mainly applied to obtain the stent's expansion pressure, axial contraction, flexibility, radial recoil rate, strain distribution at the stent ends, uniformity of expansion, the "dog bone" phenomenon, stent fatigue and so on [4–12]. Its main purpose is to provide guidance for stent design and optimization. The second type mainly studies the interaction between the stent and the vessel. It aims to understand the clinical complications of stent implantation [i.e. in-stent restenosis (ISR) and stent fracture (SF)], while also providing technical support for optimizing stent surgery [13–28].
Many clinical results have shown that stent fracture is one of the most important factors behind serious complications (i.e. ISR) after stent implantation. Most stent fractures occur in the mid-to-late service period, although a few occur immediately after implantation. Nakazawa et al. [29] found that stent complications occurred in 78 % of serious stent fracture cases. The statistical data from Umeda et al. [30] showed that the restenosis rate increased from 3 to 15 % after stent fracture. The restenosis rates before and after stent fracture reported by Aoki et al. [31] were 12.4 and 37.5 % respectively. Lee et al. [32] claimed that in-stent restenosis was more likely to happen after stent fracture, because the fractured stent promoted neointimal or smooth muscle hyperplasia.
The fatigue of coronary artery stents is mainly caused by contractions of the heart (systolic and diastolic function), whose main effects on the stent's mechanical properties are vascular pulsation and vascular movement. To date, the pulsation force of the vasculature has been the main load applied to study stent fatigue. Marrey et al. [33] investigated the development of micro-cracks in stents and predicted stent life under vascular pulse conditions based on fracture mechanics. Li et al. [34] analyzed the pulse pressure applied to stents by computational fluid dynamics; the computed pulsation force was converted into a stress, which was then used in conjunction with the S–N curve of the material to determine the expected lifespan of the stent in terms of cycles. Zhi et al. [35] studied the fatigue life of NiTi stents, simplifying the pulse pressure to a harmonic load on the inner surface of the stents. Others have also analyzed coronary stent fatigue performance: Weiß et al. [36] tested coronary stent fatigue performance based on the stent test standard, and H.A.F. Argente et al. [37] studied a balloon-expandable stent's fatigue life based on a two-scale continuum damage mechanics model. H. M. Hsiao et al. [38, 39] studied the fatigue properties of renal artery stents, in which the effects of VP and VDB were considered. Among these studies of cardiovascular stent mechanical performance, only the effect of VP was considered; the effect of vascular movement on stent fatigue performance has not been fully studied.
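The fatigue assessments cited above typically reduce the computed stress histories to a mean and an alternating component and compare them against a constant-life (Goodman) criterion. The sketch below shows that reduction for a single location on a strut; the stress values and the endurance/ultimate limits are placeholders for illustration, not properties taken from the cited studies or from Table 1.

```python
def goodman_safety_factor(s_max, s_min, s_endurance, s_ultimate):
    """Fatigue safety factor from the modified Goodman relation.

    s_max, s_min: peak stresses (MPa) of the loading cycle, e.g. the
    systolic and diastolic FE stress states at one element.
    """
    s_mean = 0.5 * (s_max + s_min)   # mean stress of the cycle
    s_alt = 0.5 * (s_max - s_min)    # alternating stress amplitude
    # Goodman line: s_alt/Se + s_mean/Su = 1/FS
    return 1.0 / (s_alt / s_endurance + s_mean / s_ultimate)

# Placeholder numbers: a strut cycling between diastolic and systolic states
fs = goodman_safety_factor(s_max=420.0, s_min=260.0,
                           s_endurance=290.0, s_ultimate=1000.0)
print(f"Goodman safety factor: {fs:.2f}")  # FS > 1 suggests a safe design point
```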
However, more and more clinical results have revealed that vascular movement has an important impact on stent fatigue. Lee et al. [40], Shaikh et al. [41] and Umeda et al. [30] suggested that deformations of the vasculature such as bending, tension, shear and torsion, caused by heart contractions, lead to alternating stress states in the stent. This alternating state of stress is one of the main reasons for stent fracture. Marrey et al. [33] demonstrated that during the alternation between the systolic and diastolic states of the heart, the stent undergoes reciprocating loading conditions, which promote stent fatigue and fracture. Doi et al. [42] found that during the transition from the systolic to the diastolic state of the heart, the twist of the coronary artery is one of the major causes of stent fatigue and fracture.
Thus, the vascular movement during the beating of the heart influences the stent's mechanical properties (i.e. fatigue and fracture). Among studies of stent mechanical properties, however, only the effect of VP caused by heart contractions has been considered, without any consideration of the effect of vascular movement. It is therefore necessary to determine whether the motion of the artery during heart contractions negatively impacts stent mechanical performance.
The movement of the coronary artery caused by heart contractions is complex, comprising rigid motion, dynamic twist and dynamic bending. The rigid motion of the coronary artery was neglected because it only changes the position of the stent and artery without changing the stress in either. The effect of vessel dynamic twist (VDT) was also neglected, and only the effect of VDB was considered, because anatomical studies have shown that coronary arteries undergo significant curvature changes throughout the cardiac cycle [43].
In this study, structural FE models were implemented to simulate stent expansion in a curved vessel and to investigate the effects of VDB on the long-term mechanical properties of the stent. The aim of this work was to provide a mechanical explanation for the increased risk of stent failure after implantation in the body, and to determine whether the motion of the artery during heart contractions negatively impacts stent mechanical performance. The effect of VP on stent mechanical properties was also studied and compared to the effect of VDB.
In order to compare the effects of VDB and VP on the stent's mechanical properties, two simulation models were built. The only differences between these two models were the boundary conditions. Details of the simulation models are described as follows.
Stent model
The stent was a typical open-cell design (an Endeavor™-like stent), with 13 loops along the axial direction and 16 struts in each loop. The stent had a length of 16 mm, an inner diameter of 1.78 mm, and a thickness of 0.08 mm. Eight-node linear brick elements with reduced integration and hourglass control (Abaqus element type C3D8R) were used to mesh the stent. The total number of elements was 76,104, based on mesh sensitivity studies. Figure 1 shows the geometry model and finite element mesh of the stent.
The stent geometry model and finite element mesh
The stent material was the L605 Co-Cr alloy. Its behavior was modeled by an elasto-plastic constitutive model with linear isotropic and kinematic hardening; the material properties are listed in Table 1. The same model was used in the work of Marrey et al. [33] to study stent fatigue performance under the effect of VP; for more details see Ref. [33].
Table 1 Material properties of Co alloy
The balloon model
The balloon had a length of 18 mm, a diameter of 3 mm, and a thickness of 0.05 mm in its initial, unfolded configuration. The balloon was modeled as an isotropic, linear-elastic material with a Young's modulus of 900 MPa, a Poisson's ratio of 0.30 and a density of 2000 kg/m3 [44].
The balloon was meshed using 4-node membrane elements with reduced integration and hourglass control (Abaqus element type M3D4R). The total number of elements for the balloon was 12,120 based on mesh sensitivity studies.
In this paper, the balloon was deflated by a negative pressure of 0.01 MPa applied on its inner surface, with the proximal and distal ends fully constrained, as shown in Fig. 2. The folded balloon was then inserted into the stent.
The balloon deflation and folding process. a is the original unfolded balloon, b is the folded balloon, c shows the details of the folded balloon
Coronary model
The coronary artery model was a simplified tube with a small curvature; the initial radius of curvature of the coronary artery was 30 mm [45]. The artery had a length of 30 mm (along the center line), an inner diameter of 3.0 mm, and a thickness of 0.9 mm. An asymmetric atherosclerotic plaque with a length of 14 mm was built into the model. The maximal grade of stenosis was set as 60 % of the normal vessel and was located in the middle section of the artery. Eight-node linear brick elements with reduced integration and hourglass control (Abaqus element type C3D8R) were used to mesh the coronary artery. The total number of elements used to mesh the artery was 136,360, based on mesh sensitivity studies, as shown in Fig. 3.
The coronary artery model. a is the structure and the finite element mesh of the coronary artery model. b is the section and the mesh details of the artery model. The legend indicates the different material regions considered in the model
The coronary artery consists of three layers: the intima, media, and adventitia, with thicknesses of 0.28, 0.32, and 0.30 mm, respectively. The mechanical behavior of the coronary artery was modeled using a homogeneous, isotropic, hyper-elastic constitutive model, based on the work of Holzapfel et al. [46]. The constitutive law was based on a reduced polynomial strain energy density function, U, of sixth order:
$$U = \sum_{i=1}^{6} C_{i0}\,(\bar{I}_{1} - 3)^{i}$$
Here, \(\bar{I}_{1}\) is the first invariant of the Cauchy–Green tensor:
$$\bar{I}_{1} = \bar{\lambda}_{1}^{2} + \bar{\lambda}_{2}^{2} + \bar{\lambda}_{3}^{2}, \qquad \bar{\lambda}_{i} = J^{-1/3}\,\lambda_{i}$$
where $\lambda_{i}$ are the principal stretches and $J$ is the total volume ratio.
The material parameters for each layer used in this model are listed in Table 2.
Table 2 Coefficients of the strain energy density function for each layer of the coronary artery
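To make the constitutive law concrete, here is a minimal Python sketch (not part of the paper's workflow) that evaluates the reduced polynomial strain energy for a given set of principal stretches. The function name and the coefficient values are ours; the actual layer-specific coefficients are those listed in Table 2.

```python
import numpy as np

def strain_energy(stretches, C):
    """Reduced sixth-order polynomial strain energy U = sum_i C_i0 (I1bar - 3)^i."""
    l1, l2, l3 = stretches
    J = l1 * l2 * l3                                   # total volume ratio
    lbar = np.array([l1, l2, l3]) * J ** (-1.0 / 3.0)  # deviatoric principal stretches
    I1bar = float(np.sum(lbar ** 2))                   # first invariant of the Cauchy-Green tensor
    return sum(c * (I1bar - 3.0) ** (i + 1) for i, c in enumerate(C))

# Placeholder coefficients C_10..C_60 (MPa), for illustration only.
C_layer = [0.007, 0.5, -1.0, 10.0, -7.0, 1.5]
print(strain_energy((1.2, 1.0 / 1.2, 1.0), C_layer))  # U for an isochoric stretch state
```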
The bending of the coronary artery was driven by the heart movement. This movement was simulated by changing the radius (R) of a sphere representing the heart surface, with the center of the sphere fixed at the coordinate origin. A harmonic curvature variation was adopted with the same parameters used by Weydahl et al. [45] and Moore et al. [47]. More specifically, R was expressed as a sinusoidal function:
$$R(t) = R_{0}\left(1 + \delta \sin\left(\frac{\pi t}{T}\right)\right)$$
where the mean sphere radius R0 was set as 30 mm, the parameter δ as 0.15, and T as 0.75 s.
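As a quick plausibility check (our own sketch, not part of the simulation), the radius history and the corresponding centerline curvature can be tabulated directly from this function:

```python
import numpy as np

R0, delta, T = 30.0, 0.15, 0.75   # mm, dimensionless, s (values from the text)

def sphere_radius(t):
    """Sphere radius R(t) driving the vessel dynamic bending."""
    return R0 * (1.0 + delta * np.sin(np.pi * t / T))

for t in np.linspace(0.0, T, 5):
    R = sphere_radius(t)
    print(f"t = {t:.3f} s   R = {R:5.2f} mm   curvature 1/R = {1.0 / R:.4f} 1/mm")
```

The radius peaks at 34.5 mm at half of the cardiac cycle, consistent with the maximal vessel displacement reported below occurring at that instant.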
Figure 4a shows the schematic diagram of the coronary artery motion caused by the heart beating. In this model the heart beating was simulated by changing the sphere radius. Figure 4b shows the simplified VDB model. Figure 4c shows the fluctuation of the mean sphere radius according to Eq. (3).
The schematic diagram of the coronary artery movement and the motion model. a is the schematic diagram of coronary artery motion caused by the heart; b is the simplified VDB model; c is the dynamic bending of the vessel
In this work, the simulation comprised two major mechanical processes: the stent expansion in a curved vessel, and the stent working in the curved vessel (the stent under the effect of VDB or VP). The expansion process was simulated using the explicit dynamics method (Abaqus/Explicit) and the second process using the implicit method (Abaqus/Standard). A data transfer process (mainly involving the stresses of the stent and artery after stent implantation) was carried out between the two mechanical processes; this is referred to as a typical explicit-implicit continuous analysis method.
In order to compare the effect of vascular dynamic curvature with that of vascular pulsation, two different simulation models were set up: Model-1 simulated the VDB effect and Model-2 the VP effect.
The simulation was divided into four steps in each model, as shown in Fig. 5.
Simulation steps and boundary conditions
The details of each step are defined below for Model-1.
Step1: Compressing the stent onto the folded balloon and bending the stent and folded balloon
This was a pre-treatment step that included two processes: crimping the stent onto the folded balloon, and bending the stent and folded balloon to match the initial curvature of the artery. For the crimping process, the balloon was constrained; the two ends of the stent were constrained in the circumferential direction and a radial displacement of 0.3 mm was applied on the outer surface of the stent to compress it. After the displacement was released, the outer diameter of the crimped stent was about 1.3 mm. For the bending process, the stent and balloon were left unconstrained; a rigid catheter was imposed on the outer surface of the stent and balloon, and their bending deformation was driven by the configuration change of the rigid catheter, set by displacement boundary conditions. This method for bending the stent and balloon had previously been applied to study stent mechanical properties by Auricchio et al. [48].
Step2: Expanding the stent in the curved vessel and releasing the pressure applied on the balloon
In this step, two processes were simulated: expansion of the stent in the curved vessel, followed by the release of the pressure. A load of 1.0 MPa was applied on the inner surface of the balloon for expansion. When the stent was completely expanded under the full load of 1.0 MPa, the load was gradually decreased from 1.0 to 0 MPa. The two ends of the balloon and coronary artery were constrained in all directions, and the two ends of the stent were constrained in the circumferential direction.
Due to the highly nonlinear compression/expansion of the balloon and stent in Step1 and Step2, a quasi-static analysis was performed using Abaqus/Explicit. In order to keep the ratio of kinetic energy to total strain energy under 5 % in each step, the simulation time was set as 5 s in Step1 and 13 s in Step2.
Step3: Transferring mechanical information from Abaqus/Explicit to Abaqus/Standard
When the load applied on the inner surface of the balloon was completely released, the mechanical information (i.e. displacement U and stress σ) of the stent and artery was transferred from the Abaqus/Explicit module to the Abaqus/Standard module. The residual stresses after stent expansion were thereby taken into account in analyzing the stent's mechanical properties under the effect of VDB or VP. The transferred information was set as the initial condition of Step4. In this step, the balloon was withdrawn from the simulation model. The boundary conditions of the stent and coronary artery were the same as in Step2.
Step4: Applying the VDB on the stent
A spatial displacement function, described in section "Coronary model", was applied to the outer surface of the artery to simulate its bending. The stent was left free in all directions.
For Model-2, the boundary conditions in Step1, Step2 and Step3 were the same as those in Model-1; the only difference occurred in Step4. There, a cyclic pressure ranging from 80 mmHg (diastolic) to 120 mmHg (systolic) was applied on the inner surface of the stent to simulate the pulsation of the vessel caused by the beating of the heart. The load was regarded as harmonic, with a period of 0.75 s, equal to the cardiac cycle; it was the same as the load used by Zhi et al. [35]. The two ends of the artery were constrained in all directions and the stent was left free in all directions.
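For reference, the cyclic load of Model-2 can be written down explicitly. The sketch below is ours; the exact waveform phase is an assumption, since the text only states that the load is harmonic with the period of the cardiac cycle. It converts the 80–120 mmHg range into the pressure (in MPa) applied on the stent's inner surface:

```python
import math

MMHG_TO_MPA = 133.322e-6    # 1 mmHg = 133.322 Pa
P_DIA, P_SYS = 80.0, 120.0  # mmHg, diastolic and systolic pressure
T = 0.75                    # s, cardiac cycle

def pulse_pressure(t):
    """Harmonic pressure on the stent inner surface (Model-2), in MPa."""
    mean = 0.5 * (P_SYS + P_DIA)       # 100 mmHg
    amplitude = 0.5 * (P_SYS - P_DIA)  # 20 mmHg
    return (mean + amplitude * math.sin(2.0 * math.pi * t / T)) * MMHG_TO_MPA
```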
In this work, a surface-to-surface contact algorithm was selected to model the nonlinear contacts in Step1, Step2 and Step4. The Coulomb friction model was used for the frictional contacts between the balloon and stent, the balloon and artery, and the stent and artery; this contact treatment has been used in many works on the interactions between stents and vessels [8, 11, 17, 19–21, 23, 25]. A friction coefficient of 0.1 was used throughout the simulation [19, 20].
The explicit-implicit coupling method
In this work, an explicit-implicit coupling method was used; the key step of this approach was coupling the Abaqus/Explicit module and the Abaqus/Standard module. This step mainly included two aspects:
The deformed configuration of the stent and artery at the end of the expansion process in the Explicit module was imported into the Standard module and set as the initial configuration for analyzing the stent response under VDB and VP.
The residual stresses of the stent and artery (the balloon had been removed from the simulation model) after stent implantation were set as the initial stress conditions of the implicit simulation. The plastic strain accumulated during expansion was omitted because, under the effect of VDB or VP, the stent remained in the elastic regime.
The predicted effective alternate stresses and effective mean stresses were then used to calculate a fatigue safety factor (FSF) distribution by utilizing the modified-Goodman relationship, as used by Marrey et al. [33]. The FSF, which essentially quantifies the proximity of the effective mean stresses and effective alternate stresses at any given numerical integration point to the limiting Goodman curve (Fig. 14), was determined through:
$$\frac{1}{FSF} = \frac{\sigma_{m}}{S_{u}} + \frac{\sigma_{a}}{S_{a}}$$
Here, $\sigma_{m}$ is the effective mean stress, $\sigma_{a}$ is the effective alternate stress, $S_{u}$ is the ultimate stress and $S_{a}$ is the endurance limit at zero mean stress; the latter two correspond to the ultimate tensile strength and endurance strength of the Co-Cr alloy, respectively, and the actual values used in the current analysis are listed in Table 1.
After collecting the values of the principal stresses ($\sigma_{1}$, $\sigma_{2}$ and $\sigma_{3}$) at each node, the effective mean and alternate stresses were calculated using the following equations:
$$\sigma_{m} = \frac{1}{\sqrt{2}}\sqrt{(\sigma_{1m} - \sigma_{2m})^{2} + (\sigma_{2m} - \sigma_{3m})^{2} + (\sigma_{3m} - \sigma_{1m})^{2}}$$
$$\sigma_{a} = \frac{1}{\sqrt{2}}\sqrt{(\sigma_{1a} - \sigma_{2a})^{2} + (\sigma_{2a} - \sigma_{3a})^{2} + (\sigma_{3a} - \sigma_{1a})^{2}}$$
where $\sigma_{1m}$, $\sigma_{2m}$ and $\sigma_{3m}$ are the principal mean stresses, while $\sigma_{1a}$, $\sigma_{2a}$ and $\sigma_{3a}$ are the principal alternate stresses. These values were used to build Goodman diagrams, which are commonly used to quantify the combined effect of mean and alternating stresses on the fatigue life of a material.
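The whole post-processing chain can be summarized in a few lines. The following Python sketch (ours) computes the effective mean and alternate stresses and the inverse FSF from the principal stresses at the two extremes of the load cycle; taking the principal mean and alternate stresses as the half-sum and half-difference of those extremes is our assumption, and the numerical values are placeholders rather than the Table 1 material data.

```python
import numpy as np

def effective_stress(p1, p2, p3):
    """Effective (von Mises-type) stress from three principal components."""
    return np.sqrt(((p1 - p2) ** 2 + (p2 - p3) ** 2 + (p3 - p1) ** 2) / 2.0)

def inverse_fsf(principal_max, principal_min, S_u, S_a):
    """Inverse fatigue safety factor via the modified Goodman relation."""
    pmax, pmin = np.asarray(principal_max), np.asarray(principal_min)
    sigma_m = effective_stress(*(0.5 * (pmax + pmin)))  # effective mean stress
    sigma_a = effective_stress(*(0.5 * (pmax - pmin)))  # effective alternate stress
    return sigma_m / S_u + sigma_a / S_a                # failure predicted if this reaches 1

# Illustrative principal stresses (MPa) and material limits; not the paper's data.
print(inverse_fsf([500.0, 120.0, 40.0], [260.0, 60.0, 10.0], S_u=1000.0, S_a=290.0))
```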
The validity of simulation results
To evaluate the simulation with the explicit method, the total kinetic and total internal energy of the system were used to judge whether the results were valid. Since a quasi-static simulation method was used, according to the Abaqus recommendations the results are acceptable when the kinetic energy accounts for less than 5 % of the internal energy [23, 26]. The internal and kinetic energy during stent expansion are shown as functions of time in Fig. 6. The ratio of kinetic energy to internal energy was less than 5 %; thus, the result of expanding the stent in the curved vessel by the explicit method was acceptable.
Time history of the internal and kinetic energy during the stent expansion process
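In practice this check amounts to comparing two energy histories. Below is a minimal sketch (ours; in Abaqus the relevant whole-model history outputs are ALLKE and ALLIE):

```python
def quasi_static_ok(kinetic, internal, limit=0.05):
    """True if the kinetic energy stays below `limit` times the internal energy."""
    return all(ke <= limit * ie for ke, ie in zip(kinetic, internal) if ie > 0.0)

# Made-up energy histories for illustration: the ratio stays below 5 %.
print(quasi_static_ok([0.1, 0.4, 0.6], [10.0, 40.0, 90.0]))  # True
```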
The deformation and stress of the vessel under the effect of VDB
The shape of the stent in the curved vessel and the stress distribution of the vessel at two instants during the stent expansion process are shown in Fig. 7. When the stent was completely expanded, the maximal stress of the artery was 439.77 kPa, localized in the zone close to the distal/proximal ends of the plaque. After the load was completely released, the peak stress in the curved vessel was 131.93 kPa and appeared in the vessel zone close to the ends of the plaque. During the stent expansion process, the maximal compression effect of the stent on the artery thus appeared at the two ends of the plaque.
The shape of the stent and stress distribution on the vessel during stent expansion. a is the stent completely expanded, b is the load completely released
The stress distribution in the curved vessel at four instants during the vascular dynamic curvature in a cardiac cycle is shown in Fig. 8. The initial mechanical state of the stent and artery was the same as that reported at the end of Step2 (as shown in Fig. 7b), which confirms that the mechanical information (the stress of the vessel after stent implantation) in Step2 was correctly transferred to the Standard analysis module in Step4 as the initial condition. At a quarter of the cardiac cycle, the maximal displacement of the curved vessel was 0.6 mm, while the maximal stress of the vessel was 287.03 kPa, localized at the two ends of the plaque. At half of the cardiac cycle, the displacement of the vessel reached a maximal value of 1.8 mm, with the maximal stress of the vessel being 615.68 kPa, localized in the middle section of the vessel. At 4/5 of the cardiac cycle, the maximal stress of the vessel was 175.91 kPa. The stress in the vessel during the vessel movement process was thus greater than that during the stent expansion process.
Stress distribution in the vessel during artery moving process. a is the initial time, b is a quarter of a cardiac cycle, c is half of a cardiac cycle, d is 4/5 of a cardiac cycle
The bending deformation of the stent under the effect of VDB
The map of the vascular displacement at half of a cardiac cycle is shown in Fig. 9a, from which the displacement at every loop of the stent was obtained. Along the axial direction of the stent, 13 typical positions, one at every loop, were marked out (shown in Fig. 9a as observation Points 1–13) to measure the stent's displacement during the vascular dynamic curvature process. The maximal displacement was 1.8 mm, which occurred at the middle loop of the stent, and the minimal displacement was about 1.1 mm, which occurred at the two end loops. The displacement of the stent was symmetrical about the middle position. The displacement of the stent consisted of two parts: rigid motion and bending deformation. The displacement difference between the 13 points can be used to describe the bending deformation of the stent (as shown in Fig. 9a). The maximal bending deformation of the stent was about 0.65 mm at half of a cardiac cycle. The bending deformation was symmetrical about the middle position, a typical simplified deformation pattern of a supported beam.
The displacement of the stent. a The map of the stent displacement at half of a cardiac cycle during the process of vascular motion. b The displacement of the stent in relation to time, at two positions
The displacement of Point 13 and Point 7 over the cardiac cycle is shown in Fig. 9b. The displacement difference between Point 13 and Point 7 was caused by the dynamic curvature of the stent during the process of VDB, and the maximal bending deformation of the stent occurred at half of a cardiac cycle. During the process of VDB, the stent was thus in a state of dynamic bending deformation and experienced alternating stress states, which can cause fatigue and failure of the stent.
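The decomposition into rigid motion and bending can be made explicit with the displacement values quoted above (a back-of-the-envelope sketch, ours):

```python
u_mid = 1.8   # mm, displacement of the middle loop (Point 7) at half of a cardiac cycle
u_end = 1.1   # mm, displacement of the end loops (Point 13)

bending = u_mid - u_end   # bending deflection left after removing the rigid motion
print(f"bending deflection ~ {bending:.2f} mm")  # ~0.7 mm vs. the reported 0.65 mm
```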
The stress of the stent during the process of VDB
Figure 10 shows the von Mises stress distribution of the stent at half of a cardiac cycle. The maximal stress of the stent was 589.92 MPa, which occurred in the bridge struts. In the axial direction, the stress was mainly distributed in the middle loops of the stent, corresponding to the maximal bending deformation; in the circumferential direction, the stress of the bridge struts was greater than that of the main struts. Thirteen observation points were marked on the stent: Points 1–7 were located at the bridge struts and Points 8–13 at the ends of the stent loops, in order to observe the stress of the stent over time.
The distribution of stress in the stent at half of a cardiac cycle
Figure 11 shows the stress at the stent's marked points as a function of time during the process of VDB. It shows that the stent bears a cyclic stress due to the vascular dynamic curvature, and that the stress varies approximately sinusoidally with time. Figure 11 shows the stress in the bridge struts (Points 1–7, marked in Fig. 10) and in the main struts (Points 8–13, marked in Fig. 10). In the axial direction, the stress mainly occurred in the middle loops where the bending deformation was larger (i.e. the stresses at Points 3, 4 and 5 were greater than those at Points 1, 2, 6 and 7). The stress mainly occurred in the bridge struts, with the maximal stress of 589.92 MPa located there, while the maximal stress in the main struts was only 380 MPa. The reason is that the stress of the stent is mainly caused by the bending deformation in the VDB process, and in a bending deformation state the stent's stress concentrates in the bridge struts. In terms of time, the maximal stress occurred at half of a cardiac cycle, when the maximal bending deformation occurred. This also indicates that failure frequently occurs in the bridge struts, because the greater stress causes more severe damage.
The stress of the stent in terms of time. Points 1–7 were located at the bridge struts and Points 8–13 were located in the ends of the stent loops
The safety of the stent
In the process of VDB, the stent experienced dynamic bending deformation, so the stress placed on the stent was an alternating stress; this is a main contributing factor to stent fatigue. As discussed in the introduction, VP is another main trigger for stent fatigue and has been considered in previous studies. The effect of VP on stent fatigue performance was therefore also studied and compared to the effect of VDB.
Figure 12 shows the distribution of the effective alternate stresses in the stent. Under the effect of VDB, the stent's maximal effective alternate stress in a cardiac cycle was 159.25 MPa, while it was only 59.37 MPa under VP. For the effect of VP, an effective alternate stress of 48.02 MPa was close to the result of 52 MPa obtained by Marrey et al. [33]. The maximal deformation of the stent was close to 6 % of the inner diameter of the vessel (here about 0.18 mm) under VP, while it was about 0.65 mm (as shown in Fig. 9) under VDB.
Stent effective alternate stresses distribution of a stent in a cardiac cycle. a Distribution with regards to the effect of VDB. b Distribution with regards to the effect of VP
The maximal effective alternate stresses occurred in the bridge struts under both VDB and VP. Under VDB, a concentration in the middle loops of the stent was observed, where the bending deformation was greater than at other positions, and the effective alternate stress in the bridge struts was higher than that in the main struts. Under VP, the effective alternate stress was uniformly distributed along the stent axis, and the effective alternate stresses in the bridge struts were similar to those in the main struts. This is because under VDB the stent undergoes dynamic bending deformation, so the bridge struts carry most of the load, whereas under VP the load is uniformly applied on the inner surface of the stent and the deformation is a state of dynamic expansion and recoil, so the effective alternate stresses in the bridge and main struts are similar.
Figure 13 shows the distribution of the effective mean stresses in the stent in a cardiac cycle. According to Fig. 13, the maximal effective mean stress of the stent was similar under VDB and VP. Under VDB, the effective mean stress mainly occurred in the middle loops of the stent along the axial direction and in the bridge struts along the circumferential direction. Under VP, the stress was uniformly distributed along the stent axis. These distributions were the same as those of the effective alternate stresses.
Effective mean stress distribution in the stent. a Distribution for VDB. b Distribution for VP
Figure 14 shows the Goodman diagrams of the stent under the two different fatigue loads [(a) under VDB and (b) under VP]. The FEM-calculated data lay below the Goodman failure line, indicating that the stent was able to sustain a fatigue life of 4 × 10^8 cycles under both fatigue loading conditions. These results agreed well with those reported in the literature [30, 33]. Comparing the two Goodman diagrams, the stent's FSF under VDB was lower than that under VP: as shown in Fig. 14, the FEM results for VDB were closer to the Goodman curve than those for VP.
Stent Goodman diagrams. a With regards to the VDB. b With regards to the VP
A contour plot of the inverse FSF is shown in Fig. 15. The maximal inverse FSF of the stent was 1.108 for VDB, while it was only 0.539 for VP (if the inverse FSF reaches 1, stent failure is predicted). This suggests that fatigue failure of the stent occurs more easily under VDB than under VP, mainly because the stent was under a higher alternating stress under VDB (an effective alternate stress of 159.25 MPa for VDB vs. 59.37 MPa for VP). Under both fatigue loading conditions, the predicted worst-case fatigue locations occurred in the bridge struts. This suggests that stent fracture is more likely to happen in the bridge struts, which matches the results reported in the literature [21], in which stent fracture was confirmed by clinical medical images, as well as those in the literature [49]. Therefore, the effect of VDB should be considered in stent fracture analysis models, and optimizing the bridge struts appears worthwhile, because the predicted position of stent fracture occurred mainly there in both the simulation and the clinical results.
Contour plot of the inverse FSF of the stent. a With regards to VDB. b With regards to VP
The impact of the stent on the vessel
Currently, the cause of ISR is not yet clearly understood. Clinically, it is generally accepted that vessel injury is caused when the stent presses against the vessel wall after implantation. Since such vessel injury is regarded as a trigger of ISR, predicting the stress imposed on the vessel by the stent is useful for studying ISR factors, and the FEM has been used as the basis for such studies [21–28]. Figure 16 shows the effective mean stress of the vessel during different stent service processes [(a) the process of stent implantation, (b) the effect of VP in a cardiac cycle, and (c) the effect of VDB in a cardiac cycle].
The effective mean stress distribution of vessels caused by the stent. a During stent expansion. b Distribution caused by VP. c Distribution caused by VDB
During the process of stent implantation, the maximal effective mean stress of the vessel caused by the stent was 248.77 kPa. Under VP, the maximal stress of the vessel caused by the stent was 355.35 kPa, and under VDB it was 497.55 kPa. The stress level of the vessel caused by the stent was thus higher under VDB than under VP or during stent expansion, suggesting that more injury occurs as a result of VDB. This indicates that the effect of VDB caused by the beating heart should be considered in analysis models of stent mechanical properties.
Discussion of the simulation method used
In previous studies of stent mechanical properties, the implicit simulation method was widely used in early works [4–7, 13–19]. However, only simplified models could be solved with this method, because it has difficulty with highly nonlinear problems [8]. Since 2008, the explicit method has been used more and more widely [9–12, 21–28] in studies of stent mechanical properties. Up to now, however, a combination of explicit and implicit methods had not been utilized to analyze stent mechanical properties.
In fact, the explicit method is a dynamic solution method whose results include inertial effects and kinetic energy, whereas a static analysis excludes them. If the explicit method is applied to solve a static (quasi-static) problem, it is necessary to reduce the effect of the kinetic energy of the system on the simulation results. A common strategy in previous studies was to increase the loading time in order to reduce the kinetic energy; however, the simulation time then increases with the loading time.
In addition, the stable time increment of the explicit method is very small once the stent has been meshed, so a long simulation time is needed. Alternatively, mass scaling can be used to artificially increase the stable time increment, but the kinetic energy of the system is then artificially increased as well. It is therefore important to find a balance between the kinetic energy of the system and the simulation time. The implicit method does not have this limitation, so using it saves simulation time and yields more reliable results for this stage than the explicit method.
Here, a novel way to simulate a stent's mechanical response was provided. The stent expansion process was simulated using the explicit dynamics method (Abaqus/Explicit), because the problem is highly nonlinear, while the stent mechanical properties under the effect of VDB and VP were simulated using the implicit method (Abaqus/Standard) to save computational time. A data transfer process was carried out between the two mechanical processes, which is referred to as a typical explicit-implicit continuous analysis method. From the simulation results, it was concluded that this method can be applied to study stent mechanical properties. The application of this explicit-implicit continuous analysis method to other problems remains to be explored in the future.
In this work, an explicit-implicit coupling finite element simulation was employed to investigate the coronary stent's mechanical properties, considering the effect of VDB after the stent was expanded in a curved vessel. Compared with the mechanical properties of a coronary stent under VP, the following conclusions can be drawn.
Under the effect of VDB, the predicted worst case of stent fracture was mainly located in the bridge struts of the middle loops, which matches well with results reported in the literature in which stent fracture was confirmed by medical images. Under the effect of VP, the predicted worst case of stent fracture was also located in the bridge struts, but not concentrated in the middle loops. The result for VDB was thus consistent with clinical observations, and the effect of VDB is a reason for the increased risk of long-term stent fracture.
Under the effect of VDB, the stent was in a bending deformation state, and its deformation was larger than under VP, which caused a higher alternating stress level (effective alternate stresses of 159.25 MPa under VDB vs. 59.37 MPa under VP). Fatigue failure of the stent therefore occurs more easily under VDB than under VP (the fatigue safety factor of the stent was lower for VDB than for VP, mainly because of the higher alternating stress induced by VDB). The stress in the vessel was also greater under VDB than under VP or during stent expansion, suggesting that more vascular injury occurs as a result of VDB. The effect of VDB during the beating of the heart thus negatively impacts stent mechanical performance, and it should therefore be considered in analysis models for coronary stents.
Meanwhile, an explicit-implicit coupling method was employed to analyze the long-term stent mechanical properties. The results showed that this method is feasible and effective for studying stent mechanical properties.
Garg S, Serruys PW. Coronary Stents: current status. J Am Coll Cardiol. 2010;56(10 Supplement):S1–42.
Sangiorgi G, Melzi G, Agostoni P, Cola C, Clementi F, Romitelli P, Virmani R, Colombo A. Engineering aspects of stents design and their translation into clinical practice. Ann Ist Super Sanita. 2007;43(1):89–100.
Katz G, Harchandani B, Shah B. Drug-eluting stents: the past, present, and future. Curr Atheroscler Rep. 2015;17(3):485.
Etave F, Finet G, Boivin M, Boyer J-C, Rioufol G, Thollet G. Mechanical properties of coronary stents determined by using finite element analysis. J Biomech. 2001;34(8):1065–75.
Migliavacca F, Petrini L, Colombo M, Auricchio F, Pietrabissa R. Mechanical behavior of coronary stents investigated through the finite element method. J Biomech. 2002;35(6):803–11.
Petrini L, Migliavacca F, Auricchio F, Dubini G. Numerical investigation of the intravascular coronary stent flexibility. J Biomech. 2004;37(4):495–501.
Wu W, Yang DZ, Qi M, Wang WQ. An FEA method to study flexibility of expanded coronary stents. J Mater Process Technol. 2007;184(1–3):447–50.
De Beule M, Mortier P, Carlier SG, Verhegghe B, Van Impe R, Verdonck P. Realistic finite element-based stent design: the impact of balloon folding. J Biomech. 2008;41(2):383–9.
Yang J, Huang N, Du QX, et al. Simulation of stent expansion by finite element method. In: 3rd International Conference on Bioinformatics and Biomedical Engineering (ICBBE). Beijing: IEEE; 2009. p. 1–4.
Li N, Zhang H, Ouyang H. Shape optimization of coronary artery stent based on a parametric model. Finite Elem Anal Des. 2009;45(6–7):468–75.
García A, Peña E, Martínez MA. Influence of geometrical parameters on radial force during self-expanding stent deployment. Application for a variable radial stiffness stent. J Mech Behav Biomed Mater. 2012;10:166–75.
Azaouzi M, Makradi A, Petit J, Belouettar S, Polit O. On the numerical investigation of cardiovascular balloon-expandable stent using finite element method. Comput Mater Sci. 2013;79:326–35.
David Chua SN, MacDonald BJ, Hashmi MSJ. Finite element simulation of slotted tube (stent) with the presence of plaque and artery by balloon expansion. J Mater Process Technol. 2004;155–156:1772–9.
Lally C, Dolan F, Prendergast PJ. Cardiovascular stent design and vessel stresses: a finite element analysis. J Biomech. 2005;38(8):1574–81.
Liang DK, Yang DZ, Qi M, Wang WQ. Finite element analysis of the implantation of a balloon-expandable stent in a stenosed artery. Int J Cardiol. 2005;104(3):314–8.
Wu W, Wang W-Q, Yang D-Z, Qi M. Stent expansion in curved vessel and their interactions: a finite element analysis. J Biomech. 2007;40(11):2580–5.
Pericevic I, Lally C, Toner D, Kelly DJ. The influence of plaque composition on underlying arterial wall stress during stent expansion: the case for lesion-specific stents. Med Eng Phys. 2009;31(4):428–33.
Hyun Kim J, Jin Kang T, Yu WR. Simulation of mechanical behavior of temperature-responsive braided stents made of shape memory polyurethanes. J Biomech. 2010;43(4):632–43.
Schievano S, Taylor AM. Patient specific finite element analysis results in more accurate prediction of stent fractures: application to percutaneous pulmonary valve implantation. J Biomech. 2010;43(4):687–93.
Zahedmanesh H, John Kelly D, Lally C. Simulation of a balloon expandable stent in a realistic coronary artery—Determination of the optimum modelling strategy. J Biomech. 2010;43(11):2126–32.
Gu L, Zhao S, Muttyam AK, Hammel JM. The relation between the arterial stress and restenosis rate after coronary stenting. J Med Devices. 2010;4(3):031005–031005.
De Bock S, Iannaccone F, De Santis G, De Beule M, Van Loo D, Devos D, Vermassen F, Segers P, Verhegghe B. Virtual evaluation of stent graft deployment: a validated modeling and simulation study. J Mech Behav Biomed Mater. 2012;13:129–39.
Hsiao H-M, Chiu Y-H, Lee K-H, Lin C-H. Computational modeling of effects of intravascular stent design on key mechanical and hemodynamic behavior. Comput Aided Des. 2012;44(8):757–65.
De Bock S, Iannaccone F, De Santis G, De Beule M, Mortier P, Verhegghe B, Segers P. Our capricious vessels: the influence of stent design and vessel geometry on the mechanics of intracranial aneurysm stent deployment. J Biomech. 2012;45(8):1353–9.
Morlacchi S, Pennati G, Petrini L, Dubini G, Migliavacca F. Influence of plaque calcifications on coronary stent fracture: a numerical fatigue life analysis including cardiac wall movement. J Biomech. 2014;47(4):899–907.
Huang Y, Teng Z, Sadat U, Graves MJ, Bennett MR, Gillard JH. The influence of computational strategy on prediction of mechanical stress in carotid atherosclerotic plaques: comparison of 2D structure-only, 3D structure-only, one-way and fully coupled fluid-structure interaction analyses. J Biomech. 2014;47(6):1465–71.
Li J, Zheng F, Qiu X, Wan P, Tan L, Yang K. Finite element analyses for optimization design of biodegradable magnesium alloy stent. Mater Sci Eng C. 2014;42:705–14.
Douglas GR, Phani AS, Gagnon J. Analyses and design of expansion mechanisms of balloon expandable vascular stents. J Biomech. 2014;47(6):1438–46.
Nakazawa G, Finn AV, Vorpahl M, Ladich E, Kutys R, Balazs I, Kolodgie FD, Virmani R. Incidence and predictors of drug-eluting stent fracture in human coronary artery: a pathologic analysis. J Am Coll Cardiol. 2009;54(21):1924–31.
Umeda H, Gochi T, Iwase M, Izawa H, Shimizu T, Ishiki R, Inagaki H, Toyama J, Yokota M, Murohara T. Frequency, predictors and outcome of stent fracture after sirolimus-eluting stent implantation. Int J Cardiol. 2009;133(3):321–6.
Aoki J, Nakazawa G, Tanabe K, Hoye A, Yamamoto H, Nakayama T, Onuma Y, Higashikuni Y, Otsuki S, Yagishita A, et al. Incidence and clinical impact of coronary stent fracture after sirolimus-eluting stent implantation. Catheter Cardiovasc Interv. 2007;69(3):380–6.
Lee S-H, Park J-S, Shin D-G, Kim Y-J, Hong G-R, Kim W, Shim B-S. Frequency of stent fracture as a cause of coronary restenosis after sirolimus-eluting stent implantation. Am J Cardiol. 2007;100(4):627–30.
Marrey RV, Burgermeister R, Grishaber RB, Ritchie RO. Fatigue and life prediction for cobalt-chromium stents: a fracture mechanics analysis. Biomaterials. 2006;27(9):1988–2000.
Li H, Zhang Y, Wang X. Analysis of stent expansion, blood flow and fatigue life based on finite element method. J Med Biomech. 2012;27(2):178–85.
Zhi Y, Shi X. Fatigue and fracture behavior of Nitinol cardiovascular stents. J Med Biomech. 2011;26(1):1–6.
Weiß S, Szymczak H, Meißner A. Fatigue and endurance of coronary stents. Materialwiss Werkstofftech. 2009;40(1–2):61–4.
Argente dos Santos HAF, Auricchio F, Conti M. Fatigue life assessment of cardiovascular balloon-expandable stents: a two-scale plasticity–damage model approach. J Mech Behav Biomed Mater. 2012;15:78–92.
Hsiao HM, Nikanorov A, Prabhu S, Razavi MK. Respiration-induced kidney motion on cobalt-chromium stent fatigue resistance. J Biomed Mater Res B Appl Biomater. 2009;91(2):508–16.
Hsiao HM, Prabhu S, Nikanorov A, Razavi M. Renal artery stent bending fatigue analysis. J Med Devices. 2007;1(2):113–8.
Lee MS, Jurewitz D, Aragon J, Forrester J, Makkar RR, Kar S. Stent fracture associated with drug-eluting stents: clinical characteristics and implications. Catheter Cardiovasc Interv. 2007;69(3):387–94.
Shaikh F, Maddikunta R, Djelmami-Hani M, Solis J, Allaqaband S, Bajwa T. Stent fracture, an incidental finding or a significant marker of clinical in-stent restenosis? Catheter Cardiovasc Interv. 2008;71(5):614–8.
Doi H, Maehara A, Mintz GS, Tsujita K, Kubo T, Castellanos C, Liu J, Yang J, Oviedo C, Aoki J, et al. Classification and potential mechanisms of intravascular ultrasound patterns of stent fracture. Am J Cardiol. 2009;103(6):818–23.
Liao R, Chen SYJ, Messenger JC, Groves BM, Burchenal JEB, Carroll JD. Four-dimensional analysis of cyclic changes in coronary artery shape. Catheter Cardiovasc Interv. 2002;55(3):344–54.
Gervaso F, Capelli C, Petrini L, Lattanzio S, Di Virgilio L, Migliavacca F. On the effects of different strategies in modelling balloon-expandable stenting by means of finite element method. J Biomech. 2008;41(6):1206–12.
Weydahl ES, Moore JE. Dynamic curvature strongly affects wall shear rates in a coronary artery bifurcation model. J Biomech. 2001;34(9):1189–96.
Holzapfel GA, Sommer G, Gasser CT, Regitnig P. Determination of layer-specific mechanical properties of human coronary arteries with nonatherosclerotic intimal thickening and related constitutive modeling. Am J Physiol Heart Circ Physiol. 2005;289(5):H2048–58.
Moore JE, Weydahl ES, Santamarina A. Frequency dependence of dynamic curvature effects on flow through coronary arteries. J Biomech Eng. 2001;123(2):129–33.
Auricchio F, Conti M, De Beule M, De Santis G, Verhegghe B. Carotid artery stenting simulation: from patient-specific images to finite element analysis. Med Eng Phys. 2011;33(3):281–9.
Wang J, Li J, Tang J, Lu S, Xi T. Long-term accelerating corrosion fatigue testing of coronary stents in vitro. J Biomed Eng. 2008;25(2):398–401.
JX participated in the design and development of the model, performed finite element simulations and drafted the article. JY participated in the design and development of the model and in drafting the article. NH performed finite element simulations, developed the simulation methods and drafted the article. CU contributed to the design of the simulation and heart motion models and to drafting the article. YZ performed finite element simulations and drafted the article. YL contributed to developing the simulation model, to drafting the article, and to its intellectual content. All authors read and approved the final manuscript.
This work was supported by a grant from the National High-Tech Program of China (2006AA02A139) and by NSFC grants (Nos. 11372257 and 1330031) for J. Y., by National Science Foundation Grant CBET-1113040 and National Institutes of Health Grant EB015105 for Y. L., and by the doctoral innovation fund of SWJTU for J. Xu.
School of Mechanics and Engineering, Southwest Jiaotong University, 610031, Chengdu, People's Republic of China
Jiang Xu, Jie Yang & Yaling Liu
School of Material Engineering and Science, Southwest Jiaotong University, 610031, Chengdu, People's Republic of China
Nan Huang
Bioengineering Program, Lehigh University, Bethlehem, PA, 18015, USA
Christopher Uhl & Yaling Liu
Department of Mechanical Engineering and Mechanics, Lehigh University, Bethlehem, PA, 18015, USA
Yihua Zhou & Yaling Liu
Jiang Xu
Christopher Uhl
Yihua Zhou
Yaling Liu
Correspondence to Jie Yang.
Xu, J., Yang, J., Huang, N. et al. Mechanical response of cardiovascular stents under vascular dynamic bending. BioMed Eng OnLine 15, 21 (2016). https://doi.org/10.1186/s12938-016-0135-8
Keywords: Cardiovascular stent; Vascular dynamic bending (VDB); Vascular pulsation (VP); Stent fatigue; Explicit-implicit coupling simulation method
Research Group Theory of Distributed Systems
Publications of the research group
J. Castenow, C. Kolb, C. Scheideler, in: Proceedings of the 21st International Conference on Distributed Computing and Networking (ICDCN), ACM, 2020
Always be Two Steps Ahead of Your Enemy - Maintaining a Routable Overlay under Massive Churn with an Almost Up-to-date Adversary
T. Götte, V.R. Vijayalakshmi, C. Scheideler, in: Proceedings of the 2019 IEEE 33rd International Parallel and Distributed Processing Symposium (IPDPS '19), IEEE, 2019
We investigate the maintenance of overlay networks under massive churn, i.e. nodes joining and leaving the network. We assume an adversary that may churn a constant fraction $\alpha n$ of nodes over the course of $\mathcal{O}(\log n)$ rounds. In particular, the adversary has almost up-to-date information about the network topology, as it can observe an only slightly outdated topology that is at least $2$ rounds old. Other than that, we only have the provably minimal restriction that new nodes can only join the network via nodes that have taken part in the network for at least one round. Our contributions are as follows: First, we show that it is impossible to maintain a connected topology if the adversary has up-to-date information about the nodes' connections. Further, we show that our restriction concerning joins is also necessary. As our main result we present an algorithm that constructs a new overlay, completely independent of all previous overlays, every $2$ rounds. Furthermore, each node sends and receives only $\mathcal{O}(\log^3 n)$ messages each round. As part of our solution we propose the Linearized DeBruijn Swarm (LDS), a highly churn-resistant overlay, which is maintained by the algorithm. However, our approaches can be transferred to a variety of classical P2P topologies where nodes are mapped into the $[0,1)$ interval.
Distributed Computation in Node-Capacitated Networks
J. Augustine, M. Ghaffari, R. Gmyr, K. Hinnenthal, F. Kuhn, J. Li, C. Scheideler, in: Proceedings of the 31st ACM Symposium on Parallelism in Algorithms and Architectures, ACM, 2019, pp. 69--79
Fast Distributed Algorithms for LP-Type Problems of Low Dimension
K. Hinnenthal, C. Scheideler, M. Struijs, in: 33rd International Symposium on Distributed Computing (DISC 2019), 2019
Self-Stabilizing Metric Graphs
R. Gmyr, J. Lefevre, C. Scheideler, Theory Comput. Syst. (2019), 63(2), pp. 177-199
Implementation and Evaluation of Authenticated Data Structures Using Intel SGX Enclaves
N. N., Master's thesis, 2019
On the Complexity of Local Graph Transformations
C. Scheideler, A. Setzer, in: Proceedings of the 46th International Colloquium on Automata, Languages, and Programming, Dagstuhl Publishing, 2019, pp. 150:1--150:14
We consider the problem of transforming a given graph G_s into a desired graph G_t by applying a minimum number of primitives from a particular set of local graph transformation primitives. These primitives are local in the sense that each node can apply them based on local knowledge and by affecting only its 1-neighborhood. Although the specific set of primitives we consider makes it possible to transform any (weakly) connected graph into any other (weakly) connected graph consisting of the same nodes, they cannot disconnect the graph or introduce new nodes into the graph, making them ideal in the context of supervised overlay network transformations. We prove that computing a minimum sequence of primitive applications (even centralized) for arbitrary G_s and G_t is NP-hard, which we conjecture to hold for any set of local graph transformation primitives satisfying the aforementioned properties. On the other hand, we show that this problem admits a polynomial time algorithm with a constant approximation ratio.
The 31st ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2019, Phoenix, AZ, USA, June 22-24, 2019
C. Scheideler, P. Berenbrink, ACM, 2019
Skeap & Seap: Scalable Distributed Priority Queues for Constant and Arbitrary Priorities
M. Feldmann, C. Scheideler, in: Proceedings of the 31st ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), ACM, 2019, pp. 287--296
We propose two protocols for distributed priority queues (denoted by 'heap' for simplicity in this paper) called SKEAP and SEAP. SKEAP realizes a distributed heap for a constant amount of priorities and SEAP one for an arbitrary amount. Both protocols build on an overlay, which induces an aggregation tree on which heap operations are aggregated in batches, ensuring that our protocols scale even for a high rate of incoming requests. As part of SEAP we provide a novel distributed protocol for the k-selection problem that runs in time O(log n) w.h.p. SKEAP guarantees sequential consistency for its heap operations, while SEAP guarantees serializability. SKEAP and SEAP provide logarithmic runtimes w.h.p. on all their operations, with SEAP having to use only O(log n)-bit messages.
A Loosely Self-stabilizing Protocol for Randomized Congestion Control with Logarithmic Memory
M. Feldmann, T. Götte, C. Scheideler, in: Proceedings of the 21st International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS), Springer, Cham, 2019, pp. 149-164
We consider congestion control in peer-to-peer distributed systems. The problem can be reduced to the following scenario: Consider a set $V$ of $n$ peers (called \emph{clients} in this paper) that want to send messages to a fixed common peer (called \emph{server} in this paper). We assume that each client $v \in V$ sends a message with probability $p(v) \in [0,1)$ and the server has a capacity of $\sigma \in \mathbb{N}$, i.e., it can receive at most $\sigma$ messages per round and excess messages are dropped. The server can modify these probabilities when clients send messages. Ideally, we wish to converge to a state with $\sum p(v) = \sigma$ and $p(v) = p(w)$ for all $v,w \in V$. We propose a \emph{loosely} self-stabilizing protocol with a slightly relaxed legitimate state. Our protocol lets the system converge from \emph{any} initial state to a state where $\sum p(v) \in \left[\sigma \pm \epsilon\right]$ and $|p(v)-p(w)| \in O(\frac{1}{n})$. This property is then maintained for $\Omega(n^{\mathfrak{c}})$ rounds in expectation. In particular, the initial client probabilities and server variables are not necessarily well-defined, i.e., they may have arbitrary values. Our protocol uses only $O(W + \log n)$ bits of memory, where $W$ is the length of node identifiers, making it very lightweight. Finally, we state a lower bound on the convergence time and see that our protocol performs asymptotically optimally (up to a polylogarithmic factor).
J. Castenow, C. Kolb, C. Scheideler, in: Proceedings of the 26th International Colloquium on Structural Information and Communication Complexity (SIROCCO), 2019, pp. 345-348
Breaking the $\tilde\Omega(\sqrt{n})$ Barrier: Fast Consensus under a Late Adversary
P. Robinson, C. Scheideler, A. Setzer, in: Proceedings of the 30th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2018
We study the consensus problem in a synchronous distributed system of n nodes under an adaptive adversary that has a slightly outdated view of the system and can block all incoming and outgoing communication of a constant fraction of the nodes in each round. Motivated by a result of Ben-Or and Bar-Joseph (1998), showing that any consensus algorithm that is resilient against a linear number of crash faults requires $\tilde \Omega(\sqrt n)$ rounds in an n-node network against an adaptive adversary, we consider a late adaptive adversary, who has full knowledge of the network state at the beginning of the previous round and unlimited computational power, but is oblivious to the current state of the nodes. Our main contributions are randomized distributed algorithms that achieve consensus with high probability among all except a small constant fraction of the nodes (i.e., "almost everywhere") against a late adaptive adversary who can block up to $\epsilon n$ nodes in each round, for a small constant $\epsilon > 0$. Our first protocol achieves binary almost-everywhere consensus and also guarantees a decision on the majority input value, thus ensuring plurality consensus. We also present an algorithm that achieves the same time complexity for multi-value consensus. Both of our algorithms succeed in $O(\log n)$ rounds with high probability, thus showing an exponential gap to the $\tilde\Omega(\sqrt n)$ lower bound of Ben-Or and Bar-Joseph for strongly adaptive crash-failure adversaries, which can be strengthened to $\Omega(n)$ when allowing the adversary to block nodes instead of permanently crashing them. Our algorithms are scalable to large systems, as each node contacts only an (amortized) constant number of peers in each communication round. We show that our algorithms are optimal up to constant (resp. sub-logarithmic) factors by proving that every almost-everywhere consensus protocol takes $\Omega(\log_d n)$ rounds in the worst case, where d is an upper bound on the number of communication requests initiated per node in each round. We complement our theoretical results with an experimental evaluation of the binary almost-everywhere consensus protocol, revealing a short convergence time even against an adversary blocking a large fraction of nodes.
Forming Tile Shapes with Simple Robots
R. Gmyr, K. Hinnenthal, I. Kostitsyna, F. Kuhn, D. Rudolph, C. Scheideler, T.F. Strothmann, in: Proceedings of the 24th International Conference on DNA Computing and Molecular Programming, Springer International Publishing, 2018, pp. 122-138
Relays: Towards a Link Layer for Robust and Secure Fog Computing
C. Scheideler, in: Proceedings of the 2018 Workshop on Theory and Practice for Integrated Cloud, Fog and Edge Computing Paradigms, TOPIC@PODC 2018, Egham, United Kingdom, July 27, 2018, 2018, pp. 1-2
Self-stabilizing Overlays for high-dimensional Monotonic Searchability
M. Feldmann, C. Kolb, C. Scheideler, in: Proceedings of the 20th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS), Springer, Cham, 2018, pp. 16-31
We extend the concept of monotonic searchability~\cite{DBLP:conf/opodis/ScheidelerSS15}~\cite{DBLP:conf/wdag/ScheidelerSS16} for self-stabilizing systems from one to multiple dimensions. A system is self-stabilizing if it can recover to a legitimate state from any initial illegal state. These kind of systems are most often used in distributed applications. Monotonic searchability provides guarantees when searching for nodes while the recovery process is going on. More precisely, if a search request started at some node $u$ succeeds in reaching its destination $v$, then all future search requests from $u$ to $v$ succeed as well. Although there already exists a self-stabilizing protocol for a two-dimensional topology~\cite{DBLP:journals/tcs/JacobRSS12} and an universal approach for monotonic searchability~\cite{DBLP:conf/wdag/ScheidelerSS16}, it is not clear how both of these concepts fit together effectively. The latter concept even comes with some restrictive assumptions on messages, which is not the case for our protocol. We propose a simple novel protocol for a self-stabilizing two-dimensional quadtree that satisfies monotonic searchability. Our protocol can easily be extended to higher dimensions and offers routing in $\mathcal O(\log n)$ hops for any search request.
Brief Announcement: Competitive Routing in Hybrid Communication Networks
D. Jung, C. Kolb, C. Scheideler, J. Sundermeier, in: Proceedings of the 30th on Symposium on Parallelism in Algorithms and Architectures (SPAA), ACM Press, 2018
Shape Recognition by a Finite Automaton Robot
R. Gmyr, K. Hinnenthal, I. Kostitsyna, F. Kuhn, D. Rudolph, C. Scheideler, in: 43rd International Symposium on Mathematical Foundations of Computer Science, MFCS 2018, August 27-31, 2018, Liverpool, UK, 2018, pp. 52:1-52:15
On Underlay-Aware Self-Stabilizing Overlay Networks
T. Götte, C. Scheideler, A. Setzer, in: Proceedings of the 20th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2018), Springer, 2018, pp. 50-64
We present a self-stabilizing protocol for an overlay network that constructs the Minimum Spanning Tree (MST) for an underlay that is modeled by a weighted tree. The weight of an overlay edge between two nodes is the weighted length of their shortest path in the tree. We rigorously prove that our protocol works correctly under asynchronous and non-FIFO message delivery. Further, the protocol stabilizes after O(N^2) asynchronous rounds where N is the number of nodes in the overlay.
A Peer-to-Peer based Cloud Storage supporting orthogonal Range Queries of arbitrary Dimension
M. Benter, T. Knollmann, F. Meyer auf der Heide, A. Setzer, J. Sundermeier, in: Proceedings of the 4th International Symposium on Algorithmic Aspects of Cloud Computing (ALGOCLOUD), 2018
We present a peer-to-peer network that supports the efficient processing of orthogonal range queries $R=\bigtimes_{i=1}^{d}[a_i,\,b_i]$ in a $d$-dimensional point space. The network is the same for each dimension, namely a distance halving network like the one introduced by Naor and Wieder (ACM TALG'07). We show how to execute such range queries using $\mathcal{O}\left(2^{d'}d\,\log m + d\,|R|\right)$ hops (and the same number of messages) in total. Here $[m]^d$ is the ground set, $|R|$ is the size and $d'$ the dimension of the queried range. Furthermore, if the peers form a distributed network, the query can be answered in $\mathcal{O}\left(d\,\log m + d\,\sum_{i=1}^{d}(b_i-a_i+1)\right)$ communication rounds. Our algorithms are based on a mapping of the Hilbert Curve through $[m]^d$ to the peers.
Self-Stabilizing Supervised Publish-Subscribe Systems
M. Feldmann, C. Kolb, C. Scheideler, T.F. Strothmann, in: Proceedings of the 32nd IEEE International Parallel & Distributed Processing Symposium (IPDPS), IEEE, 2018
In this paper we present two major results: First, we introduce the first self-stabilizing version of a supervised overlay network (as introduced by Kothapalli and Scheideler, ISPAN 2005) by presenting a self-stabilizing supervised skip ring. Second, we show how to use the self-stabilizing supervised skip ring to construct an efficient self-stabilizing publish-subscribe system. That is, in addition to stabilizing the overlay network, every subscriber of a topic will eventually know all of the publications that have been issued so far for that topic. The communication work needed to process a subscribe or unsubscribe operation is just a constant in a legitimate state, and the communication work of checking whether the system is still in a legitimate state is just a constant in expectation for the supervisor as well as any process in the system.
Relays: A New Approach for the Finite Departure Problem in Overlay Networks
C. Scheideler, A. Setzer, in: Proceedings of the 20th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2018), 2018
A fundamental problem for overlay networks is to safely exclude leaving nodes, i.e., the nodes requesting to leave the overlay network are excluded from it without affecting its connectivity. To rigorously study self-stabilizing solutions to this problem, the Finite Departure Problem (FDP) has been proposed [9]. In the FDP we are given a network of processes in an arbitrary state, and the goal is to eventually arrive at (and stay in) a state in which all leaving processes have irrevocably decided to leave the system while for all weakly-connected components in the initial overlay network, all staying processes in that component will still form a weakly connected component. In the standard interconnection model, the FDP is known to be unsolvable by local control protocols, so oracles have been investigated that allow the problem to be solved [9]. To avoid the use of oracles, we introduce a new interconnection model based on relays. Despite the relay model appearing to be rather restrictive, we show that it is universal, i.e., it is possible to transform any weakly-connected topology into any other weakly-connected topology, which is important for being a useful interconnection model for overlay networks. Apart from this, our model allows processes to grant and revoke access rights, which is why we believe it to be of interest beyond the scope of this paper. We show how to implement the relay layer in a self-stabilizing way and identify properties protocols need to satisfy so that the relay layer can recover while serving protocol requests.
Proceedings of the 30th ACM Symposium on Parallelism in Algorithms and Architectures
C. Scheideler, J.T. Fineman, ACM, 2018
Distributed Algorithms for Overlay Networks and Programmable Matter
R. Gmyr, Universität Paderborn, 2018
Skueue: A Scalable and Sequentially Consistent Distributed Queue
M. Feldmann, C. Scheideler, A. Setzer, in: Proceedings of the 32nd IEEE International Parallel & Distributed Processing Symposium (IPDPS), IEEE, 2018
We propose a distributed protocol for a queue, called Skueue, which spreads its data fairly onto multiple processes, avoiding bottlenecks in high-throughput scenarios. Skueue can be used in highly dynamic environments, through the addition of join and leave requests to the standard queue operations enqueue and dequeue. Furthermore, Skueue satisfies sequential consistency in the asynchronous message passing model. Scalability is achieved by aggregating multiple requests into a batch, which can then be processed in a distributed fashion without hurting the queue semantics. Operations in Skueue need a logarithmic number of rounds w.h.p. until they are processed, even under a high rate of incoming requests.
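The batching idea can be illustrated with a toy, centralized stand-in (all names hypothetical): a batch of requests is resolved using two counters only, so the per-request work is easy to spread over many processes; the sequential-consistency machinery of the actual protocol is not reproduced here:

```python
def resolve_batch(head, tail, batch):
    """Assign queue slots to a whole batch of requests at once: enqueues
    take consecutive slots after `tail`, dequeues consume slots from
    `head` in order. Toy sketch, not the distributed protocol."""
    assignments = []
    for op in batch:
        if op == "enqueue":
            assignments.append(("store at slot", tail))
            tail += 1
        elif head < tail:
            assignments.append(("serve from slot", head))
            head += 1
        else:
            assignments.append(("queue empty", None))
    return head, tail, assignments

head, tail = 0, 0
head, tail, out = resolve_batch(
    head, tail, ["enqueue", "enqueue", "dequeue", "dequeue", "dequeue"])
print(out)  # two stores, two serves, one empty
```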
On the runtime of universal coating for programmable matter
J. J. Daymude, Z. Derakhshandeh, R. Gmyr, A. Porter, A. W. Richa, C. Scheideler, T.F. Strothmann, Natural Computing (2018)(1), pp. 81--96
A Self-Stabilizing Hashed Patricia Trie
T. Knollmann, C. Scheideler, in: Proceedings of the 20th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS), Springer, Cham, 2018
While a lot of research in distributed computing has covered solutions for self-stabilizing computing and topologies, there is far less work on self-stabilization for distributed data structures. Considering crashing peers in peer-to-peer networks, it should not be taken for granted that a distributed data structure remains intact. In this work, we present a self-stabilizing protocol for a distributed data structure called the hashed Patricia Trie (Kniesburges and Scheideler WALCOM'11) that enables efficient prefix search on a set of keys. The data structure has a wide area of applications including string matching problems while offering low overhead and efficient operations when embedded on top of a distributed hash table. In particular, longest prefix matching for a key $x$ can be done in $\mathcal{O}(\log |x|)$ hash table read accesses. We show how to maintain the structure in a self-stabilizing way. Our protocol assures low overhead in a legal state and a total (asymptotically optimal) memory demand of $\Theta(d)$ bits, where $d$ is the number of bits needed for storing all keys.
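The binary search behind the $\mathcal{O}(\log |x|)$ bound can be sketched as follows (a memory-wasteful stand-in that stores every prefix in a hash set, unlike the $\Theta(d)$-bit data structure itself; membership of prefixes is monotone in the prefix length, which is what makes the binary search sound):

```python
def build_prefix_table(keys):
    """Toy stand-in for the hashed Patricia trie: store every prefix of
    every key in a hash set. The real structure is far more compact."""
    table = set()
    for k in keys:
        for i in range(len(k) + 1):
            table.add(k[:i])
    return table

def longest_prefix(table, x):
    """Binary search over prefix lengths, using O(log |x|) hash reads.
    Sound because every prefix of a stored prefix is itself stored."""
    lo, hi = 0, len(x)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if x[:mid] in table:
            lo = mid
        else:
            hi = mid - 1
    return x[:lo]

t = build_prefix_table(["0101", "0110", "1000"])
print(longest_prefix(t, "0111"))  # -> "011"
```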
Monotonic Searchability in the Self-Stabilizing Protocols Build-List and Build-Multilist with Nodes Leaving the System
M. Jochmaring, Bachelor's thesis, 2018
Provably Anonymous Communication Based on Trusted Execution Environments
J. Blömer, J. Bobolz, C. Scheideler, A. Setzer, 2018
In this paper, we investigate the use of trusted execution environments (TEEs, such as Intel's SGX) for an anonymous communication infrastructure over untrusted networks. For this, we present the general idea of exploiting trusted execution environments for the purpose of anonymous communication, including a continuous-time security framework that models strong anonymity guarantees in the presence of an adversary that observes all network traffic and can adaptively corrupt a constant fraction of participating nodes. In our framework, a participating node can generate a number of unlinkable pseudonyms. Messages are sent from and to pseudonyms, allowing both senders and receivers of messages to remain anonymous. We introduce a concrete construction, which shows viability of our TEE-based approach to anonymous communication. The construction draws from techniques from cryptography and overlay networks. Our techniques are very general and can be used as a basis for future constructions with similar goals.
C. Scheideler, Theor. Comput. Sci. (2018), 751, pp. 1
Competitive Routing in Hybrid Communication Networks
D. Jung, C. Kolb, C. Scheideler, J. Sundermeier, in: Proceedings of the 14th International Symposium on Algorithms and Experiments for Wireless Networks (ALGOSENSORS) , Springer, 2018
Routing is a challenging problem for wireless ad hoc networks, especially when the nodes are mobile and spread so widely that in most cases multiple hops are needed to route a message from one node to another. In fact, it is known that any online routing protocol has a poor performance in the worst case, in the sense that there is a distribution of nodes resulting in bad routing paths for that protocol, even if the nodes know their geographic positions and the geographic position of the destination of a message is known. The reason for that is that radio holes in the ad hoc network may require messages to take long detours in order to get to a destination, which are hard to find in an online fashion. In this paper, we assume that the wireless ad hoc network can make limited use of long-range links provided by a global communication infrastructure like a cellular infrastructure or a satellite in order to compute an abstraction of the wireless ad hoc network that allows the messages to be sent along near-shortest paths in the ad hoc network. We present distributed algorithms that compute an abstraction of the ad hoc network in $\mathcal{O}\left(\log ^2 n\right)$ time using long-range links, which results in $c$-competitive routing paths between any two nodes of the ad hoc network for some constant $c$ if the convex hulls of the radio holes do not intersect. We also show that the storage needed for the abstraction just depends on the number and size of the radio holes in the wireless ad hoc network and is independent of the total number of nodes, and this information just has to be known to a few nodes for the routing to work.
Distributed Monitoring of Network Properties: The Power of Hybrid Networks
R. Gmyr, K. Hinnenthal, C. Scheideler, C. Sohler, in: Proceedings of the 44th International Colloquium on Automata, Languages, and Programming (ICALP), 2017, pp. 137:1--137:15
We initiate the study of network monitoring algorithms in a class of hybrid networks in which the nodes are connected by an external network and an internal network (short forms for externally and internally controlled networks). While the external network lies outside of the control of the nodes (or, in our case, the monitoring protocol running in them) and might be exposed to continuous changes, the internal network is fully under the control of the nodes. As an example, consider a group of users with mobile devices having access to the cell phone infrastructure. While the network formed by the WiFi connections of the devices is an external network (as its structure is not necessarily under the control of the monitoring protocol), the connections between the devices via the cell phone infrastructure represent an internal network (as it can be controlled by the monitoring protocol). Our goal is to continuously monitor properties of the external network with the help of the internal network. We present scalable distributed algorithms that efficiently monitor the number of edges, the average node degree, the clustering coefficient, bipartiteness, and the weight of a minimum spanning tree. Their performance bounds demonstrate that monitoring the external network state with the help of an internal network can be done much more efficiently than just using the external network, as is usually done in the literature.
Algorithmic Foundations of Programmable Matter (Dagstuhl Seminar 16271)
S. P. Fekete, A. W. Richa, K. Römer, C. Scheideler, SIGACT News (2017)(2), pp. 87--94
Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2017, Washington DC, USA, July 24-26, 2017
C. Scheideler, M. Taghi Hajiaghayi, 2017
Self-* Algorithms for Distributed Systems
T.F. Strothmann, Universität Paderborn, 2017
Self-Stabilizing Spanners for Tree Metrics
T. Götte, Master's thesis, Universität Paderborn, 2017
Sade: competitive MAC under adversarial SINR
A. Ogierman, A. Richa, C. Scheideler, S. Schmid, J. Zhang, Distributed Computing (2017), 31(3), pp. 241-254
This paper considers the problem of how to efficiently share a wireless medium that is subject to harsh external interference or even jamming. So far, this problem is understood only in simplistic single-hop or unit disk graph models. In this paper we initiate the study of MAC protocols for the SINR interference model (a.k.a. physical model). This paper makes two contributions. First, we introduce a new adversarial SINR model which captures a wide range of interference phenomena. Concretely, we consider a powerful, adaptive adversary which can jam nodes at arbitrary times and which is only limited by some energy budget. Our second contribution is a distributed MAC protocol called Sade which provably achieves a constant competitive throughput in this environment: we show that, with high probability, the protocol ensures that a constant fraction of the non-blocked time periods is used for successful transmissions.
A Self-Stabilizing General De Bruijn Graph
M. Feldmann, C. Scheideler, in: Proceedings of the 19th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS), Springer, Cham, 2017, pp. 250-264
Searching for other participants is one of the most important operations in a distributed system. We are interested in topologies in which it is possible to route a packet in a fixed number of hops until it arrives at its destination. Given a constant $d$, this paper introduces a new self-stabilizing protocol for the $q$-ary $d$-dimensional de Bruijn graph ($q = \sqrt[d]{n}$) that is able to route any search request in at most $d$ hops w.h.p., while significantly lowering the node degree compared to the clique: We require nodes to have a degree of $\mathcal O(\sqrt[d]{n})$, which is asymptotically optimal for a fixed diameter $d$. The protocol keeps the expected number of edge redirections per node in $\mathcal O(\sqrt[d]{n})$ when the number of nodes in the system increases by a factor of $2^d$. The number of messages that are periodically sent out by nodes is constant.
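The $d$-hop routing can be illustrated by the classic de Bruijn label-shifting rule (a sketch of the routing idea only, not the self-stabilizing protocol):

```python
def de_bruijn_route(u, v):
    """Route from node label u to node label v in a q-ary d-dimensional
    de Bruijn graph by shifting in one digit of v per hop; arrives in
    exactly d hops. Labels are tuples of d base-q digits."""
    path = [u]
    cur = u
    for digit in v:
        cur = cur[1:] + (digit,)  # de Bruijn edge: left-shift, append digit
        path.append(cur)
    return path

# q = 3, d = 2: route (0, 2) -> (1, 1) in 2 hops.
print(de_bruijn_route((0, 2), (1, 1)))  # [(0, 2), (2, 1), (1, 1)]
```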
Universal coating for programmable matter
Z. Derakhshandeh, R. Gmyr, A. W. Richa, C. Scheideler, T.F. Strothmann, Theor. Comput. Sci. (2017), pp. 56--68
Routing in Hybrid Communication Networks with Holes - Considering Bounding Boxes as Hole Abstractions
J. Sundermeier, Master's thesis, Universität Paderborn, 2017
MultiSkipList: A Self-stabilizing Overlay Network with Monotonic Searchability maintained
L. Luo, Master's thesis, Universität Paderborn, 2017
Improved Leader Election for Self-organizing Programmable Matter
J. J. Daymude, R. Gmyr, A. W. Richa, C. Scheideler, T.F. Strothmann, in: Algorithms for Sensor Systems - 13th International Symposium on Algorithms and Experiments for Wireless Sensor Networks, ALGOSENSORS 2017, Vienna, Austria, September 7-8, 2017, Revised Selected Papers, 2017, pp. 127--140
Towards a universal approach for the finite departure problem in overlay networks
A. Koutsopoulos, C. Scheideler, T.F. Strothmann, Inf. Comput. (2017), pp. 408--424
A Self-Stabilizing Protocol for Graphs of Diameter Two
T. Knollmann, Master's thesis, Universität Paderborn, 2017
R. Gmyr, J. Lefèvre, C. Scheideler, in: Proceedings of the 18th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS), 2016, pp. 248--262
We present a self-stabilizing algorithm for overlay networks that, for an arbitrary metric given by a distance oracle, constructs the graph representing that metric. The graph representing a metric is the unique minimal undirected graph such that for any pair of nodes the length of a shortest path between the nodes corresponds to the distance between the nodes according to the metric. The algorithm works under both an asynchronous and a synchronous daemon. In the synchronous case, the algorithm stabilizes in time O(n) and it is almost silent in that after stabilization a node sends and receives a constant number of messages per round.
Universal Shape Formation for Programmable Matter
Z. Derakhshandeh, R. Gmyr, A. W. Richa, C. Scheideler, T.F. Strothmann, in: Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2016, Asilomar State Beach/Pacific Grove, CA, USA, July 11-13, 2016, ACM, 2016, pp. 289--299
Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2016, Asilomar State Beach/Pacific Grove, CA, USA, July 11-13, 2016
C. Scheideler, S. Gilbert, 2016
Jamming-Resistant MAC Protocols for Wireless Networks
A. W. Richa, C. Scheideler, in: Encyclopedia of Algorithms, 2016, pp. 999--1002
Churn- and DoS-resistant Overlay Networks Based on Network Reconfiguration
M. Drees, R. Gmyr, C. Scheideler, in: Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2016, pp. 417--427
We present three robust overlay networks: First, we present a network that organizes the nodes into an expander and is resistant to even massive adversarial churn. Second, we develop a network based on the hypercube that maintains connectivity under adversarial DoS-attacks. For the DoS-attacks we use the notion of an Omega(log log n)-late adversary which only has access to topological information that is at least Omega(log log n) rounds old. Finally, we develop a network that combines both churn- and DoS-resistance. The networks gain their robustness through constant network reconfiguration, i.e., the topology of the networks changes constantly. Our reconfiguration algorithms are based on node sampling primitives for expanders and hypercubes that allow each node to sample a logarithmic number of nodes uniformly at random in O(log log n) communication rounds. These primitives are specific to overlay networks and their optimal runtime represents an exponential improvement over known techniques. Our results have a wide range of applications, for example in the area of scalable and robust peer-to-peer systems.
Systematic evaluation of peer-to-peer systems using PeerfactSim.KOM
M. Feldotto, K. Graffi, Concurrency and Computation: Practice and Experience (2016), 28(5), pp. 1655-1677
Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judge the performance and costs of the individual protocols in large-scale networks. In order to support this work, we present the peer-to-peer system simulator PeerfactSim.KOM, which we extended over the last years. PeerfactSim.KOM comes with an extensive layer model to support various facets and protocols of peer-to-peer networking. In this article, we describe PeerfactSim.KOM and show how it can be used for detailed measurements of large-scale peer-to-peer networks. We enhanced PeerfactSim.KOM with a fine-grained analyzer concept, with exhaustive automated measurements and gnuplot generators as well as a coordination control to evaluate sets of experiment setups in parallel. Thus, by configuring all experiments and protocols only once and starting the simulator, all desired measurements are performed, analyzed, evaluated, and combined, resulting in a holistic environment for the comparative evaluation of peer-to-peer systems. An immediate comparison of different configurations and overlays under different aspects is possible directly after the execution without any manual post-processing.
Insider-resistant Distributed Storage Systems
M. Eikel, 2016
Aggregation in Overlay Networks
K. Hinnenthal, Master's thesis, Universität Paderborn, 2016
We consider the problem of aggregation in overlay networks. We use a synchronous time model in which each node has polylogarithmic memory and can send at most a polylogarithmic number of messages per round. We investigate how to quickly compute the result of an aggregate function $f$ over elements that are distributed among the nodes of the network such that the result is eventually known by a selected root node. We show how to compute distributive aggregate functions such as SUM, MAX, and OR in time $O(\log n / \log\log n)$ using a tree that is created in a pre-processing phase. If only a polylogarithmic number of data items need to be aggregated, we show how to compute the result in time $O(\sqrt{\log n / \log\log n})$. Furthermore, we show how to compute holistic aggregate functions such as DISTINCT, SMALLEST(k) and MODE(k) in time $O(\log n / \log\log n)$. Finally, we show a lower bound of $\Omega(\sqrt{\log n / \log\log n})$ for deterministic algorithms that compute any of the aggregate functions in the scope of the thesis.
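As a minimal illustration of aggregating a distributive function over a precomputed tree (a centralized recursion standing in for the round-by-round distributed computation; not the thesis's $O(\log n / \log\log n)$ construction):

```python
def aggregate(tree, values, root, op):
    """Bottom-up aggregation of a distributive function (SUM, MAX, OR, ...)
    over a rooted tree given as {node: [children]}. In the distributed
    setting each tree level costs one communication round; here we simply
    recurse."""
    acc = values[root]
    for child in tree.get(root, []):
        acc = op(acc, aggregate(tree, values, child, op))
    return acc

tree = {0: [1, 2], 1: [3, 4]}
values = {0: 5, 1: 1, 2: 7, 3: 2, 4: 9}
print(aggregate(tree, values, 0, max))                 # MAX -> 9
print(aggregate(tree, values, 0, lambda a, b: a + b))  # SUM -> 24
```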
SplayNet: Towards Locally Self-Adjusting Networks
S. Schmid, C. Avin, C. Scheideler, M. Borokhovich, B. Haeupler, Z. Lotker, IEEE/ACM Trans. Netw. (2016)(3), pp. 1421--1433
The Impact of Communication Patterns on Distributed Self-Adjusting Binary Search Tree
T.F. Strothmann, Journal of Graph Algorithms and Applications (2016), 20(1), pp. 79-100
This paper introduces the problem of communication pattern adaptation for a distributed self-adjusting binary search tree. We propose a simple local algorithm that is closely related to the over thirty-year-old idea of splay trees and evaluate its adaptation performance in the distributed scenario if different communication patterns are provided. To do so, the process of self-adjustment is modeled similarly to a basic network creation game in which the nodes want to communicate with only a certain subset of all nodes. We show that, in general, the game (i.e., the process of local adjustments) does not converge, and that convergence is related to certain structures of the communication interests, which we call conflicts. We classify conflicts and show that for two communication scenarios in which convergence is guaranteed, the self-adjusting tree performs well. Furthermore, we investigate the different classes of conflicts separately and show that, for a certain class of conflicts, the performance of the tree network is asymptotically as good as the performance for converging instances. However, for the other conflict classes, a distributed self-adjusting binary search tree adapts poorly.
Towards a Universal Approach for Monotonic Searchability in Self-stabilizing Overlay Networks
C. Scheideler, A. Setzer, T.F. Strothmann, in: Proceedings of the 30th International Symposium on Distributed Computing (DISC), 2016, pp. 71--84
For overlay networks, the ability to recover from a variety of problems like membership changes or faults is a key element to preserve their functionality. In recent years, various self-stabilizing overlay networks have been proposed that have the advantage of being able to recover from any illegal state. However, the vast majority of these networks cannot give any guarantees on their functionality while the recovery process is going on. We are especially interested in searchability, i.e., the functionality that search messages for a specific identifier are answered successfully if a node with that identifier exists in the network. We investigate overlay networks that are not only self-stabilizing but that also ensure that monotonic searchability is maintained while the recovery process is going on, as long as there are no corrupted messages in the system. More precisely, once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. Monotonic searchability was recently introduced in OPODIS 2015, in which the authors provide a solution for a simple line topology. We present the first universal approach to maintain monotonic searchability that is applicable to a wide range of topologies. As the basis for our approach, we introduce a set of primitives for manipulating overlay networks that allows us to maintain searchability and show how existing protocols can be transformed to use these primitives. We complement this result with a generic search protocol that together with the use of our primitives guarantees monotonic searchability. As an additional feature, searching existing nodes with the generic search protocol is as fast as searching a node with any other fixed routing protocol once the topology has stabilized.
Z. Derakhshandeh, R. Gmyr, A. Porter, A. W. Richa, C. Scheideler, T.F. Strothmann, in: DNA Computing and Molecular Programming - 22nd International Conference, DNA 22, Munich, Germany, September 4-8, 2016, Proceedings, 2016, pp. 148--164
An Algorithmic Framework for Shape Formation Problems in Self-Organizing Particle Systems
Z. Derakhshandeh, R. Gmyr, A. W. Richa, C. Scheideler, T.F. Strothmann, in: Proceedings of the Second Annual International Conference on Nanoscale Computing and Communication, NANOCOM' 15, Boston, MA, USA, September 21-22, 2015, ACM, 2015, pp. 21:1--21:2
A deterministic worst-case message complexity optimal solution for resource discovery
S. Kniesburges, A. Koutsopoulos, C. Scheideler, Theoretical Computer Science (2015), pp. 67-79
We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring only a linear runtime (O(n)) in the number of rounds.
Brief Announcement: On the Feasibility of Leader Election and Shape Formation with Self-Organizing Programmable Matter
Z. Derakhshandeh, R. Gmyr, T.F. Strothmann, R. A. Bazzi, A. W. Richa, C. Scheideler, in: Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing, PODC 2015, Donostia-San Sebastián, Spain, July 21 - 23, 2015, ACM, 2015, pp. 67--69
Towards Establishing Monotonic Searchability in Self-Stabilizing Data Structures
C. Scheideler, A. Setzer, T.F. Strothmann, in: Proceedings of the 19th International Conference on Principles of Distributed Systems (OPODIS), 2015
Distributed applications are commonly based on overlay networks interconnecting their sites so that they can exchange information. For these overlay networks to preserve their functionality, they should be able to recover from various problems like membership changes or faults. Various self-stabilizing overlay networks have already been proposed in recent years, which have the advantage of being able to recover from any illegal state, but none of these networks can give any guarantees on its functionality while the recovery process is going on. We initiate research on overlay networks that are not only self-stabilizing but that also ensure that searchability is maintained while the recovery process is going on, as long as there are no corrupted messages in the system. More precisely, once a search message from node u to another node v is successfully delivered, all future search messages from u to v succeed as well. We call this property monotonic searchability. We show that in general it is impossible to provide monotonic searchability if corrupted messages are present in the system, which justifies the restriction to system states without corrupted messages. Furthermore, we provide a self-stabilizing protocol for the line for which we can also show monotonic searchability. It turns out that even for the line it is non-trivial to achieve this property. Additionally, we extend our protocol to deal with node departures in terms of the Finite Departure Problem of Foreback et al. (SSS 2014). This makes our protocol even capable of handling node dynamics.
IRIS: A Robust Information System Against Insider DoS Attacks
M. Eikel, C. Scheideler, Transactions on Parallel Computing (2015)(3), pp. 18:1--18:33
In this work, we present the first scalable distributed information system, that is, a system with low storage overhead, that is provably robust against denial-of-service (DoS) attacks by a current insider. We allow a current insider to have complete knowledge about the information system and to have the power to block any ϵ-fraction of its servers by a DoS attack, where ϵ can be chosen up to a constant. The task of the system is to serve any collection of lookup requests with at most one per nonblocked server in an efficient way despite this attack. Previously, scalable solutions were only known for DoS attacks of past insiders, where a past insider only has complete knowledge about some past time point t0 of the information system. Scheideler et al. [Awerbuch and Scheideler 2007; Baumgart et al. 2009] showed that in this case, it is possible to design an information system so that any information that was inserted or last updated after t0 is safe against a DoS attack. But their constructions would not work at all for a current insider. The key idea behind our IRIS system is to make extensive use of coding. More precisely, we present two alternative distributed coding strategies with an at most logarithmic storage overhead that can handle up to a constant fraction of blocked servers.
Dynamics and Efficiency in Topological Self-Stabilization
A. Koutsopoulos, Universität Paderborn, 2015
Brief Announcement: Towards a Universal Approach for the Finite Departure Problem in Overlay Networks
A. Koutsopoulos, C. Scheideler, T.F. Strothmann, in: Proceedings of the 27th ACM on Symposium on Parallelism in Algorithms and Architectures, SPAA 2015, Portland, OR, USA, June 13-15, 2015, ACM, 2015, pp. 77--79
Structural Information and Communication Complexity - 22nd International Colloquium, SIROCCO 2015, Montserrat, Spain, July 14-16, 2015, Post-Proceedings
C. Scheideler, 2015
A. Koutsopoulos, C. Scheideler, T.F. Strothmann, in: Proceedings of the 17th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS), 2015, pp. 201-216
A fundamental problem for overlay networks is to safely exclude leaving nodes, i.e., the nodes requesting to leave the overlay network are excluded from it without affecting its connectivity. There are a number of studies for safe node exclusion if the overlay is in a well-defined state, but almost no formal results are known for the case in which the overlay network is in an arbitrary initial state, i.e., when looking for a self-stabilizing solution for excluding leaving nodes. We study this problem in two variants: the Finite Departure Problem (FDP) and the Finite Sleep Problem (FSP). In the FDP the leaving nodes have to irrevocably decide when it is safe to leave the network, whereas in the FSP, this leaving decision does not have to be final: the nodes may resume computation when woken up by an incoming message. We are the first to present a self-stabilizing protocol for the FDP and the FSP that can be combined with a large class of overlay maintenance protocols so that these are then guaranteed to safely exclude leaving nodes from the system from any initial state while operating as specified for the staying nodes. In order to formally define the properties these overlay maintenance protocols have to satisfy, we identify four basic primitives for manipulating edges in an overlay network that might be of independent interest.
Distributed Data Structures and the Power of topological Self-Stabilization
S. Kniesburges, Universität Paderborn, 2015
Monotonic Searchability for distributed sorted Lists and De Bruijn Graphs
M. Feldmann, Master's thesis, Universität Paderborn, 2015
The impact of communication patterns on distributed locally self-adjusting binary search trees
T.F. Strothmann, in: Proceedings of the 9th International Workshop on Algorithms and Computation (WALCOM), 2015, pp. 175--186
This paper introduces the problem of communication pattern adaptation for a distributed self-adjusting binary search tree. We propose a simple local algorithm, which is closely related to the nearly thirty-year-old idea of splay trees and evaluate its adaptation performance in the distributed scenario if different communication patterns are provided. To do so, the process of self-adjustment is modeled similarly to a basic network creation game, in which the nodes want to communicate with only a certain subset of all nodes. We show that, in general, the game (i.e., the process of local adjustments) does not converge, and convergence is related to certain structures of the communication interests, which we call conflicts. We classify conflicts and show that for two communication scenarios in which convergence is guaranteed, the self-adjusting tree performs well. Furthermore, we investigate the different classes of conflicts separately and show that, for a certain class of conflicts, the performance of the tree network is asymptotically as good as the performance for converging instances. However, for the other conflict classes, a distributed self-adjusting binary search tree adapts poorly.
Leader Election and Shape Formation with Self-organizing Programmable Matter
Z. Derakhshandeh, R. Gmyr, T.F. Strothmann, R. A. Bazzi, A. W. Richa, C. Scheideler, in: DNA Computing and Molecular Programming - 21st International Conference, DNA 21, Boston and Cambridge, MA, USA, August 17-21, 2015. Proceedings, 2015, pp. 117--132
Brief announcement: amoebot - a new model for programmable matter
Z. Derakhshandeh, S. Dolev, R. Gmyr, A. W. Richa, C. Scheideler, T.F. Strothmann, in: 26th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA'14, Prague, Czech Republic - June 23 - 25, 2014, ACM, 2014, pp. 220--222
RoBuSt: A Crash-Failure-Resistant Distributed Storage System
C. Scheideler, A. Setzer, M. Eikel, in: Proceedings of the 18th International Conference on Principles of Distributed Systems (OPODIS), 2014, pp. 107--122
In this work we present the first distributed storage system that is provably robust against crash failures issued by an adaptive adversary, i.e., for each batch of requests the adversary can decide based on the entire system state which servers will be unavailable for that batch of requests. Despite up to \gamma n^{1/\log\log n} crashed servers, with \gamma>0 constant and n denoting the number of servers, our system can correctly process any batch of lookup and write requests (with at most a polylogarithmic number of requests issued at each non-crashed server) in at most a polylogarithmic number of communication rounds, with at most polylogarithmic time and work at each server and only a logarithmic storage overhead. Our system is based on previous work by Eikel and Scheideler (SPAA 2013), who presented IRIS, a distributed information system that is provably robust against the same kind of crash failures. However, IRIS is only able to serve lookup requests. Handling both lookup and write requests has turned out to require major changes in the design of IRIS.
Competitive MAC under adversarial SINR
A. Ogierman, A.W. Richa, C. Scheideler, S. Schmid, J. Zhang, in: Proceedings of the 33rd Annual IEEE International Conference on Computer Communications (INFOCOM), 2014, pp. 2751--2759
This paper considers the problem of how to efficiently share a wireless medium which is subject to harsh external interference or even jamming. While this problem has already been studied intensively for simplistic single-hop or unit disk graph models, we make a leap forward and study MAC protocols for the SINR interference model (a.k.a. the physical model). We make two contributions. First, we introduce a new adversarial SINR model which captures a wide range of interference phenomena. Concretely, we consider a powerful, adaptive adversary which can jam nodes at arbitrary times and which is only limited by some energy budget. The second contribution of this paper is a distributed MAC protocol which provably achieves a constant competitive throughput in this environment: we show that, with high probability, the protocol ensures that a constant fraction of the non-blocked time periods is used for successful transmissions.
Algorithmic Aspects of Resource Management in the Cloud
S. Kniesburges, C. Markarian, F. Meyer auf der Heide, C. Scheideler, in: Proceedings of the 21st International Colloquium on Structural Information and Communication Complexity (SIROCCO), 2014, pp. 1-13
In this survey article, we discuss two algorithmic research areas that emerge from problems that arise when resources are offered in the cloud. The first area, online leasing, captures problems arising from the fact that resources in the cloud are not bought, but leased by cloud vendors. The second area, Distributed Storage Systems, deals with problems arising from so-called cloud federations, i.e., when several cloud providers are needed to fulfill a given task.
Re-Chord: A Self-stabilizing Chord Overlay Network
S. Kniesburges, A. Koutsopoulos, C. Scheideler, Theory of Computing Systems (2014)(3), pp. 591-612
The Chord peer-to-peer system is considered, together with CAN, Tapestry and Pastry, as one of the pioneering works on peer-to-peer distributed hash tables (DHT) that inspired a large volume of papers and projects on DHTs as well as peer-to-peer systems in general. Chord, in particular, has been studied thoroughly, and many variants of Chord have been presented that optimize various criteria. Also, several implementations of Chord are available on various platforms. Though Chord is known to be very efficient and scalable and it can handle churn quite well, no protocol is known yet that guarantees that Chord is self-stabilizing, i.e., the Chord network can be recovered from any initial state in which the network is still weakly connected. This is not too surprising since it is known that the Chord network is not locally checkable for its current topology. We present a slight extension of the Chord network, called Re-Chord (reactive Chord), that turns out to be locally checkable, and we present a self-stabilizing distributed protocol for it that can recover the Re-Chord network from any initial state, in which the n peers are weakly connected, in O(n log n) communication rounds. We also show that our protocol allows a new peer to join or an old peer to leave an already stable Re-Chord network so that within O((log n)^2) communication rounds the Re-Chord network is stable again.
Minimum Linear Arrangement of Series-Parallel Graphs
C. Scheideler, M. Eikel, A. Setzer, in: Proceedings of the 12th Workshop on Approximation and Online Algorithms (WAOA), 2014, pp. 168--180
We present a factor $14D^2$ approximation algorithm for the minimum linear arrangement problem on series-parallel graphs, where $D$ is the maximum degree in the graph. Given a suitable decomposition of the graph, our algorithm runs in time $O(|E|)$ and is very easy to implement. Its divide-and-conquer approach allows for an effective parallelization. Note that a suitable decomposition can also be computed in time $O(|E|\log{|E|})$ (or even $O(\log{|E|}\log^*{|E|})$ on an EREW PRAM using $O(|E|)$ processors). For the proof of the approximation ratio, we use a sophisticated charging method that uses techniques similar to amortized analysis in advanced data structures. On general graphs, the minimum linear arrangement problem is known to be NP-hard. To the best of our knowledge, the minimum linear arrangement problem on series-parallel graphs has not been studied before.
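For reference, the objective being approximated is the standard Minimum Linear Arrangement cost (the textbook definition, not quoted from the paper):

```latex
% Minimum Linear Arrangement: find a bijection \pi from the vertices to
% positions 1..|V| that minimizes the total stretch of the edges.
\mathrm{MinLA}(G) \;=\; \min_{\pi : V \to \{1,\dots,|V|\}} \;
\sum_{\{u,v\} \in E} \bigl|\pi(u) - \pi(v)\bigr|
```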
HSkip+: A Self-Stabilizing Overlay Network for Nodes with Heterogeneous Bandwidths
M. Feldotto, C. Scheideler, K. Graffi, in: Proceedings of the 14th IEEE International Conference on Peer-to-Peer Computing (P2P), 2014, pp. 1-10
In this paper we present and analyze HSkip+, a self-stabilizing overlay network for nodes with arbitrary heterogeneous bandwidths. HSkip+ has the same topology as the Skip+ graph proposed by Jacob et al. [PODC 2009] but its self-stabilization mechanism significantly outperforms the self-stabilization mechanism proposed for Skip+. Also, the nodes are now ordered according to their bandwidths and not according to their identifiers. Various other solutions have already been proposed for overlay networks with heterogeneous bandwidths, but they are not self-stabilizing. In addition to HSkip+ being self-stabilizing, its performance is on par with the best previous bounds on the time and work for joining or leaving a network of peers of logarithmic diameter and degree and arbitrary bandwidths. Also, the dilation and congestion for routing messages is on par with the best previous bounds for such networks, so that HSkip+ combines the advantages of both worlds. Our theoretical investigations are backed by simulations demonstrating that HSkip+ is indeed performing much better than Skip+ and working correctly under high churn rates.
SKIP*: A Self-Stabilizing Skip Graph
R. Jacob, A. W. Richa, C. Scheideler, S. Schmid, H. Täubig, J. ACM (2014)(6), pp. 36:1--36:26
On Stabilizing Departures in Overlay Networks
D. Foreback, A. Koutsopoulos, M. Nesterenko, C. Scheideler, T.F. Strothmann, in: Proceedings of the 16th International Symposium on Stabilization, Safety, and Security of Distributed Systems, 2014, pp. 48--62
A fundamental problem for peer-to-peer systems is to maintain connectivity while nodes are leaving, i.e., the nodes requesting to leave the peer-to-peer system are excluded from the overlay network without affecting its connectivity. There are a number of studies for safe node exclusion if the overlay is in a well-defined state initially. Surprisingly, the problem is not formally studied yet for the case in which the overlay network is in an arbitrary initial state, i.e., when looking for a self-stabilizing solution for excluding leaving nodes. We study this problem in two variants: the Finite Departure Problem (FDP) and the Finite Sleep Problem (FSP). In the FDP the leaving nodes have to irrevocably decide when it is safe to leave the network, whereas in the FSP, this leaving decision does not have to be final: the nodes may resume computation if necessary. We show that there is no self-stabilizing distributed algorithm for the FDP, even in a synchronous message passing model. To allow a solution, we introduce an oracle called NIDEC and show that it is sufficient even for the asynchronous message passing model by proposing an algorithm that can solve the FDP using NIDEC. We also show that a solution to the FSP does not require an oracle.
Secure Distributed Data Structures for Peer-to-Peer-based Social Networks
J. Janiuk, A. Mäcker, K. Graffi, in: Proceedings of the International Conference on Collaboration Technologies and Systems (CTS), 2014, pp. 396-405
Online social networks are attracting billions of users nowadays, both on a global scale as well as in social enterprise networks. Using distributed hash tables and peer-to-peer technology allows online social networks to be operated securely and efficiently only by using the resources of the user devices, thus alleviating censorship or data misuse by a single network operator. In this paper, we address the challenges that arise in implementing reliable and convenient-to-use distributed data structures, such as lists or sets, in such a distributed hash-table-based online social network. We present a secure, distributed list data structure that manages the list entries in several buckets in the distributed hash table. The list entries are authenticated, integrity is maintained and access control for single users and also groups is integrated. The approach for secure distributed lists is also applied for prefix trees and sets, and implemented and evaluated in a peer-to-peer framework for social networks. Evaluation shows that the distributed data structure is convenient and efficient to use and that the requirements on security hold.
Principles of Robust Medium Access and an Application to Leader Election
B. Awerbuch, A.W. Richa, C. Scheideler, S. Schmid, J. Zhang, Transactions on Algorithms (2014)(4)
This article studies the design of medium access control (MAC) protocols for wireless networks that are provably robust against arbitrary and unpredictable disruptions (e.g., due to unintentional external interference from co-existing networks or due to jamming). We consider a wireless network consisting of a set of n honest and reliable nodes within transmission (and interference) range of each other, and we model the external disruptions with a powerful adaptive adversary. This adversary may know the protocol and its entire history and can use this knowledge to jam the wireless channel at will at any time. It is allowed to jam a (1 − ε)-fraction of the time steps, for an arbitrary constant ε > 0 unknown to the nodes. The nodes cannot distinguish between adversarial jamming and a collision of two or more messages that are sent at the same time. We demonstrate, for the first time, that there is a local-control MAC protocol requiring only very limited knowledge about the adversary and the network that achieves a constant (asymptotically optimal) throughput for the non-jammed time periods under any of the aforementioned adversarial strategies. The derived principles are also useful to build robust applications on top of the MAC layer, and we present an exemplary study for leader election, one of the most fundamental tasks in distributed computing.
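The flavor of such protocols can be sketched with a toy multiplicative-update MAC (hypothetical parameters `gamma` and `p_max`; the simulator below plays the role of the shared channel, and the exact update rules of the article differ):

```python
import random

def mac_round(nodes, gamma=0.1, p_max=0.25):
    """One round of a toy multiplicative-update MAC in the spirit of this
    line of work, not the exact protocol: every node sends with its current
    probability; on an idle channel probabilities go up, on a busy channel
    (collision or jamming) senders back off."""
    senders = [n for n in nodes if random.random() < n["p"]]
    if len(senders) == 0:
        for n in nodes:  # idle round: probe more aggressively
            n["p"] = min(p_max, n["p"] * (1 + gamma))
        return "idle"
    if len(senders) == 1:
        return "success"
    for n in senders:  # collision: back off
        n["p"] = n["p"] / (1 + gamma)
    return "busy"

nodes = [{"p": 0.05} for _ in range(16)]
outcomes = [mac_round(nodes) for _ in range(2000)]
print(outcomes.count("success") / len(outcomes))  # fraction of successful rounds
```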
A Note on the Parallel Runtime of Self-Stabilizing Graph Linearization
D. Gall, R. Jacob, A.W. Richa, C. Scheideler, S. Schmid, H. Täubig, Theory of Computing Systems (2014)(1), pp. 110-135
Topological self-stabilization is an important concept to build robust open distributed systems (such as peer-to-peer systems) where nodes can organize themselves into meaningful network topologies. The goal is to devise distributed algorithms where nodes forward, insert, and delete links to neighboring nodes, and that converge quickly to such a desirable topology, independently of the initial network configuration. This article proposes a new model to study the parallel convergence time. Our model sheds light on the achievable parallelism by avoiding bottlenecks of existing models that can yield a distorted picture. As a case study, we consider local graph linearization, i.e., how to build a sorted list of the nodes of a connected graph in a distributed and self-stabilizing manner. In order to study the main structure and properties of our model, we propose two variants of a most simple local linearization algorithm. For each of these variants, we present analyses of the worst-case and best-case parallel time complexities, as well as the performance under a greedy selection of the actions to be executed. It turns out that the analysis is non-trivial despite the simple setting, and to complement our formal insights we report on our experiments which indicate that the runtimes may be better in the average case.
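One simple linearization rule in the spirit of the algorithms analyzed here can be sketched as follows (a synchronous toy simulation assuming a connected initial graph; the article's scheduling models differ):

```python
def linearize_round(adj):
    """One synchronous round of a simple local linearization rule: a node u
    holding several neighbors on one side keeps the closest one and
    delegates each remaining edge to the next-closer neighbor. Repeated
    application converges to the sorted list."""
    new_edges = set()
    for u, nbrs in adj.items():
        right = sorted(n for n in nbrs if n > u)
        left = sorted((n for n in nbrs if n < u), reverse=True)
        for side in (right, left):
            if side:
                new_edges.add(frozenset((u, side[0])))  # keep closest
                for a, b in zip(side, side[1:]):        # delegate the rest
                    new_edges.add(frozenset((a, b)))
    return new_edges

def run(nodes, edges):
    """Iterate rounds until the sorted line is reached; `nodes` sorted."""
    edges = {frozenset(e) for e in edges}
    target = {frozenset(p) for p in zip(nodes, nodes[1:])}
    while edges != target:
        adj = {u: {v for e in edges if u in e for v in e if v != u}
               for u in nodes}
        edges = linearize_round(adj)
    return sorted(tuple(sorted(e)) for e in edges)

print(run([1, 2, 3, 4], [(1, 4), (2, 4), (1, 3), (3, 4), (1, 2)]))
# -> [(1, 2), (2, 3), (3, 4)]
```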
Competitive throughput in multi-hop wireless networks despite adaptive jamming
A. W. Richa, C. Scheideler, S. Schmid, J. Zhang, Distributed Computing (2013)(3), pp. 159--171
Corona: A stabilizing deterministic message-passing skip list
R. Mohd Nor, M. Nesterenko, C. Scheideler, Theor. Comput. Sci. (2013), pp. 119--129
CONE-DHT: A distributed self-stabilizing algorithm for a heterogeneous storage system
S. Kniesburges, A. Koutsopoulos, C. Scheideler, in: Proceedings of the 27th International Symposium on Distributed Computing (DISC), 2013, pp. 537-549
We consider the problem of managing a dynamic heterogeneous storage system in a distributed way so that the amount of data assigned to a host in that system is related to its capacity. Two central problems have to be solved for this: (1) organizing the hosts in an overlay network with low degree and diameter so that one can efficiently check the correct distribution of the data and route between any two hosts, and (2) distributing the data among the hosts so that the distribution respects the capacities of the hosts and can easily be adapted as the set of hosts or their capacities change. We present distributed protocols for these problems that are self-stabilizing and that do not need any global knowledge about the system such as the number of nodes or the overall capacity of the system. Prior to this work no solution was known satisfying these properties.
Adding Capacity-Aware Storage Indirection to Homogeneous Distributed Hash Tables
P. Wette, K. Graffi, in: Proceedings of the Conference on Networked Systems (NetSys), 2013, pp. 35-42
Distributed hash tables are very versatile to use, as distributed storage is a desirable feature for various applications. Typical structured overlays like Chord, Pastry or Kademlia consider only homogeneous nodes with equal capacities, which does not resemble reality. In a practical use case, nodes might get overloaded by storing popular data. In this paper, we present a general approach to enable capacity awareness and load-balancing capability of homogeneous structured overlays. We introduce a hierarchical second structured overlay aside, which allows efficient capacity-based access on the nodes in the system as hosting mirrors. Simulation results show that the structured overlay is able to store various contents, such as of a social network, with only a negligible number of overloaded peers. Content, even if very popular, is hosted by easily findable capable peers. Thus, long-existing and well-evaluated overlays like Chord or Pastry can be used to create attractive DHT-based applications.
An Efficient and Fair MAC Protocol Robust to Reactive Interference
A. W. Richa, C. Scheideler, S. Schmid, J. Zhang, IEEE/ACM Trans. Netw. (2013)(3), pp. 760--771
Approximation Algorithms for the Linear Arrangement of Special Classes of Graphs
A. Setzer, Master's thesis, Universität Paderborn, 2013
Bootstrapping Skynet: Calibration and Autonomic Self-Control of Structured Peer-to-Peer Networks
K. Graffi, T. Klerx, in: Proceedings of the International Conference on Peer-to-Peer Computing (P2P'13), 2013, pp. 1-5
Peer-to-peer systems scale to millions of nodes and provide routing and storage functions with best effort quality. In order to provide a guaranteed quality of the overlay functions, even under strong dynamics in the network with regard to peer capacities, online participation and usage patterns, we propose to calibrate the peer-to-peer overlay and to autonomously learn which qualities can be reached. For that, we simulate the peer-to-peer overlay systematically under a wide range of parameter configurations and use neural networks to learn the effects of the configurations on the quality metrics. Thus, by choosing a specific quality setting by the overlay operator, the network can tune itself to the learned parameter configurations that lead to the desired quality. Evaluation shows that the presented self-calibration succeeds in learning the configuration-quality interdependencies and that peer-to-peer systems can learn and adapt their behavior according to desired quality goals.
Locally Self-Adjusting Tree Networks
C. Avin, B. Häupler, Z. Lotker, C. Scheideler, S. Schmid, in: Proceedings of the 27th IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2013, pp. 395-406
This paper initiates the study of self-adjusting networks (or distributed data structures) whose topologies dynamically adapt to a communication pattern $\sigma$. We present a fully decentralized self-adjusting solution called SplayNet. A SplayNet is a distributed generalization of the classic splay tree concept. It ensures short paths (which can be found using local-greedy routing) between communication partners while minimizing topological rearrangements. We derive an upper bound for the amortized communication cost of a SplayNet based on empirical entropies of $\sigma$, and show that SplayNets have several interesting convergence properties. For instance, SplayNets feature provable online optimality under special request scenarios. We also investigate the optimal static network and prove different lower bounds for the average communication cost based on graph cuts and on the empirical entropy of the communication pattern $\sigma$. From these lower bounds it follows, e.g., that SplayNets are optimal in scenarios where the requests follow a product distribution as well. Finally, this paper shows that, in contrast to the Minimum Linear Arrangement problem, which is generally NP-hard, the optimal static tree network can be computed in polynomial time for any guest graph, despite the exponentially large graph family. We complement our formal analysis with a small simulation study on a Facebook graph.
Comparative Evaluation of Peer-to-Peer Systems Using PeerfactSim.KOM
M. Feldotto, K. Graffi, in: Proceedings of the International Conference on High Performance Computing and Simulation (HPCS'13), 2013, pp. 99-106
Comparative evaluations of peer-to-peer protocols through simulations are a viable approach to judge the performance and costs of the individual protocols in large-scale networks. In order to support this work, we enhanced the peer-to-peer systems simulator PeerfactSim.KOM with a fine-grained analyzer concept, with exhaustive automated measurements and gnuplot generators as well as a coordination control to evaluate a set of experiment setups in parallel. Thus, by configuring all experiments and protocols only once and starting the simulator, all desired measurements are performed, analyzed, evaluated and combined, resulting in a holistic environment for the comparative evaluation of peer-to-peer systems.
Symbiotic Coupling of P2P and Cloud Systems: The Wikipedia Case
K. Graffi, L. Bremer, in: Proceedings of the International Conference on Communications (ICC'13), 2013, pp. 3444 - 3449
Cloud computing offers high availability, dynamic scalability, and elasticity requiring only very little administration. However, this service comes with financial costs. Peer-to-peer systems, in contrast, operate at very low costs but cannot match the quality of service of the cloud. This paper focuses on the case study of Wikipedia and presents an approach to reduce the operational costs of hosting similar websites in the cloud by using a practical peer-to-peer approach. The visitors of the site are joining a Chord overlay, which acts as first cache for article lookups. Simulation results show that up to 72% of the article lookups in Wikipedia could be answered by other visitors instead of using the cloud.
IRIS: A Robust Information System Against Insider DoS-Attacks
M. Eikel, C. Scheideler, in: Proceedings of the 25th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2013, pp. 119-129
In this work we present the first scalable distributed information system, i.e., a system with low storage overhead, that is provably robust against Denial-of-Service (DoS) attacks by a current insider. We allow a current insider to have complete knowledge about the information system and to have the power to block any \epsilon-fraction of its servers by a DoS-attack, where \epsilon can be chosen up to a constant. The task of the system is to serve any collection of lookup requests with at most one per non-blocked server in an efficient way despite this attack. Previously, scalable solutions were only known for DoS-attacks of past insiders, where a past insider only has complete knowledge about some past time point t_0 of the information system. Scheideler et al. (DISC 2007, SPAA 2009) showed that in this case it is possible to design an information system so that any information that was inserted or last updated after t_0 is safe against a DoS-attack. But their constructions would not work at all for a current insider. The key idea behind our IRIS system is to make extensive use of coding. More precisely, we present two alternative distributed coding strategies with an at most logarithmic storage overhead that can handle up to a constant fraction of blocked servers.
S. Kniesburges, A. Koutsopoulos, C. Scheideler, in: Proceedings of the 20th International Colloquium on Structural Information and Communication Complexity (SIROCCO), 2013, pp. 165-176
We consider the problem of resource discovery in distributed systems. In particular, we give an algorithm such that each node in a network discovers the address of any other node in the network. We model the knowledge of the nodes as a virtual overlay network given by a directed graph such that complete knowledge of all nodes corresponds to a complete graph in the overlay network. Although there are several solutions for resource discovery, our solution is the first that achieves worst-case optimal work for each node, i.e. the number of addresses (O(n)) or bits (O(n log n)) a node receives or sends coincides with the lower bound, while ensuring only a linear runtime (O(n)) in the number of rounds.
Towards Duality of Multicommodity Multiroute Cuts and Flows: Multilevel Ball-Growing
P. Kolman, C. Scheideler, Theory of Computing Systems (2013)(2), pp. 341-363
An elementary $h$-route flow, for an integer $h \ge 1$, is a set of $h$ edge-disjoint paths between a source and a sink, each path carrying a unit of flow, and an $h$-route flow is a non-negative linear combination of elementary $h$-route flows. An $h$-route cut is a set of edges whose removal decreases the maximum $h$-route flow between a given source-sink pair (or between every source-sink pair in the multicommodity setting) to zero. The main result of this paper is an approximate duality theorem for multicommodity $h$-route cuts and flows, for $h \le 3$: The size of a minimum $h$-route cut is at least $f/h$ and at most $O(\log^4 k \cdot f)$, where $f$ is the size of the maximum $h$-route flow and $k$ is the number of commodities. The main step towards the proof of this duality is the design and analysis of a polynomial-time approximation algorithm for the minimum $h$-route cut problem for $h = 3$ that has an approximation ratio of $O(\log^4 k)$. Previously, polylogarithmic approximation was known only for $h$-route cuts for $h \le 2$. A key ingredient of our algorithm is a novel rounding technique that we call multilevel ball-growing. Though the proof of the duality relies on this algorithm, it is not a straightforward corollary of it as in the case of classical multicommodity flows and cuts. Similar results are shown also for the sparsest multiroute cut problem.
M. Feldotto, Master's thesis, Universität Paderborn, 2013
Ca-Re-Chord: A Churn Resistant Self-stabilizing Chord Overlay Network
K. Graffi, M. Benter, M. Divband, S. Kniesburges, A. Koutsopoulos, in: Proceedings of the Conference on Networked Systems (NetSys), 2013, pp. 27-34
Self-stabilization is the property of a system to transfer itself, regardless of the initial state, into a legitimate state. Chord, as a simple, decentralized and scalable distributed hash table, is an ideal showcase for introducing self-stabilization to p2p overlays. In this paper, we present Re-Chord, a self-stabilizing version of Chord. We show that the stabilization process is functional, but prone to strong churn. For that, we present Ca-Re-Chord, a churn-resistant version of Re-Chord that allows the creation of a useful DHT in any kind of graph, regardless of the initial state. Simulation results attest the churn resistance and good performance of Ca-Re-Chord.
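For readers unfamiliar with the underlying geometry, a minimal sketch of plain Chord routing state follows; the self-stabilization and churn machinery of Re-Chord/Ca-Re-Chord is not modeled here, and the node IDs are arbitrary.

```python
# Plain Chord geometry: node IDs live on a ring of size 2^m, and node n
# keeps a finger table pointing at successor(n + 2^i) for i = 0..m-1,
# which yields O(log n) routing hops in a stable network.
m = 6                                   # ID space of size 2^m = 64
nodes = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

def successor(k):
    """First node clockwise from key k on the ring."""
    k %= 2**m
    return next((n for n in nodes if n >= k), nodes[0])

def finger_table(n):
    return [successor(n + 2**i) for i in range(m)]

print(finger_table(8))                  # [14, 14, 14, 21, 32, 42]
```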
Continuous Gossip-based Aggregation through Dynamic Information Aging
K. Graffi, V. Rapp, in: Proceedings of the International Conference on Computer Communications and Networks (ICCCN'13), 2013, pp. 1-7
Existing solutions for gossip-based aggregation in peer-to-peer networks use epochs to calculate a global estimation from an initial static set of local values. Once the estimation converges system-wide, a new epoch is started with fresh initial values. Long epochs result in precise estimations based on old measurements and short epochs result in imprecise aggregated estimations. In contrast to this approach, we present in this paper a continuous, epoch-less approach which considers fresh local values in every round of the gossip-based aggregation. By using an approach for dynamic information aging, inaccurate values and values from left peers fade from the aggregation memory. Evaluation shows that the presented approach for continuous information aggregation in peer-to-peer systems monitors the system performance precisely, adapts to changes and is lightweight to operate.
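A hedged sketch of the epoch-less idea (the geometric aging rule below is an illustrative stand-in, not the paper's exact scheme): each round combines push-pull averaging with a blend-in of fresh local measurements, so stale contributions decay instead of persisting until an epoch boundary.

```python
# Epoch-less gossip averaging with a simple aging rule (illustrative):
# peers average pairwise, then blend in their fresh local measurement
# with weight alpha, so old information fades geometrically.
import random

def gossip_round(estimates, fresh, alpha=0.1):
    n = len(estimates)
    for i in random.sample(range(n), n):      # every peer gossips once
        j = random.randrange(n)
        avg = (estimates[i] + estimates[j]) / 2
        estimates[i] = estimates[j] = avg     # push-pull averaging step
    for i in range(n):                        # dynamic information aging
        estimates[i] = (1 - alpha) * estimates[i] + alpha * fresh[i]
    return estimates

est = [random.random() for _ in range(100)]
for _ in range(50):
    fresh = [20.0 + random.gauss(0, 1) for _ in range(100)]  # drifting metric
    est = gossip_round(est, fresh)
print(sum(est) / len(est))   # tracks the current system-wide mean (about 20)
```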
Editorial for Algorithmic Aspects of Wireless Sensor Networks
S. Dolev, C. Scheideler, Theor. Comput. Sci. (2012), pp. 1
Tiara: A self-stabilizing deterministic skip list and skip graph
T. Clouser, M. Nesterenko, C. Scheideler, Theoretical Computer Science (2012), pp. 18-35
We present Tiara, a self-stabilizing peer-to-peer network maintenance algorithm. Tiara is truly deterministic, which allows it to achieve exact performance bounds. Tiara allows logarithmic searches and topology updates. It is based on a novel sparse 0-1 skip list. We then describe its extension to a ringed structure and to a skip graph.
Keywords: Peer-to-peer networks, overlay networks, self-stabilization.
Smoothed analysis of left-to-right maxima with applications
V. Damerow, B. Manthey, F. Meyer auf der Heide, H. Räcke, C. Scheideler, C. Sohler, T. Tantau, Transactions on Algorithms (2012)(3), pp. 30
A left-to-right maximum in a sequence of n numbers s_1, …, s_n is a number that is strictly larger than all preceding numbers. In this article we present a smoothed analysis of the number of left-to-right maxima in the presence of additive random noise. We show that for every sequence of n numbers s_i ∈ [0,1] that are perturbed by uniform noise from the interval [-ε,ε], the expected number of left-to-right maxima is Θ(√(n/ε) + log n) for ε > 1/n. For Gaussian noise with standard deviation σ we obtain a bound of O((log^{3/2} n)/σ + log n). We apply our results to the analysis of the smoothed height of binary search trees and the smoothed number of comparisons in the quicksort algorithm and prove bounds of Θ(√(n/ε) + log n) and Θ((n/(ε+1))·√(n/ε) + n log n), respectively, for uniform random noise from the interval [-ε,ε]. Our results can also be applied to bound the smoothed number of points on a convex hull of points in the two-dimensional plane and to smoothed motion complexity, a concept we describe in this article. We bound how often one needs to update a data structure storing the smallest axis-aligned box enclosing a set of points moving in d-dimensional space.
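A quick empirical check of the uniform-noise bound (the parameters are arbitrary): an adversarial, strictly decreasing sequence has n left-to-right maxima, but after perturbation the count drops to the Θ(√(n/ε)) scale.

```python
# Count left-to-right maxima of a decreasing sequence after adding
# uniform noise from [-eps, eps], and compare with the sqrt(n/eps) scale.
import random

def ltr_maxima(seq):
    count, best = 0, float("-inf")
    for s in seq:
        if s > best:
            count, best = count + 1, s
    return count

n, eps = 100_000, 0.01
worst = [1 - i / n for i in range(n)]            # unperturbed: n maxima
noisy = [s + random.uniform(-eps, eps) for s in worst]
print(ltr_maxima(noisy), (n / eps) ** 0.5)       # observed vs sqrt(n/eps)
```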
|
CommonCrawl
|
How does magnetic field store energy?
You are quite correct that a magnetic field cannot do work on charged particles, so I find that your question is a very good one. The answer is that anything that causes a magnetic field to change is liable to introduce a change in the electric field too. This is the content of Faraday's law of electromagnetic induction. In particular, when a magnetic field is initially non-zero and is then made to fall to zero, it passes its energy to an electric field, which in this case grows. The electric field can then pass the energy on to other things, such as the kinetic energy of charged particles.
There is a very nice detailed argument from the Maxwell equations and Lorentz force equation to show that it all works out correctly, as long as we associate the energy you quoted ($B^2 / (2 \mu_0)$ per unit volume) with the magnetic field.
My question is that if magnetic field cannot do work, then what does the energy signify?
The energy stored in the magnetic field of an inductor can do work (deliver power). The energy stored in the magnetic field of the inductor is essentially kinetic energy (the energy stored in the electric field of a capacitor is potential energy). See the circuit diagram below.
In the diagrams the voltage source is a battery. In the top diagram the switch has been in the closed position for a long time so that all transients have disappeared. Under these conditions the ideal inductor looks like a short circuit. So the voltage across the inductor is zero and the current in the inductor is
$$I_{L}=\frac{ε}{R_2}$$
The current in $R_1$ is zero and the energy stored in the magnetic field is
$$E_{L}=\frac{LI_{L}^2}{2}$$
In the bottom diagram the switch is opened at time $t=0$. The instant the switch opens the current in the inductor is the same $I_L$ as before the switch was opened since you can't change the current in an ideal inductor instantaneously (in zero time). So the initial current in $R_1$ is now that same $I_L$. The inductor is now delivering energy to the resistor which is dissipated as heat. The current decays in time according to
$$i(t)=I_{L}e^{-R_{1}t/L}$$
Eventually becoming zero when all the energy that was stored in the magnetic field is dissipated as heat in $R_1$.
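To make the energy bookkeeping concrete, here is a minimal numerical sketch (the component values are invented for illustration): it computes the steady-state current and stored energy, then integrates the power dissipated in $R_1$ after the switch opens and recovers $E_L$.

```python
# RL discharge: the energy stored in the inductor's magnetic field,
# E_L = L*I_L^2/2, equals the heat dissipated in R1 after the switch opens.
import math

emf, R1, R2, L = 12.0, 100.0, 50.0, 0.5     # V, ohm, ohm, H (illustrative)
I_L = emf / R2                               # steady-state inductor current
E_L = 0.5 * L * I_L**2                       # energy stored in the field

dt, t, dissipated = 1e-6, 0.0, 0.0
while t < 10 * L / R1:                       # integrate over ten time constants
    i = I_L * math.exp(-R1 * t / L)          # i(t) = I_L e^(-R1 t / L)
    dissipated += i**2 * R1 * dt             # P = i^2 R1
    t += dt
print(E_L, dissipated)                       # both come out near 0.0144 J
```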
Here, the magnetic field does no work. It gets converted to an electric field in the wire, which makes the electrons move against the resistance, which in turn dissipates the energy as heat.
I said the energy stored in the magnetic field does work, not that the magnetic field itself does work. The mechanical analogue is the kinetic energy stored in a moving object can do work when bringing it to a stop. Mass is the analogue of inductance. The velocity of the mass is analogous to the current in the inductor. The inertia of mass that resists a change in its velocity is analogous to the inductor resisting a change in current.
From Jackson's "Classical Electrodynamics", third edition, Chapter 5, Section 16, "Energy in the Magnetic Field". He talks about how "the creation of a steady-state configuration of current involves an initial transient period during which the currents and fields are brought from zero to the final values. For such time-varying fields there are induced electromotive forces that cause the sources of current to do work. Since the energy in the field is by definition the total work done to establish it, we must consider these contributions".
Then he goes on to derive an expression for the magnetic field energy, which is identical to yours if there is a linear relationship between $\mathbf{B}$ and $\mathbf{H}$. If you're interested you can pick up a copy, I highly recommend it if you're interested in the theoretical side of electrodynamics/physics.
|
CommonCrawl
|
Rock Mechanics and Rock Engineering
September 2013, Volume 46, Issue 5, pp 923–944
Experimental Investigations into the Mechanical Behaviour of the Breccias Around the Proposed Gibraltar Strait Tunnel
W. Dong
E. Pimentel
G. Anagnostou
First Online: 11 January 2013
The proposed Gibraltar Strait tunnel will cross two zones with breccia consisting of a chaotic mixture of blocks and stones embedded in a clay matrix. The breccia is saturated, has a high porosity and exhibits poor mechanical properties in the range between hard soils and weak rocks. The overburden and high in situ pore pressures in combination with the low strength of the breccia may lead to heavy squeezing. The crossing of the breccia zones thus represents one of the key challenges in the construction of the tunnel. In order to improve our understanding of the mechanical behaviour of the breccias, a series of triaxial compression tests was carried out. Standard rock mechanics test equipment was not adequate for this purpose, because it does not provide pore pressure control, which is important in the case of saturated porous materials. Pore pressure control is routine in soil mechanics tests, but standard soil mechanics equipment allows only for relatively low nominal loads and pressures. In addition, the low hydraulic conductivity of the breccias demands extremely low loading rates and a long test duration. For these reasons, we re-designed several components of the test apparatus to investigate the mechanical behaviour of the breccia by means of consolidated drained and undrained tests. The tests provided important results concerning the strength, volumetric behaviour, consolidation state and hydraulic conductivity of the breccias. The present paper describes the test equipment and procedures, provides an overview of the test results and discusses features of the mechanical behaviour of the breccias which make them qualitatively different from other weak rocks such as kakirites, a typical squeezing rock in alpine tunnelling. The paper also demonstrates the practical importance of the experimental findings for tunnelling in general. More specifically, it investigates the short-term ground response to tunnel excavation from the perspective of elasto-plastic behaviour with the Mohr–Coulomb yield criterion. The computational results indicate that the breccias will probably experience very large deformations already around the advancing tunnel heading, which can be reduced considerably, however, by advance drainage. The analyses additionally show that plastic dilatancy is favourable with respect to the short-term response, thus highlighting the importance of the constitutive model when it comes to theoretical predictions.
Keywords: Triaxial test · Pore pressure · Breccia · Gibraltar Strait tunnel · Squeezing ground · Ground response
List of symbols
Pore pressure parameter A
Tunnel radius
Pore pressure parameter A at failure
Pore pressure parameter B
Compressibility of specimen skeleton
c′: Effective cohesion of ground
Coefficient of consolidation
Compressibility of water
Diameter of sample
D_o: Diameter of the oil pressure amplifier cylinder
D_p: Diameter of the axial loading piston
D_w: Diameter of the pore water pressure device cylinder
Young's modulus
Height of sample
Pore pressure
p′: Effective isotropic stress
p_w,0: Initial pore pressure
q′: Effective deviatoric stress
Undrained shear strength
Time taken to reach 95 % dissipation of excess pore pressure at failure
Tunnel wall displacement
V_o: Volume of oil in triaxial system
V_s: Volume of specimen
V_w: Volume of water in triaxial system
Coordinate in vertical direction
Greek symbols
α_o: Thermal expansion coefficient of oil
α_w: Thermal expansion coefficient of water
γ_w: Unit weight of water
Δh_o: Displacement of the cylinder of the oil pressure amplifier
Δh_o^temp: Temperature-induced displacement of the cylinder of the oil pressure amplifier
Δh_p: Displacement of the axial loading piston
Δh_w: Displacement of the cylinder of the pore water pressure device
Δh_w^temp: Temperature-induced displacement of the cylinder of the pore water pressure device
Δp_w: Increment of pore pressure
Δσ_1: Increment of axial stress
Increment of radial stress
ΔT: Increase in fluid temperature
Axial strain
ε_vol: Volumetric strain
ε_vol^o: Volumetric strain (determined via oil volume change)
ε_vol^o,corr: Volumetric strain (corrected via oil volume change)
ε_vol^o,err: Temperature-induced volumetric strain error (via oil volume change)
ε_vol^w: Volumetric strain (determined via water volume change)
ε_vol^w,corr: Volumetric strain (corrected via water volume change)
ε_vol^w,err: Temperature-induced volumetric strain error (via water volume change)
Constant depending on the drainage conditions
Poisson's ratio
Total stress
Axial stress
Radial stress
σ′: Effective stress
σ′_0: Initial effective stress
Effective axial stress
Effective radial stress
σ′_a,DR: Effective stress after advance drainage
Initial total stress
σ_a: Support pressure at excavation boundary
σ_a,DR: Total stress after advance drainage
ϕ′: Effective friction angle
Dilatancy angle
The authors wish to thank SECEGSA and SNED for the permission to publish the test results and for allowing the presentation of Figs. 1 and 2. The authors also express their gratitude to Mr. Roca and Mr. Sandoval from SECEGSA and Mr. Bensaid and Mr. Bahmad from SNED for their support of the research project. The research was funded by a grant from the Swiss National Science Foundation (SNF Grant No. 200021-137888).
Appendix: Calculation of Volumetric Strain and of Temperature Compensation
Determination of Volumetric Strain without Temperature Compensation
The volumetric strain of the sample can be determined either via oil volume change or via water volume change. Adopting the common sign convention of mechanics (i.e. that expansion is positive), the oil-based volumetric strain of the sample
$$ \varepsilon_{\text{vol}}^{\text{o}} = \frac{{\pi ( - D_{\text{o}}^{2} \cdot \Updelta h_{\text{o}} - D_{\text{p}}^{2} \cdot \Updelta h_{\text{p}} )}}{{4 \cdot V_{\text{s}} }}, $$
where D_o, Δh_o, D_p and Δh_p denote the diameters and the displacements of the cylinder of the oil pressure amplifier and of the axial loading piston, respectively (Fig. 6; Table 6), and V_s is the volume of the sample. Analogously, the water-based volumetric strain
$$ \varepsilon_{\text{vol}}^{\text{w}} = \frac{{\pi \cdot (D_{\text{w}}^{2} \cdot \Updelta h_{\text{w}} )}}{{4 \cdot V_{\text{s}} }}, $$
where D_w and Δh_w are, respectively, the diameter and the displacement of the pore water pressure device cylinder (Fig. 6; Table 6).
Data for thermal error computations (Table 6):
D_o, D_w (mm)
D_p (mm)
V_o, V_w (mm³): 2 × 10^6
V_s (mm³) (a)
α (/°C): 1.2 × 10^-3 (oil); 2 × 10^-4 (water)
ε_vol^err (%)
(a) For samples with a slenderness ratio H/D = 1
Displacements due to Thermal Expansion of Oil and Water
Since the volumetric expansion coefficient of metal is lower by several orders of magnitude than that of oil and water, the influence of thermal metal strain can be neglected. An increase in the fluid temperature by ΔT will cause an increase in the fluid volume, which manifests itself as a displacement of the oil pressure amplifier cylinder (Δh_o^temp) or of the pore water pressure device (Δh_w^temp):
$$ \Updelta h_{\text{o}}^{\text{temp}} = - \frac{{4 \cdot \alpha_{\text{o}} \cdot \Updelta T \cdot V_{\text{o}} }}{{\pi \cdot D_{\text{o}}^{2} }}, $$
$$ \Updelta h_{\text{w}}^{\text{temp}} = - \frac{{4 \cdot \alpha_{\text{w}} \cdot \Updelta T \cdot V_{\text{w}} }}{{\pi \cdot D_{\text{w}}^{2} }}, $$
where α_o and α_w are the thermal expansion coefficients and V_o and V_w the volumes of oil and water in the test system, respectively (Table 6).
Temperature-Induced Error
For determining the temperature-induced volumetric strain errors ε_vol^o,err and ε_vol^w,err, the displacements Δh_o^temp and Δh_w^temp from Eqs. (14) and (15) are introduced into Eqs. (12) and (13), respectively, considering no axial strain in the sample, i.e. Δh_p = 0 in Eq. (12):
$$ \varepsilon_{\text{vol}}^{{{\text{o}},{\text{err}}}} = \frac{{\alpha_{\text{o}} \cdot \Updelta T \cdot V_{\text{o}} }}{{V_{\text{s}} }}, $$
$$ \varepsilon_{\text{vol}}^{{{\text{w}},{\text{err}}}} = - \frac{{\alpha_{\text{w}} \cdot \Updelta T \cdot V_{\text{w}} }}{{V_{\text{s}} }}, $$
The corrected volumetric strains read as follows:
$$ \varepsilon_{\text{vol}}^{{{\text{o}},{\text{corr}}}} = \varepsilon_{\text{vol}}^{\text{o}} - \varepsilon_{\text{vol}}^{{{\text{o}},{\text{err}}}} , $$
$$ \varepsilon_{\text{vol}}^{{{\text{w}},{\text{corr}}}} = \varepsilon_{\text{vol}}^{\text{w}} - \varepsilon_{\text{vol}}^{{{\text{w}},{\text{err}}}} , $$
The last row of Table 6 shows, for the purpose of comparing water and oil, the volumetric strain errors caused by a temperature increase of 1 °C. The measuring system is less sensitive to water dilation than to oil dilation by a factor of 30.
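The correction chain of Eqs. (16)-(19) is easy to script. In the sketch below, only V_o, V_w and the expansion coefficients come from Table 6; V_s and ΔT are assumed values for illustration.

```python
# Temperature compensation of the measured volumetric strains, after
# Eqs. (16)-(19). V_s and dT are assumed for illustration only.
V_o = V_w = 2e6                    # oil / water volume in the system (mm^3)
V_s = 1e6                          # sample volume (assumed, mm^3)
alpha_o, alpha_w = 1.2e-3, 2e-4    # thermal expansion coefficients (1/degC)
dT = 1.0                           # temperature increase (degC)

eps_o_err = alpha_o * dT * V_o / V_s     # Eq. (16), oil-based error
eps_w_err = -alpha_w * dT * V_w / V_s    # Eq. (17), water-based error

def corrected(eps_measured, eps_err):
    """Eqs. (18)-(19): subtract the temperature-induced error."""
    return eps_measured - eps_err

print(eps_o_err, eps_w_err)   # the oil-based error is larger in magnitude
```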
Dong, W., Pimentel, E. & Anagnostou, G. Rock Mech Rock Eng (2013) 46: 923. https://doi.org/10.1007/s00603-012-0350-y
|
CommonCrawl
|
TOPIC: Time & Foundations: Awardees Announced
FQXi Administrator Brendan Foster wrote on Jan. 18, 2011 @ 22:11 GMT
On behalf of FQXi, I am pleased to announce the future grantees of our latest grant competition on "Time and Foundations". Aided by a rigorous, two-step process involving multiple, dedicated expert review panels, FQXi chose 21 proposals to share just under US$2M in funds for research on the nature of time and other foundational questions in physics and cosmology. The full list of winners appears elsewhere on our site.
We intended to give special emphasis to the study of time. As stated in the official request for proposals, "Science, and particularly physics, has produced dramatic insights into the nature of time... Careful consideration of time has likewise caused revolutions in physics, and may again do so." Winning proposals will address questions like:
Is the nature of time intrinsically different from the nature of space? Can we travel back in time? If not, why not? Could time run differently in different universes? If time started, when and how?
Researchers will also look at other issues in foundational physics, ranging from laboratory tests of the macroscopic limits of quantum physics, to searches for signs of collisions between multiple "bubble" universes.
Many thanks to everyone who devoted time to preparing a proposal, and many thanks to our reviewers, who devoted much time and effort to making the difficult decisions.
John Merryman wrote on Jan. 19, 2011 @ 03:33 GMT
Wow! 1.8 mil. to explain time!
Here's my 2 cents, again.
The present doesn't move from past to future. The changing configuration of the present turns the future into the past. We don't travel the fourth dimension from yesterday to tomorrow. Tomorrow becomes yesterday because the earth rotates. Time is an effect of activity, not the fundamental basis for it.
Because time is an effect of motion, there cannot be a dimensionless point in time without actually freezing the motion creating time, thus a particle cannot be isolated from its activity, whether it's an electron or a car. A dimensionless point in time would be like taking a picture with the shutter speed set at zero.
Understanding time as going from past events to future makes sense if we examine these events in the past tense, but if we consider an event as it is occurring, it quickly recedes into the past, as succeeding events replace it and are then replaced to recede into the past as well.
We view past events as cause of future ones, but total input into any event cannot be determined prior to its occurrence. It is this sum potential which is cause and the events which actually occur that are effect. Thus future is cause and past is effect.
The concept of free will is meaningless in terms of the present moving from past to future, because we only exist at the moment of the present and cannot change the past, or affect the future. On the other hand, with time as an effect of motion, our input is integral to our circumstance. We affect our circumstance, as it affects us.
As an effect of motion, time is similar to temperature, being the sequence of changing configuration, as temperature is the level of activity. If similar clocks record different rates of change in different circumstances, it is due to the level of activity being increased or decreased and thus speeded up or slowed down. Not that these clocks travel different time vectors.
We don't travel into that probabilistic future of multiworlds. It is the actual collapsing of probabilities which is time. Future is cause. Past is effect.
Or you can go with Julian Barbour:
"My application is for two mutually reinforcing projects. The first is to show that the structure of space essentially determines the dynamics of space, which in turn determines the physcial properties of time. This will be done by completing my program for the relational derivation of classical dynamics from the fewest possible axioms. In particular, only scale-invariant (angle-determining) structure of space is presupposed. Much of the structure of spacetime usually taken as fundamental is thereby shown to be emergent. This is likely to be important in quantum gravity, in which emergent structure of the classical theory should play no fundamental role. My second project is to write a monograph presenting a unifying vision of the relational foundations of physics. I am confident that I do now have a clear overview of relationalism in classical dynamics. The part played by scale invariance - the relativity of size and its relation to time - was the last piece of the picture to fall into place. A monograph that presents this picture will have value in itself and be a resource for researchers wishing to apply the insights of relational dynamics in quantum gravity."
Jason Mark Wolfe wrote on Jan. 19, 2011 @ 05:15 GMT
OK, here's my two cents as well. There exists only the present, sort of a local present. Two communicators can communicate in real time by using photons which travel at the speed of light. Photons are the carriers of causality (communication, information, etc.). Once that photon arrives at its destination, it can't be taken back. If that photon was an insult or something you didn't mean to say or do, once it arrives at its destination, it can't be taken back. It's the past.
As photons travel from point A to point B, if there is a time dilation between those two points, then the frequency will change (redshift or blueshift). In my opinion, space-time is made out of wave-functions; it is emergent from the quantum vacuum. The rate at which time unfolds has to do with gravitational (or gravity-equivalent) energy levels. These gravitational energy levels (either gravity or transitions between inertial frames) are observable as time dilation. Time dilation changes the frequency of a photon that travels between them.
Time dilation/gravity/inertial reference frames are all properties of a space-time made out of wave-functions.
John Merryman replied on Jan. 19, 2011 @ 11:56 GMT
What is "time dilation?"
My point above is that gravitational effects, etc., change a specific level of activity, such as the rate at which a cesium atom vibrates. Essentially its energy level/temperature. So you change its energy level, you affect the rate of change. There is no fundamental dimension of time being dilated.
Consider a giant meteor striking the earth a glancing blow in the direction of its spin, such that it increased the rate of spin. Our days would be shorter, because the spin is faster, but that is just one(very large and humanly important) clock which was speeded up.
As Julian pointed out, the idea is to get rid of as many axioms as possible, which is what we do, if we reduce time from a first order dimension, such as space, to a third order effect, such as temperature.
The only problem is that everyone insists on seeing time as the present moving from past to future, which is akin to insisting it is the sun that is moving east to west.
Jason Wolfe replied on Jan. 19, 2011 @ 18:02 GMT
I like the idea of getting rid of as many axioms as possible. Pursue simplicity. There is a way to do that, but it is difficult to grasp conceptually. I'll try again.
Space-time: is made out of wave-functions.
Wave-functions: store and transmit energy primarily as photons.
Particles: can be decomposed into wave-functions and photons.
Photons are carriers of causality.
Gravity: The energy of the Big Bang E_BB plus the gravitational energy U_GR sum to zero.
Wave-functions and photons CAUSE ALL PHYSICS to occur.
OK, that's six axioms that explain everything. Photons are far more fundamental than just telling time by rotation or orbit of the earth.
Time Dilation has to do with the quotient of time between two locations/ emitter and observer/ two inertial reference frames. The photon has frequency E=hf. Frequency is in cycles per SECOND. Time dilation changes the duration of a SECOND.
That is how gravity (Equivalence Principle) snaps/clicks into place/connects with quantum mechanics and electromagnetic radiation.
"Click!"
Space isn't made out of anything. This gives it two properties. Since it isn't composed of anything, it's fundamentally flat, and since it's not bounded by anything, it's infinite. The alternative, which everyone seems to be working on, is that space began as a point and has expanded from there. This creates quite a number of conceptual contradictions and patches to maintain, of which I've mentioned many.
Such as how can space expand from a point, yet still have a stable speed of light. If it is "spacetime," then the "duration of a SECOND" would have to increase proportionally, by light speeding up, such that it would require the same duration for light to cross the same percentage of the universe at any time in its history. This would contradict the assumption that other galaxies would eventually disappear, since the light they radiate would speed up.
Wouldn't it be simpler, ie, less axioms, to say that light travels as its own wave, which we model as a "wave function" because any attempt to measure it with any form of atomic mass object means that only the smallest measurable quantity of light that can interact with atomic mass can be measured? That way, you have a physical wave, without having to argue the geometry has physical properties.
"Photons are carriers of causality" across what? Does light create space, or traverse it? This gets back to the issue of lightspeed and the fact that a stable speed of light means a stable, ie. flat dimension of space. That would be the vacuum. Obviously if the space is occupied by some property which slows the speed of light, then that property also exists in the vacuum, ie. empty space.
The energy of the Big Bang and gravity were supposed to sum to zero, then they discovered that cosmological constant referred to as dark energy. A proper cosmological constant would in fact balance gravity, as proposed by Einstein. My essay touched on how to provide a balance between whatever effect causes redshift and gravity. If redshift doesn't actually push apart the universe, but expands the measure of space between galaxies, to the degree the measure of space is collapsed by gravity/galaxies, any light threading its way past many galaxies will be redshifted proportionally to how far it travels and eventually it will be redshifted off the visible spectrum. This would create a horizon line to explain why every point would appear at the center of its own view of the universe. Rather than supposing the entire universe expanded from some original point.
Of course, this discussion of the nature of time evolves a discussion of the nature of space.
"Frequency is in cycles per SECOND. Time dilation changes the duration of a SECOND."
Which is easily done by speeding up the cycles, while defining a second as a certain number of cycles. How do you increase the speed of the cycles? Add energy, or reduce drag/gravity.
You said:"Space isn't made out of anything. This give it two properties. Since it isn't composed of anything, it's fundamentally flat and since it's not bounded by anything, it's infinite. "
Nonsense. Space is made out of something. Otherwise, why does anything occur? Why does gravity occur at all? It's a little hard to conceptualize things like gravity, frame dragging and time dilation without having a conceptual symbol for space. I agree that the universe is infinite, but this "space began as a point" stuff is mathematics run amuck. I think we agree on this point.
You asked: ""Photons are carriers of causality" across what? Does light create space, or traverse it?" Space-time is made out of wave-functions. Photons are energized wave-functions. Non energized wave-functions are just space or perhaps virtual particles. Virtual particles have a little bit of energy.
You asked, "Which is easily done by speeding up the cycles, while defining a second as a certain number of cycles. How do you increase the speed of the cycles? Add energy, or reduce drag/gravity."
Wave-functions are responsible for gravity, inertial frames and time dilation as well. Although the mathematical expression that describes space-time will be more like a time dilation equation than a complex exponential. I wish I could draw a picture. The full range of photon frequencies creates a frequency ramp. The connecting image that connects to the frequency ramp is a time dilation ramp. They connect together. I took the picture out of my essay to save space...mmm.... Gotta go.
Physics operates on the assumption that space is simply a function of measurement, and thus, if there is nothing to measure, then space doesn't exist. So there is always something to create space, be it geometry or physical matter and energy. Think about it though. What is zero in geometry? Is it the point at the center, or is it the blank sheet? Consider all the problems cosmology has in trying to constrain infinity and how everything from multiverses to dark energy keeps breaking open those bounds.
I'm not saying every bit of space doesn't have potential energy, vacuum fluctuations, etc, but is it the energy, or the potential for energy, ie. the vacuum, which is more fundamental?
Physics is about reducing all the axioms down as far as possible. What could be more fundamental than empty space? Like absolute zero and infinity, it might defy the concept of measurement, but does that invalidate it?
On the other hand, look at all the theoretical gyrations used to avoid anything which cannot be measured.
In a sense, empty space would be like a holograph of eternity. An infinite equilibrium.
Everything else is just spikes of disequilibrium.
Allow me to describe the relationship between gravity and light. Think in terms of two things that are engineered to go together, like a stereo male connector and the female outlet. Don't get sidetracked about the word "engineered"; I'm not talking about intelligent design. I'm just describing two things that fit together.
From this, imagine or draw a square. From the upper left corner, draw a diagonal line down to the lower right corner. I'm sure you've seen pictures of the curvature of space. The lower triangle is a massive simplification of the curvature of space. The upper left corner represents zero gravity. The lower right corner represents a very deep point in the gravity well.
The upper triangle is filled with photons that blue-shift from left to right. You can either think of them as a spectrum of many photons at each progressively increasing frequency, or as many blue-shifting photons from left to right.
The idea is that the photons in the upper half, with energy E=hf, and the gravitational potential well (lower half), add to zero.
In effect, this means that the energy from the Big Bang came from the gravity well that was created. The vertical sides can be either energy or frequency. The horizontal lines can represent either radius/distance, or time.
Does any of this make sense?
Just because you can't measure something doesn't mean it doesn't exist. Wave-functions are solutions to the Schrodinger energy equation. The Schrodinger equation is a quantum version of Hamiltonian mechanics. We don't measure wave-functions directly. We only measure the probability, which looks a little like $|\psi|^2$.
When the potential energy is zero, the wave-function might look like $\psi \sim e^{i(kx - \omega t)}$,
which vaguely looks like the phasor in electromagnetics.
By the way, describing space using wave-functions is not unprecedented. In solid state physics, the lattice of atoms is described with wave-functions. From that, valence and conduction bands can be calculated. You get position and momentum states for the electrons.
In the case of space-time, there is no lattice. However, wavefunctions CAN be used to describe it. On those grounds, it is reasonable to entertain the idea that space-time itself is made out of wave-functions.
I'm saying the waves are real. You get entangled particles, essentially blurred together, you have a wave. As I point out about time being an effect of motion, since there can be no dimensionless point in time, there is no real distinction between a particle and its motion. Essentially the particle is a spike in the wave.
Here is another way to think about gravity and light: Light expands. Gravity contracts. This relationship is balanced, like a cycle. Sort of like looking through a Merry-go-round, the side closest to you is going one way and the side away from you is going the other way, but they still balance out. It's not as though one side moves the whole thing one way and then the other side moves it back and it's just coincidence that it ends up in the same place. The effects of gravity and light exist simultaneously and they add up to flat space. There is no need to think it all collapses to a point and then expands back out. Whether you agree with the logic or not, in my digital vs. analog essay, I give a possible explanation for how redshift can be an effect of light expanding out, rather than having the actual source recede. If redshift could be explained as a function of the effect of light expanding out over enormous volumes of space and not the recession of the source, it would yield a far simpler, logical and much less patchwork cosmology.
James Putnam wrote on Jan. 19, 2011 @ 17:17 GMT
Congratulations to the grantees. Their education, vision, and accomplishments deserve recognition. Thank you to FQXi.org and its financial contributors for adding financial support.
Jason Wolfe wrote on Jan. 20, 2011 @ 06:44 GMT
John, Peter, physics community,
Do you know why the photon is so important? It is the only massless particle that stores its energy as frequency. All other particles have rest mass; they store additional energy as momentum/kinetic energy. Particles with mass actually store additional energy as velocity which approaches the speed of light. Photons are already at the speed of light.
If photon frequency is so irrelevant, then why is it that time dilation (caused by gravity, relativistic velocity, etc...) can change that frequency so easily? Photon frequency is in cycles per SECOND. But what does time dilation do? It changes the duration of a SECOND.
The Equivalence Principle says that inertial mass and gravitational mass are equivalent. But that is not a helpful fact. What is a helpful fact is that g-force can be caused by vehicular acceleration or gravity. That is the other interpretation of the Equivalence Principle.
So why are photons important? Because it's so EASY to calculate a change in gravitational potential using photon frequency. It's so EASY, even an 8th grader could do it.
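In that spirit, here is a quick numerical check (a sketch of the standard weak-field gravitational redshift; the tower height is borrowed from the Pound-Rebka experiment):

```python
# Weak-field gravitational frequency shift: a photon climbing a height h
# in a uniform field g is redshifted by df/f ~ g*h/c^2, so measuring the
# frequencies directly gives the change in gravitational potential.
g, c = 9.81, 2.998e8          # m/s^2, m/s
h = 22.5                      # Pound-Rebka tower height in metres
shift = g * h / c**2
print(shift)                  # about 2.5e-15, the Pound-Rebka-scale shift
```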
"Photon frequency is in cycles per SECOND. But what does time dilation do? It changes the duration of a SECOND."
A second is a unit of measure. It has whatever value we assign it. If we say it's x cycles and those cycles speed up or slow down, then the duration of the second changes. It's meaningless to say that time is dilated because if it were, there would be no way to tell. It's just like saying space is bent because light doesn't follow a straight line through a gravity field. If we didn't have that concept of flat space to compare the path of light to, there would be no way to define this path as curved. It's also like saying space expands because of the redshift of the spectrum of distant sources, but the speed of light remains constant. If you have two frames of reference, the stable one is the constant.
Peter Jackson wrote on Jan. 20, 2011 @ 12:20 GMT
I've learnt that to solve problems you first address the difficult bits not the 'easy' bits. You say;
"The Equivalence Principle says that inertial mass and gravitational mass are equivalent. But that is not a helpful fact."
That may only be because we can't comprehend how a lump of mass can have higher gravitational potential when it goes faster through John's 'nothing' (the elephant was here - I can't blame John, as that's mainstream!).
How can it be nothing when we now know the CMBR has a 'rest frame'. And that it measures at 2.7 degrees. Nobody with real intellect can still 'believe it is nothing'. Even Einstein said "space without ether is unthinkable". Unfortunately most of our scientists prefer denial to intellect. What we CAN say is that it is not 'matter' as we know it, and that it cannot be 'absolute' so must be local, i.e. represent various local inertial frames.
And that means, Jason, that 'photons' as individual physical entities are not conserved during atomic scattering. Compton/Raman/Stokes/Anti-Stokes scattering is all Fresnel's ubiquitous 'n' (refraction), and all represent Doppler shifting. However, the information, the SIGNAL, the photon carries IS conserved, by E = fλ.
If you're not too wedded to 'beliefs', try it. You'll find it works.
The photoelectrons propagated from the field by motion through it ARE the missing gravitational and inertial mass.
They are propagated to preserve E, 'c' and causality locally, i.e. to implement the 'time' transition between inertial 'frames' (or read 'fields', as lines and points are only abstractions).
So that's time sorted, and without £hundreds of thousands! Is there really nobody here with the intellect to understand it?
P: "I've learnt that to solve problems you first address the difficult bits not the 'easy' bits. "
Just because its easy doesn't make it wrong or useless. From some very simple assumptions, I am trying to extract the basis for a gravity beam. I can see that I will be branded a crackpot and ignored. For this reason, I will have to do the experiment without help. It is a sorry sight to see a physics community that is averse to technological advancement because creativity is smothered by fear of being branded a crank.
The cosmic microwave background is the burning ember of what remains after the Big Bang. It is nearly ubiquitous: microwaves with a temperature of 2.7 K. I'm not familiar with the deeper facts about the CMBR and WMAP, but if you think it can help prove the existence of a rest frame (an aether?), then more power to you. I do like the way you attack and undermine the belief that FTL phenomena are impossible. I empathize with your passion for your epiphany and your frustration with a physics community that is very well trained not to question authority.
Nothing is nothing. The cosmic background radiation is something. I suspect that as we examine it in ever greater detail, we will find the shadows of galaxies so distant their light has been shifted down to black body radiation, with frequencies so long as to be flat.
I don't see "space" as having a physical effect, such as gravitational drag, aether, etc. They are all material properties, fields of such. I think I basically agree with your general hypothesis, that C is dependent on the field in question and that light travels through any such field at a maximum velocity, but different fields have different maximums and these fields can be curved, such that C is greater in the center than the periphery. It doesn't surprise me that the light radiated by me striking a match travels slower that that being propelled as a jet out the core of a galaxy. The principle is the same, though, in that they are at a maximum speed because there is no constituent energy or activity within light that could be converted to greater velocity.
The point I make about space is strictly geometric. Is zero in geometry the center point of the three-dimensional coordinate system, or is it the blank space in which the entire coordinate system exists, with any number of other coordinate systems potentially occupying the same space? Essentially modern math is built around the idea that zero is a dimensionless point. Which is a contradiction: anything multiplied by zero is zero. It doesn't exist. So if you can't have zero as a dimensionless point, then the alternative is that zero is blank space, which means that geometry doesn't create space, but only defines it. Eventually you have a different mathematical and cosmological model.
You and others are free to pursue what you believe to be most important. More power to you.
However, the fact that photons are the only particles that store their energy as frequency, AND that time dilation can change that frequency by changing the duration of a second, leads to this. This subtle fact will have enormous consequences in the physics community. This little fact will redirect the flow of research grant money for decades.
"changing the duration of a second"
Doesn't this seem at all problematic to you? You have two concepts of time, the duration and the unit, ie, second. If they change relative to one another, which is the constant and which is the variable?
It is as though I have a ruler made of rubber and I stretch it out and say that space expands because the foot is longer.
If time actually varied, you wouldn't be able to tell, because all the references would have to change accordingly. Having one clock go faster than another simply means, for whatever reason, one runs faster.
That the cycles of cesium atoms are slower in a gravity field than out of it simply means those cycles are slower due to the effects of gravity. Does gravity cause "time to dilate," or does it drag on atomic structure? I suspect we will eventually discover the mechanism by which it exerts drag.
You're getting the idea, well done. Now this is what's interesting. Call it conjecture, but what if I build an electronic device that can reproduce the frequency profile of a photon falling into a gravity well. Can I get back a gravity field? I'm trying to figure out how to write an equation that describes the momentum that the electronic emitter (or lasers) will impart. If the gravity beam idea works, it would be difficult to ignore.
A changing E&M frequency could result in a gravity field.
Do you think the experiment is worth trying?
It comes down to you. You have both common sense AND an understanding of cosmology, redshift, time dilation etc. You get to decide whether or not this idea I have of electronically generated frequency shifting is worth investigating. Orange and Yellow lasers cost $4000 each. To buy all of the test equipment, electronics and lasers would probably cost about $35,000. I hope to finish my essay this weekend; I will describe the physical principles behind the idea of a gravity beam.
From what I've told you so far, do you think the idea is:
a. crack-pottery, recommend taking up a new different hobby;
b. yeah, there might be something worth testing;
c. need more information.
J; "I think I basically agree with your general hypothesis, that C is dependent on the field in question and that light travels through any such field at a maximum velocity," .Thanks, excellent to that point, but now consider;
What if the light then came across a thick plasma cloud, say part of a giant 'bubble' around a body, the whole lot moving through space (wrt the CMB rest frame) at, say, 0.2c (let's say towards the light source)?
We know the refractive index of plasma at that density is, say, n = 1.1. Q1: What do you think happens to the light as it enters the moving plasma? And Q2: as it continues on inside the bubble? Think really carefully!
Now also consider some light going the same way but just catching the edge of the cloud, going through say 100k of the moving plasma. Q3: Do you think that bit of light's 'space/time' might not be curved a little?
And Q4, what would we find relatively when they both escape and get on their way again?
Now consider the answers separately viewed from the rest frame of the bubble as well as the CMB. ..Now you're doing real physics, with real inertial fields, not just abstract numbers, 'points' and 'lines'.
I've offered an interesting suggestion to your posts on the essay string. -And I shouldn't believe all the standard stuff about big bang echo either. It hasn't bounced off anything unless there's a wall.... Look up 'ame' anomalous microwave emissions, which are the same, emissions from actual activity SINCE 13.7 o'clock. Does anyone reeeally believe otherwise!?
You're basically asking if the bubble, with n = 1.0 and velocity v = 0.2c, can cause light to reach its destination faster than the light outside the bubble (that has to walk). I grasp what you're looking at. The light inside of the bubble would travel at 1.2c while inside of the bubble; however, from the point of view inside the bubble, only 1.0c. Along the plasma edges where n = 1.1, the light would travel through that at (c+v)/n = 1.2c/1.1 ≈ 1.09c.
I grasp what you're saying. You've offered experimental evidence to support your view; I admit I haven't looked at it carefully enough. Yes, it makes me nervous that you seem to be contradicting a long-held dictum that nothing moves faster than c. I thought that time dilation inside of the bubble prevented anything from traveling faster than c. The whole inertial reference frame of the bubble should be time dilated as $\frac{t'}{t} = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}} = 1.02$.
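For anyone who wants to run these numbers, here is a short script contrasting the naive speed sum above with textbook relativistic velocity addition (standard special relativity, not Peter's model):

```python
# Light in a medium (n = 1.1) moving at v = 0.2c: the naive sum (c+v)/n
# exceeds c, but relativistic velocity addition keeps the lab-frame
# speed below c; the Lorentz factor quoted above is also checked.
c = 1.0
n, v = 1.1, 0.2 * c
u_medium = c / n                              # light speed in the medium frame
naive = (c + v) / n                           # ~1.09c, the superluminal reading
relativistic = (u_medium + v) / (1 + u_medium * v / c**2)
gamma = 1 / (1 - (v / c) ** 2) ** 0.5         # time dilation factor
print(naive, relativistic, gamma)             # 1.0909..., 0.9385..., 1.0206...
```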
How does your idea negate length contraction and time dilation?
If superluminal drives were possible, then the light would certainly reach its destination long before the light traveling through regular empty space. However, in order to build a superluminal drive, you have to cut the connection between the spaceship and the rest of space-time. Time dilation has to be blocked from occurring.
I think you should continue to pursue your ideas. Stick with it. I think your major obstacle now is the need to show that time dilation and length contraction between the plasma bubble and the rest of space-time (or the M87 phenomena) is somehow prevented from occurring. LC and TD are the anchor that holds you back.
I looked up anomalous microwave emissions and got something about spinning dust.
P:"-And I shouldn't believe all the standard stuff about big bang echo either. "
I'm not that well versed in CMB stuff. Sure, there are no sides to the universe. John has suggested that the universe was here before the Big Bang. I could go along with that.
If it's moving toward the source, wouldn't it just slow the light that much more?
To the extent the path of the light is curved, I don't see that as bending space, but simply reacting to the physical attributes of the plasma field.
I've wondered whether we might eventually find that gravity is electrostatic in nature. I realize this has obviously been examined, but "curvature of spacetime" is more a calibration of the effect than a cause. It's a map of gravity, rather than the actual territory.
There was something recent about "anti-matter" forming at the top of storm clouds. Is it potentially "anti-gravity" as well, as in the opposite polarity?
I've been thinking recently that we have the idea of anti-matter wrong. It doesn't cancel matter, because it in fact enables matter, in the sense that you can't have one without the other. So in a sense they are polarized. An electron is the outer part of the atom and it is negative, while the proton is the center and it's positive. Now light expands out, while gravity pulls in. Could mass be the ultimate proton and light the ultimate electron...
I'd better get some sleep, before i dig this hole any deeper.
Jason/John
Jason; You've seen the elephant in the room again. Don't look away this time! Length contraction and time dilation are a simple step away, and are what the short video was about; watch it again. It's all simply Doppler shift due to refraction and co-motion. Once our brains get used to dealing with those TWO moving variables at once, science will advance 100 years and we'll see its simplicity (once we've swept away all the droppings).
http://fqxi.org/data/forum-attachments/1_YouTube__Dilation.htm
You're so nearly there, but afraid of it. Mathematical physics has got so complex they've forgotten that the question was only: what does 2 plus 2 make?
Can I put it like this? We have two problems:
1. We have this strange force we can't understand that slows light (so 'time'?!) and also seems to bend it (or bend 'space'?!), but there is no 'space' to bend, just a void. How on earth could it work?? And the answer needs to be a 'real' quantum process for the holy grail of unifying physics! No wonder we're a bit lost!
2. We have this known fact that refraction slows down light (so travel time) and curves its path in a gas or plasma (which we know there is a lot of in space), to a degree subject to density (mass), by a known quantum mechanism. Why on earth can't we find its effects anywhere?? All we can find is this theoretical time dilation and curved space stuff everywhere!
So do you think there might just be some kind of link? What answer did you get for that complex equation 2 plus 2 = ...?
I don't believe in belief-based 'quasiscience', but I suggest you can safely believe the answer you get! If our top scientists can't, perhaps they need different glasses, or replacing?
Good question. If I may paraphrase, using my own understanding: What balances light? It seems much is covered by this mystery called "gravity." Where light is expending energy in all directions, it is collecting it from all directions.
Where light is a presence that is immaterial, it is an absence that is material. Maybe they will find the Higgs and all will be solved.
I still think there is an element to space we are missing. To quote Nietzsche, "I was staring into the abyss when I realized it was staring back."
A holograph of infinity....
"I still think there is an element to space we are missing". Precisely. ..That's the giant 'elephant in the room' I mentioned.
We may call it the interstellar medium, the interplanetary medium (same stuff but at rest wrt the sun), the dark energy condensate (which condensed matter comes from), the Higgs field, the ether, whatever we wish, but it gives us the local 'CMB rest frame', which is now mainstream physics. The only reason to ban 'the ether' can now go (space without it is 'unthinkable' anyway, per AE) with local fields.
But those 'inside' the box (who still think they're outside) have got too used to living without it, so will fight tooth and nail for their beliefs whatever logic says. The search for intelligent life in the mainstream physics universe continues! I feel like writing a modern 'Alice in Wonderland'.
I think there is a mathematical aspect that should be considered and would cause even more distress to the control oriented and that's the idea that zero is not a dimensionless point. The whole notion of space as three dimensional emerges from it, since the zero point is the anchor of the three dimensional coordinate system and it is this mathematical construct which is what three dimensions boil down to. I have a strong suspicion that if all the conceptual framework out of which Big Bang emerged was dissected, it would reveal that this assumption is the seed.
Anonymous wrote on Jan. 22, 2011 @ 05:07 GMT
FQX is an old boys network and nothing like what it purports to be.
Anonymous replied on Jan. 22, 2011 @ 07:02 GMT
It's a network of people who are well versed in physics and know what they're talking about.
Uh, welcome to the real world. Similar politics applies in any such situation. If you want to climb the ladder, you have to follow the rules of that ladder. We just have a little garden here, that while it has no serious access to the castle, provides a nice space to converse for like minded people. Here, we can take out our little hammers and chip little scratches in the castle walls and no one cares. If the big names actually joined the public conversation, it would draw much more attention and the debate would likely cohere around established schools of thought. So that those of us who think the establishment is somewhat off the tracks would be marginalized, if not outright banned. Been there, been banned. Yea for FQXi!
T H Ray replied on Jan. 22, 2011 @ 19:21 GMT
There's no castle. Professional scientists are remarkably accessible to people who have studied their subject or who wish to learn it -- just survey the 'net, and all the information so freely available on the Web. Science is, after all, the most collaborative enterprise on Earth. It's a bit much to expect of anyone, however, to listen very long to propositions that violate the most basic known principles and proven results, and then be abused for defending some nonexistent "establishment." What benefit does one derive from that?
Steve Dufourny replied on Jan. 22, 2011 @ 21:28 GMT
FQXi is an interesting platform, very innovative and transparent; that's the most important thing. A really innovative platform indeed; I think it's even the first platform of its kind. Furthermore, the architecture is very pleasant and agreeable.
The people here, with all their ideas, are interesting and have relevant ideas.
Here we discuss, we critique, we have our vanities, our humilities; we learn, we improve, we share, we speak with total transparency. We live, we think, we study and we extrapolate. We have bad and good moments; that's life. The net is a big revolution, and the connectivity also.
People from all over the world can come and discuss. Have you seen the contest this year? We see Africans, Europeans, Americans, Asians, Indians... I find that wonderful. In fact FQXi is young; it's the beginning, and already it's super.
CONGRATULATIONS FQXI, A REAL SUCCESS, THIS PLATFORM.
THE SCIENCES ARE UNIVERSAL... THIS SPHERE EVOLVES AND HAS AN AIM!!! THE ULTIMATE SPHERE.
Sincerely, Steve, human on Earth with qualities and faults, dreams and hopes, faith and love.
FRIENDLY, DEAR BROTHER HUMANS OF THE MILKY WAY, TURNING AROUND THE BH CENTER, TURNING AROUND THE SUPERGROUP AND ITS CENTER... AND TURNING AROUND THE UNIVERSAL CENTER INSIDE THE UNIVERSAL SPHERE.
I have a big problem writing the equations. I know my maths and the symbols, but I can't write them. Could you help me please, with LaTeX and the others? How can I get the possibility to write the symbols of maths?
Include LaTeX equations, yes, but how????
It would be good if FQXi invented a program, easy to use, added here where we write. Or a simple list of symbols just above here, between the help-page link and your name, so that we can copy them and just put them here.
You know: equals, identical, different, smaller, bigger... absolute value, log, ln... sine... hyperbolic cotangent... arcsine... hyperbolic arc-cotangent... limit... sum... alpha, beta, gamma, delta, epsilon... chi, psi, omega...
It would be cool.
The rationality of the day: ((√2)^(√2))^(√2) = (√2)^(√2·√2) = (√2)^2 = 2; which goes to show that the irrational always becomes rational in its pure prediction.
My point is not about the politics of physics, but the physics of politics. There is a gatekeeping/filtering function as surely as a cell has a membrane. I recognize the reality and necessity of this. I was simply responding to the dissatisfaction expressed by anon 1.
While you are defending "the most basic known principles and proven results," could you explain to me how something of this size could have coalesced out of the radiation field left by the Inflation stage?
http://www.physorg.com/news/2011-01-astronomers-distant-galaxy-cluster.html
"Scientists refer to this growing lump of galaxies as a proto-cluster. COSMOS-AzTEC3 is the most distant massive proto-cluster known, and also one of the youngest, because it is being seen when the universe itself was young. The cluster is roughly 12.6 billion light-years away from Earth. Our universe is estimated to be 13.7 billion years old. Previously, more mature versions of these clusters had been spotted at 10 billion light-years away.
The astronomers also found that this cluster is buzzing with extreme bursts of star formation and one enormous feeding black hole."
"The lump sum of the mass turned out to be a minimum of 400 billion suns -- enough to indicate that the astronomers had indeed uncovered a massive proto-cluster. The Spitzer observations also helped confirm that a massive galaxy at the center of the cluster was forming stars at an impressive rate.
Chandra X-ray observations were used to find and characterize the whopping black hole with a mass of more than 30 million suns. Massive black holes are common in present-day galaxy clusters, but this is the first time a feeding black hole of this heft has been linked to a cluster that is so young."
Given that the inflation stage expanded the universe out to quite a large area, it would seem that by the end of this, the level of energy would be fairly diffuse, so how could this much energy clump together in little over a billion years? Given it takes the Milky Way galaxy 225 million years to make just one rotation, 1.1 billion years is not very long at all. To use a rough analogy, it would be like saying that from the invention of the wheel to the development of the Model T was as long as it would take to drive around New York City five times.
For reference, the black hole at the center of our galaxy is approx. 4 million solar masses.
John, if one is not versed in the fundamentals, any explanation for a phenomenon is as good as another. Science is a progressive enterprise, constrained by known boundaries, not driven by personal beliefs. Knowing the boundaries doesn't guarantee success at predicting the nature of undiscovered phenomena; it does, however, make discovery and prediction possible, independent of mere opinion.
In other words, you have no adequate explanation for how that much energy could have coalesced in that amount of time.
I'm afraid that some of us who are not fundamentalists are really starting to scratch our heads as to what is going on.
Maybe if you were a "fundamentalist," you would itch less.
You really just don't have an answer. It just doesn't bother you that bridging the gap between that post-inflation stage and a galaxy cluster of that magnitude in approximately a billion years is such a stretch that no one has stepped forward with any sort of patch.
Yet it doesn't tickle your brain in the slightest. Are we talking the fundamentals of logic, or doctrine here? In a faith based system, any questioning of the infallibility of that system is considered a test of faith and it is a matter of principle to resist even the acknowledgement of an issue. Whereas with logic, every little brain wiggle, tickle, or itch is a question to be considered.
Maybe if I was a "fundamentalist," I would itch less, but I would also be sliding down the slope to senility, because the mind is like the rest of the body, use it, or lose it.
Dan T Benedict replied on Jan. 24, 2011 @ 22:07 GMT
My essay has been posted today. I would be interested in your comments and/or criticisms since I'm somewhat familiar with your views of the BBT. However, I don't really consider my model to fit this category. I prefer Evolving Steady State, although it probably is consistent with Roger Penrose's Conformal Cyclical Model. You probably will enjoy reading the third reference given. It is a rather scathing commentary on the state of Modern Cosmology from Prof. Michael Disney of Cardiff Univ. He was one of the initial leaders of the HST science team. While I don't entirely agree with his assessment, I believe he has more valid points than invalid ones. Of course he was soundly criticized for his opinions, much as Lee Smolin was criticized by many in the String Theory camp after his book "The Trouble with Physics" was published.
Dan Benedict
I read through it and I'll try offering some ideas in the thread.
That is an interesting article. I'll post the last three paragraphs for Tom's sake:
In the 1930s, Richard Tolman proposed such a test, really good data for which are only now becoming available. Tolman calculated that the surface brightness (the apparent brightness per unit area) of receding galaxies should fall off in a particularly dramatic way with redshift—indeed, so dramatically that those of us building the first cameras for the Hubble Space Telescope in the 1980s were told by cosmologists not to worry about distant galaxies, because we simply wouldn't see them. Imagine our surprise therefore when every deep Hubble image turned out to have hundreds of apparently distant galaxies scattered all over it (as seen in the first image in this piece). Contemporary cosmologists mutter about "galaxy evolution," but the omens do not necessarily look good for the Tolman test of Expansion at high redshift.
In its original form, an expanding Einstein model had an attractive, economic elegance. Alas, it has since run into serious difficulties, which have been cured only by sticking on some ugly bandages: inflation to cover horizon and flatness problems; overwhelming amounts of dark matter to provide internal structure; and dark energy, whatever that might be, to explain the seemingly recent acceleration. A skeptic is entitled to feel that a negative significance, after so much time, effort and trimming, is nothing more than one would expect of a folktale constantly re-edited to fit inconvenient new observations.
The historian of science Daniel Boorstin once remarked: "The great obstacle to discovering the shape of the Earth, the continents and the oceans was not ignorance but the illusion of knowledge. Imagination drew in bold strokes, instantly serving hopes and fears, while knowledge advanced by slow increments and contradictory witnesses." Acceptance of the current myth, if myth it is, could likewise hold up progress in cosmology for generations to come.
Peter Jackson replied on Jan. 25, 2011 @ 11:41 GMT
Dan/John
Very enlightening. Recent discoveries are problematic for the 13.7 bn yr limit, with old and new galaxies at over 12 bn yrs distance, defying logic, 'c', or both. I'm currently trying to falsify the basis of an apparent solution, extending the DFM from my own essay (2020 Vision), but can't yet find any inconsistencies (except with 'beliefs').
I believe I may enjoy reading yours Dan, but will comment there, and hope you may comment on mine.
And Tom; Becoming too wedded to what you term 'fundamentals' is equivalent to building a box around yourself. We should never forget the differences between foundations and boxes.
John, you have a remarkable facility for missing the point. Theories are judged by the phenomena they explain, not by the anomalies left unexplained.
One has to understand, first of all, that phenomena of themselves mean absolutely nothing -- and I do hope you pause to think about that, because you and others here have a pronounced tendency to assign value to various observations without any notion of how they might fit a theoretical framework and indeed, you sneer at theory as the product of lesser intellects. You may not need no stinkin' theories in your world of self-satisfied armchair quarterbacking, but science does. Were it otherwise, we would not have advanced beyond Aristotle.
One example often cited, of how theory drives interpretation, is that of steady state cosmology vs. big bang. Before Penzias and Wilson discovered the background radiation, the theories held equal status. But big bang had already predicted background radiation, and steady state could not. Data drives theory, and theory drives interpretation.
Einstein is regarded as a genius not because he sprang suddenly into inspiration and let his wisdom shine forth (though those least familiar with Einstein and his work may think so) -- but because he was thoroughly soaked in the knowledge of the physics that had gone before. If you actually understood Boorstin in context (and you apparently don't) you would see that he is supporting incremental progress over "bold strokes." Einstein himself is so misunderstood for his saying, "Imagination is more important than knowledge," that every time I hear it quoted I get nausea. He didn't mean that imagination is a substitute for knowledge -- rather, that one step beyond to a more inclusive theory gives knowledge a chance to germinate and grow. Einstein's and Leopold Infeld's masterful little book, _The Evolution of Physics_, is so simple and elegant that a bright junior high school student can understand it -- though I surmise that you and others will turn up your noses at the thought of reading such a book, even though day after day I read your posts holding forth hypotheses that violate even the most fundamentally well known facts.
As for myself, I am happy to be a slug in the garden of physics, and leave you to whatever other garden you inhabit.
"Fundamentalist" was John's label, not mine.
Since Daniel Boorstin and the "illusion of knowledge" came up, I find myself wanting to say more, because having spent many years in communications at the Fortune 500 level, it's a subject I know something about.
Indeed, the bible of advertising and public relations for 40 years has been Ries & Trout, _Positioning_, whose guiding premise is "Perception is reality." Boorstin was also highly familiar with that doctrine and how it works -- one builds an illusion of self-reinforcing reality within the niche one wants to own.
The second premise of positioning is that one positions one's strengths against the competition's weaknesses in order to dominate that niche. The unspoken assumption is the necessity to compete for attention -- to use perception as a weapon much like a judo master uses his opponent's weight against him. A classic example (for those old enough to remember) is the case of Hertz vs. Avis -- when Avis began its campaign of "We're number two, so we try harder," the statement although true was slightly disingenuous. Hertz owned the rental car market, and Avis was a very, very distant number two. By constantly beating the drum at Hertz's expense, though, Avis gained the perception that the two were nearly equal. In other words, when someone in the rental car market thought of Hertz, they also thought of Avis. Eventually, the perception fulfilled itself -- Hertz and Avis became equal in market share. Hertz had failed to realize that its very strength of being number one was in fact a liability.
In this little laboratory of communication, one can see the same positioning dynamic, with those pushing their pet ideas jockeying for attention, creating (consciously or not) a perception that the "competition," that of "establishment science" or some such nonexistent thing has abandoned its noble mission and fallen into religious fundamentalism. Oh, my, the travesty.
I expect the non-delusional will eventually realize that this is all chaff in the wind, because the work of science -- real, objective, rational knowledge -- is not done in chatty forums. It is done in professional journals and conferences where the wheels of progress grind ever so slowly. A dilettante may find this ever so painful; however, as Boorstin points out, however comforting the illusion of knowledge may be, actual knowledge is so much more valuable.
You have to install LaTeX on your computer (it's free) to write equations with it. Try this site.
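For example (just a minimal illustration), the time-dilation factor quoted earlier in this thread would be typed as:

\frac{t'}{t} = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}

and square roots, Greek letters, and integrals are \sqrt{x}, \alpha ... \omega, and \int_a^b f(t)\, dt.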
Thank you for the diatribe, but let's get back to some basic points, which you have a remarkable facility for avoiding. For one, how does one go about falsifying a theory, when contradictory observations are ignored whenever a patch cannot be fitted? As in I'm still waiting for an explanation for how that amount of energy can coalesce in that amount of time. From reading the article, I could supply the probable answer: a clump of Dark Matter. Obviously an extremely enormous amount, but that shouldn't be a problem for cosmology, given the size of some of the other patches.
As for the theory I've been considering, which is simply that redshift is an optical effect of light traveling enormous distances, rather than the actual recession of the source, this observation of distant galaxies having significant surface brightness would be quite good evidence that the redshift is actually an optical effect, not that those galaxies exist out on some spatial curve, pushing them away.
While I'm not particularly familiar with Boorstin, since the mention was from the article discussed, in my more physically oriented view, it is called "leverage." As in applying pressure to the necessary points to achieve the desired result. Such as using the term fundamentalist in response to your defense of fundamentals. Everyone feels they have some grasp of fundamentals, but the fact is that all we do have is theories.
How does one determine whether knowledge is actual or illusion? Are we just supposed to leave it to the professionals since they are vastly more educated? What if some of the knowledge in which they are trained is more illusory than actual? Surely they will conclude that eventually.
I find the theory of relativity to be one of the most beautiful theories of all time, yet it is also one of the ugliest. It is beautiful in the scope and breadth of the phenomena that it describes and the mathematical sophistication with which it does so. However, it remains an ugly theory in that it allows the very laws of physics to break down at singularities. This is a travesty. No theory is so beautiful that it should allow such abhorrent entities. But after nearly a century, they still exist in the theory. Neither the formalisms nor the interpretations have been conclusively altered in order to deal with them appropriately. In the words of Leonard Susskind, "we don't yet have the mental architecture with which to understand them, but we're working on it," or words to that effect. So even the most educated people, at the pinnacle of their profession, don't have a monopoly on new, fresh, interesting, and even "world class" ideas.
So what is the non-academic, with no access to professional journals, no support system, etc., but with promising, perhaps even "world class" ideas, supposed to do with them? I believe this is what makes FQXi different, especially their essay contest. It provides an outlet for everyone to express their ideas, even the dilettantes and cranks. But if ever an opportunity for a non-professional to make an impact on the foundations of science existed, it exists here. Maybe I'm just fooling myself and the non-academic will never be taken seriously, since their ideas may not be presented with the eloquence and sophistication of the professional, but that does not mean that these ideas don't represent actual progress. Only time will tell.
Right, John. All we DO have are theories, and that is the only way we can interpret data. Until you grasp that, you are grasping at straws.
We determine objective knowledge by measured correspondence of theory to physical result. This is not the province of academics, but the nature of science and scientific method.
I can't understand why you consider relativity ugly, merely because it cannot avoid singularities. Einstein wrote voluminously on the problem of creating a singularity-free theory, and actually intended general relativity to be intermediate toward a unified field theory. However, relativity is what theorists call "mathematically complete" in that it proceeds from first principles alone. Quantum mechanics is "ugly" in the sense that its mathematical model is cobbled after the fact of observation.
At any rate, a theory that obviates the need to specify boundary conditions is still out of reach. That doesn't make everything else ugly, does it?
I look forward to new world class ideas. No matter where they come from.
Thanks for your reply. I'll agree that QM is definitely the uglier of the two theories. I just have a major problem when the laws of physics are allowed to break down. If we could slightly modify GR to obtain better understanding, without waiting for a full TOE or QG, I believe this would be progress. For a career physicist to tinker with Einstein is risky. Maybe it's better coming from an outsider with nothing to lose.
A truly relational theory should carefully describe how events and physical properties are defined. I propose that they are defined only in relation to the universe as a whole. If this principle is accepted, then nothing, not matter, nor energy can ever cross an event horizon. At most, they can only merge with the boundary. A relativist might protest that this violates the equivalence principle, but does it really? An observer that merges with the boundary is in a locally timeless state and therefore experiences nothing! The interior of a BH represents its own cosmos. Nothing can leave, nothing can enter. The two separate cosmic spacetimes share only the boundary. Thus this interpretation of GR recognizes exterior solutions for Einstein's equations for "in-falling" mass and energy and a cosmic solution to the interior of BHs. My essay describes how singularities can be understood in this scenario and the implications for cosmology and astrophysics. If you have time to read it, I would value your opinion.
Yes, all we do have is theory and observation, but what then are these fundamentals of which you speak?
When you consider the history of the intellect, often the "fundamentals" proved to be transient, as further insights emerged from the cracks which developed in those fundamentals.
"If you actually understood Boorstin in context (and you apparently don't) you would see that he is supporting incremental progress over "bold strokes.""
Michael J. Disney made it quite clear he understood the increased potential for error inherent in the "bold stroke."
Would you consider Inflation theory "incremental progress," or a "bold stroke?"
Let's turn that around. Would one have considered Hoyle's steady state cosmology incremental progress or a bold stroke?
Theories live and die by data. As it is, the data do seem to show a slight asymmetry in background radiation that supports inflation. So there's a prediction, and there's no competing theory that I know of that accounts for all the other physics we know, plus the novel prediction. Until something better comes along, inflation is solid.
And that's my point about knowing what sticks to the wall, so to speak, when it comes to speculation about what one can know in the future. All the chattering here about "I've got this idea ..." reminds me of Reagan, "I've got this letter in my pocket ..." As Boorstin said, an illusion of knowledge.
It's all too easy to huff and puff about the state of theoretical physics when one has nothing at stake. At the end of the day, one finds that the huffing and puffing has no substance, while the work of making and testing theories quietly goes on. What makes the news, does not make physics.
I had in fact previously read your essay, though one reading of a serious technical work is inadequate for honest criticism. On second reading, I find the piece quite worthy of comment, which I will post in your essay forum. Thanks.
Thanks, dear TH,
That's nice. I am going to try; I hope it will work.
It would be good if a universal keyboard were created, with more than our AZERTY, with all the signs and symbols... that would be easier in any case.
MICROSOFT... GOOGLE... APPLE... LET'S GO. We wait.
Galaxy at 13.2 billion years!
http://www.telegraph.co.uk/science/space/8285196/Scientists-discover-the-most-ancient-galaxy.html
Dear TH,
Not possible; my computer always has a problem. I am going to keep trying.
I don't know, but for maybe two months now my computer has become bizarre.
My connection also.
What's the problem, Steve? You can't download the program, or you can't install it? What computer platform are you on?
Steve Dufourny replied on Jan. 27, 2011 @ 16:44 GMT
In fact when I click, everything disappears, my page also. I sometimes have the problem when I try to go on some platforms. My system is very basic, and slow. My connection also. It's not the new Cray, or the Jaguar, or the Blue Gene, if I can say, and my protections too, evidently. Perhaps it's the CIA, hihihi; they know all of us, Tom, they know all of us, hihihi, lol.
I am going to keep trying, Tom; I will tell you how it evolves. Thanks again.
Still the same, Tom; it's bizarre. I click and hop, the page disappears.???
Steve, I suspect that your computer is too slow, and the connection "times out" before the download completes. By platform I mean what operating system are you running -- MAC, Windows? What's your computer processing speed and memory? Maybe there are some workarounds available.
Windows, and the memory, er, 524 or 1048 I think; very slow indeed.
You know, it's not when I download; I don't understand why these pages disappear like that. Probably you are right, the virtual memory is slow and weak.
Let's find a solution, Tom, hihihi 10001010001000001000010001111011101111011111110000100000000.
...the codes and the algorithms, always...
The rubber is getting closer to hitting the road:
http://physicsworld.com/cws/article/indepth/44805
Reality check at the LHC
Lots of comments from Robert Oldershaw
Does anyone here know how to transform a gravitational potential energy from accelerated coordinates into an inertial reference frame?
Bubba wrote on Jan. 25, 2011 @ 18:52 GMT
IMO, University physics departments need to put the physics back in physics education.
It is not uncommon today to see newly minted post-docs and PhDs who lack any sense of physical intuition or imagination. Students have been indoctrinated too heavily in mathematical rigor and formalisms.
Modern physics has become boring, unimaginative, and conformist.
Dear Ray, other thoughtful people,
Do you think that atheists and others would be less afraid of God if we as human beings were permitted, by God, to take personal responsibility for our own moral and ethical conduct? Doing so would eliminate needless worry that we are attending the wrong church or religion.
I think the root cause of a lot of human misery is the dilemma of not knowing if we are going to suffer eternal damnation for not worshipping God at all, or doing so in a way that someone tells us is improper.
There are many different kinds of people in the world. Some people do not feel a connection to the Creator.
I'm trying to make the argument that freedom of religion and religious harmony are positive things. We have them in the United States. Maybe we should ask this Higher Power to honor our choice to believe, not to believe, or to worship in a way that we are in attunement with.
Isn't that what religious freedom means?
I talk to God all the time, but he just says the same thing, "Keep moving until I tell you to stop." Of course, it might be his answering machine.
LOL. That's pretty funny!
I came downloaded with the religion 1.2 program. It has "Zeus is sick of your whining. Shut up and reload." printed on the cover of the installation floppy, with the price listed as blood of a ram.
It's got one of those early updates though. There is a "Je" scribbled in front of the Zeus, the "is sick of" is scratched out and "listens to" is written in above it. The "shut up and" is scratched out and "and pray." is written after "reload." Blood of a ram is scratched out too and "10%" is written over it, but I think it's a pirated copy, because the most anyone seems to ever pay is a few coins.
It really doesn't work as advertised though. Mostly it's the old program with a few of the more obnoxious bugs patched over, but you know how those programmers are. Every little patch is branded and copyrighted as the best thing since sliced bread. As Tom said, it's all about positioning and perception.
Fact is, they are all pretty similar. Some lean toward the being and some toward the doing. The whole noun/verb, factor/function thing, but the reality is they all blur position and momentum together.
As Frank Sinatra so eloquently put it, "Dobedobedobedo."
Of course, he also said, "I did it my way," but that likely involved controlled substances.
I can relate to what you're saying; the obligation to attack and destroy evil, the misunderstood concept of sacrifice, and the ongoing transformation into a higher and more perfected expression of God. It is like trying to tame/master the wild and powerful force within us. Part of us is part of God. I have experienced what I can only call the Jesus Christ Complex, in which our ego is pumped up by the power of God; where we believe we can heal and perform miracles, and miracles happen to us. But most who make it this far will overload and destroy themselves in their own lack of compassion and love.
I am trying to learn how to use this power; I think some call it the Super Conscious. I wanted to learn to use it in a way that could be scientifically verified. That part I wrote down and submitted in the essay contest. I came up with the frequency analog to Newtonian force, F = ma. I obtained the equation for a shift photon, which is used in creating a force field/tractor beam/gravity beam. But the feedback I have gotten has been sketchy. A few people who have read it really liked it. Most can't understand it.
If you wanted proof of the existence of a Higher Power, read my essay. It's very technical and the language is mostly formal and higher learning. It took a long time to figure out how to convey these ideas in an acceptable format.
I used to go for those brain spikes, but now it seems the opposite is preferable, to zen out. It's better for plugging into the larger context. It's a bit of a trade off between power and awareness. Doing vs. being.
The current political situation is a good example of those having the most power being most blind to the larger context. Consider it in terms of a belief system, where those who are most convinced rise to the top, as faith in the system becomes its own reward, while those asking questions go off in other directions, either of their own volition, or being pushed out as heretics.
Haven't been following the contest closely, but will read yours as soon as time allows.
Research on application of multimedia image processing technology based on wavelet transform
Kun Sui1 &
Hyung-Gi Kim1
With the development of information technology, multimedia has become a common form of information storage. Traditional information query techniques have struggled to adapt to this new technology, so retrieving useful information from large volumes of multimedia data has become a hot topic in the development of search technology. This paper takes the image, as stored by multimedia technology, as its research object, exploits the wavelet transform's ability to separate a picture into low-frequency and high-frequency components, and establishes a multimedia processing model based on the wavelet transform. Simulation results on face, vehicle, building, and landscape images show that different wavelet basis functions and different numbers of decomposition layers yield different retrieval results and retrieval speeds. With four layers of wavelet decomposition and the cubic B-spline wavelet as the wavelet basis function, the classification result is optimal, with an accuracy of 89.08%.
Multimedia generally refers to images, graphics, text, and sound. As an important information carrier, the image is intuitive and rich in content and is an important way of expressing information, so image processing has become a major component of multimedia processing technology. With the development of multimedia technology and the arrival of the information age, people are increasingly exposed to large quantities of image information, and how to effectively organize, manage, and retrieve large-scale image databases has become an urgent problem.
In research on multimedia image retrieval, feature representation can be divided into three basic directions. (1) Retrieval based on the color features of the image. Color is the most widely used visual feature in image retrieval: it is the most intuitive and obvious, and one of the most important perceptual features of image vision, mainly because color is strongly related to the objects or scenes contained in the image. In addition, compared with other visual features, color depends less on the size, orientation, and viewing angle of the image itself, so it is more stable and robust, and it is simple to compute, which is why it is so widely used. Users can input the color features they want to query and match them against the information in the color feature library. Color-based feature extraction represents the color information of an image well; the main methods are the color histogram [1, 2], color moments [3, 4], color sets [5], the color coherence vector [6, 7], and the color correlogram [8]. (2) Retrieval based on image texture features. Texture is a visual feature that reflects the homogeneity of the image independently of color or brightness. It is an intrinsic property common to all surfaces and contains important information about the structure and arrangement of a surface and its relationship with the surrounding environment. Because of this, texture features are widely used in content-based image retrieval: by submitting an image containing some kind of texture, a user can find other images containing similar textures. Among texture retrieval methods, the co-occurrence matrix [9,10,11] and Gabor filters [12,13,14] are two commonly used approaches. (3) Retrieval based on image shape features. The shape information of an image does not vary with color and other characteristics, so it is a stable feature of the object; for graphics in particular, shape is the single most important feature. In general, there are two kinds of shape representation: contour features, which use only the outer boundary of the object, and region features, which relate to the entire shape area. The most typical methods for these two types are Fourier shape descriptors [15, 16] and moment invariants [17, 18].
The concept of the wavelet transform was first proposed in 1974 by J. Morlet, a French engineer engaged in petroleum signal processing, who established the inversion formula through physical intuition and practical experience of signal processing. The essential difference between wavelet analysis and Fourier analysis is that Fourier analysis considers only the one-to-one mapping between the time domain and the frequency domain, representing the signal as a function of a single variable (time or frequency), while wavelet analysis uses a joint time-scale function to analyze non-stationary signals. The difference between wavelet analysis and time-frequency analysis is that time-frequency analysis represents a non-stationary signal in the time-frequency plane, whereas wavelet analysis describes it in the so-called time-scale plane. In the short-time Fourier transform, the signal is observed at a single resolution (that is, with a uniform window function), while in wavelet analysis the signal is observed at different scales or resolutions. This multi-scale, multi-resolution view of signal analysis is the basic point of wavelet analysis.
The basic idea of wavelet analysis is derived from Fourier analysis and represents a breakthrough development of it. It is not only a powerful analytical technique but also a fast computational tool, of both theoretical and practical value. Wavelet analysis is a powerful tool for characterizing the internal correlations of signal data, and it is very effective in data compression and numerical approximation. Because of its "self-adaptive" and "mathematical microscope" properties, it has become a focus of attention in many disciplines.
In pattern recognition research, wavelet analysis can decompose a signal into low-frequency and high-frequency components that represent its characteristics. As a frequency analysis method, it is widely used for feature analysis in many fields of signal processing [19,20,21,22,23].
Wavelet analysis is also often used to analyze image features. In the research of Li et al. [24], the fusion of multi-sensor images is realized by the wavelet transform. The goal of image fusion is to integrate complementary information from multi-sensor data, making the new image more suitable for human visual perception and computer processing and for tasks such as segmentation, feature extraction, and object recognition. The proposed scheme performs better than the Laplacian-based approach, and the authors recommend specially generated test images for performance measurement, evaluating different fusion methods and comparing the advantages of different wavelet transforms through extensive experimental results. Chang and Kuo [25] exploited the advantages of the wavelet transform to propose a multi-resolution method based on an improved wavelet transform, called the tree-structured wavelet transform or wavelet packet. The motivation for this transform is that a large class of natural textures can be modeled as quasi-periodic signals whose dominant frequency lies in the intermediate frequency channels, and the transform can zoom into any desired frequency channel for further decomposition, whereas the conventional pyramid-structured wavelet transform decomposes further only in the low-frequency channel. On this basis a progressive texture classification algorithm was developed, which is not only computationally attractive but also performs excellently.
In this paper, multimedia retrieval technology is studied with the image as the research object. In the retrieval analysis we perform wavelet decomposition of images, extract image features, and compare the effects of different wavelet bases and different numbers of decomposition layers on the recognition results. Based on the recognition results for face, vehicle, building, and landscape images, the optimal wavelet basis function and the optimal number of layers are selected, and an image retrieval model based on wavelet decomposition is established.
The contributions of this article are as follows:
Design an image retrieval method based on wavelet decomposition.
Analyze the influence of different wavelet bases on image retrieval and obtain the optimal wavelet basis for wavelet decomposition.
Analyze the influence of different numbers of decomposition layers on image retrieval and obtain the optimal number of layers.
Proposed method
Wavelet theory
The multi-resolution analysis in wavelet theory provides an effective way to describe and analyze signals at different resolutions and approximation accuracies, and it is highly valued in image processing and its applications. The wavelet transform can be expressed as follows:
$$ C_x(a,\tau) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} x(t)\, \psi^{*}\!\left(\frac{t-\tau}{a}\right) dt, \qquad a > 0 $$
where ψ(t) is the mother wavelet, a is the scale factor, and τ is the translation factor.
In the past decade, wavelet analysis has made rapid progress in both theory and method, studied from three different starting points: multi-resolution analysis, frames, and filter banks. At present, the description of function spaces, the construction of wavelet bases, cardinal interpolation wavelets, vector wavelets, high-dimensional wavelets, multi-band wavelets, and periodic wavelets are the main research directions and hotspots of wavelet theory. It is now recognized that multi-resolution processing in computer vision, subband coding in speech and image compression, non-stationary signal analysis based on non-uniform sampling grids, and wavelet series expansion in applied mathematics are all different views of the same theory.
In application, wavelet analysis has a wide application space thanks to its good time-frequency localization, scale-variation, and directional characteristics. Its application areas include mathematics, quantum mechanics, theoretical physics, signal analysis and processing, image processing, pattern recognition and artificial intelligence, machine vision, data compression, nonlinear analysis, automatic control, computational mathematics, artificial synthesis of music and speech, medical imaging and diagnosis, geological exploration data processing, and fault diagnosis of large-scale machinery, and its scope is constantly expanding. Wavelet analysis serves as an important analytical theory and tool in almost all subject areas, and fruitful results have been achieved in research and application.
Let ψ(t) ∈ L²(R). If the Fourier transform of ψ(t) satisfies the admissibility condition
$$ C_\psi = \int_0^{+\infty} \frac{|\hat{\psi}(\omega)|^2}{\omega}\, d\omega < +\infty $$
then ψ(t) is called the mother wavelet. The mother wavelet is translated and dilated to form a family of functions:
$$ \psi_{a,\tau}(t) = |a|^{-1/2}\, \psi\!\left(\frac{t-\tau}{a}\right), \qquad a, \tau \in R;\ a \neq 0. $$
The continuous wavelet transform of the function f(t) is defined as:
$$ W_f(a,\tau) = |a|^{-1/2} \int_R f(t)\, \psi^{*}\!\left(\frac{t-\tau}{a}\right) dt $$
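As a minimal illustration of this definition (my own sketch; the paper itself gives no code, and the MATLAB environment it mentions would use different calls), the continuous wavelet transform of a 1-D signal can be computed in Python with the PyWavelets library:

import numpy as np
import pywt

# Test signal: two sinusoids sampled at 1 kHz for 1 s.
t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 120 * t)

# The scales play the role of the scale factor a > 0 in the formula above;
# the Morlet wavelet 'morl' is one common choice of mother wavelet psi(t).
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(x, scales, 'morl', sampling_period=1e-3)

print(coeffs.shape)  # (63, 1000): one row of W_f(a, tau) per scale a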
Wavelet basis
The Belgian mathematician Daubechies proposed a class of wavelets with the following characteristics, called the Daubechies wavelets:
Compact support in the time domain: the support of ψ(t) is finite, and the high-order moments vanish, $\int t^p \psi(t)\, dt = 0$ for p = 0, ..., N. The larger N is, the longer the support of ψ(t).
In the frequency domain, the Fourier transform of ψ(t) has a zero of order N at ω = 0.
ψ(t) and its integer translates are orthogonal.
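These properties can be checked numerically; a minimal sketch follows (assuming that PyWavelets' 'dbN' corresponds to what this paper calls Daub(N)):

import pywt

# dbN has a support (filter length) of 2N taps and N vanishing moments,
# matching the compact-support and vanishing-moment properties above.
for name in ['db2', 'db4', 'db6']:
    w = pywt.Wavelet(name)
    print(name, 'filter length:', w.dec_len,
          'vanishing moments:', w.vanishing_moments_psi,
          'orthogonal:', w.orthogonal)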
Color characteristics of the image
Color features are the most widely used visual features in image retrieval. Color allows the human brain to distinguish objects' brightness and boundaries. In image processing, color is based on well-established descriptions and models; each color system has its own characteristics and scope of use, and when processing images the color system can be chosen according to requirements, so different color systems can be used. A color feature is a global feature that describes the surface properties of the scene corresponding to an image or image region. Color features are generally based on pixel characteristics, with every pixel belonging to the image or region making its own contribution. Color is often related to the objects in the image, and compared with other visual features, the color feature depends less on the size, direction, and viewing angle of the image itself and thus has higher robustness.
Since color is insensitive to changes in the direction, size, etc. of the image or image region, color features do not capture the local features of objects in the image well. In addition, when only color features are used, if the database is very large, many unwanted images are often retrieved. The color histogram is the most commonly used method for expressing color features. Its advantage is that it is unaffected by image rotation and translation and, after normalization, by image scale changes; its disadvantage is that it does not express the spatial distribution of color. Color histograms are commonly used in many image retrieval systems: a histogram describes the proportion of different colors in the entire image without caring about the spatial position of each color, so it cannot describe the objects in the image. Color histograms are particularly well suited for describing images that are difficult to segment automatically.
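A minimal sketch of such a normalized color histogram (an illustration only; the choice of 8 bins per channel is an assumption, not taken from the paper):

import numpy as np

def color_histogram(img, bins=8):
    # img: H x W x 3 uint8 RGB image. Each channel is quantized into
    # `bins` levels, giving a bins**3 feature vector. Normalizing makes
    # it invariant to scale; binning the colors makes it invariant to
    # rotation and translation, as described above.
    q = (img.astype(np.int64) * bins) // 256
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3)
    return hist / hist.sum()

img = np.random.randint(0, 256, (112, 92, 3), dtype=np.uint8)  # stand-in image
h = color_histogram(img)
print(h.shape, h.sum())  # (512,) 1.0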
Image texture features
The so-called image texture reflects a local structural feature of the image: a certain variation of the gray level or color of the pixels in a neighborhood of an image pixel, a variation that is spatially statistically correlated. Texture consists of two elements, texture primitives and their arrangement. Texture analysis methods include statistical methods, structural methods, and model-based methods.
A texture feature is also a global feature describing the surface properties of the scene corresponding to an image or image region. However, since texture is only a property of an object's surface and does not fully reflect the essential nature of the object, high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not pixel-based: they require statistical calculation over regions containing multiple pixels. In pattern matching, such regional features have the advantage of not failing to match because of local deviations. As statistical features, texture features often have rotational invariance and are fairly resistant to noise. However, texture features also have drawbacks. One obvious drawback is that when the resolution of the image changes, the computed texture may deviate considerably. In addition, because of illumination and reflection effects, the texture seen in the image is not necessarily the actual texture of the object's surface: for example, reflections in water or from smooth metal surfaces can cause apparent texture changes. Since these are not characteristics of the object itself, such fake textures can be "misleading" when texture information is applied to retrieval.
Using texture features is effective when searching for texture images that differ greatly in coarseness, density, and the like. However, when such easily distinguishable properties differ little between textures, the usual texture features find it hard to reflect accurately the differences that human vision perceives between them.
Experimental results
The first dataset is the CAS-PEAL face database of the Institute of Computing Technology, Chinese Academy of Sciences. The database was built in 2003 and includes 1040 face samples. Its face images are complex, including faces in different poses, such as frontal and profile views, and face samples from different time periods. To meet the requirement of sample diversity, the database includes men and women of different ages, and the images have a variety of backgrounds.
To verify the correctness and robustness of the method, the second dataset comes from daily life: pictures taken with a Mi 4 phone, covering vehicles, buildings, and landscapes, with 200 photos of each type. The picture size is 92 × 112, and the pictures are converted from JPEG (Joint Photographic Experts Group) to BMP (bitmap) format.
Experimental environment
The data processing in this paper is performed in the MATLAB R2014b 8.4 software environment. The main hardware is an Intel Core i7-4710HQ quad-core processor with Kingston DDR3L 4 GB memory, running Windows 7 Ultimate 64-bit SP1.
Classification method
To guarantee the stability of the classification, this paper uses a Support Vector Machine (SVM) with a linear kernel as the classifier. The test and training sets are divided by 10-fold cross-validation: the samples are split into 10 folds, one of which is used for testing while the other nine are used for training.
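A minimal sketch of this classification protocol (the feature matrix X here is a random stand-in; in the experiment it would hold the wavelet features described below):

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # one feature row per image (stand-in)
y = rng.integers(0, 4, size=200)      # four image classes

clf = SVC(kernel='linear')            # linear-kernel SVM, as in the paper
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
print('mean accuracy:', scores.mean())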
Image preprocessing
The wavelet transform divides the image into high-frequency and low-frequency parts: the low frequency contains the frame of the original image, while the high frequency preserves the image details. The main features for multimedia retrieval therefore lie in the low-frequency part. Figure 1 compares an original picture with its low-frequency part after the wavelet transform.
The original picture and the low frequency part after wavelet transform
As can be clearly seen from Fig. 1, after the wavelet transform the picture becomes blurred, but the basic features of the face, such as the eyes, mouth, nose, cheeks, and eyebrows, are still very clear. The blurring shows that the number of features has been reduced, while the clarity of the basic facial features shows that, although the features are fewer, feature extraction is not affected.
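A minimal sketch of this preprocessing step in Python with PyWavelets (my own illustration; the paper worked in MATLAB):

import numpy as np
import pywt

img = np.random.rand(112, 92)   # stand-in for a 92 x 112 grayscale image

# One level of 2-D DWT: cA is the low-frequency approximation carrying
# the frame of the image; cH, cV, cD are the horizontal, vertical, and
# diagonal high-frequency detail subbands.
cA, (cH, cV, cD) = pywt.dwt2(img, 'db2')
print(cA.shape)  # roughly half the original size in each dimension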
The influence of the wavelet parameters on the classification
The factors affecting the recognition result and efficiency are mainly the wavelet basis and the number of wavelet layers. The choice of wavelet basis directly affects the quality of feature extraction and hence the final retrieval rate; the number of wavelet layers determines the number of features used in recognition, with more layers giving more image features. This paper compares the effects on the recognition results of five wavelet bases, Daub(2), Daub(4), Daub(6), the cubic B-spline wavelet, and an orthogonal basis wavelet, and compares the effects of 1-, 2-, 3-, 4-, and 5-layer wavelet transforms on classification efficiency; a sketch of this comparison follows below.
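A minimal sketch of that comparison loop (illustrative only: the images and labels are random stand-ins, and 'bior3.3' is used as a stand-in for the cubic B-spline wavelet, since the paper does not name the exact filter pair):

import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def low_freq_features(img, wavelet, level):
    # Flattened approximation coefficients of a `level`-layer 2-D DWT.
    return pywt.wavedec2(img, wavelet, level=level)[0].ravel()

rng = np.random.default_rng(0)
images = rng.random((200, 112, 92))   # stand-in image set
labels = rng.integers(0, 4, size=200)

# Note: PyWavelets may warn for deep levels on images this small.
for w in ['db2', 'db4', 'db6', 'bior3.3']:
    for level in range(1, 6):
        X = np.array([low_freq_features(im, w, level) for im in images])
        acc = cross_val_score(SVC(kernel='linear'), X, labels, cv=10).mean()
        print(w, 'level', level, 'accuracy', round(acc, 4))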
The wavelet decomposition process separates the image into different frequencies, and different numbers of decomposition layers resolve the image differently. The following shows a three-layer wavelet decomposition using DB2 as the wavelet basis; the image is a building from daily life, as shown in Fig. 2.
Building image as an example
The wavelet decomposition of Fig. 2 is shown in Fig. 3. From the results it can be seen that the information of the picture is mainly concentrated in the low-frequency part, with very little information in the high-frequency part.
Image information after three-layer wavelet decomposition
Reconstructing the decomposed image, Fig. 4 shows the result of a two-layer reconstruction and Fig. 5 the result of a three-layer reconstruction of the above decomposition. The three pictures, from left to right, show the reconstruction results after the high-frequency part of each successive layer is discarded. From the results, whether two or three layers are used, the reconstructed images show no obvious degradation.
Two-layer reconstruction results
Three-layer reconstruction results
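A minimal sketch of the reconstruction experiment behind Figs. 4 and 5 (illustrative; the image is a random stand-in): decompose, zero the high-frequency detail subbands, and reconstruct from the approximation alone.

import numpy as np
import pywt

img = np.random.rand(112, 92)                    # stand-in image

coeffs = pywt.wavedec2(img, 'db2', level=3)      # 3-layer DB2 decomposition

# Discard all high-frequency detail subbands, as in the figures above.
kept = [coeffs[0]] + [tuple(np.zeros_like(d) for d in details)
                      for details in coeffs[1:]]
rec = pywt.waverec2(kept, 'db2')
print(rec.shape)   # close to the original size (padding may add a pixel)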
Figure 6 shows the influence of different wavelets and different numbers of decomposition layers on recognition over the standard face database. The results show that the choice of wavelet layers and wavelet basis affects the wavelet decomposition results. For Daub(2), the number of layers has little effect on the accuracy; for Daub(4) the accuracy reaches its maximum at 2 layers and then slowly decreases as the number of layers increases. When Daub(6), the cubic B-spline wavelet, or the orthogonal basis wavelet is used as the basis function, the number of layers has the greatest impact, and the classification result is optimal at four layers.
The influence of wavelet layers and different wavelet bases on accuracy (the values 1, 2, 3, 4, and 5 on the x-axis are the numbers of wavelet decomposition layers; the five columns correspond to Daub(2), Daub(4), Daub(6), the cubic b-spline wavelet, and the orthogonal basis wavelet, respectively; the ordinate is the accuracy of image recognition)
From Fig. 6, the classification effect is optimal when the cubic b-spline wavelet is used as the basis function with a four-layer decomposition, giving an accuracy rate of 89.08%.
Taking the cubic b-spline wavelet as the basis function, the effect of the number of layers on recognition efficiency was analyzed for one- to five-layer decompositions. Table 1 shows the average retrieval time per image at each level. The results in Table 1 show that the retrieval time per image grows as the number of layers increases, with a marked jump from levels 1 and 2 to levels 3 and 4, and a second marked jump at level 5. Combined with the recognition rates in Fig. 6, the three- and four-layer decompositions outperform the one-, two-, and five-layer ones. Although the one- and two-layer decompositions are faster, they lose part of the features and recognize poorly, while the five-layer decomposition produces too many redundant features, which also harms recognition.
Table 1 Image retrieval time
Recognition results of non-face images
The preceding results show that the highest recognition efficiency on the standard face database is obtained with a four-layer wavelet transform using the cubic b-spline wavelet as the basis function. Image recognition results for vehicles, buildings, and landscapes under these settings are shown in Table 2.
Table 2 Three results of four layers, cubic b-spline wavelet transform
The results in Table 2 show that, using four-layer wavelet analysis with the cubic b-spline wavelet as the basis, the recognition rates of all three picture types are high. The recognition rates for buildings and vehicles exceed that for landscapes, possibly because vehicles and buildings have pronounced frequency characteristics while landscapes do not. Comparing training and retrieval times for the same number of samples, landscapes not only have the lowest retrieval accuracy but also consume the most time: their training time is 2.7 and 2.4 times that of vehicles and buildings, respectively, and the variance of their retrieval time is also relatively large. This indicates that the method extracts features poorly from landscape images.
Low-frequency and high-frequency recognition rate
The wavelet analysis divides the original signal into high-frequency and low-frequency parts: the high frequency describes the detail at each layer, while the low frequency describes the overall structure. The features used above combine both. Reducing the number of features is the most effective way to improve recognition time. Figure 7 shows the recognition results for the four kinds of images in the data source. The recognition rate of low-frequency features is far higher than that of high-frequency features, reaching 86.99%, 91.41%, 89.75%, and 75%, respectively, and matches the recognition achieved with the mixed features, whereas high-frequency features alone fall below the mixed-feature rate. This shows that, for multimedia recognition with wavelet analysis, the high-frequency features are redundant.
Recognition results of low frequency and high frequency (the x-axis lists the four kinds of images: facial, vehicle, building, and landscape features; the two bars for each feature correspond to low-frequency and high-frequency features, respectively)
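A retrieval feature vector built from the low-frequency sub-band alone might look as follows. This is an illustrative sketch, not the paper's exact feature definition, and the biorthogonal spline wavelet stands in for the cubic b-spline basis.

```python
import numpy as np
import pywt

def lowfreq_features(image, wavelet="bior3.3", level=4):
    """Build a feature vector from the level-`level` low-frequency
    approximation only; high-frequency details are discarded as redundant."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    v = coeffs[0].ravel().astype(float)       # low-frequency approximation
    return v / (np.linalg.norm(v) + 1e-12)    # normalize for cosine matching

def similarity(query_vec, db_vec):
    """Rank database images by cosine similarity to the query vector."""
    return float(np.dot(query_vec, db_vec))
```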
Multimedia resources have become a primary way for people to obtain information, and intelligent querying of multimedia information is a new focus of data mining. In multimedia querying, the design of the query algorithm is a central concern. Although the wavelet transform has been used successfully in image research, the problem of jointly selecting the number of decomposition layers and the wavelet basis function has remained unsolved for image retrieval. In this paper, wavelet analysis is used as an image feature extraction method for querying face, vehicle, building, and landscape images. Different wavelet basis functions and numbers of decomposition layers are analyzed, with accuracy and query speed as evaluation indicators, and their effects on the results are compared.
BMP: bitmap
JPEG: Joint Photographic Experts Group
Genome-wide characterization of cellulases from the hemi-biotrophic plant pathogen, Bipolaris sorokiniana, reveals the presence of a highly stable GH7 endoglucanase
Shritama Aich, Ravi K. Singh, Pritha Kundu, Shree P. Pandey & Supratim Datta
Bipolaris sorokiniana is a filamentous fungus that causes spot blotch disease in cereals like wheat and has severe economic consequences. However, information on the identities and role of the cell wall-degrading enzymes (CWDE) in B. sorokiniana is very limited. Several fungi produce CWDE like glycosyl hydrolases (GHs) that help in host cell invasion. To understand the role of these CWDE in B. sorokiniana, the first step is to identify and annotate all possible genes of the GH families like GH3, GH6, GH7, GH45 and AA9 and then characterize them biochemically.
We confirmed and annotated the homologs of GH3, GH6, GH7, GH45 and AA9 enzymes in the B. sorokiniana genome using the sequence and domain features of these families. Quantitative real-time PCR analyses of these homologs revealed that the transcripts of BsGH7-3 (the third homolog of the GH7 family in B. sorokiniana) were the most abundant. BsGH7-3, the gene encoding BsGH7-3, was thus cloned into the pPICZαC Pichia pastoris vector and expressed in the X33 P. pastoris host for characterization. The BsGH7-3 enzyme showed a temperature optimum of 60 °C and a pHopt of 8.1. BsGH7-3 was identified as an endoglucanase based on its broad substrate specificity and on structural comparisons with other such endoglucanases. BsGH7-3 has a very long half-life and retains 100% activity even in the presence of 4 M NaCl, 4 M KCl and 20% (v/v) ionic liquids. The enzyme activity is stimulated up to fivefold in the presence of Mn2+ and Fe2+ without any deleterious effect on thermostability.
Here we reanalysed the B. sorokiniana genome and selected one GH7 enzyme for further characterization. The present work demonstrates that BsGH7-3 is an endoglucanase with a long half-life and no loss in activity in the presence of denaturants like salt and ionic liquids, and lays the foundation towards exploring the Bipolaris genome for other cell wall-degrading enzymes.
Biofuels produced from lignocellulosic biomass have many potential benefits over first-generation biofuels, including lower CO2 emissions and no competition with food for human consumption. In lignocellulose, the cellulose and hemicellulose are embedded in a lignin matrix and not easily accessible to enzymes. Lignocellulolytic fungi can be an efficient source of specialized enzymes that aid in the degradation of complex plant cell wall components to produce sugars. The exact nature and relative abundances of these enzymes vary from one plant species to another, or across tissues within a plant. One of the best known examples is the cellulase cocktail secreted in large quantities by the soft rot fungus Trichoderma reesei [1]. Recently, it was reported that T. reesei, being a necrophyte, lacks several protein families related to infection and degradation of living plant tissue [2]. One way to get around this limitation is to add the missing enzymes to the cellulase cocktail or to manipulate the hydrolytic efficiency of the cellulolytic enzymes encoded in this model organism. Another strategy is to explore fungal biodiversity for synergistic enzyme activities in order to supplement and increase the hydrolytic yield achieved by a T. reesei cocktail or, if possible, to build a new and more active cocktail based on enzymes from other organisms [3, 4].
Phytopathogenic fungi produce cell wall-degrading enzymes (CWDE) that are thought to aid their invasion into host cells [5]. A major group of CWDE consists of cellulases, which are glycosyl hydrolases (GHs) and catalyse hydrolysis of the β-1,4-glycosidic bonds in cellulose. Some of the CWDE-coding gene families have expanded during evolution among different groups of fungi [6, 7]. Further, these enzymes also show preference for specific types of plant biomass [8, 9]. Cellulases can be classified into three major types, namely endoglucanases (EG), cellobiohydrolases (CBH) and β-glucosidases (BG), all of which work synergistically to efficiently degrade cellulose [10,11,12].
Cochliobolus sativus (anamorph Bipolaris sorokiniana) is a fungal pathogen that causes spot blotch of wheat and barley and poses a severe challenge to their farming worldwide [13]. This fungal pathogen displays an enormous variability in its pathogenic, morphological and physiological forms. On the basis of their colony colour and growth behaviour, B. sorokiniana is broadly grouped into three categories. The black strain with thick dark mycelia are the most sporulating and aggressive kind, and the puffy white cotton-like mycelial strain is least sporulating but grows aggressively, while the mixed strain with greyish white mycelial growth has an intermediate number of spores and is the least aggressive [14, 15]. In addition, Bipolaris also attacks many grasses, including switch grass that is currently being developed as a bioenergy crop for biofuel production [16].
In 1999, Geimba et al. reported the partial purification and characterization of a BG from B. sorokiniana [17]. The same group in 2002 reported the presence of β-xylosidase, cellobiohydrolase and chitobiohydrolase activities in six isolates of B. sorokiniana originating from different areas of Brazil [18]. While a few loci have been stated to contain domains of the cellulolytic enzymes in B. sorokiniana genome (genome portal: Joint Genome Institute (JGI), University of California; http://genome.jgi-psf.org), systematic analysis of such genes across the genome or characterization of cellulase activities has not yet been reported [19]. A detailed characterization of these genes would be the first step towards the biotechnological application of these enzymes in biomass hydrolysis in addition or as an alternative to T. reesei cellulases and also for developing novel approaches towards biological control of pathogens. Here, we describe an integrative genomics approach to study the B. sorokiniana GHs and report the biochemical characterization of a novel GH7 endoglucanase.
Identification and analysis of glycoside hydrolases (GH) homologs in B. sorokiniana
The draft genome of B. sorokiniana lists 273 loci that are predicted to contain the domains of cellulases [19, 20]. In order to confirm and further annotate all possible genes of the glycoside hydrolase family, we reanalysed the B. sorokiniana genome using HMM (hidden Markov model) profile-based search and phylogeny-based clustering methods. We used protein sequences of the eukaryotic glycoside hydrolases, GH3, GH6, GH7, GH45 and GH61 [auxiliary activity family 9 (AA9)] from the CAZy database (http://www.cazy.org/), to construct the HMM profiles for each of the five GH family members [21]. The redundant sequences were removed from the dataset of each family using CDHIT [22]. Then, multiple sequence alignment (MSA) of each of the GH family members was performed using MAFFT v 7.123b with default parameters [23]. These MSAs were used to construct the HMM profiles for each of the GH family members. Using these HMM profiles, the predicted proteome of B. sorokiniana was searched with the HMMER program using an E value cutoff of ≤10⁻⁵. The predicted homologs were searched for the presence and distribution of domains using the Pfam database [24]. The B. sorokiniana genes are prefixed as "Bs" followed by their family names. If a family contained more than one gene, they were sequentially numbered as per standard practice [25,26,27,28]. For example, the GH7 family in B. sorokiniana contains six homologous genes, and therefore these are named BsGH7-1 to BsGH7-6. Following a commonly accepted nomenclature, references to gene names and transcripts are italicized, whereas those to proteins are straight. Phylogenetic clustering of B. sorokiniana GH family members (GH3, GH6, GH7, GH45 and AA9) was performed by the maximum likelihood (ML) method using RAxML v7.2.8 [29]. We also used one bacterial protein from each family (GenBank IDs: AJP42775.1, AIF91560.1, AIQ82274.1 and AIF91527.1 for the GH3, GH6, GH7 and GH45 families, respectively) as an out-group for phylogenetic analysis [30, 31]. Clade robustness was assessed with 1000 bootstrap replications. FigTree was used to visualize the phylogenetic tree (http://beast.bio.ed.ac.uk/FigTree). The evolutionary divergence among GH family members was estimated using MEGA 6 [32]. The genomic architecture of homologs for each GH family member was generated using the GSDS v2.0 server [33]. The coordinates of the intron–exon boundaries were calculated using the program 'blastn' on the genome sequence of B. sorokiniana (available at http://genome.jgi-psf.org/Cocsa1/Cocsa1.home.html) [19]. Percent identities between paralogs of each GH family were calculated with the help of Clustal Omega [34]. We used the HHpred server (http://toolkit.tuebingen.mpg.de/hhpred/) to model the B. sorokiniana GH structure [35]. HHpred first detects remote protein homology and then predicts structures from pairwise comparison of HMM profiles (through various databases, such as PDB, SCOP, Pfam, SMART, COGs and CDD) to produce query-template alignments. It then generates 3D structural models from these alignments. Root mean square deviations (RMSD; Å) between structures were calculated using the TM-align server (http://zhanglab.ccmb.med.umich.edu/TM-align/) [36]. The area and volume of the binding pocket on the structures were calculated using the CASTp server (http://sts.bioe.uic.edu/castp/calculation.php) [37]. The distribution and arrangement of positive electrostatic patches on the structures were calculated using the Patch Finder Plus server (http://pfp.technion.ac.il/index.html) [38].
Pymol was used to visualize modelled structures and prepare figures [39].
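The profile-construction and search steps described above can be scripted end to end. The sketch below chains the CD-HIT, MAFFT and HMMER command-line tools via Python; the file names are placeholders and the 90% redundancy threshold is an assumption, since the exact options used are not stated in the text.

```python
import subprocess

family = "GH7"  # repeated for GH3, GH6, GH45 and AA9

# 1) Remove redundant CAZy sequences (90% identity threshold assumed).
subprocess.run(["cd-hit", "-i", f"{family}_cazy.fasta",
                "-o", f"{family}_nr.fasta", "-c", "0.9"], check=True)

# 2) Align the non-redundant set with MAFFT (MSA is written to stdout).
with open(f"{family}_aln.fasta", "w") as aln:
    subprocess.run(["mafft", "--auto", f"{family}_nr.fasta"],
                   stdout=aln, check=True)

# 3) Build the family HMM profile and search the predicted proteome
#    of B. sorokiniana with the E-value cutoff used in the paper.
subprocess.run(["hmmbuild", f"{family}.hmm", f"{family}_aln.fasta"], check=True)
subprocess.run(["hmmsearch", "-E", "1e-5", "--tblout", f"{family}_hits.tbl",
                f"{family}.hmm", "bsorokiniana_proteome.fasta"], check=True)
```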
Culture maintenance and propagation
Bipolaris sorokiniana was maintained under standard conditions recommended for culturing this fungus. Potato dextrose agar (PDA) has been used as a common medium for isolating Bipolaris from natural populations of wheat and barley and for maintaining and manipulating it in the laboratory [13,14,15, 40, 41]. The HD3069 strain with black morphology was maintained on PDA under complete darkness at 25 °C in a fungal incubator (Eyela, Model SU-1201) [13, 14]. The black strains are aggressive, produce maximum spores and are often used in characterization of plant responses to spot blotch attack [13,14,15]. Ten-day-old PDA plates were used for the collection of mycelial mass for isolation of nucleic acids.
Transcriptional profiling by quantitative real-time PCR
Approximately 200 mg of crushed mycelial mass was used for RNA isolation. Total RNA of B. sorokiniana was extracted using the Trizol method following the manufacturer's instructions and treated with DNase enzyme (Invitrogen, Carlsbad, USA). cDNA was prepared using a Superscript III First-strand synthesis system, oligo-dT primers and 5 µg of DNase-treated RNA following the manufacturer's protocol (Life Technologies, Carlsbad, USA).
Gene-specific primers for each GH family homolog (Additional file 1: Table S2a) were designed using Primer Express software version 3.0 (Applied Biosystems; http://www.appliedbiosystems.com). SYBR green chemistry (KAPA Biosystems, Wilmington, USA) was used to estimate the transcript abundance of the B. sorokiniana GHs using gene-specific primers. For determining the absolute amount of transcript, a standard curve was prepared for each of the genes using cDNA amounts corresponding to 50, 100, 150 and 200 ng of total RNA in four replicates. Based on the formulae obtained from the standard curve, cDNA corresponding to 150 ng of total RNA was used to evaluate the absolute transcript amount based on their respective CT values. Three independent experiments were conducted, each comprising four replicates, the mean values were used to plot the graph. Elongation factor alpha (EF-α) was used as an endogenous control.
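The absolute quantification step amounts to a linear fit of CT against the logarithm of the input amount, which is then inverted for the unknowns. A minimal sketch follows; the CT values are hypothetical, and EF-α normalization is omitted for brevity.

```python
import numpy as np

# Standard curve: known cDNA inputs (ng of total RNA equivalents) vs. CT.
inputs_ng = np.array([50.0, 100.0, 150.0, 200.0])
ct_std = np.array([24.9, 23.8, 23.2, 22.8])   # hypothetical measurements

# CT is linear in log10(input): CT = slope * log10(ng) + intercept.
slope, intercept = np.polyfit(np.log10(inputs_ng), ct_std, 1)

def absolute_amount(ct_sample):
    """Invert the standard curve to estimate the transcript amount
    corresponding to a measured CT value."""
    return 10 ** ((ct_sample - intercept) / slope)

print(absolute_amount(23.5))  # amount for a hypothetical sample CT of 23.5
```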
The software Assistat 7.6 beta was used for statistical analysis to determine the significance of differences in the expression among the GH members under study. Duncan multiple range test (DMRT) was performed at a level of 5% probability (p ≤ 0.05).
Cloning of the BsGH7-3 gene
All the chemicals used were of reagent grade. Medium for cell growth was purchased from HiMedia Laboratories (Mumbai, India). Restriction enzymes and polymerase enzyme used for PCR were from New England Biolabs (Beverly, USA) and Taq Polymerase from Biobharati LifeScience (Kolkata, India). Escherichia coli Top10F' cloning strain, Pichia pastoris yeast expression strain X33 and the vectors were from Life Technologies (Carlsbad, USA). The fraction obtained post purification was buffer-exchanged using a 30 kDa cut-off size membrane of Amicon-Ultra-15 (Millipore, Darmstadt, Germany). Substrate and other reagents for enzymatic assays were purchased from Sigma-Aldrich (St Louis, USA).
BsGH7-3 was PCR amplified using the gene-specific primers (Additional file 1: Table S2b). The cDNA template was PCR amplified by Phusion™ high-fidelity DNA polymerase on a Veriti® thermal cycler (Life technologies, Carlsbad, USA) using 54–60 °C temperature gradient to identify the optimum conditions. PCR products were separated using 1% agarose gel electrophoresis and specific DNA fragments were extracted using the QIAquick Gel extraction kit (Qiagen, Hilden, Germany). The gel-purified DNA was digested with XhoI and NotI-HF and ligated to the linearized pPICZαC vector. The ligated product was transformed into E. coli Top10F' and verified by colony PCR, unique site restriction digestion and DNA sequencing using 5′α-factor and the 3′AOX1 (universal primer) of pPICZαC as the sequencing primers.
Expression and purification of the protein
The plasmid construct was linearized using the unique site restriction enzyme, SacI within the 5′AOX1 region and integrated into the X33 P. pastoris host genome by transformation of the linearized construct into the X33 competent cells following the instructions provided with the Pichia EasyComp™ kit (Life technologies, Carlsbad, USA). Colony PCR (as per standard protocol) was used to screen for positively integrated Pichia clones and the Mut (methanol utilization) phenotype identified following the manufacturer's protocol (EasySelect™ Pichia expression kit, Life Technologies, Carlsbad, USA). Phenotype determination is required to verify if the AOX1 gene is intact towards identifying the best medium for conducting the expression studies. To overexpress BsGH7-3, a 100 mL primary culture was grown in buffered complex glycerol (BMGY) medium with 100 µg mL−1 of zeocin. At O.D. 2.0, the cells were harvested by centrifugation at 3000×g for 8 min and the pellet dissolved in buffered complex methanol (BMMY) medium such that the O.D. of the starter culture was 1.0. Cells were induced with 0.5% methanol every 24 h and grown for 96 h. The protein secreted in the medium was precipitated with 50–80% ammonium sulphate and the cell pellet dialysed against 20 mM phosphate buffer, pH 7.3. The protein was further purified by passing through a Macro-Prep Q column (Bio-Rad Laboratories, Hercules, USA) equilibrated with 20 mM Tris–HCl buffer, pH 7.0, and eluted by 20 mM Tris–HCl/500 mM NaCl, pH 7.0. After desalting the protein with 20 mM phosphate buffer (pH 7.3), concentration was measured by Bradford assay with BSA and A280 and purity assessed by SDS-PAGE [42].
Activity of BsGH7-3 was measured by mixing 1 μg of enzyme and 2% carboxymethyl cellulose (CMC; low viscosity of 100 cps at 25 °C in 4% water and degree of substitution 0.7) as a substrate in McIlvaine buffer to a total reaction volume of 150 μL and incubating the enzyme at Topt. The DNS (3,5-dinitrosalicylic acid) assay was performed to measure the reducing ends of CMC after the enzymatic reaction [43]. 150 μL of DNS reagent (1.3 M DNS, 1 M potassium sodium tartrate and 0.4 N NaOH) was added and the reaction mixture incubated at 95 °C for 5 min. Absorbance was measured at 540 nm after cooling the reaction mix to room temperature. One unit of endoglucanase activity is the amount of enzyme required to release 1 nmol of reducing sugar per minute from the substrate. Glucose was used as the standard for the estimation of reducing sugars. All assays were performed in triplicate and standard deviations were calculated.
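Converting the measured A540 into activity units combines the glucose standard curve with the unit definition above. The sketch below uses hypothetical calibration and absorbance values purely to illustrate the arithmetic.

```python
# Hypothetical glucose standard curve: A540 = slope * nmol_glucose.
slope = 0.0021        # A540 per nmol of glucose (assumed calibration)
a540 = 0.38           # measured absorbance of the reaction (assumed)
blank = 0.05          # substrate-only control (assumed)
minutes = 30.0        # reaction time
mg_enzyme = 0.001     # 1 ug of BsGH7-3

nmol_reducing_sugar = (a540 - blank) / slope
units = nmol_reducing_sugar / minutes     # 1 U = 1 nmol reducing sugar min^-1
specific_activity = units / mg_enzyme     # U per mg of enzyme
print(f"{specific_activity:.0f} U mg^-1")
```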
Determination of pH and temperature optima of BsGH7-3
Using CMC as the substrate, the effect of temperature on enzyme activity was measured from 50 to 68 °C after incubating the enzyme in a buffer of optimum pH for 30 min. The optimal pH (pHopt) was measured by quantitating the enzyme activity on CMC over a pH range of 5.2–8.6 using McIlvaine buffer (pH 5.2–8.1) and Tris–HCl buffer (pH 8.0 and 8.6). pH stability was measured by determining residual activity after incubating the enzyme in McIlvaine buffer pH 8.1 for 6 h at 4 °C.
Effect of salt, metal ions, ionic liquids and detergents on BsGH7-3 activity
The effects of additives were determined by measuring enzyme activity in the presence of salt, metal ions, ionic liquids and commercial detergents (Ariel™, Tide™, Sunlight™ and SDS) in McIlvaine buffer, pH 8.1. The additives were co-incubated with enzyme at 4 °C for 1 h before measuring the enzyme activity by standard activity assay. The specific activity without any additives was considered as 100% and relative activity in the presence of additives was estimated.
Thermostability and half-life assay
The thermostability of the enzyme was determined by incubating the enzyme in McIlvaine buffer, pH 8.1, at 60 °C. Residual enzyme activity was measured by removing aliquots at regular intervals to measure enzyme activity. The enzyme stability was also checked by assaying the enzyme after 30 days of incubation at 4 °C in 10 mM phosphate buffer, pH 7.1.
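One common way to turn such a residual-activity series into a half-life is to fit a first-order inactivation model, A(t) = A0·e^(−kt), with t1/2 = ln 2/k. The time points below are hypothetical and only illustrate the fit; the paper reports the measured residual activities directly.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical residual activities at 60 C (time in hours, activity in %).
t = np.array([0, 24, 96, 192, 365], dtype=float)
activity = np.array([100, 97, 88, 78, 66], dtype=float)

def decay(t, k):
    return 100.0 * np.exp(-k * t)   # first-order inactivation

(k,), _ = curve_fit(decay, t, activity, p0=[1e-3])
print(f"t1/2 = {np.log(2) / k:.0f} h")
```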
Substrate specificity and kinetic parameters of BsGH7-3 with CMC as a substrate
The specificity of BsGH7-3 was determined by measuring specific activity across a range of substrates, namely lichenan (MP Biomedicals, Ohio, USA), β-d-glucan from barley, Avicel PH-101, CMC (Sigma-Aldrich, Saint Louis, USA) and phosphoric acid swollen cellulose (PASC). PASC was prepared following the protocol of Walseth et al. [44]. Activity was determined by incubating 1 µg of enzyme with 0.8% substrate (w/v) at 60 °C for 30 min in McIlvaine buffer pH 8.1 and then measuring the reducing sugar generated by the DNS assay. The specific activity on CMC was considered to be 100% and the relative activity on the other substrates was estimated. The Michaelis–Menten parameters of BsGH7-3 on CMC were measured between 0.5 and 18 mg mL−1 of CMC and determined by a non-linear regression fit of the Michaelis–Menten equation using GraphPad PRISM version 7.0 (GraphPad Software, La Jolla, CA).
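The non-linear regression performed in GraphPad PRISM can be reproduced with any least-squares fitter. The sketch below uses SciPy with hypothetical initial rates over the same CMC concentration range.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Hypothetical initial rates over the 0.5-18 mg/mL CMC range.
s = np.array([0.5, 1, 2, 4, 8, 12, 18], dtype=float)      # mg mL^-1
v = np.array([8.7, 12.4, 15.7, 18.2, 19.9, 20.6, 21.1])   # uM min^-1

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[20.0, 1.0])
print(f"Vmax = {vmax:.2f} uM min^-1, Km = {km:.2f} mg mL^-1")
```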
Effect of GH7-3 on the reduction of substrate viscosity
30% (w/v) of substrate (lichenan and β-d-glucan) in McIlvaine buffer, pH 8.1, was incubated with 36 µg of GH7-3 for 60 min at 60 °C and cooled to room temperature. A capillary viscometer was used to measure the viscosity of the supernatant (6 mL) at room temperature with substrate viscosity in the absence of enzyme as the control. The viscosity reduction was calculated using the following equations [45]:
$$\mu = \frac{\mu_{\text{water}} \times t \times \rho}{t_{\text{water}} \times \rho_{\text{water}}}$$

$$\Delta \mu = \frac{\left( \mu_{\text{control}} - \mu \right) \times 100}{\mu_{\text{control}}},$$
where μ is the viscosity of the sample, t is its total flow time through the viscometer, ρ is its density, μwater, twater and ρwater are the corresponding quantities for water, μcontrol is the viscosity of the enzyme-free control, and Δμ is the percentage reduction in viscosity.
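In code, the two relations reduce to a few lines. The flow times and densities below are hypothetical and serve only to illustrate the calculation.

```python
MU_WATER = 0.89e-3    # Pa.s, viscosity of water at ~25 C
RHO_WATER = 997.0     # kg m^-3, density of water
T_WATER = 42.0        # s, flow time of water through the capillary (assumed)

def viscosity(flow_time_s, density):
    """Capillary viscometer relation: mu = mu_w * t * rho / (t_w * rho_w)."""
    return MU_WATER * flow_time_s * density / (T_WATER * RHO_WATER)

mu_control = viscosity(118.0, 1050.0)   # substrate without enzyme (assumed)
mu_treated = viscosity(104.5, 1050.0)   # substrate incubated with GH7-3
reduction = (mu_control - mu_treated) * 100.0 / mu_control
print(f"viscosity reduction = {reduction:.1f} %")
```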
Glycoside hydrolase (GH) in B. sorokiniana: annotation and sequence characterization
We performed a detailed genomic characterization of GH families in B. sorokiniana genome and targeted five putative cellulases across GH families (GH3, GH6, GH7, GH45 and AA9) in this analysis. These five families comprise the minimum set of cellulolytic enzymes (EGs, CBHs and BGs) that are required for biomass hydrolysis and its identification is a step towards the search for all such cell wall-degrading enzymes in B. sorokiniana [46]. Using the HMM profile search and phylogenetic clustering methods, we confirmed the identity of 15, 3, 6, 3 and 23 B. sorokiniana homologs of GH3, GH6, GH7, GH45 and AA9, respectively, as initially determined by Ohm et al. [19] (Fig. 1; Additional file 1: Figures S1, S2, Table S1). One additional homolog of the AA9 family was identified in this study (locus ID: jgi|Cocsa1|155289|gm1.10_g) (Fig. 1; Additional file 1: Figures S1, S2, Table S1). The homologs of each GH family varied in the lengths of the gene as well as their transcripts (Additional file 1: Figures S1, S2, Table S1). The members of AA9 are of relatively shorter length, while the GH3 members were larger (Additional file 1: Figures S1, S2, Table S1). Further, we found variable number of introns among the homologous genes in each GH family (Additional file 1: Figure S2, Table S1). Interestingly, some members of GH3, GH6 and AA9 families are intron less (Additional file 1: Figure S2). The sequences of the homologs of each GH family in B. sorokiniana have diverged differently (Additional file 1: Figure S3). For example, the members of GH3 family are more diverged with mean identity 29%, while GH7 family members are relatively less diverged with 52% identity. Homologs in each family have the characteristic domain of their respective family (Fig. 1a). GH6, GH7, GH45 and AA9 family members are single domain proteins, while most of the GH3 family members contain two domains (GH3 N-terminal and GH3 C-terminal) with the exception of GH3-14 and GH3-15, which only contains a GH3 N-terminal domain (Fig. 1a) (Additional file 1: Figure S1). Additionally, GH3 family members also have a 'Fn3-like' domain at the C-terminal [except in GH3-13 and GH3-15] (Fig. 1a). GH3-5 contains two additional domains at the C-terminus, 'CPSase_sm_chain' and 'GATase'. GH3-13 contains a 'P450' domain at the N-terminal end and GH3-14 contains a 'GNAT' domain at the C-terminus. Interestingly, only five of the members in AA9 family contain an additional cellulose binding module (CBM1 domain) at the C-terminus (Fig. 1a; Additional file 1: Figure S1).
Characterization of GHs in B. sorokiniana genome. a Domain distributions in the homologs of B. sorokiniana GHs and redox enzymes (AA9). The figure shows the schematic of arrangement of domains in each of the BsGH homologs (comparative lengths are unscaled). b Phylogeny and evolution of B. sorokiniana GHs. Maximum likelihood rooted phylogeny of B. sorokiniana GHs and redox enzymes (AA9). The values on branches show the bootstrap (%). The dotted terminal branches in the clades of GH3, GH45, GH6 and GH7 families are the corresponding GHs from the bacterial species used as an out-group in the phylogenetic analysis. c Evolutionary divergence between GH family members. The upper diagonal represents the number of substitutions per site between the respective GH families, while the lower diagonal represents its standard deviation
We used maximum likelihood (ML) methods to determine the phylogenetic clustering among the GH family members in B. sorokiniana and obtained five robust clusters each for GH3, GH6, GH7, GH45 and AA9 (Fig. 1b). The GHs from bacteria are clustered within the clade of the respective families to indicate that horizontal gene transfer events might have played an important role in the evolution of GHs in B. sorokiniana and other fungi [47]. Variation in branch lengths suggests that after divergence from their common ancestor, the five GH families evolved at varied rates before their further duplication and expansion, resulting in high sequence diversities (Fig. 1b, c). We found maximum evolutionary divergence between GH6 and GH7 families with 9.155 amino acid substitutions per site (Fig. 1c). The large number of poorly aligned regions is also evident from the MSA (Additional file 1: Figure S3).
Transcriptional profiling of glycoside hydrolases in B. sorokiniana
We set out to identify the minimum set of enzymes across endoglucanases, cellobiohydrolases and β-glucosidases in B. sorokiniana and succeeded in annotating all five GH families. Of these, endoglucanases and cellobiohydrolases are found across GH6, GH7 and GH45 and catalyse the hydrolysis of the β(1,4) bond of cellulose to produce cellobiose. GH3 further catalyses the hydrolysis of cellobiose into glucose, and GH61 is the AA9 copper-dependent oxidative enzyme family [20]. Considering their role in driving the committed reactions of cellulose degradation, we started by studying three families: GH6, GH7 and GH45.
We investigated the abundances of the mRNAs of the three homologs of BsGH6, the six of BsGH7 and the two of BsGH45 in the constitutive state. Among the three gene families, GH7 showed the highest transcript accumulation, in three of its homologs, GH7-3, GH7-4 and GH7-6, followed by GH7-1, GH7-2 and GH7-5. After GH7, GH6-1 showed significantly higher accumulation than its other two homologs, while GH45-2 accumulated less than GH45-1 (Fig. 2). The maximum transcript abundance was recorded for GH7-3, and this gene was therefore chosen for biochemical characterization.
Transcript abundance of the GH family genes in B. sorokiniana. A standard curve using known amounts of cDNA was prepared for the three homologs of BsGH6, six belonging to BsGH7 and two of BsGH45; (further details in "Results") in four replicates. Based on the standard curve, cDNA corresponding to 150 ng of total RNA was used to evaluate the absolute transcript amount based on the respective CT values. Three independent experiments were conducted, each comprising four replicates, and the mean values were used to plot the graph. Data are represented as mean ± SE. Assistat 7.6 beta was used for statistical analysis (DMRT-Duncan multiple range test). The Duncan test at a level of 5% of probability was applied. Bars with the same letter do not differ statistically between themselves (p ≤ 0.05)
Biochemical characterization of BsGH7-3
The open reading frame encoding BsGH7-3 was cloned into a Pichia pPICZαC expression vector and verified by sequencing. The sequenced product showed a 100% sequence match to the nucleotide sequence of the predicted GH7-3 in the Bipolaris genome (Additional file 1: Figure S4). Protein obtained after ammonium sulphate precipitation and anion exchange chromatography was analysed by SDS-PAGE, and the molecular weight of BsGH7-3 was in agreement with the apparent molecular mass of 46.6 kDa calculated from the sequence (Fig. 3a). The enzyme preparation had a specific activity of 5967 U mg−1 (1U = 1 nmol of reducing sugars formed per min per mg of BsGH7-3; Fig. 3b).
Biochemical characterization of BsGH7-3. a 10% SDS-PAGE of purified BsGH7-3. Lane M PageRuler plus pre-stained protein ladder (ThermoFisher, Waltham, USA) and BsGH7-3 stained by Coomassie Brilliant Blue R250. b Specific activity and stability of BsGH7-3 reported as percent residual activity at 4 and 60 °C, respectively. c Effect of temperature on GH7-3 measured over a range of 48 to 68 °C with 0.8% CMC as a substrate and McIlvaine buffer, pH 8.1. The residual activity was measured by the standard DNS assay and reported as % specific activity. d Effect of pH was determined by incubating the enzyme in buffer of pH range 5.2–8.6 for 6 h at 4 °C and then the kinetic assay was performed to determine the % specific activity
BsGH7-3 maintained a broad activity range over pH 5.0–9.0 though the pH optimum (pHopt) is 8.1. The enzyme retains 66% of its activity at pH 5.4 and 70% activity at pH 8.6 after overnight incubation at 4 °C (Fig. 3). Temperature optimization studies at pH 8.1 showed that at 60 °C the purified BsGH7-3 had the maximum cellulase activity (Fig. 3c).
BsGH7-3 activity was found to be stimulated by Mn2+ and Fe2+ in McIlvaine buffer, pH 8.1 (Table 1). Both metal ions together also stimulate the enzyme, resulting in a 512% increase in relative specific activity on CMC. Upon incubation of the enzyme in the presence of Mn2+ and Fe2+ for 72 h at Topt in McIlvaine buffer pH 8.1, only a 13% decrease in relative activity is observed. The metal ions could be easily removed by passage through a column packed with Chelex® 100 resin (Sigma-Aldrich, St. Louis, USA), indicating the absence of specific metal binding site(s). In addition, inductively coupled plasma mass spectrometry (ICP-MS) measurements also confirmed the removal of all manganese and ferrous ions (data not shown). When enzyme activity was again measured with the metal-stripped enzyme on CMC, the specific activity decreased to the level prior to metal addition. Endoglucanases are not known to require metals as a cofactor, though there are previous reports of such proteins being stimulated by metal ions [48, 49]. In the presence of 4 M KCl and 4 M NaCl, the specific activity of BsGH7-3 increased by 25 and 10%, respectively (Table 1). Thus, BsGH7-3 is salt tolerant. To determine the enzyme's robustness and potential use in industrial applications, BsGH7-3 activity was measured in the presence of a few readily available commercial detergents. The enzyme showed maximum stability in the presence of Tide™ (Procter & Gamble, Mumbai, India), with a residual activity of 69% after incubation for 1 h at 60 °C (Table 1). In the presence of Ariel™ (Procter & Gamble, Mumbai, India), SDS (Sigma-Aldrich, St. Louis, USA) and Sunlight™ (Hindustan Unilever, Mumbai, India), the enzyme residual activity was 62, 51 and 74%, respectively (Table 1). Thus, BsGH7-3 is stable in the presence of detergents.
Table 1 Effect of metal ions, salts, ionic liquids and detergents on the specific activity of BsGH7-3 and measured by standard spectrophotometric assay
After incubation at 60 °C for 365 h (15.2 days), BsGH7-3 retained 66% of its specific activity (Fig. 3b). Further, upon incubating the enzyme for 30 days at 4 °C, a residual specific activity of 93% was retained. Ionic liquids (ILs) hold great promise for biomass pretreatment and thus have been the subject of many studies towards understanding its compatibility with enzymes. Since ILs have been generally known to denature cellulase, we desired to test BsGH7-3 stability against three ILs as a further probe of enzyme thermostability [50,51,52]. In the presence of 20% (v/v) 1-ethyl-3-methyl imidazolium chloride ([C2C1im][Cl]), 1-ethyl-3-methyl imidazolium phosphate ([C2C1im][C2C2PO4]) and 1-ethyl-3-methyl imidazolium acetate ([C2C1im][MeCO2]), the enzyme activity is unaffected (Table 1). BsGH7-3 is thus a very stable enzyme with a very long half-life.
BsGH7-3 showed the highest activity towards lichenan, with a relative activity of 367% compared to CMC. The relative specific activity towards β-d-glucan is 174% and decreases to 68 and 57% towards PASC and Avicel, respectively (Table 2). The steady-state kinetic parameters of BsGH7-3 were measured under optimal assay conditions (30 min, pH 8.1, 60 °C) by varying the CMC concentration, and the data were fit using a non-linear regression method (Fig. 4). The enzyme had Km, Vmax and kcat values of 0.75 mg mL−1, 21.64 µM min−1 and 288 min−1, respectively. BsGH7-3 also decreases the viscosity of lichenan by 11.42% and of β-d-glucan by 9.8%, indicating that BsGH7-3 has a positive effect on the viscosity reduction of substrates.
Table 2 Relative substrate specificity of recombinant BsGH7-3
BsGH7-3 catalysed hydrolysis of carboxymethyl cellulose (CMC) as determined by visible spectrophotometry. The solid line indicates the best fit of the Michaelis–Menten equation. The values determined were Km = 0.7461 mg mL−1, Vmax = 21.64 µM min−1 and kcat = 288 min−1
Structural insights into the GH7-3 function
While the GH7 enzyme family contains both CBH (Cel7A) and EG (Cel7B) enzymes with a similar β-sheet sandwich motif, differences exist. For example, endoglucanases have substrate tunnel-associated peptide loops of shorter lengths compared to cellobiohydrolases. To get an insight into the function of BsGH7-3, we modelled the structure of BsGH7-3. A HMM-based homology search predicted Humicola insolens GH7 (HiGH7, PDB ID: 1OJJ) as the best template for BsGH7-3. These two sequences are 53% identical (Fig. 5a; Additional file 1: Figure S5a). T. reesei GH7 (PDB ID: 7CEL; TrGH7), which is the most studied cellobiohydrolase (CBH), on the other hand possess a sequence identity of 38% with BsGH7-3 (Fig. 5a; Additional file 1: Figure S5a). Residues in the A loop (A1, A2 and A3) and B loop (B1 and B4) of BsGH7-3 are more identical to HiGH7 than TrGH7 (Fig. 5a). TrGH7 contains three additional but functionally important loops characteristic of CBHs (tunnel exit motif A4, and B2 and B3) that are absent in endoglucanases, including in BsGH7-3 and HiGH7. TrGH7 exhibits variations in all motifs, except T3 containing the catalytic residues (Fig. 5a). Although several GH7 enzymes (6 out of 27 endoglucanases and 27 out of 57 cellobiohydrolases) contain a carbohydrate binding module (CBM), BsGH7-3 does not contain any known CBM domain. Two characteristic Arg residues of CBH, Arg251 and Arg 394 in TrCel7A, are absent in BsGH7-3. Arg251 located at the base of loop B3 in TrCel7A has been implicated in coordination of the reaction product cellobiose but is absent in endoglucanases [53]. Similarly, Arg394 in TrCel7A is a key factor in processive motion of CBHs and is absent in the non-processive endoglucanases [54].
Comparative analysis of B. sorokiniana GH7-3 with H. insolens endoglucanase GH7 (PDB ID: 1OJJ-B) and T. reesei cellobiohydrolase GH7 (PDB ID: 7CEL). a Multiple sequence alignment of BsGH7-3, HiGH7 and TrGH7. The black rectangular box contains a stretch of six residues of the catalytic motif that are conserved across the three proteins. The catalytic residues are marked with red arrows. The coloured rectangular boxes denote the residues of motifs B (green), A (red) and tunnel T (light yellow). b Homology model-based structure of BsGH7-3 shows variation in the secondary structure of motifs A, B and T (same colour as in a, with HiGH7 and TrGH7). The motifs A (red colour) and B (green colour) are in cartoon view on the mesh background. The surface view of the inner lining of the tunnel is shown in salmon colour
While the overall modelled structure of BsGH7-3 is not significantly different from TrGH7 (RMSD < 3Å; Fig. 5b; Additional file 1: Figure S5a), variations in the size and shape of the substrate binding tunnel were evident. The tunnel in BsGH7-3 appears to resemble a shallow crevice with inner solvent-accessible surface area of 3793.1 Å2 compared to the 3879.3 Å2 deep tunnel in TrGH7 (Fig. 5b). BsGH7-3 also shows striking differences in the area and volume of the largest binding pocket and the number of residues in the largest electrostatic patch on the protein surface compared to HiGH7 and TrGH7 (Additional file 1: Figure S5b). BsGH7-3 has a smaller binding pocket and the largest electrostatic patch is made up of only five residues (Additional file 1: Figure S5b). Further, the electrostatic potential distribution and its pattern on the surface vary between BsGH7-3, HiGH7 and TrGH7 (Additional file 1: Figure S5c). TrGH7 has relatively more negative patches than HiGH7 and BsGH7-3. Such differences in the electrostatic charge distribution may influence the interaction of protein with salt and ionic liquids.
Plant cell wall polysaccharides are an important source of organic compounds for use as raw material in many industrial processes and serve as a carbon source for different microorganisms including plant pathogens. Pathogens are equipped with a variety of enzymes for degrading polysaccharides. Although genes for many polysaccharide-degrading enzymes have been cloned over the past decade and commercial cocktails manufactured, the cost and efficiency of cellulases remain a challenge. Plant pathogens have evolved to break through the plant cell wall to utilize the plant's lignocellulose to survive. The wheat pathogen B. sorokiniana might thus offer unique cell wall-degrading enzymes towards a more efficient saccharification of wheat straw.
We confirmed and annotated the homologs across five GH families, GH3, GH6, GH7, GH45 and AA9, in B. sorokiniana genome. This genome contains different numbers of paralogs ranging from 3 (in GH6 and GH45) to 24 (in GH61) (Fig. 1; Additional file 1: Figure S2, Table S1). Paralogs of the five families show different degrees of identity suggesting that each GH family may have evolved and expanded at a different rate (from 6.014 to 9.155 amino acid substitutions per site) indicating functional variability (Fig. 1c). The study on transcript abundance also suggests variations in the expression of genes within each family and among the families that show up to fourfold differences in expression (Fig. 2). To get further insight into the biochemical mechanism, we selected the BsGH7-3 homolog for further characterization.
GH7 family members are amongst the most important cellulolytic enzymes that are commonly employed in plant cell wall degradation across different eukaryotic kingdoms and play a significant role in biomass hydrolysis. GH7 enzymes typically cleave β-1,4 glycosidic bonds in cellulose/β-1,4-glucans. Endo-1,4-β-glucanases, cellobiohydrolases and endo-1,3-1,4-β-glucanases have been identified in the GH7 family. To elucidate BsGH7-3 function, we characterized its substrate specificity and modelled its structure. BsGH7-3 shows higher specific activity towards lichenan (14172.7 U mg−1) and β-d-glucan (6739.4 U mg−1) and the lowest activity towards Avicel (a substrate specific to cellobiohydrolases), following the trends reported for other endoglucanases [45, 49, 55]. The 25% reduction in activity observed on the substrate PASC has also been reported for other GH7 endoglucanases [45]. This broad, non-specific substrate range is a characteristic feature of GH7 endoglucanases. BsGH7-3 effectively decreases substrate viscosity similar to the Cel7A endoglucanase from Neosartorya fischeri P1, though this decrease is lower than that of the Egl7A EG from Talaromyces emersonii CBS394.64 [45, 55]. This reduction in substrate viscosity is also common across endoglucanases. Therefore, we have classified BsGH7-3 as an endoglucanase. The homology-based model of BsGH7-3 further supports its assignment as an endoglucanase: similar to other endoglucanases (such as HiGH7), it lacks the structural motifs A4, B2 and B3 (Fig. 5). Additionally, BsGH7-3 shows variations in the inner surface area of the binding cavity compared to HiGH7: the inner tunnel area is predicted to be 56.1 Å2 smaller, along with a smaller binding cavity on its surface, than in HiGH7 [56]. This suggests possible differences in catalytic mechanism compared to HiGH7.
BsGH7-3, with a pHopt of 8.1, is an alkaliphilic GH7. The endoglucanase from Bacillus sp. MTCC 10048 with a half-life of around 12 h was previously reported to be an alkaliphile [57]. Most other fungal GH7 endoglucanases reported thus far have pH optima between 3.5 and 7.5 [58,59,60]. The EG1 from Humicola grisea var. thermoidea has been reported to show an optimal pH 5.0 though the enzyme was reported to be stable between pH 5.0 and 11.0 at 4 °C for 20 h [60]. The alkaliphilic nature also makes BsGH7-3 compatible to AFEX or lime pretreatment of biomass [61]. With a 65% residual activity after more than 15 days (365 h) at T opt, the half-life of BsGH7-3 is amongst the highest reported compared to other GH7 endoglucanases, particularly at high pH [58]. Chokhawala et al. reported the expression of an engineered T. reesei EGI variant in T. reesei (G230A/D113S/D115T Tr_TrEG1) with a half-life of 161 h at 60 °C and pH 4.85 in comparison to the recombinant (T. reesei host) wild-type TrEG1 with a half-life of 74 h at 60 °C, pH 4.85 [62]. Another EG from Trichoderma harzianum has also been reported to be very stable, with a little change in activity after 2 months of incubation (Additional file 1: Table S3). Here too, the enzyme has a very low turnover number at 0.45 s−1 on the substrate xyloglucan [63]. The alkaliphilic endoglucanase from the Bacillus sp. MTCC 10048 also shows little activity with a turnover number of 0.55 s−1 [57]. Therefore, the comparatively high kinetic efficiency with a k cat of 4.8 s−1 and high stability makes BsGH7-3 a very promising alkaliphilic endoglucanase.
The stimulatory effect observed on BsGH7-3 in the presence of the divalent metal ions Mn2+ and Fe2+ is intriguing. Metal binding studies indicate that the enzyme is not a metalloenzyme, since the 3- to 5-fold increase observed in the presence of metals is reversed upon metal removal. While stimulation by metal ions has been previously reported across cellulases, and in particular stimulation of endoglucanase activity, no mechanisms have been proposed [64,65,66,67,68]. The activity increase is probably due to better folding of the protein and possible metal-induced multimerization effects that enhance protein stability. Further experiments are required to understand the role of metal ions in the stability of this enzyme. The enzyme was also stable in the presence of the four commercial detergents tested, with residual activity in the range of 51–74%. Although ILs are also known to denature enzymes, there are some reports of endoglucanases that are stable towards ILs. The endoglucanases from Stachybotrys microspora are 50% active in the presence of 20% (v/v) 1-butyl-3-methylimidazolium chloride [59]. Gladden et al. reported an endoglucanase from GH12 with the highest activity in the presence of 15% (v/v) 1-ethyl-3-methylimidazolium acetate ([C2mim][OAc]) and another from GH5 with the highest activity in 25% [C2mim][OAc] [69]. BsGH7-3 does not show any loss in activity in the presence of 20% (v/v) of the three ILs tested, indicating high stability and compatibility with IL pretreatment.
BsGH7-3 tolerates salt and also shows up to 1.25-fold increase in activity in the presence of salt. Sequence analysis shows that the acidic residues account for 12% of the total residues and the pI of the protein as determined by ProtParam is 4.96 [70]. Acidic amino acid residues help create a salt hydration shell to resist the denaturing environment created by high salt concentration and confer stability to the protein [71,72,73].
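The sequence-level statistics quoted here (fraction of acidic residues and pI) can be reproduced with Biopython's ProtParam module. The sequence string below is a placeholder fragment; the real BsGH7-3 sequence would be used in practice.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MKFLAVLSLAALSDA"  # placeholder: substitute the mature BsGH7-3 sequence

pa = ProteinAnalysis(seq)
acidic_fraction = sum(seq.count(aa) for aa in "DE") / len(seq)

print(f"acidic residues: {100 * acidic_fraction:.1f} %")
print(f"theoretical pI:  {pa.isoelectric_point():.2f}")
```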
Here we report the annotation and characterization of cellulase genes in B. sorokiniana and derive phylogenetic inferences. Based on expression profiling of the cellulase genes, the third homolog of GH7 was characterized to be an endoglucanase from the GH7 family. The enzyme is highly thermostable, salt tolerant and of higher kinetic competence than most similarly thermostable fungal GH7 EGs. Several other cellulase genes of the pathogen have also been shortlisted based on expression levels, and their characterization is ongoing in the laboratory. We hope that this methodology of searching and screening will further enhance the repertoire of promising enzymes, particularly in plant pathogens, and help us find novel enzymes in the degradation of specific plant biomass.
EG: endoglucanase
CBH: cellobiohydrolase
CMC: carboxymethyl cellulose
Bs: Bipolaris sorokiniana
[C2mim]: 1-ethyl-3-methylimidazolium
[C2C1im][Cl]: 1-ethyl-3-methylimidazolium chloride
[C2C1im][MeCO2]: 1-ethyl-3-methylimidazolium acetate
[C2C1im][C2C2PO4]: 1-ethyl-3-methylimidazolium phosphate
AA: auxiliary activity
GH7: glycoside hydrolase family 7
PASC: phosphoric acid swollen cellulose
ICP-MS: inductively coupled plasma mass spectrometry
t1/2: half-life
Durand H, Clanet M, Tiraby G. Genetic improvement of Trichoderma reesei for large scale cellulase production. Enzyme Microbiol Technol. 1988;10(6):341–6.
Martinez D, Berka RM, Henrissat B, Saloheimo M, Arvas M, Baker SE, Chapman J, Chertkov O, Coutinho PM, Cullen D, et al. Genome sequencing and analysis of the biomass-degrading fungus Trichoderma reesei (syn. Hypocrea jecorina). Nat Biotechnol. 2008;26(5):553–60.
Harris PV, Welner D, McFarland KC, Re E, Navarro Poulsen JC, Brown K, Salbo R, Ding H, Vlasenko E, Merino S, et al. Stimulation of lignocellulosic biomass hydrolysis by proteins of glycoside hydrolase family 61: structure and function of a large, enigmatic family. Biochemistry. 2010;49(15):3305–16.
Datta S. Recent strategies to overexpress and engineer cellulases for biomass degradation. Curr Metabol. 2016;4(1):14–22.
Walton JD. Deconstructing the cell wall. Plant Physiol. 1994;104(4):1113–8.
Skamnioti P, Furlong RF, Gurr SJ. The fate of gene duplicates in the genomes of fungal pathogens. Commun Integr Biol. 2008;1(2):196–8.
Zhao Z, Liu H, Wang C, Xu J-R. Erratum to: comparative analysis of fungal genomes reveals different plant cell wall degrading capacity in fungi. BMC Genom. 2014;15:6.
Couturier M, Navarro D, Olivé C, Chevret D, Haon M, Favel A, Lesage-Meessen L, Henrissat B, Coutinho PM, Berrin J-G. Post-genomic analyses of fungal lignocellulosic biomass degradation reveal the unexpected potential of the plant pathogen Ustilago maydis. BMC Genom. 2012;13:57.
King BC, Waxman KD, Nenni NV, Walker LP, Bergstrom GC, Gibson DM. Arsenal of plant cell wall degrading enzymes reflects host preference among plant pathogenic fungi. Biotechnol Biofuels. 2011;4:4.
Woodward J, Wiseman A. Fungal and other β-glucosidases—their properties and applications. Enzyme Microbial Technol. 1982;4(2):73–9.
Väljamäe P, Sild V, Nutt A, Pettersson G, Johansson G. Acid hydrolysis of bacterial cellulose reveals different modes of synergistic action between cellobiohydrolase I and endoglucanase I. Eur J Biochem. 1999;266(2):327–34.
Zhang YH, Lynd LR. A functionally based model for hydrolysis of cellulose by fungal cellulase. Biotechnol Bioeng. 2006;94(5):888–98.
Sahu R, Sharaff M, Pradhan M, Sethi A, Bandyopadhyay T, Mishra VK, Chand R, Chowdhury AK, Joshi AK, Pandey SP. Elucidation of defense-related signaling responses to spot blotch infection in bread wheat (Triticum aestivum L.). Plant J. 2016;86(1):35–49.
Chand R, Pandey S, Singh H, Kumar S, Joshi A. Variability and its probable cause in natural populations of spot blotch pathogen Bipolaris sorokiniana of wheat (T. aestivum L.) in India. J Plant Dis Prot. 2003;110(1):27–35.
Pandey S, Sharma S, Chand R, Shahi P, Joshi AK. Clonal variability and Its relevance in generation of new pathotypes in the spot blotch pathogen, Bipolaris sorokiniana. Curr Microbiol. 2008;56:33–41.
Bouton JH. Molecular breeding of switchgrass for use as a biofuel crop. Curr Opin Genet Dev. 2007;17(6):553–8.
Geimba MP, Riffel A, Agostini V, Brandelli A. Characterisation of cellulose-hydrolysing enzymes from the fungus Bipolaris sorokiniana. J Sci Food Agric. 1999;79(13):1849–54.
Geimba MP, Brandelli A. Extracellular enzymatic activities of Bipolaris sorokiniana isolates. J Basic Microbiol. 2002;42(4):246–53.
Ohm RA, Feau N, Henrissat B, Schoch CL, Horwitz BA, Barry KW, Condon BJ, Copeland AC, Dhillon B, Glaser F, et al. Diverse lifestyles and strategies of plant pathogenesis encoded in the genomes of eighteen Dothideomycetes fungi. PLoS Pathog. 2012;8(12):e1003037.
Condon BJ, Leng Y, Wu D, Bushley KE, Ohm RA, Otillar R, Martin J, Schackwitz W, Grimwood J, MohdZainudin N, et al. Comparative genome structure, secondary metabolite, and effector coding capacity across Cochliobolus pathogens. PLoS Genet. 2013;9(1):e1003233.
Cantarel BL, Coutinho PM, Rancurel C, Bernard T, Lombard V, Henrissat B. The carbohydrate-active enzymes database (CAZy): an expert resource for glycogenomics. Nucleic Acids Res. 2009;37(Database issue):D233–8.
Huang Y, Niu B, Gao Y, Fu L, Li W. CD-HIT Suite: a web server for clustering and comparing biological sequences. Bioinformatics. 2010;26(5):680–2.
Katoh K, Misawa K, Kuma K, Miyata T. MAFFT: a novel method for rapid multiple sequence alignment based on fast Fourier transform. Nucleic Acids Res. 2002;30(14):3059–66.
Finn RD, Coggill P, Eberhardt RY, Eddy SR, Mistry J, Mitchell AL, Potter SC, Punta M, Qureshi M, Sangrador-Vegas A, et al. The Pfam protein families database: towards a more sustainable future. Nucleic Acids Res. 2016;44(D1):D279–85.
Jones L, Keining T, Eamens A, Vaistij FE. Virus-induced gene silencing of Argonaute genes in Nicotiana benthamiana demonstrates that extensive systemic silencing requires Argonaute 1-like and Argonaute 4-like genes. Plant Physiol. 2006;141(2):598–606.
Mohanta TK, Arora PK, Mohanta N, Parida P, Bae H. Identification of new members of the MAPK gene family in plants shows diverse conserved domains and novel activation loop variants. BMC Genom. 2015;16(1):58.
Gabaldon T, Koonin EV. Functional and evolutionary implications of gene orthology. Nat Rev Genet. 2013;14(5):360–6.
Singh RK, Gase K, Baldwin IT, Pandey SP. Molecular evolution and diversification of the Argonaute family of proteins in plants. BMC Plant Biol. 2015;15(1):23.
Stamatakis A. RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics. 2014;30(9):1312–3.
Maddison WP, Donoghue MJ, Maddison DR. Outgroup analysis and parsimony. Syst Biol. 1984;33(1):83–103.
Pearson T, Hornstra HM, Sahl JW, Schaack S, Schupp JM, Beckstrom-Sternberg SM, O'Neill MW, Priestley RA, Champion MD, Beckstrom-Sternberg JS, et al. When outgroups fail; phylogenomics of rooting the emerging pathogen, Coxiella burnetii. Syst Biol. 2013;62(5):752–62.
Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S. MEGA5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol Biol Evol. 2011;28(10):2731–9.
Hu B, Jin J, Guo A-Y, Zhang H, Luo J, Gao G. GSDS 2.0: an upgraded gene feature visualization server. Bioinformatics. 2015;31(8):1296–7.
Sievers F, Wilm A, Dineen D, Gibson TJ, Karplus K, Li W, Lopez R, McWilliam H, Remmert M, Soding J, et al. Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega. Mol Syst Biol. 2011;7:539.
Söding J, Biegert A, Lupas AN. The HHpred interactive server for protein homology detection and structure prediction. Nucleic Acids Res. 2005;33(Web Server issue):W244–8.
Zhang Y, Skolnick J. TM-align: a protein structure alignment algorithm based on the TM-score. Nucleic Acids Res. 2005;33(7):2302–9.
Dundas J, Ouyang Z, Tseng J, Binkowski A, Turpaz Y, Liang J. CASTp: computed atlas of surface topography of proteins with structural and topographical mapping of functionally annotated residues. Nucleic Acids Res. 2006;34(Web Server issue):W116–8.
Shazman S, Celniker G, Haber O, Glaser F, Mandel-Gutfreund Y. Patch Finder Plus (PFplus): a web server for extracting and displaying positive electrostatic patches on protein surfaces. Nucleic Acids Res. 2007;35(Web Server issue):W526–30.
Schrödinger, LLC. The PyMOL Molecular Graphics System, Version 1.8 (www.pymol.org); 2015.
Liljeroth E, Jansson HB, Schafer W. Transformation of Bipolaris sorokiniana with the GUS gene and use for studying fungal colonization of barley roots. Phytopathology. 1993;83:1484–9.
Jaiswal SK, Prasad LC, Sharma S, Kumar S, Prasad R, Pandey SP, Chand R, Joshi AK. Identification of molecular marker and aggressiveness for different groups of Bipolaris sorokiniana isolates causing spot blotch disease in wheat (Triticum aestivum L.). Curr Microbiol. 2007;55(2):135–41.
Bradford MM. A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem. 1976;72:248–54.
Miller GL. Use of dinitrosalicylic acid reagent for determination of reducing sugar. Anal Chem. 1959;31(3):426–8.
Walseth CS. Occurrence of cellulases in enzyme preparations from microorganisms. Tappi. 1952;35(5):228–33.
Liu Y, Dun B, Shi P, Ma R, Luo H, Bai Y, Xie X, Yao B. A novel GH7 Endo-β-1,4-Glucanase from Neosartorya fischeri P1 with good thermostability, broad substrate specificity and potential application in the brewing industry. PLoS ONE. 2015;10(9):e0137485.
Himmel ME, Ding SY, Johnson DK, Adney WS, Nimlos MR, Brady JW. Biomass recalcitrance: engineering plants and enzymes for biofuels production. Science. 2007;315(5813):804–7.
Garcia-Vallve S, Romeu A, Palau J. Horizontal gene transfer of glycosyl hydrolases of the rumen fungi. Mol Biol Evol. 2000;17(3):352–61.
Lucas R, Robles A, Garcia MT, Alvarez De Cienfuegos G, Galvez A. Production, purification, and properties of an endoglucanase produced by the hyphomycete Chalara (Syn. Thielaviopsis) paradoxa CH32. J Agric Food Chem. 2001;49(1):79–85.
Zhang L, Fan Y, Zheng H, Du F, Zhang KQ, Huang X, Wang L, Zhang M, Niu Q. Isolation and characterization of a novel endoglucanase from a Bursaphelenchus xylophilus metagenomic library. PLoS ONE. 2013;8(12):e82437.
Sheldon RA, Lau RM, Sorgedrager MJ, van Rantwijk F, Seddon KR. Biocatalysis in ionic liquids. Green Chem. 2002;4(2):147–51.
Sinha SK, Datta S. β-Glucosidase from the hyperthermophilic archaeon Thermococcus sp. is a salt-tolerant enzyme that is stabilized by its reaction product glucose. Appl Microbiol Biotechnol. 2016;100(19):8399–409.
Goswami S, Gupta N, Datta S. Using the β-glucosidase catalyzed reaction product glucose to improve the ionic liquid tolerance of β-glucosidases. Biotechnol Biofuels. 2016;9:72.
Ubhayasekera W, Munoz IG, Vasella A, Stahlberg J, Mowbray SL. Structures of Phanerochaete chrysosporium Cel7D in complex with product and inhibitors. FEBS J. 2005;272(8):1952–64.
Knott BC, Crowley MF, Himmel ME, Ståhlberg J, Beckham GT. Carbohydrate-protein interactions that drive processive polysaccharide translocation in enzymes revealed from a computational study of cellobiohydrolase processivity. J Am Chem Soc. 2014;136(24):8810–9.
Wang K, Luo H, Shi P, Huang H, Bai Y, Yao B. A highly-active endo-1,3-1,4-β-glucanase from thermophilic Talaromyces emersonii CBS394.64 with application potential in the brewing and feed industries. Process Biochem. 2014;49(9):1448–56.
Ducros VM, Tarling CA, Zechel DL, Brzozowski AM, Frandsen TP, von Ossowski I, Schulein M, Withers SG, Davies GJ. Anatomy of glycosynthesis: structure and kinetics of the Humicola insolens Cel7B E197A and E197S glycosynthase mutants. Chem Biol. 2003;10(7):619–28.
Sadhu S, Saha P, Sen SK, Mayilraj S, Maiti TK. Production, purification and characterization of a novel thermotolerant endoglucanase (CMCase) from Bacillus strain isolated from cow dung. SpringerPlus. 2013;2(1):10.
Payne CM, Knott BC, Mayes HB, Hansson H, Himmel ME, Sandgren M, Stahlberg J, Beckham GT. Fungal cellulases. Chem Rev. 2015;115(3):1308–448.
Ben Hmad I, Boudabbous M, Belghith H, Gargouri A. A novel ionic liquid-stable halophilic endoglucanase from Stachybotrys microspora. Process Biochem. 2017;54:59–66.
Takashima S, Nakamura A, Hidaka M, Masaki H, Uozumi T. Cloning, sequencing, and expression of the cellulase genes of Humicola grisea var. thermoidea. J Biotechnol. 1996;50(2):137–47.
da Costa Sousa L, Chundawat SP, Balan V, Dale BE. 'Cradle-to-grave' assessment of existing lignocellulose pretreatment technologies. Curr Opin Biotechnol. 2009;20(3):339–47.
Chokhawala HA, Roche CM, Kim T-W, Atreya ME, Vegesna N, Dana CM, Blanch HW, Clark DS. Mutagenesis of Trichoderma reesei endoglucanase I: impact of expression host on activity and stability at elevated temperatures. BMC Biotechnol. 2015;15(1):1–12.
Pellegrini VO, Serpa VI, Godoy AS, Camilo CM, Bernardes A, Rezende CA, Junior NP, Franco Cairo JP, Squina FM, Polikarpov I. Recombinant Trichoderma harzianum endoglucanase I (Cel7B) is a highly acidic and promiscuous carbohydrate-active enzyme. Appl Microbiol Biotechnol. 2015;99(22):9591–604.
Kern M, McGeehan JE, Streeter SD, Martin RN, Besser K, Elias L, Eborall W, Malyon GP, Payne CM, Himmel ME, et al. Structural characterization of a unique marine animal family 7 cellobiohydrolase suggests a mechanism of cellulase salt tolerance. Proc Natl Acad Sci USA. 2013;110(25):10189–94.
You S, Tu T, Zhang L, Wang Y, Huang H, Ma R, Shi P, Bai Y, Su X, Lin Z, et al. Improvement of the thermostability and catalytic efficiency of a highly active β-glucanase from Talaromyces leycettanus JCM12802 by optimizing residual charge–charge interactions. Biotechnol Biofuels. 2016;9(1):1–12.
Theberge M, Lacaze P, Shareck F, Morosoli R, Kluepfel D. Purification and characterization of an endoglucanase from Streptomyces lividans 66 and DNA sequence of the gene. Appl Environ Microbiol. 1992;58(3):815–20.
Rawat R, Kumar S, Chadha BS, Kumar D, Oberoi HS. An acidothermophilic functionally active novel GH12 family endoglucanase from Aspergillus niger HO: purification, characterization and molecular interaction studies. Antonie Van Leeuwenhoek. 2015;107(1):103–17.
Li CH, Wang HR, Yan TR. Cloning, purification, and characterization of a heat- and alkaline-stable endoglucanase B from Aspergillus niger BCRC31494. Molecules. 2012;17(8):9774–89.
Gladden JM, Park JI, Bergmann J, Reyes-Ortiz V, D'Haeseleer P, Quirino BF, Sale KL, Simmons BA, Singer SW. Discovery and characterization of ionic liquid-tolerant thermophilic cellulases from a switchgrass-adapted microbial community. Biotechnol Biofuels. 2014;7:15.
Gasteiger E, Hoogland C, Gattiker A, Duvaud SE, Wilkins MR, Appel RD, Bairoch A. Protein identification and analysis tools on the ExPASy server. In: Walker JM, editor. The proteomics protocols handbook. Totowa: Humana Press; 2005. p. 571–607.
Hirasawa K, Uchimura K, Kashiwa M, Grant WD, Ito S, Kobayashi T, Horikoshi K. Salt-activated endoglucanase of a strain of alkaliphilic Bacillus agaradhaerens. Antonie Van Leeuwenhoek. 2006;89(2):211–9.
Zhang T, Datta S, Eichler J, Ivanova N, Axen SD, Kerfeld CA, Chen F, Kyrpides N, Hugenholtz P, Cheng J-F, et al. Identification of a haloalkaliphilic and thermostable cellulase with improved ionic liquid tolerance. Green Chem. 2011;13(8):2083–90.
Endo K, Hakamada Y, Takizawa S, Kubota H, Sumitomo N, Kobayashi T, Ito S. A novel alkaline endoglucanase from an alkaliphilic Bacillus isolate: enzymatic properties, and nucleotide and deduced amino acid sequences. Appl Microbiol Biotechnol. 2001;57(1):109–16.
SA and RKS contributed equally to this work. SD and SPP designed the study; SA, RKS and PK conducted the study and analysed the data; SD and SPP participated in conducting the study and data analysis as well as provided resources. All the authors participated in writing the manuscript. All authors read and approved the final manuscript.
SD and SPP gratefully acknowledge the additional support by Indian Institute of Science Education and Research Kolkata. SA and PK acknowledge the support by IISER Kolkata Institute fellowship.
Additional file 1: Supplemental material to "Genome-wide characterization of cellulases from the hemi-biotrophic plant pathogen, Bipolaris sorokiniana, reveals presence of a highly stable GH7 endoglucanase".
All authors approved the manuscript and this submission.
This work was supported in part by Rapid Grant for Young Investigators, Department of Biotechnology, Government of India, BT/PR6511/GBD/27/424/2012 (SD), Energy Bioscience Overseas Fellowship, Department of Biotechnology, Government of India, BT/NBDB/22/06/2011 (SD), Science & Engineering Research Board, EMR/2016/003705 (SD) and by the WHEAT Competitive Grants Initiative, CIMMYT and the CGIAR, A4031.09.10 (SPP). RKS is supported by the MPG-India partner group program of the Max Planck Society, Germany and the Indo-German Centre for Science and Technology/Department of Science and Technology, Government of India (SPP).
Shritama Aich and Ravi K. Singh contributed equally to this work
Department of Biological Sciences, Indian Institute of Science Education and Research Kolkata, Mohanpur, 741246, India
Ravi K. Singh, Pritha Kundu & Shree P. Pandey
Protein Engineering Laboratory, Department of Biological Sciences, Indian Institute of Science Education and Research Kolkata, Mohanpur, India
Shritama Aich & Supratim Datta
Centre for Advanced Functional Materials, Indian Institute of Science Education and Research Kolkata, Mohanpur, India
Supratim Datta
Shritama Aich
Ravi K. Singh
Pritha Kundu
Shree P. Pandey
Correspondence to Shree P. Pandey or Supratim Datta.
Additional file 1.
Supplemental material to "Genome-wide characterization of cellulases from the hemi-biotrophic plant pathogen, Bipolaris sorokiniana, reveals presence of a highly stable GH7 endoglucanase". Figure S1. Transcript sequences of B. sorokiniana GHs (GH3, GH6, GH7 and GH45) and AA9 genes. Figure S2. Genomic architecture of B. sorokiniana GHs and redox enzymes (AA9). The figure shows the schematic of arrangement of introns and exons in each of the BsGH homologs (comparative lengths are unscaled). Figure S3. Multiple sequence alignment of protein sequences of B. sorokiniana GHs and AA9. Figure S4. Complete CDS sequence of B. sorokiniana GH7-3 as obtained after sequencing. Figure S5. (a) The upper and lower diagonal in the matrix represent the % of identities between the sequences and RMSD (Å) between the structures of TrGH7 (7CEL-A), HiGH7 (1OJJ-B) and BsGH7-3, respectively. (b) The structural diversities in the binding regions among TrGH7, HiGH7 and BsGH7-3. (c) Comparative electrostatic potential distribution between TrGH7, HiGH7 and BsGH7-3. Table S1. The genomic features of B. sorokiniana GHs and AA9 genes. Details of length of exons and ORF coordinates of B. sorokiniana GHs and AA9 transcripts. Table S2. (a) Details of primers used for qPCR analysis of B. sorokiniana GHs transcripts (b) Details of primers used for cloning of BsGH7-3 transcripts. Table S3. Comparison of BsGH7-3 with other fungal endoglucanases of the GH7 family with CMC as the substrate.
Aich, S., Singh, R.K., Kundu, P. et al. Genome-wide characterization of cellulases from the hemi-biotrophic plant pathogen, Bipolaris sorokiniana, reveals the presence of a highly stable GH7 endoglucanase. Biotechnol Biofuels 10, 135 (2017). https://doi.org/10.1186/s13068-017-0822-0
Cell wall-degrading enzymes
GH7 endoglucanases
Alkaliphilic
Thermostable
|
CommonCrawl
|
(Math Challenge) Who can answer it first?
Find an integer x such that $\frac{2}{3} < \frac{x}{5} < \frac{6}{7}$
x = 4, since 2/3 = 0.666666..., 4/5 = 0.8, and 6/7 = 0.857142..., so 2/3 < 4/5 < 6/7.
Get a common denominator between 3, 5 and 7: 105.
So we have
70/105 < 21x/105 < 90/105
Note that if x = 4 we have that
70/105 < 84/105 < 90/105
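For anyone who wants to verify that there is no other integer solution, a brute-force sketch over a generous range:

```python
# Find all integers x with 2/3 < x/5 < 6/7, using exact rational arithmetic.
from fractions import Fraction

solutions = [x for x in range(-100, 101)
             if Fraction(2, 3) < Fraction(x, 5) < Fraction(6, 7)]
print(solutions)  # [4] -- x = 4 is the unique integer solution
```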
read my username!
|
CommonCrawl
|
Asian Pacific Journal of Cancer Prevention
Asian Pacific Organization for Cancer Prevention (아시아태평양암예방학회)
Acute Promyelocytic Leukemia: a Single Center Study from Southern Pakistan
Sultan, Sadia; Irfan, Syed Mohammed; Ashar, Sana
https://doi.org/10.7314/APJCP.2015.16.17.7893
Background: Acute promyelocytic leukemia (APL) is a distinctive clinical, biological and molecular subtype of acute myeloid leukemia. However, data from Pakistan are scarce. Therefore we reviewed the demographic and clinical profile along with risk stratification of APL patients at our center. Materials and Methods: In this descriptive cross-sectional study, 26 patients with acute promyelocytic leukemia were enrolled from January 2011 to June 2015. Data were analyzed with SPSS version 22. Results: The mean age was $31.8 \pm 1.68$ years with a median of 32 years. The female to male ratio was 2:1.2. The majority of our patients had the hypergranular variant (65.4%) rather than the microgranular type. The major complaints were bleeding (80.7%), fever (76.9%), generalized weakness (30.7%) and dyspnea (15.38%). Physical examination revealed petechial rashes as the predominant finding, detected in 61.5%, followed by pallor in 30.8%. The mean hemoglobin was $8.04 \pm 2.29$ g/dl with a mean MCV of $84.7 \pm 7.72$ fl. The mean total leukocyte count was $5.44 \pm 7.62 \times 10^9$/l, the mean ANC $1.08 \pm 2.98 \times 10^9$/l, and the mean platelet count $38.84 \pm 5.38 \times 10^9$/l. According to risk stratification, 15.3% were in the high, 65.4% in the intermediate and 19.2% in the low risk group. Conclusions: Clinico-epidemiological features of APL in Pakistani patients appear comparable to published data. Haemorrhagic diathesis is the commonest presentation. Risk stratification revealed a predominance of intermediate risk disease.
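The abstract does not state which cutoffs were used for risk stratification; the sketch below assumes the commonly used Sanz (PETHEMA/GIMEMA) criteria, with counts in units of 10^9/L, so treat the thresholds as an assumption rather than the authors' protocol.

```python
def sanz_risk(wbc, platelets):
    """Classify APL relapse risk (counts in 10^9/L).

    Assumed cutoffs (standard Sanz criteria, not stated in the abstract):
    high if WBC > 10; low if WBC <= 10 and platelets > 40; intermediate otherwise.
    """
    if wbc > 10:
        return "high"
    if platelets > 40:
        return "low"
    return "intermediate"

# Example with the cohort means reported above (WBC 5.44, platelets 38.84):
print(sanz_risk(5.44, 38.84))  # -> intermediate
```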
Acute promyelocytic leukemia;clinico-epidemiological;risk stratification;Pakistan
|
CommonCrawl
|
Regular and Chaotic Dynamics, ISSN 1560-3547 (print), 1468-4845 (online)
Volume 25, Number 1, 2020
Special issue: In honor of Valery Kozlov for his 70th birthday
Valery V. Kozlov. On the Occasion of his 70th Birthday
Citation: Valery V. Kozlov. On the Occasion of his 70th Birthday, Regular and Chaotic Dynamics, 2020, vol. 25, no. 1, pp. 1
DOI:10.1134/S1560354720010013
Artemyev A. V., Neishtadt A. I., Vasiliev A. A.
A Map for Systems with Resonant Trappings and Scatterings
Slow-fast dynamics and resonant phenomena can be found in a wide range of physical systems, including problems of celestial mechanics, fluid mechanics, and charged particle dynamics. Important resonant effects that control transport in the phase space in such systems are resonant scatterings and trappings. For systems with weak diffusive scatterings the transport properties can be described with the Chirikov standard map, and the map parameters control the transition between stochastic and regular dynamics. In this paper we put forward the map for resonant systems with strong scatterings that result in nondiffusive drift in the phase space, and trappings that produce fast jumps in the phase space. We demonstrate that this map describes the transition between stochastic and regular dynamics and find the critical parameter values for this transition.
Keywords: scattering on resonance, capture into resonance
Citation: Artemyev A. V., Neishtadt A. I., Vasiliev A. A., A Map for Systems with Resonant Trappings and Scatterings, Regular and Chaotic Dynamics, 2020, vol. 25, no. 1, pp. 2-10
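As a side note, the Chirikov standard map mentioned in this abstract is easy to iterate numerically; in the sketch below the stochasticity parameter K and the initial condition are arbitrary illustrative choices.

```python
import math

def standard_map(theta, p, K, n_steps):
    """Iterate the Chirikov standard map: p' = p + K sin(theta), theta' = theta + p'."""
    orbit = []
    for _ in range(n_steps):
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        orbit.append((theta, p))
    return orbit

# Below K ~ 0.97 the dynamics is mostly regular; above it, increasingly chaotic.
orbit = standard_map(theta=1.0, p=0.5, K=1.2, n_steps=1000)
print(orbit[:3])
```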
Tabachnikov S.
Two Variations on the Periscope Theorem
A (multidimensional) spherical periscope is a system of two ideal mirrors that reflect every ray of light emanating from some point back to this point. A spherical periscope defines a local diffeomorphism of the space of rays through this point, and we describe such diffeomorphisms. We also solve a similar problem for (multidimensional) reversed periscopes, the systems of two mirrors that reverse the direction of a parallel beam of light.
Keywords: periscope, optical reflection, projectively gradient vector field
Citation: Tabachnikov S., Two Variations on the Periscope Theorem, Regular and Chaotic Dynamics, 2020, vol. 25, no. 1, pp. 11-17
Borisov A. V., Tsiganov A. V.
On the Nonholonomic Routh Sphere in a Magnetic Field
This paper is concerned with the motion of an unbalanced dynamically symmetric sphere rolling without slipping on a plane in the presence of an external magnetic field. It is assumed that the sphere can consist completely or partially of dielectric, ferromagnetic, superconducting and crystalline materials. According to the existing phenomenological theory, the analysis of the sphere's dynamics requires in this case taking into account the Lorentz torque, the Barnett – London effect and the Einstein – de Haas effect. Using this mathematical model, we have obtained conditions for the existence of integrals of motion which allow one to reduce integration of the equations of motion to a quadrature similar to the Lagrange quadrature for a heavy rigid body.
Keywords: nonholonomic systems, integrable systems, magnetic field, Barnett – London effect, Einstein – de Haas effect
Citation: Borisov A. V., Tsiganov A. V., On the Nonholonomic Routh Sphere in a Magnetic Field, Regular and Chaotic Dynamics, 2020, vol. 25, no. 1, pp. 18-32
Sachkov Y. L.
Periodic Controls in Step 2 Strictly Convex Sub-Finsler Problems
We consider control-linear left-invariant time-optimal problems on step 2 Carnot groups with a strictly convex set of control parameters (in particular, sub-Finsler problems). We describe all Casimirs linear in momenta on the dual of the Lie algebra.
In the case of rank 3 Lie groups we describe the symplectic foliation on the dual of the Lie algebra. On this basis we show that extremal controls are either constant or periodic. Some related results for other Carnot groups are presented.
Keywords: optimal control, sub-Finsler geometry, Lie groups, Pontryagin maximum principle
Citation: Sachkov Y. L., Periodic Controls in Step 2 Strictly Convex Sub-Finsler Problems, Regular and Chaotic Dynamics, 2020, vol. 25, no. 1, pp. 33-39
Rauch-Wojciechowski S., Przybylska M.
On Dynamics of Jellet's Egg. Asymptotic Solutions Revisited
We study here the asymptotic condition $\dot E=-\mu g_n {\boldsymbol v}_A^2=0$ for an eccentric rolling and sliding ellipsoid with axes of principal moments of inertia directed along geometric axes of the ellipsoid, a rigid body which we call here Jellett's egg (JE). It is shown by using dynamic equations expressed in terms of Euler angles that the asymptotic condition is satisfied by stationary solutions. There are 4 types of stationary solutions: tumbling, spinning, inclined rolling and rotating on the side solutions. In the generic situation of tumbling solutions concise explicit formulas for stationary angular velocities $\dot\varphi_{\mathrm{JE}}(\cos\theta)$, $\omega_{3\mathrm{JE}}(\cos\theta)$ as functions of JE parameters $\widetilde{\alpha},\alpha,\gamma$ are given. We distinguish the case $1-\widetilde{\alpha}<\alpha^2<1+\widetilde{\alpha}$, $1-\widetilde{\alpha}<\alpha^2\gamma<1+\widetilde{\alpha}$ when velocities $\dot\varphi_{\mathrm{JE}}$, $\omega_{3\mathrm{JE}}$ are defined for the whole range of inclination angles $\theta\in(0,\pi)$. Numerical simulations illustrate how, for a JE launched almost vertically with $\theta(0)=\tfrac{1}{100},\tfrac{1}{10}$, the inversion of the JE depends on relations between parameters.
Keywords: rigid body, nonholonomic mechanics, Jellett egg, tippe top
Citation: Rauch-Wojciechowski S., Przybylska M., On Dynamics of Jellet's Egg. Asymptotic Solutions Revisited, Regular and Chaotic Dynamics, 2020, vol. 25, no. 1, pp. 40-58
Kudryashov N. A.
Lax Pairs and Special Polynomials Associated with Self-similar Reductions of Sawada – Kotera and Kupershmidt Equations
Self-similar reductions of the Sawada – Kotera and Kupershmidt equations are studied. Results of Painlevé's test for these equations are given. Lax pairs for solving the Cauchy problems to these nonlinear ordinary differential equations are found. Special solutions of the Sawada – Kotera and Kupershmidt equations expressed via the first Painlevé equation are presented. Exact solutions of the Sawada – Kotera and Kupershmidt equations by means of general solution for the first member of $K_2$ hierarchy are given. Special polynomials for expressions of rational solutions for the equations considered are introduced. The differentialdifference equations for finding special polynomials corresponding to the Sawada – Kotera and Kupershmidt equations are found. Nonlinear differential equations of sixth order for special polynomials associated with the Sawada – Kotera and Kupershmidt equations are obtained. Lax pairs for nonlinear differential equations with special polynomials are presented. Rational solutions of the self-similar reductions for the Sawada – Kotera and Kupershmidt equations are given.
Keywords: higher-order Painlevé equation, Sawada – Kotera equation, Kupershmidt equation, self-similar reduction, special polynomial, exact solution
Citation: Kudryashov N. A., Lax Pairs and Special Polynomials Associated with Self-similar Reductions of Sawada – Kotera and Kupershmidt Equations, Regular and Chaotic Dynamics, 2020, vol. 25, no. 1, pp. 59-77
Andrade J., Boatto S., Combot T., Duarte G., Stuchi T. J.
$N$-body Dynamics on an Infinite Cylinder: the Topological Signature in the Dynamics
The formulation of the dynamics of $N$-bodies on the surface of an infinite cylinder is considered. We have chosen such a surface to be able to study the impact of the surface's topology on the particles' dynamics. For this purpose we need to make a choice of how to generalize the notion of gravitational potential on a general manifold. Following Boatto, Dritschel and Schaefer [5], we define a gravitational potential as an attractive central force which obeys Maxwell-like formulas.
As a result of our theoretical (differential Galois theory) and numerical (Poincaré sections) study, we prove that the two-body dynamics is not integrable. Moreover, for very low energies, when the bodies are restricted to a small region, the topological signature of the cylinder is still present in the dynamics. A perturbative expansion is derived for the force between the two bodies. Such a force can be viewed as the planar limit plus the topological perturbation. Finally, a polygonal configuration of identical masses (identical charges or identical vortices) is proved to be an unstable relative equilibrium for all $N > 2$.
Keywords: $N$-body problem, Hodge decomposition, central forces on manifolds, topology and integrability, differential Galois theory, Poincaré sections, stability of relative equilibria
Citation: Andrade J., Boatto S., Combot T., Duarte G., Stuchi T. J., $N$-body Dynamics on an Infinite Cylinder: the Topological Signature in the Dynamics, Regular and Chaotic Dynamics, 2020, vol. 25, no. 1, pp. 78-110
Markeev A. P.
On Periodic Poincaré Motions in the Case of Degeneracy of an Unperturbed System
This paper is concerned with a one-degree-of-freedom system close to an integrable system. It is assumed that the Hamiltonian function of the system is analytic in all its arguments, its perturbing part is periodic in time, and the unperturbed Hamiltonian function is degenerate. The existence of periodic motions with a period divisible by the period of perturbation is shown by the Poincaré methods. An algorithm is presented for constructing them in the form of series (fractional degrees of a small parameter), which is implemented using classical perturbation theory based on the theory of canonical transformations of Hamiltonian systems. The problem of the stability of periodic motions is solved using the Lyapunov methods and KAM theory. The results obtained are applied to the problem of subharmonic oscillations of a pendulum placed on a moving platform in a homogeneous gravitational field. The platform rotates with constant angular velocity about a vertical passing through the suspension point of the pendulum, and simultaneously executes harmonic small-amplitude oscillations along the vertical. Families of subharmonic oscillations of the pendulum are shown and the problem of their Lyapunov stability is solved.
Keywords: Hamiltonian system, degeneracy, periodic motion, stability
Citation: Markeev A. P., On Periodic Poincaré Motions in the Case of Degeneracy of an Unperturbed System, Regular and Chaotic Dynamics, 2020, vol. 25, no. 1, pp. 111-120
Burov A., Guerman A., Nikonov V. I.
Asymptotic Invariant Surfaces for Non-Autonomous Pendulum-Type Systems
Invariant surfaces play a crucial role in the dynamics of mechanical systems separating regions filled with chaotic behavior. Cases where such surfaces can be found are rare enough. Perhaps the most famous of these is the so-called Hess case in the mechanics of a heavy rigid body with a fixed point.
We consider here the motion of a non-autonomous mechanical pendulum-like system with one degree of freedom. The conditions of existence for invariant surfaces of such a system corresponding to non-split separatrices are investigated. In the case where an invariant surface exists, combination of regular and chaotic behavior is studied analytically via the Poincaré – Mel'nikov separatrix splitting method, and numerically using the Poincaré maps.
Keywords: separatrices splitting, chaotic dynamics, invariant surface
Citation: Burov A., Guerman A., Nikonov V. I., Asymptotic Invariant Surfaces for Non-Autonomous Pendulum-Type Systems, Regular and Chaotic Dynamics, 2020, vol. 25, no. 1, pp. 121-130
|
CommonCrawl
|
February 2019, 39(2): 1019-1032. doi: 10.3934/dcds.2019042
Intermediate Lyapunov exponents for systems with periodic orbit gluing property
Xueting Tian 1, Shirou Wang 2,3, and Xiaodong Wang 4,*
School of Mathematical Sciences, Fudan University, Shanghai 200433, China
Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, T6G2G1, Canada
Academy of Mathematics and System Sciences, Chinese Academy of Sciences, Beijing 100190, China
School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
* Corresponding author: Xiaodong Wang
Received March 2018; Revised August 2018; Published November 2018
We prove that the average Lyapunov exponents of asymptotically additive functions have the intermediate value property provided the dynamical system has the periodic gluing orbit property. To be precise, consider a continuous map $f \colon X \rightarrow X$ over a compact metric space $X$ and an asymptotically additive sequence of functions $\Phi = \{\phi_n \colon X \rightarrow \mathbb{R}\}_{n \geq 1}$. If $f$ has the periodic gluing orbit property, then for any constant $a$ satisfying $\inf_{\mu \in \mathcal{M}_{inv}(f,X)} \chi_\Phi(\mu) < a < \sup_{\mu \in \mathcal{M}_{inv}(f,X)} \chi_\Phi(\mu)$, where $\chi_\Phi(\mu) = \liminf_{n \rightarrow \infty} \int \frac{1}{n} \phi_n \, d\mu$ and the infimum and supremum are taken over the set of all $f$-invariant probability measures, there is an ergodic measure $\mu_a \in \mathcal{M}_{inv}(f,X)$ with $\chi_\Phi(\mu_a) = a$ and $\mathrm{supp}(\mu_a) = X$.
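To make the average $\chi_\Phi(\mu)$ concrete in the simplest additive case $\phi_n = \sum_{k=0}^{n-1} \phi \circ f^k$, the sketch below estimates a Birkhoff average along one orbit of the full logistic map; the map and observable are illustrative choices, not taken from the paper.

```python
import math
import random

def logistic(x):
    return 4.0 * x * (1.0 - x)

def birkhoff_average(phi, f, x0, n):
    """Approximate chi = lim (1/n) sum_{k<n} phi(f^k x0) along one orbit."""
    total, x = 0.0, x0
    for _ in range(n):
        total += phi(x)
        x = f(x)
    return total / n

# Lyapunov exponent of the full logistic map: phi = log|f'(x)| = log|4 - 8x|.
# (The singular point x = 0.5 has probability zero for a random start.)
chi = birkhoff_average(lambda x: math.log(abs(4.0 - 8.0 * x)),
                       logistic, random.random(), 100_000)
print(chi)  # ~ log 2 = 0.6931 for Lebesgue-a.e. initial point
```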
Keywords: Shadowing, specification property, gluing orbit property, asymptotically additive functions, Lyapunov exponents.
Mathematics Subject Classification: 37A25, 37D25, 54H20.
Citation: Xueting Tian, Shirou Wang, Xiaodong Wang. Intermediate Lyapunov exponents for systems with periodic orbit gluing property. Discrete & Continuous Dynamical Systems - A, 2019, 39 (2) : 1019-1032. doi: 10.3934/dcds.2019042
|
CommonCrawl
|
Over which product do you suggest the firm keep the tightest control?
Your company has compiled the following data on the small set of products that comprise the specialty repair parts division. Perform ABC analysis on the data. Over which product do you suggest the firm keep the tightest control?
$$ \begin{array}{|c|c|c|} \hline \text { SKU } & \text { Annual Demand } & \text { Unit cost } \\ \hline \text { R11 } & 125 & \$ 25 \\ \hline \text { S22 } & 55 & \$ 90 \\ \hline \text { T33 } & 100 & \$ 500 \\ \hline \text { U44 } & 150 & \$ 550 \\ \hline \text { V55 } & 2000 & \$ 4 \\ \hline \end{array} $$
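One way to see the answer is to rank SKUs by annual dollar volume (demand times unit cost); a minimal sketch:

```python
# Annual dollar volume per SKU, sorted descending: the top-volume item(s)
# form class A and deserve the tightest control.
items = {"R11": (125, 25), "S22": (55, 90), "T33": (100, 500),
         "U44": (150, 550), "V55": (2000, 4)}

volume = {sku: demand * cost for sku, (demand, cost) in items.items()}
for sku, v in sorted(volume.items(), key=lambda kv: -kv[1]):
    print(f"{sku}: ${v:,}")
# U44 ($82,500) and T33 ($50,000) dominate; U44 warrants the tightest control.
```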
In: Operations Management
What is the inventory carrying cost per unit per year for this item?
For a certain item, the cost-minimizing order quantity obtained with the basic EOQ model is 200 units, and the total annual inventory (carrying and setup) cost is $600. What is the inventory carrying cost per unit per year for this item?
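A worked sketch of the reasoning: at the basic EOQ the annual carrying and setup costs are equal, so each is half of the $600 total, and dividing by the average inventory Q/2 yields the per-unit rate.

```python
Q = 200           # cost-minimizing order quantity (units)
total_cost = 600  # annual carrying + setup cost at the EOQ ($)

carrying_cost = total_cost / 2   # at the EOQ the two components are equal
H = carrying_cost / (Q / 2)      # per-unit annual carrying cost
print(H)  # 3.0 -> $3 per unit per year
```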
The assumptions of the production order quantity model are met in a situation where annual demand is 3650 units
The assumptions of the production order quantity model are met in a situation where annual demand is 3650 units, the setup cost is $50, holding cost is $12 per unit per year, the daily demand rate is 10 and the daily production rate is 100. What is the approximate production order quantity for this problem?
The production order quantity for this problem is approximately 612 units. What is the average inventory for this problem?
A production order quantity problem has a daily demand rate = 10 and a daily production rate = 50. The production order quantity for this problem is approximately 612 units. What is the average inventory for this problem?
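A sketch of the average-inventory computation for the production order quantity model, using the figures stated in the question:

```python
Q = 612   # production order quantity (given as ~612 units)
d = 10    # daily demand rate
p = 50    # daily production rate

max_inventory = Q * (1 - d / p)   # inventory peaks below Q because demand
avg_inventory = max_inventory / 2 # is consumed during the production run
print(avg_inventory)  # 244.8 units
```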
A certain type of computer costs $1,000, and the annual holding cost is 25% of the value of the item.
Section A: This section carries 10 points
A certain type of computer costs $1,000, and the annual holding cost is 25% of the value of the item. Annual demand is 10,000 units, and the order cost is $150 per order. What is the approximate economic order quantity? What is the total inventory cost?
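For Section A, the basic EOQ formula Q* = sqrt(2DS/H) applies; a sketch with the stated figures:

```python
import math

D = 10_000        # annual demand (units)
S = 150           # order cost per order ($)
H = 0.25 * 1_000  # annual holding cost: 25% of the $1,000 unit cost

Q_star = math.sqrt(2 * D * S / H)
total_cost = math.sqrt(2 * D * S * H)  # carrying + ordering cost at Q*
print(round(Q_star, 1), round(total_cost))  # ~109.5 units, ~$27,386
```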
Section B: This section carries 10 points
What is the difference between continuous review system and periodic review system?
The two most basic inventory questions answered by the typical inventory model are:
A) timing of orders and cost of orders.
B) order quantity and cost of orders.
C) timing of orders and order quantity.
D) order quantity and service level.
E) ordering cost and carrying cost.
Which of the following is NOT one of the four main types of inventory?
A. raw material inventory
B. work-in-process inventory
C. maintenance/repair/operating supply inventory (MRO)
D. just-in-time inventory
E. finished-goods inventory
The objective of inventory management is to
A. strike a balance between inventory investment and customer service.
B. take advantage of quantity discounts.
C. decouple various parts of the production process.
D. provide a selection of goods for anticipated customer demand.
A soft drink (mostly water) flows in a pipe at a beverage plant with a mass flow rate that would fill 220 0.355
A soft drink (mostly water) flows in a pipe at a beverage plant with a mass flow rate that would fill 220 0.355-L cans per minute. At point 2 in the pipe, the gauge pressure is 152 kPa and the cross-sectional area is 8.00 cm^2. At point 1, 1.35 m above point 2, the cross-sectional area is 2.00 cm^2.
Find the mass flow rate. kg/s
Find the volume flow rate. L/s
Find the flow speed at point 1. m/s
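A sketch of the three parts, treating the drink as water with density 1000 kg/m^3 (an assumption the problem invites):

```python
cans_per_min = 220
can_volume_L = 0.355
rho = 1000.0  # kg/m^3, treating the drink as water

vol_flow_L_per_s = cans_per_min * can_volume_L / 60  # ~1.30 L/s
mass_flow = rho * vol_flow_L_per_s * 1e-3            # ~1.30 kg/s
A1 = 2.00e-4                                         # m^2 at point 1
v1 = (vol_flow_L_per_s * 1e-3) / A1                  # ~6.5 m/s
print(mass_flow, vol_flow_L_per_s, v1)
```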
In: Physics
Two children, Ferdinand and Isabella, are playing with a water hose on a sunny summer day.
Two children, Ferdinand and Isabella, are playing with a water hose on a sunny summer day. Isabella is holding the hose in her hand 1.0 meters above the ground and is trying to spray Ferdinand, who is standing 10.0 meters away.
Will Isabella be able to spray Ferdinand if the water is flowing out of the hose at a constant speed v0 of 3.5 meters per second? Assume that the hose is pointed parallel to the ground and take the magnitude of the acceleration g due to gravity to be 9.81 meters per second per second.
To increase the range of the water, Isabella places her thumb on the hose hole and partially covers it. Assuming that the flow remains steady, what fraction f of the cross-sectional area of the hose hole does she have to cover to be able to spray her friend?
Assume that the cross-section of the hose opening is circular with a radius of 1.5 centimeters.
Express your answer as a percentage to the nearest integer.
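A sketch combining projectile motion with the continuity equation; the percentage at the end answers the second part:

```python
import math

g, h, v0, target = 9.81, 1.0, 3.5, 10.0

t = math.sqrt(2 * h / g)       # fall time from 1.0 m (~0.45 s)
range_0 = v0 * t               # ~1.58 m: far short of 10 m, so no
v_needed = target / t          # ~22.1 m/s required at the nozzle
f_covered = 1 - v0 / v_needed  # continuity: A*v is constant along the flow
print(range_0, v_needed, round(100 * f_covered))  # ~84% of the hole covered
```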
An ore sample weighs 17.50 N in air. when the sample is suspended
An ore sample weighs 17.50 N in air. When the sample is suspended by a light cord and totally immersed in water, the tension in the cord is 11.20 N.
Find the total volume and density of the sample.
A hollow plastic sphere is held below the surface of a fresh-water lake by a cord anchored to the bottom of the lake.
A hollow plastic sphere is held below the surface of a fresh-water lake by a cord anchored to the bottom of the lake. The sphere has a volume of 0.650 cubic meters and the tension in the cord is 900 N.
a) Calculate the buoyant force exerted by the water on the sphere.
b) what is the mass of the sphere?
c) the cord breaks and the sphere rises to the surface. When the sphere comes to rest, what fraction of its volume will be submerged?
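A sketch of parts (a) through (c), assuming freshwater density 1000 kg/m^3 and g = 9.8 m/s^2:

```python
rho_w, g = 1000.0, 9.8
V, T = 0.650, 900.0               # sphere volume (m^3), cord tension (N)

B = rho_w * V * g                 # (a) buoyant force ~ 6370 N
m = (B - T) / g                   # (b) equilibrium: B = mg + T -> ~558 kg
frac_submerged = m / (rho_w * V)  # (c) floating: rho_w * V_sub * g = m * g
print(B, m, frac_submerged)       # ~6370 N, ~558 kg, ~0.86
```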
A slab of ice floats on a freshwater lake. What minimum volume must the slab have for a 45.0 kg woman to be able to stand on it without getting her feet wet?
A rock is suspended by a light string. When the rock is in air, the tension in the string is 39.2 N.
A rock is suspended by a light string. When the rock is in air, the tension in the string is 39.2 N. When the rock is totally immersed in water, the tension is 28.4 N. When the rock is totally immersed in an unknown liquid, the tension is 18.6 N. What is the density of the unknown liquid?
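Since the buoyant force equals weight minus tension and the rock's volume is the same in both liquids, the density ratio follows directly; a sketch (water density 1000 kg/m^3 assumed):

```python
W, T_water, T_liquid = 39.2, 28.4, 18.6  # newtons
rho_w = 1000.0

# Buoyant force = weight - tension; for the same rock volume V,
# rho_liquid / rho_water = (W - T_liquid) / (W - T_water).
rho_liquid = rho_w * (W - T_liquid) / (W - T_water)
print(round(rho_liquid))  # ~1907 kg/m^3
```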
|
CommonCrawl
|
Nuanced study of local politics and deforestation in Indonesia
From a new working paper on "The Political Economy of Deforestation in the Tropics" by Robin Burgess, Matthew Hansen, Benjamin Olken, Peter Potapov, and Stefanie Sieber (link),
Logging of tropical forests accounts for almost one-fifth of greenhouse gas emissions worldwide, significantly degrades rural livelihoods and threatens some of the world's most diverse ecosystems. This paper demonstrates that local-level political economy substantially affects the rate of tropical deforestation in Indonesia. Using a novel MODIS satellite-based dataset that tracks annual changes in forest cover over an 8-year period, we find three main results. First, we show that local governments engage in Cournot competition with one another in determining how much wood to extract from their forests, so that increasing numbers of political jurisdictions leads to increased logging. Second, we demonstrate the existence of "political logging cycles," where illegal logging increases dramatically in the years leading up to local elections. Third, we show that, for local government officials, logging and other sources of rents are short-run substitutes, but that this effect disappears over time as the political equilibrium shifts. The results document substantial deviations from optimal logging practices and demonstrate how the economics of corruption can drive natural resource extraction.
There's lots to like about the paper, including a well-identified causal story. (They were lucky that others had already done most of the leg-work needed to demonstrate this.) It is also a timely contribution, as Indonesia is one of the pilot cases for the new global REDD initiative to deal with green house gas build up through forest protection "carbon credits" (link). This kind of "diagnostic" research can determine intervention points that should be targeted by future programs aiming to promote forest conservation. It's already a long paper, but their case would be strengthened if they provided some narrative accounts that demonstrated the plausibility of their interpretation of the data.
Mechanisms to deal with grade inflation
New York Times covers measures recommended by a UNC committee, led by sociologist Andrew Perrin, to deal with grade inflation (link). The suggestions include issuing a statement on the appropriate proportion of students in each class that should receive A's and also having students' transcripts include information on a class's grade distribution (e.g., the class median grade or the percentage of A's) next to a student's grade for that class.
This is an interesting design problem. For graduate school admissions, as grades become less informative as signals of quality, it would seem that the result would be for standardized tests to receive extra weight. This puts a lot of stress on standardized tests, and it's not clear that, e.g., GREs are up to the job, given that they are meant to screen for such a broad range of application types. Witness the amount of heaping that takes place at the upper end of the score range for the quantitative section of the GRE. Ultimately this introduces a lot of arbitrariness into the graduate admissions process.
The solution of adding extra information to transcripts is reasonable given the constraints. But it passes the buck to admissions committees (and other committees, such as scholarship decision committees) who have to expend the effort to make sense of it all. A question, though, is whether these kinds of transcripts cause students to change their behavior in a way that helps to restore some of the information content in grades. Lots of other interesting things to consider as part of the design problem, including how an optimal grading scheme should combine information on a student's absolute versus relative (to other students in the class) performance.
Clustering, unit level randomization, and insights from multisite trials
Another update to the previous post (link) on clustering of potential outcomes even when randomization occurs at the unit level within clusters: Researching the topic a bit more, I discovered that the literature on "multisite trials" addresses precisely these issues. E.g., this paper by Raudenbush and Liu (2000; link) examines the consequences of site-level heterogeneity in outcomes and treatment effects. They formalize a balanced multisite experiment with a hierarchical linear model, $latex Y_{ij} = \beta_{0j} + \beta_{1j}X_{ij} + r_{ij}$, where $latex r_{ij} \sim i.i.d.\,N(0,\sigma^2)$ and $latex X_{ij}$ is a centered treatment variable (-0.5 for control, 0.5 for treated). In this case, an unbiased estimator for the site-specific treatment effect, $latex \hat \beta_{1j}$, is given by the difference in means between treated and control at site $latex j$, and the variance of this estimator over repeated experiments in different sites is given by $latex \tau_{11} + 4\sigma^2/n$, where $latex \tau_{11}$ is the variance of the $latex \beta_{1j}$'s over sites and $latex n$ is the (constant) number of units at each site. Then, an unbiased estimator for the average treatment effect over all sites, $latex 1,\ldots,J$, is simply the average of these site-specific estimates, with variance $latex \frac{\tau_{11} + 4\sigma^2/n}{J}$. What distinguishes this model from the one that I examined in the previous post is that once the site-specific intercept is taken into account, there remains no residual clustering (hence the i.i.d. $latex r_{ij}$'s). Also, heterogeneity in treatment effects is expressed in terms of a simple random effect (implying constant within-group correlation conditional on treatment status). These assumptions are what deliver the clean and simple expression for the variance of the site-specific treatment effect estimator, which may understate the variance in the situations that I examined, where residual clustering was present. It would be useful to study how well this expression approximates what happens in the more complicated data generating process that I set up.
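As a quick sanity check on that variance expression, here is a small simulation sketch under the Raudenbush-Liu model (all parameter values are arbitrary). With these settings the formula gives $latex (0.25 + 4 \cdot 1/20)/200 = 0.00225$, and the simulated variance should land close to that.

```python
import numpy as np

rng = np.random.default_rng(1)
J, n = 200, 20                      # sites; units per site (half treated)
tau11, sigma2 = 0.25, 1.0           # Var(beta_1j) across sites; residual var
beta0, beta1_bar = 1.0, 0.5         # intercept (cancels out); mean effect

ate_hats = []
for _ in range(2000):               # repeated multisite experiments
    beta1 = beta1_bar + rng.normal(0, np.sqrt(tau11), J)
    x = np.repeat([-0.5, 0.5], n // 2)        # centered treatment variable
    site_hats = np.empty(J)
    for j in range(J):
        y = beta0 + beta1[j] * x + rng.normal(0, np.sqrt(sigma2), n)
        # site-specific difference in means (the intercept drops out)
        site_hats[j] = y[x == 0.5].mean() - y[x == -0.5].mean()
    ate_hats.append(site_hats.mean())

print("simulated:", np.var(ate_hats))              # ~0.0023
print("formula:  ", (tau11 + 4 * sigma2 / n) / J)  # 0.00225
```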
Regression discontinuity designs and endogeneity
The Social Science Statistics blog posts a working paper by Daniel Carpenter, Justin Grimmer, Eitan Hersh, and Brian Feinstein on possible endogeneity problems in close electoral margins as a source of causal identification in regression discontinuity studies (link). In their abstract, they summarize their findings as follows:
In this paper we suggest that marginal elections may not be as random as RDD analysts suggest. We draw upon the simple intuition that elections that are expected to be close will attract greater campaign expenditures before the election and invite legal challenges and even fraud after the election. We present theoretical models that predict systematic divergences between winners and losers, even in elections with the thinnest victory margins. We test predictions of our models on a dataset of all House elections from 1946 to 1990. We demonstrate that candidates whose parties hold structural advantages in their district are systematically more likely to win close elections. Our findings call into question the use of close elections for causal inference and demonstrate that marginal elections mask structural advantages that are troubling normatively.
A recent working paper by Urquiola and Verhoogen draws similar conclusions about non-random sorting in studies that use RDDs to study the effects of class size on student performance (link).
The problem here is that the values of the forcing variable assigned to individuals are endogenous to complex processes that, very likely, are driven by the anticipated gains or losses associated with crossing the cut-off point that defines the discontinuity. Though such is not the case in the above examples, it can also happen that the value of the cut-off itself is endogenous. Causal identification requires that the processes determining the values of the forcing variable and the cut-off not be confounded with potential outcomes. What these papers indicate is that RDD analysts need a compelling story for why this is the case. (In other words, they need to demonstrate positive identification [link].)
This can be subtle. As both Carpenter et al. and Urquiola and Verhoogen demonstrate, it's useful to think of this in terms of a mechanism design problem. Take a simple example drawing on the "original" application of RD: test scores used to determine eligibility for extra tutoring assistance. Suppose you have two students who are told that they will take a diagnostic test at the beginning of the year and that the one with the lower score will receive extra assistance during the year, with a tie broken by a coin flip. At the end of the year they will both take a final exam that determines whether they win a scholarship for the following year. The mechanism induces a race to the bottom: both students have an incentive to flunk the diagnostic test, each actually scoring 0, in which case they have a 50-50 chance of getting the help that might increase their chances of landing a scholarship. Interestingly, this actually provides a nice identifying condition. But suppose only one of the students is quick enough to learn what would be the optimal strategy in this situation and the other is a little slow. Then the slow student would put in sincere effort, score above 0, and guarantee that the quick-to-learn student got the tutoring assistance. Repeat this process many times, and you systematically have quick learners below the "cut-off" and slow learners above it, generating a biased estimate of the average effect of tutoring in the neighborhood of the cut-point. What you need for the RD to produce what it purports to produce is a mechanism by which sincere effort is induced (and, as Urquiola and Verhoogen have discussed, a test that minimizes mean-reversion effects).
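To see how this plays out, here is a stylized simulation of the quick/slow learner story (all numbers are arbitrary): quickness is assumed to correlate with final-exam ability, so the naive contrast at the cut-point recovers the ability gap plus the true tutoring effect, not the effect alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 50_000
tau = 5.0                                 # true effect of tutoring

# In each pair, the quick-to-learn student flunks strategically (score 0)
# while the slow student tries sincerely (score > 0), so the quick student
# always falls below the "cut-off" and receives tutoring. Quickness is
# assumed to correlate with final-exam ability (the +0.5 shift).
quick_ability = rng.normal(0.5, 1.0, n_pairs)
slow_ability = rng.normal(0.0, 1.0, n_pairs)

final_quick = 60 + 8 * quick_ability + tau   # treated, just below cut-off
final_slow = 60 + 8 * slow_ability           # control, just above cut-off

# The naive RD contrast at the cut-point mixes the tutoring effect with
# the sorting on ability: ~9 here, not the true tau = 5.
print(final_quick.mean() - final_slow.mean())
```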
UPDATE: A new working paper by Caughey and Sekhon (link) provides even more evidence about problems with close elections as a source of identification for RDD studies. They provide some recommendations (shortened here; the full phrasing is available in the paper):
The burden is on the researcher to…identify and collect accurate data on the observable covariates most likely to reveal sorting at the cut-point. [A] good rule of thumb is to always check lagged values of the treatment and response variables.
Careful attention must be paid to the behavior of the data in the immediate neighborhood of the cut-point. [Our analysis] reveals that the trend towards convergence evident in wider windows reverses close to the cut-point, a pattern that may occur whenever a…treatment is assigned via a competitive process with a known threshold.
Automated bandwidth- and specification-selection algorithms are no sure solution. In our case, for example, the methods recommended in the literature select local linear regression bandwidths that are an order of magnitude larger than the window in which covariate imbalance is most obvious.
It is…incumbent upon the researcher to demonstrate the theoretical relevance of quasi-experimental causal estimates.
This entry was posted in Uncategorized on December 7, 2010 by Cyrus.
Congruence mod $n+1$ of radix-$n$ integers
Let $n\ge2$ be an integer and let $a$ and $b$ be distinct positive integers. Prove (or disprove) that if $a\equiv b\pmod{n+1}$, then the radix-$n$ representations of $a$ and $b$ differ in at least two digits.
Edit to add some more details:
I arrived at this conjecture while trying to find a simple way to partition integers into a small number of sets in such a way that no two integers in a set differ by only a single digit. I have only verified it for a relatively small number of cases, but it seems to work. It makes some intuitive sense to me for reasons that are tricky to verbalize.
As for an attempt at a proof, my initial thought was to observe that if $a$ and $b$ differ in only a single digit, then $a-b=cn^d$ for some integers $1\leq c<n$ and $d\geq0$. Then, if $a\equiv b\pmod{n+1}$, then $(n+1)\mid cn^d$. But from here I'm stuck.
$\begingroup$ Ooh...by the way, do not forget to include your own thoughts and efforts on the question. Have you worked on starting a proof, or searched for a counterexample? We need to see such efforts, in your question post, so that we may help lead you to what you may need to finish. Oh...let me also add, we don't provide answers and proofs on demand. We ask, however, that you "pay" for help by showing you've seriously put effort into proving/finding a counter-example. $\endgroup$
– amWhy
$\begingroup$ @amWhy: My bad. I've added a bit more detail, including the basic approach I've tried so far. $\endgroup$
– zappa
$\begingroup$ @zappa then $(n+1)\mid cn^d$. But $\gcd(n+1, n)=1$, so $\cdots$ P.S. Also note that, the way you wrote it, $c$ could also be negative, so the right condition is rather $0 \lt |c| \lt n$. $\endgroup$
– dxiv
$\begingroup$ Well, that's a great start. It always helps, and never hurts, to include such background. That can be useful for answerers to launch from. $\endgroup$
$\begingroup$ @dxiv oh crap, now I feel silly. I blame it on a new baby and severe lack of sleep. Thank you! If you finish that proof as an answer I'll accept it; otherwise, I'll do that myself in a little while. $\endgroup$
Assume without loss of generality that $a>b$. If $a$ and $b$ differ in just one digit, then there must exist integers $c\in[1..n-1]$ and $e\geq0$ such that $a-b=c\,n^e$. Now, since $a\equiv b\bmod{(n+1)}$, it follows that $(n+1)\mid c\,n^e$. But $\gcd(n,n+1)=1$, so (using an easy generalization of Euclid's Lemma to arbitrary integers) this can only happen when $(n+1)\mid c$, which is clearly impossible due to the restriction that $c\in[1..n-1]$.
(For the claim that $\gcd(n,n+1)=1$: using the fact that $\forall x,y\in\mathbb{N},\gcd(x,y)=\gcd(x,y-x)$, it follows immediately that $\gcd(n,n+1)=\gcd(n,1)=1$.)
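For what it's worth, a quick brute-force check of the claim for small bases and values (the ranges are arbitrary):

```python
def digits(x, n):
    """Radix-n digits of x, least significant first."""
    d = []
    while x:
        d.append(x % n)
        x //= n
    return d or [0]

for n in range(2, 11):
    for a in range(1, 400):
        for b in range(a + 1, 400):
            if (a - b) % (n + 1) != 0:
                continue
            da, db = digits(a, n), digits(b, n)
            L = max(len(da), len(db))
            da += [0] * (L - len(da))          # pad with leading zeros
            db += [0] * (L - len(db))
            # congruent mod n+1, so they must differ in >= 2 digits
            assert sum(p != q for p, q in zip(da, db)) >= 2
print("no single-digit differences found")
```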
December 2016, 36(12): 7191-7206. doi: 10.3934/dcds.2016113
On large deviations for amenable group actions
Dongmei Zheng 1, Ercai Chen 2 and Jiahong Yang 1
1. School of Mathematical Sciences and Institute of Mathematics, Nanjing Normal University, Nanjing, Jiangsu 210023, China
2. School of Mathematical Sciences and Institute of Mathematics, Nanjing Normal University, Nanjing, Jiangsu 210023, China
Received: August 2015. Revised: July 2016. Published: October 2016.
By proving an amenable version of Katok's entropy formula and employing quasi-tiling techniques, we establish large deviation bounds for countable discrete amenable group actions. This generalizes the classical results of Lai-Sang Young [21].
Keywords: Large deviation, specification, amenable group, quasi-tiling, entropy.
Mathematics Subject Classification: Primary: 37A15, 37A60; Secondary: 60F10.
Citation: Dongmei Zheng, Ercai Chen, Jiahong Yang. On large deviations for amenable group actions. Discrete & Continuous Dynamical Systems - A, 2016, 36 (12) : 7191-7206. doi: 10.3934/dcds.2016113
R. Bowen, Topological entropy for noncompact sets, Trans. Amer. Math. Soc., 184 (1973), 125. doi: 10.1090/S0002-9947-1973-0338317-X.
M. Brin and A. Katok, On local entropy, Lecture Notes in Mathematics, 1007 (1983), 30. doi: 10.1007/BFb0061408.
N. Chung, Topological pressure and the variational principle for actions of sofic groups, Ergod. Th. Dynam. Sys., 33 (2013), 1363. doi: 10.1017/S0143385712000429.
N. Chung and H. Li, Homoclinic group, IE group, and expansive algebraic actions, Invent. Math., 199 (2015), 805. doi: 10.1007/s00222-014-0524-1.
T. Downarowicz, D. Huczek and G. Zhang, Tilings of amenable groups, to appear in J. Reine Angew. Math.
A. Eizenberg, Y. Kifer and B. Weiss, Large deviations for $Z^d$-actions, Comm. Math. Phys., 164 (1994), 433. doi: 10.1007/BF02101485.
R. S. Ellis, Entropy, Large Deviations and Statistical Mechanics, Springer-Verlag, 1985. doi: 10.1007/978-1-4613-8533-2.
W. Huang, X. Ye and G. Zhang, Local entropy theory for a countable discrete amenable group action, J. Funct. Anal., 261 (2011), 1028. doi: 10.1016/j.jfa.2011.04.014.
A. Katok, Lyapunov exponents, entropy and periodic orbits for diffeomorphisms, Publ. Math. I.H.E.S., 51 (1980), 137.
Y. Kifer, Multidimensional random subshifts of finite type and their large deviations, Probab. Theory Related Fields, 103 (1995), 223. doi: 10.1007/BF01204216.
E. Lindenstrauss, Pointwise theorems for amenable groups, Invent. Math., 146 (2001), 259. doi: 10.1007/s002220100162.
B. Liang and K. Yan, Topological pressure for sub-additive potentials of amenable group actions, J. Funct. Anal., 262 (2012), 584. doi: 10.1016/j.jfa.2011.09.020.
J. M. Ollagnier, Ergodic Theory and Statistical Mechanics, Lecture Notes in Math. 1115, Springer, 1985. doi: 10.1007/BFb0101575.
J. M. Ollagnier and D. Pinchon, The variational principle, Studia Math., 72 (1982), 151.
D. S. Ornstein and B. Weiss, Entropy and isomorphism theorems for actions of amenable groups, J. Anal. Math., 48 (1987), 1. doi: 10.1007/BF02790325.
L. Rey-Bellet and L.-S. Young, Large deviations in non-uniformly hyperbolic dynamical systems, Ergod. Th. Dynam. Sys., 28 (2008), 587. doi: 10.1017/S0143385707000478.
A. Shulman, Maximal ergodic theorems on groups, Dep. Lit. NIINTI, 1988.
A. M. Stepin and A. T. Tagi-Zade, Variational characterization of topological pressure of the amenable groups of transformations, Dokl. Akad. Nauk SSSR, 254 (1980), 545.
T. Ward and Q. Zhang, The Abramov-Rokhlin entropy addition formula for amenable group actions, Monatshefte für Mathematik, 114 (1992), 317. doi: 10.1007/BF01299386.
B. Weiss, Actions of amenable groups, in Topics in Dynamics and Ergodic Theory, 310 (2003), 226. doi: 10.1017/CBO9780511546716.012.
L.-S. Young, Some large deviation results for dynamical systems, Trans. Amer. Math. Soc., 318 (1990), 525. doi: 10.2307/2001318.
D. Zheng and E. Chen, Bowen entropy for actions of amenable groups, Israel J. Math., 212 (2016), 895. doi: 10.1007/s11856-016-1312-y.
Millennial-scale variability of East Asian summer monsoon inferred from sea surface salinity in the northern East China Sea (ECS) and its impact on the Japan Sea during Marine Isotope Stage (MIS) 3
Yoshimi Kubota, Katsunori Kimoto, Ryuji Tada, Masao Uchida & Ken Ikehara
Color alternations in deep-sea sediment in the Japan Sea have been thought to be linked to millennial-scale variations in the East Asian summer monsoon (EASM), associated with the Dansgaard-Oeschger (D-O) cycles and Heinrich events in the high-latitude North Atlantic during Marine Isotope Stage 3 (MIS 3). In this study, we investigate the variability of sea surface salinity (SSS) in the northern East China Sea (ECS) to evaluate the EASM precipitation in South China and its linkage to the sediment color of the Japan Sea during MIS 3. High time resolution (< 100 years) SSS and sea surface temperature (SST) records were reconstructed using paired Mg/Ca and oxygen isotope measurements of the planktic foraminifer Globigerinoides ruber sensu stricto from core KR07-12 PC-01 recovered from the northern ECS. The results show millennial-scale variability of the SSS, with an amplitude of ~ ±1, during MIS 3. The variations in SSS are well correlated with the D-O cycles and Heinrich events. The EASM precipitation decreases in association with the southward shift of the westerly jet during D-O stadials and Heinrich events, suggesting suppressed moisture convergence along the EASM front associated with a weakened North Pacific subtropical high in response to the slow-down of the Atlantic Meridional Overturning Circulation. In a comparison between the SSS in the ECS and the color alternations in the Japan Sea, closely correlated variations between the two records in the interval 44–34 ka indicate that the SSS in the ECS plays a crucial role in regulating the nutrient and salinity inflow into the Japan Sea. However, the linkage becomes ambiguous, especially after ~ 30 ka, when the sea level falls toward the level of the last glacial maximum. This shift is associated with changes in sediment facies, confirming that the underlying mechanism regulating the sedimentary change in the Japan Sea depends on the sea level.
High-resolution, continuous oxygen isotope (δ18O) records of the Greenland ice cores covering the past > 100 ka have demonstrated millennial-scale climate changes during Marine Isotope Stage (MIS) 3 (59–29 ka), the so-called Dansgaard-Oeschger (D-O) cycles. The D-O cycles have attracted considerable attention due to their abruptness, large amplitude, high frequency, and potential impact on global climate (e.g., Dansgaard et al. 1993). In the Asian monsoon region, the Quaternary hemipelagic sediments of the Japan Sea are characterized by centimeter- to decimeter-scale alternations of dark and light clay to silty clay, which are bio-siliceous and/or bio-calcareous to varying degrees (Tada et al. 1999; Tada et al. 2018; Irino et al. 2018). Tada et al. (1999) demonstrated the occurrence of basin-wide dark-light color alternations in the sediments of the Japan Sea, which are well correlated with the D-O cycles. They hypothesized that these sediment color alternations could be attributed to changes in the nutrient and freshwater influx through the Tsushima Strait due to changes in the Changjiang (Yangtze) River discharge (i.e., summer precipitation in South China). Namely, increased summer precipitation in the Changjiang Basin caused the expansion of the low-salinity, nutrient-enriched coastal waters in the northern part of the East China Sea (ECS) during the interstadials of the D-O cycles, which flowed into the Japan Sea and resulted in the accumulation of organic carbon-rich dark layers (Tada et al. 1999).
Summer precipitation in East Asia is known as Meiyu (China), Baiu (Japan), and Changma (Korea), and is one of the crucial aspects of the East Asian summer monsoon (EASM) system. A series of high-resolution speleothem δ18O (δ18Osp) records in southeastern and central China, which had been interpreted as reflecting EASM intensity/precipitation, demonstrated millennial-scale variations in association with D-O cycles (Wang et al. 2001; Cheng et al. 2016). However, the interpretation of the δ18Osp is controversial as studies using analyses of δ18O mass balance and climate models suggested that the δ18Osp record reflects neither local precipitation nor EASM circulation intensity, but rather changes in the moisture source or seasonality of the precipitation (Clemens et al. 2010; Dayem et al. 2010; Pausata et al. 2011; Maher and Thompson 2012). More recently, a trace element study on one of the Chinese speleothems has claimed that millennial-scale δ18Osp variations during the last deglaciation were better interpreted as changes in the EASM circulation, instead of the local precipitation (Zhang et al. 2018). By contrast, a proxy of sea surface salinity (SSS) from a marine core provides reliable information on the changes in regional precipitation, through a freshwater inflow from rivers, which can further serve to test the hypothesis by Tada et al. (1999).
The northern ECS is a crucial area, as it is where the freshwater from the Changjiang (Yangtze) River arrives and where the ECS waters flow into the Japan Sea (Fig. 1). Ijiri et al. (2005) reported millennial-scale light δ18O peaks of the planktic foraminifer Globigerinoides ruber sensu stricto (δ18Op) in the northern ECS during Marine Isotope Stage (MIS) 3 and suggested that these peaks might capture larger freshwater discharge events associated with D-O interstadials. However, their age model did not allow these timings to be determined precisely enough for comparison with other records, and the effect of temperature changes on foraminiferal δ18O was not adequately evaluated, because they used an alkenone temperature proxy, which may have been recording sea surface temperature (SST) during a different season and at a different depth from those of G. ruber s.s. (Kim et al. 2015).
Fig. 1 a Locality map showing the site of KR07-12 PC-01 and reference sites. b Site location in the ECS during MIS 3 and the shore area (shaded) at the −80 m sea level of MIS 3. c SST (Locarnini et al. 2010) and d SSS in July (Antonov et al. 2010) in the ECS
Thus, in this study, we reconstructed the local δ18O of seawater (δ18Ow-local), an indicator of SSS, and SST in the northern ECS during MIS 3 using paired δ18Op and Mg/Ca of G. ruber s.s., and estimated the variability of SSS to test for millennial-scale EASM precipitation oscillations in South China associated with the D-O cycles. We used marine core KR07-12 PC-01, recovered from the northern ECS, which covers the time interval since 44 ka and thus the latter half of MIS 3.
The ECS is a marginal sea in the northwestern Pacific Ocean, and the East Asian monsoons control its seasonal ocean current system (e.g., Ichikawa and Beardsley 2002). The SSS in the ECS changes drastically throughout a year, particularly in the northern part, due to the considerable influence of river discharge caused by the EASM precipitation (Ichikawa and Beardsley 2002). The Changjiang River is the most significant contributor of freshwater to the ECS, accounting for ~ 90% of the total freshwater supplied to the ECS from rivers, and its discharge is characterized by a remarkable seasonal cycle with its zenith in July and nadir in January (Chen et al. 1994; Isobe et al. 2002). Accordingly, the SSS in the northern ECS decreases in summer and increases in winter (Ichikawa and Beardsley 2002). Changjiang Diluted Water (CDW) is formed by Changjiang freshwater mixing with the ambient seawater around the estuary, which then flows eastward into the Japan Sea during the summer (e.g., Ichikawa and Beardsley 2002; Isobe et al. 2002).
Based on instrumental observations, the maximum SST near the studied site is 28.4 °C in August, and the minimum is 17.7 °C in February (Japan Oceanographic Data Center 2004). In contrast, SSS reaches a maximum of 34.7 in February and decreases to a minimum of 33.0 in July, when the maximum discharge from the Changjiang River occurs. The spatial patterns of SST and SSS during summer are characterized by lower SST and SSS in the northwest and higher SST and SSS in the southeast of the northern ECS (Fig. 1). The lower SSS during summer indicates that river discharge driven by the EASM dominates the seasonal changes over the influence of the Kuroshio (Sun et al. 2005); otherwise, SSS would increase during summer, when the Kuroshio volume transport is largest (Ichikawa and Beardsley 1993; Andres et al. 2008). It has been demonstrated, based on 226Ra and 228Ra measurements of surface waters, that the CDW reaches the easternmost part of the northern ECS across the shelf break during summer (Inoue et al. 2012). Inoue et al. (2012) calculated the relative contribution of the Changjiang freshwater in this area as approximately 2–3% in July and October, based on mass balance calculations of the SSS and the 228Ra concentration of surface water samples collected between 2008 and 2010. Instrumental SSS data around the core site show a negative correlation with the Changjiang River discharge during the wet season over the past 50 years (Kubota et al. 2015), confirming that the studied site is an appropriate location to reconstruct SSS.
Methods/Experimental
The piston core KR07-12 PC-01 (31° 40.63′ N, 129° 01.98′ E, 736 m water depth) was composed of olive-black or olive-gray, homogeneous, well-bioturbated clay. Scattered shell fragments and mottled textures caused by burrows were occasionally identified (Fig. 2). Ash layers were identified at depth intervals of 98.0–118.0 cm and 947.5–990.3 cm, which correspond to the K-Ah and Aira (AT) tephras, respectively, based on the morphology and refractive index of glass shards in each tephra and on heavy mineral assemblages (Table 1). An ash pocket found at 170.0–172.0 cm was identified as Sz-S (Table 1). The refractive index of each volcanic glass shard was measured using the RIMS-2002 analyzer (Danhara et al. 1992) at Kyoto Fission Track Co. Ltd. The other ashes, found at 75.0–84.0 cm and 188.0–190.0 cm, are unidentified.
Fig. 2 Age-depth relationship of core KR07-12 PC-01 together with 2σ age uncertainties and the columnar section. The black circles and lines represent the 14C-based datums and the corresponding age model. The red line indicates the fine-tuned age model. The blue crosses represent the tie points of the fine-tuned age model
Table 1 Results of tephra analyses
Approximately 10 cm3 of bulk sediment samples was subsampled at 2.5 cm intervals and washed using a 63 μm mesh to concentrate foraminiferal tests for 14C dating, δ18Op, and Mg/Ca measurements. For 14C dating, approximately 3–5 mg of the planktic foraminifera Neogloboquadrina dutertrei, from the size fraction > 250 μm, was collected from 15 horizons (Table 2). All of the 14C data were acquired at the NIES-TERRA AMS facility, National Institute for Environmental Studies (Tanaka et al. 2000; Yoneda et al. 2004; Uchida et al. 2004). We analyzed 244 samples in total for paired δ18Op and Mg/Ca. The time resolution of the analyses was ~ 80 years for MIS 3, ~ 130 years for early MIS 2 and Last Glacial Maximum (LGM), and > 300 years for the last deglaciation. For δ18Op and trace element analyses, approximately 30–40 individual G. ruber s.s. was picked from 250 to 355 μm size fractions and crushed to homogenize. After being cleaned by milli-Q water and methanol, approximately 50–70 μg of foraminiferal tests was separated and used for the oxygen isotope analysis. The δ18Op was measured by two Finnigan MAT 252 Stable Isotope Ratio Mass Spectrometers with a Keil III carbonate device installed at the Mutsu Institute for Oceanography in the Japan Agency for Marine-Earth Science and Technology and at the University of Tokyo. The reproducibility of the measurement was better than ± 0.05‰ (1σ) for δ18Op, as determined by replicate measurements of international standards NBS-19 (RM8544 Limestone) and JCp-1 (Coral Porites sp.), provided by the Geological Survey of Japan (GSJ/AIST).
Table 2 Results of AMS radiocarbon and calendar ages
Additional cleaning steps were performed for the trace element analyses. The samples were cleaned using the reductive method, including the reductive and oxidative cleaning steps published by Boyle and Keigwin (1985), with a slight modification (Kubota et al. 2010). Mg/Ca analysis was performed using a Thermo Finnigan ELEMENT 2 High-Resolution Multi-Sector Inductively Coupled Plasma Mass Spectrometer (HR-ICP-MS) at the Mutsu Institute for Oceanography in the Japan Agency for Marine-Earth Science and Technology. Isotopes of four elements (24Mg, 44Ca, 48Ca, and 55Mn) were analyzed using Sc as an internal standard (Uchida et al. 2008). Four working standards were prepared by successive dilutions of the stock standard solutions to match the concentrations of Ca (approximately 100 ppb, 500 ppb, 2 ppm, and 5 ppm) and Mg (0.05 ppb, 0.2 ppb, 1.0 ppb, and 5 ppb), respectively, which covered the Ca and Mg concentration ranges of all samples (Kubota et al. 2010, 2015). The precision of replicate analyses of the working standard for Mg/Ca was better than ±0.09 mmol/mol, corresponding to ±0.3 °C on the temperature scale. Mn/Ca was analyzed to monitor Mn-Fe oxide contamination. Mn/Ca was under 200 μmol/mol in most of the samples (Additional file 1: Figure S1) but exceeded 400 μmol/mol in 5 samples. However, these values were not excluded, because there is no correlation between Mg/Ca and Mn/Ca (Additional file 1: Figure S1). In order to examine the homogeneity of the samples, G. ruber s.s. was repicked from 66 randomly selected horizons of core KR07-12 PC-01 and run for Mg/Ca analyses with the same cleaning protocol. The reanalyzed Mg/Ca value at 1392.1–1389.6 cm was lower than the first by an amount equivalent to 5.0 °C and was discarded. The average difference in Mg/Ca values between the replicate analyses for the remaining 65 horizons was 0.16 mmol/mol (~ 0.7 °C). Compared with the Mg/Ca values of U1429 for the same period in a previous study (Clemens et al. 2018), the average of KR07-12 PC-01 was slightly lower, by 0.157 mmol/mol. Since the Mg/Ca values of U1429 were confirmed against an international CaCO3 reference standard, the coral Porites standard material JCp-1 (4.199 ± 0.065 mmol/mol, Hathorne et al. 2013), we added 0.157 mmol/mol (~ 0.6 °C) to the original Mg/Ca values of KR07-12 PC-01 to adjust them to the U1429 values.
Although the Mg/Ca of foraminiferal calcite is primarily controlled by temperature, a secondary effect of salinity has been pointed out: Mg/Ca increases when salinity increases (e.g., Lea et al. 1999). In this study, in order to examine the salinity effect on the results and better estimate the SSS variability in the northern ECS, we tested three calibration methods to obtain SST and δ18Ow-local, as follows: (1) a conventional Mg/Ca calibration without salinity correction (referred to as "no salinity correction"), (2) an Mg/Ca calibration with salinity correction ("salinity correction"), and (3) an Mg/Ca calibration with salinity correction that additionally incorporates the effect of changes in the endmember δ18Ow ("endmember correction"). The SSS is reconstructed in the "salinity correction" and "endmember correction" methods. Here, we refer to "endmember" values as physical values, such as SSS and δ18Ow, in the source regions. The purpose of the "endmember correction" is to examine whether changes in the δ18Ow in the source region account for changes in the δ18Ow in the northern ECS. The results are compared in Fig. 3.
Fig. 3 Comparison among the results of the three calibrations for KR07-12 PC-01: "no salinity correction" (gray), "salinity correction" (red), and "endmember correction" (blue). a δ18O of planktic foraminifer (δ18Op). b SST. c δ18Ow-local. d SST vs. δ18Ow-local derived from the "no salinity correction" calibration. e SST vs. δ18Ow-local derived from the "salinity correction" calibration
First, for the "no salinity correction" calculation, we used an Mg/Ca calibration (Eq. 1) by Dekens et al. (2002), without correction for sediment water depth, and Bemis et al.'s (1998) δ18O–temperature relationship (low light) (Eq. 2). The δ18Ow-local was obtained after removing the global sea level effect from total δ18Ow using Waelbroeck et al.'s (2002) curve (Eq. 3; Thirumalai et al. 2016).
$$ \mathrm{Mg/Ca} = 0.38\,\exp(0.09 \times T) \qquad (1) $$

$$ T = 16.5 - 4.8 \times \left(\delta^{18}\mathrm{O_p} - \delta^{18}\mathrm{O_w} + 0.27\right) \qquad (2) $$

$$ \delta^{18}\mathrm{O_w} = \delta^{18}\mathrm{O_{w\text{-}local}} - 0.008 \times \text{sea level} \qquad (3) $$
Here, T, δ18Op, and δ18Ow denote temperature, the δ18O of planktic foraminiferal calcite, and the δ18O of water, respectively. The δ18Ow in Eq. 3 includes both local and global (sea level) components. Second, for the "salinity correction" calculation, we used an Mg/Ca calibration with salinity correction (Eq. 4; Tierney et al. 2015) instead of Eq. 1. The SST and δ18Ow-local were derived using a MATLAB script, Paleo-Seawater Uncertainty Solver (PSU Solver) (Thirumalai et al. 2016), following the methodology of Clemens et al. (2018), as explained below. As no regional Mg/Ca calibration exists for the ECS, we employed the Mg/Ca calibration of Tierney et al. (2015), which utilized all available culture data in a multivariate calibration that accounts for both salinity and temperature. We utilized the δ18O-temperature relationship of Eq. 2 and the ECS seawater δ18Ow-local-salinity relationship of Horikawa et al. (2015) (Eq. 5), as well as the global sea-level curve of Waelbroeck et al. (2002) (Eq. 3).
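For concreteness, the first ("no salinity correction") method reduces to a closed-form pipeline. Below is a minimal Python sketch of it (the paper's own calculations were done in MATLAB); the function name and the sea-level convention (meters relative to present, negative in the past) are our own.

```python
import numpy as np

def no_salinity_correction(mgca, d18Op, sea_level):
    """Closed-form pipeline of Eqs. 1-3: Mg/Ca -> T, then T and d18Op ->
    total d18Ow, then remove the ice-volume (sea level) component to
    obtain d18Ow-local."""
    T = np.log(mgca / 0.38) / 0.09             # invert Eq. 1
    d18Ow = d18Op + 0.27 - (16.5 - T) / 4.8    # rearrange Eq. 2
    d18Ow_local = d18Ow + 0.008 * sea_level    # invert Eq. 3
    return T, d18Ow_local
```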
$$ \mathrm{Mg/Ca} = \exp(0.084 \times T + 0.051 \times S - 2.54) \qquad (4) $$

$$ \delta^{18}\mathrm{O_{w\text{-}local}} = 0.23 \times S - 7.74 \qquad (5) $$
Here, S represents salinity.
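In this case, Eqs. 2–5 must be solved simultaneously for T and S, because Mg/Ca now depends on both; this is what PSU Solver does, with Monte Carlo error propagation on top. A minimal sketch of the deterministic core of that inversion follows (the helper name and initial guess are our own; this is not the PSU Solver code itself).

```python
import numpy as np
from scipy.optimize import fsolve

def solve_t_s(mgca, d18Op, sea_level):
    """Jointly solve Eqs. 2-5 for SST (degC) and SSS, given measured
    Mg/Ca (mmol/mol), d18Op (permil) and sea level (m, negative in
    the past)."""
    def residuals(x):
        T, S = x
        d18Ow_local = 0.23 * S - 7.74               # Eq. 5
        d18Ow = d18Ow_local - 0.008 * sea_level     # Eq. 3
        return [mgca - np.exp(0.084 * T + 0.051 * S - 2.54),  # Eq. 4
                T - (16.5 - 4.8 * (d18Op - d18Ow + 0.27))]    # Eq. 2
    T, S = fsolve(residuals, x0=[22.0, 33.0])       # initial guess
    return T, S

# e.g., solve_t_s(2.7, -1.1, -60.0) returns roughly (22, 33)
```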
Third, for the "endmember correction" calculation, we replaced Eq. 5 with Eq. 6 (LeGrande and Schmidt 2011). This model (Eq. 6) allows the slope (a) and intercept (b) to vary through time so that the effect of temporal changes in the endmembers is captured. The following two steps were conducted to obtain "a" and "b" in Eq. 6: step 1—estimate the freshwater δ18O to obtain the intercept "b"; step 2—estimate the Kuroshio water δ18O and salinity to obtain the slope "a." Finally, errors are derived from the Monte Carlo simulation, as explained later in this section. Before the calculation, each data set was linearly interpolated (100-year step).
$$ \delta^{18}\mathrm{O_w} = a \times S + b \qquad (6) $$
In step 1, we regard the freshwater δ18O of the Changjiang River as the intercept "b" (Kubota et al. 2015). Following Kubota et al. (2015), we utilized a composite record of Cheng et al.'s (2016) δ18Osp to estimate the freshwater δ18Ow, on the assumption that the temporal variations in δ18Osp reflect the drip water δ18O in the caves and hence the river water δ18O. We used the total δ18Ow, which includes local and global components, in Eq. 6 instead of δ18Ow-local, to avoid applying a global correction to δ18Osp. Here, we define the variability in the intercept "b" to be equal to that in δ18Osp (Eq. 7); both sides of Eq. 7 represent the deviation of a past value from its modern counterpart. We employed −7.74‰, which is derived from Eq. 5 (Horikawa et al. 2015), as the modern intercept (left side of Eq. 7). For the modern δ18Osp value (right side of Eq. 7), an average over the last 1 ka of the Chinese speleothem record (Cheng et al. 2016) was employed. The time series of the endmember δ18Ow is shown in Additional file 2: Figure S2. The adequacy of these parameters will be discussed in the "Results and Discussion" section.
$$ b - (-7.74) = \delta^{18}\mathrm{O_{sp}} - (-9.0) \qquad (7) $$
In step 2, we obtained another set of δ18Ow and S as input data for Eq. 6, to determine the slope "a," using high-time-resolution paired Mg/Ca and δ18Op data of G. ruber from core MD06-3067 (6° 31′ N, 126° 30′ E, 1575 m water depth) in the western tropical Pacific (Additional file 2: Figure S2; Bolliet et al. 2011). We regard this site as a reference location for the Kuroshio water endmember, since core MD06-3067 is affected by the Mindanao Current, which shares its source water, the North Equatorial Current, with the Kuroshio. Strong upwelling affects intermediate and subsurface waters in this region, but the upwelled water does not reach the upper 75 m of the surface layer (Udarbe-Walker and Villanoy 2001), where G. ruber dwells. Therefore, it is inferred that the surface water in this region still retains the properties of the source water. In fact, the SSS near site MD06-3067 in the western tropical Pacific is 34.0–34.3 from July to September (World Ocean Atlas 2009; Antonov et al. 2010), which is similar to the SSS in the Kuroshio region of the ECS. Although low-resolution Mg/Ca and δ18O data exist north of the bifurcation latitude (~ 14° N) of the Kuroshio, at Benham Rise (MD06-3047B; Jia et al. 2018), site MD06-3067 is preferable in terms of time resolution, as a main purpose of the "endmember correction" is to evaluate the influence of millennial-scale variability in the source region. The δ18Ow and SSS of core MD06-3067 were derived from PSU Solver with Eqs. 2–4 and the δ18Ow-salinity equation for Palau (Conroy et al. 2017).
In the final step, Eqs. 2–4, 6, and 7 were solved with the input data derived from steps 1 and 2. We performed a Monte Carlo simulation to constrain the errors associated with the age uncertainty among the records, as well as the temperature, salinity, and δ18Ow reconstruction errors propagated from the analytical errors of the Mg/Ca and δ18Op measurements. In the Monte Carlo simulation, temperature, salinity, and δ18Ow at a given time "t" were calculated with randomly selected input data (Mg/Ca and δ18Op of KR07-12 PC-01, δ18Osp, and δ18Ow and salinity of MD06-3067) in the range t − 0.5 ka < t < t + 0.5 ka. The analytical error was prescribed to the Mg/Ca and δ18Op data, incorporated as a normal distribution (Thirumalai et al. 2016). We set 0.16 mmol/mol and 0.05‰ for the Mg/Ca and δ18O uncertainties, respectively. The median value and standard deviation (1σ) were obtained at each time step after 500 repetitions of the simulation (Fig. 3). All of the calculations were performed in MATLAB.
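A rough Python transcription of this Monte Carlo recipe is sketched below, reusing the solve_t_s helper from the sketch above (the actual computation used the MATLAB PSU Solver; the array names, the fixed random seed, and the assumption that ages are sorted in increasing order are our own).

```python
import numpy as np

def monte_carlo(ages, mgca, d18Op, sea_level, n_rep=500,
                sig_mgca=0.16, sig_d18O=0.05, window=0.5):
    """Median and 1-sigma T/S envelopes: at each 100-yr step, resample
    inputs within +/-0.5 ka and perturb them with normally distributed
    analytical errors. `ages` (ka) must be increasing."""
    rng = np.random.default_rng(42)
    t_grid = np.arange(ages.min(), ages.max(), 0.1)   # 100-yr steps
    med, sig = [], []
    for t in t_grid:
        idx = np.flatnonzero(np.abs(ages - t) < window)
        if idx.size == 0:                             # no data in window
            med.append((np.nan, np.nan)); sig.append((np.nan, np.nan))
            continue
        draws = np.array([
            solve_t_s(mgca[i] + rng.normal(0, sig_mgca),
                      d18Op[i] + rng.normal(0, sig_d18O),
                      np.interp(t, ages, sea_level))
            for i in rng.choice(idx, n_rep)])
        med.append(np.median(draws, axis=0))
        sig.append(np.std(draws, axis=0))
    return t_grid, np.array(med), np.array(sig)
```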
Age model
Conventional 14C ages were calibrated to calendar ages using CALIB 7.1 (Stuiver et al. 2016) with Marine13 (Reimer et al. 2013). For the local reservoir correction (ΔR), −93 ± 69 years, a weighted mean of the 10 locations nearest to the core site (available at http://calib.org/marine/), was applied. Constant linear sedimentation rates were assumed between two adjacent 14C-based age-controlling points (listed in Table 2) when constructing the age model (Fig. 2). In addition to the 14C-based datums, published calendar ages of the K-Ah (7.165–7.303 ka) and AT (30.009 ± 0.189 ka) tephras, dated on Lake Suigetsu sediments in the SG06 core (Smith et al. 2013), were used as age datums. In this age model, deposition between the top and bottom of the two thick ash layers (K-Ah 98.0–118.0 cm, AT 947.5–990.3 cm) was assumed to be instantaneous. A published 14C age of Sz-S is 11.295 ± 0.30 ka, derived from soil materials below the tephra layer (Okuno et al. 1997), which was calibrated to a calendar age of 12.631–13.768 ka (2σ) in this study using CALIB 7.1 (Stuiver et al. 2016) with IntCal13 (Reimer et al. 2013). We did not apply the recalibrated age of Sz-S in our model, since its uncertainty is large. The Sz-S age under our age model, 12.4 ka, is younger than the 2σ probability range of the soil samples, probably because the soil materials were sampled below the tephra layer (Okuno et al. 1997).
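The age-depth construction thus amounts to piecewise-linear interpolation between the datums, with the top and base of each tephra layer pinned to a common age. A minimal sketch follows; only the tephra depths and approximate tephra ages are taken from the text, while the first and last control points are illustrative stand-ins for the 14C datums of Table 2.

```python
import numpy as np

# Depth (cm) vs. calendar age (ka) control points. Each tephra top/base
# pair shares one age, so deposition of the ash layer is instantaneous.
depth_ctrl = np.array([   0.0,  98.0, 118.0, 947.5, 990.3, 1400.0])
age_ctrl   = np.array([   0.0,  7.23,  7.23, 30.01, 30.01,   44.0])

def age_model(depth_cm):
    """Constant linear sedimentation rate between adjacent datums."""
    return np.interp(depth_cm, depth_ctrl, age_ctrl)

print(age_model(500.0))   # interpolated age (ka) at 500 cm depth
```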
The age model was further fine-tuned to a composite record of the Chinese δ18Osp (Additional file 3: Figure S3 and Additional file 4: Table S1), assuming an in-phase relationship between the climate in the northern ECS and hydroclimate variability over China, as in Clemens et al. (2018). This assumption is justified in the following discussion by correlating the AT tephra layer in our core, and its age, with other records. The AT tephra is most precisely dated using the Suigetsu Lake sediment, whose microfossil 14C ages are generally consistent with the IntCal13 curve for the 31–29 ka interval (Reimer et al. 2013). Although the speleothem 14C data from Hulu Cave, China, were available back to 27 ka in 2013, a recently published 14C data set from the same cave confirmed the consistency with IntCal13 between 10 and 33 ka within an uncertainty range of less than ± 0.5 ka (Cheng et al. 2018 and included figures). Correlating the AT tephra with the speleothem record is therefore possible through IntCal13 without consideration of the marine reservoir, since the ages of our core around 30 ka rely on the age determined by the Suigetsu Lake sediment. We expect that the age uncertainty of this correlation is probably less than 0.5 ka. Compared with the time series of δ18Osp, a positive peak in δ18Op in KR07-12 PC-01 at 30 ka is very likely correlated with a positive peak in δ18Osp. This agreement allows us to conclude that the assumption is plausible and further to infer that in-phase correlation is highly likely for other climatic events during MIS 3. An out-of-phase relationship is especially unlikely, as the time offset between negative and positive peaks would be beyond the age uncertainty.
The age-depth curve of the fine-tuned age model was compared with the age model based only on 14C datums in Additional file 3: Figure S3. The fine-tuned age model did not contradict the 14C-based ages, as the ages obtained with the fine-tuned model at the horizons of the 14C datums were within the 2σ uncertainties (Fig. 2 and Additional file 3: Figure S3). The average linear sedimentation rates were approximately 40 cm/ka during MIS 3 and 15 cm/ka during the Holocene. The highest rate (~ 50 cm/ka) was found from the LGM to the deglaciation. Our sampling interval is fine enough to satisfy the Nyquist criterion: the sampling frequency (at least 1/500 year−1 here) is twice the highest frequency of interest, as required to resolve peaks accurately in a time series. The time resolution of the 14C-based datums is 3–4 ka, as listed in Table 2. We assume that the age uncertainty of the sampling horizons between the datums is similar to that at the datums, given the linear age-depth relationship in our age model (Fig. 2).
Salinity effect on Mg/Ca calibration
This section discusses the results derived from "no salinity correction" and "salinity correction." The temporal patterns of the variations in SST and δ18Ow-local under the two calibrations agree well. However, the result with salinity correction for KR07-12 PC-01 shows a smaller amplitude than that without salinity correction for both SST and δ18Ow-local (Fig. 3), indicating that the salinity effect on Mg/Ca amplifies the apparent variability of SST. This is evident in Fig. 3, which shows that the positive correlation between SST and δ18Ow-local during MIS 3 is stronger without salinity correction than with it. Furthermore, without salinity correction this relationship depends on the sea level, and the positive correlation becomes stronger as the sea level lowers (Fig. 3d, e). Given that the mouth of the Changjiang River approaches the site as the sea level lowers, the sea-level dependence of the SST-δ18Ow-local relationship reflects a change in the salinity effect at the studied site in association with sea-level lowering. Alternatively, one could argue that a low sea-level stand prevents water inflow from the ECS to the Japan Sea, resulting in reduced freshwater transport into the Japan Sea and increased salinity variability in the northern ECS. Therefore, we conclude that the SST and δ18Ow-local reconstructions with salinity correction are a better estimate than those without.
Calibrating the local salinity variations
We interpret the δ18Ow-local as a proxy for SSS and will refer to it as such in the discussion. In the modern ECS, a linear salinity-δ18Ow relationship is confirmed (e.g., Horikawa et al. 2015), reflecting mixing between the freshwater from the Changjiang River and seawater (Kuroshio Water) (e.g., Horikawa et al. 2015; Kubota et al. 2015). Namely, the salinity and δ18Ow at the studied site can be interpreted in the context of a simplified two-endmember mixing model of Changjiang River freshwater and Kuroshio Water (Kubota et al. 2015). By contrast, the following factors are involved in controlling the ECS δ18Ow in the past: (1) changes in δ18Ow and salinity in the source regions and (2) changes in the mixing ratio between Changjiang River freshwater and Kuroshio Water (≈ salinity at the studied site). The effect of global ice volume is also involved in the changes in endmember δ18Ow and salinity. As the "endmember correction" calibration described in the "Methods/Experimental" section incorporates the changes in endmember δ18Ow and salinity in the source regions through time, the result derived from this calculation gives a better estimate, especially for salinity. As it turns out, neither SST nor δ18Ow with "endmember correction" differs much from those with "salinity correction" (Fig. 3), indicating that the "salinity correction" is a reasonably close approximation of those variabilities.
We utilized the δ18Osp data as the input parameter to determine the freshwater δ18O (Eq. 7) in the "endmember correction" method. In this method, we assume that the variability of the freshwater δ18O is the same as that of the δ18Osp. In general, δ18Osp can be regarded as a function of drip water δ18O and cave temperature, provided that the speleothem calcite precipitated at equilibrium (Hendy 1971). When the drip water δ18O increases and/or the cave temperature decreases, the δ18Osp increases. The absence of any identified kinetic effect on the published δ18Osp from the Chinese caves enables us to interpret the δ18Osp as reflecting drip water δ18O and cave temperature (Wang et al. 2008). The assumption we apply in Eq. 7 does not incorporate the temperature component in estimating the endmember freshwater δ18O. However, the cave temperature variability, if deduced from the SST in the ECS, would be correlated with the temporal variation in δ18Osp (Additional file 3: Figure S3). Thus, incorporating the temperature component would suppress the variability of the endmember freshwater δ18O, and the calculation without the temperature component yields a maximum range for the variability of the freshwater δ18Ow. Moreover, high variability in the freshwater δ18O is expected to decrease the reconstructed salinity variability, or freshwater contribution, in the northern ECS (Eqs. 6 and 7). Nevertheless, the SSS reconstruction with "endmember correction" still shows high variability in KR07-12 PC-01 (ca. ±1) during MIS 3 (Fig. 3), indicating that the changes in endmember δ18Ow have little impact in the northern ECS. Even if the freshwater δ18O is highly variable, with an amplitude of ±1‰, its effect is diminished at the studied site owing to the smaller contribution of freshwater to the northern ECS compared with seawater. Furthermore, neither the variability in δ18Ow nor that in salinity in the western Pacific warm pool has a significant impact on the SSS reconstruction in the ECS.
Changes in SST, local seawater δ18O, and SSS
The output data from "endmember correction" will be used for discussion hereafter. As planktic foraminifer G. ruber s.s. is abundant in warm seasons in the northern ECS (Yamasaki et al. 2010), we interpret that our results reflect the signals of the warm months (Kubota et al. 2015). The δ18O, SST, and δ18Ow-local of KR07-12 PC-01 replicate those of U1429 (Additional file 3: Figure S3), but have higher time resolution. The SST of core KR07-12 PC-01 ranges from 20.2 °C to 24.4 °C, with an average of 22.3 ± 0.7 °C (1σ) during MIS 3 (Figs. 3 and 4), characterized by 1–3 °C amplitude variations at the millennial scale. The SST increases at D-O interstadials 10 and 9 in Greenland and decreases in association with Heinrich event 4 (H4). Four high SST events from 38 to 32 ka can be correlated to D-O interstadials 8–5. Heinrich event 3 (H3) is recognizable, but the magnitude of the decrease in SST is similar to other SST minima. While D-O interstadial 3 is evident in the SST records, D-O interstadial 4 is less distinct. Heinrich event 2 (H2) is characterized by a pronounced decrease in SST by ~ 3 °C. The average SST in LGM is 22.3 °C, which is the same as that in MIS 3. Thus, the LGM is not the coldest period in the last 44 ka. This is a regional phenomenon as a similar trend is found in the middle Okinawa Trough (Chen et al. 2010) and western Pacific warm pool (Stott et al. 2002) based on Mg/Ca-based SST from G. ruber. However, alkenone-based SST, which is interpreted as reflecting an annual mean SST, shows the lowest values during LGM in the northern ECS (Ijiri et al. 2005; Clemens et al. 2018). The proxy-dependent results suggest seasonality in the SST evolution since the last glacial period.
Fig. 4 Comparison of the ECS records with other regions. a δ18O from NGRIP on the GICC05 age model (Svensson et al. 2008). b SST (black) and c SSS (black) of KR07-12 PC-01 with 1σ uncertainties, superimposed on δ18Osp (orange) from China (Cheng et al. 2016). d L* (purple; Bassinot and Baltzer 2002) and ESR (green; Nagashima et al. 2011) of MD01-2407 in the Japan Sea, recalibrated to IntCal13. e Gulang mean grain size, Loess Plateau, China (Sun et al. 2012). Heinrich events 2 (H2), 3 (H3), and 4 (H4) are highlighted in blue. D-O cycle numbers are shown on the NGRIP δ18O. The high SSS events in the ECS are highlighted in yellow. The 14C-based datums (circles) and tie points (crosses) are shown in b for KR07-12 PC-01 and in d for MD01-2407. Red triangles (AT) mark the AT tephra
The SSS in the northern ECS is highly variable, ranging from 31.5 to 34.0 during MIS 3. Millennial-scale variation is recognized in the SSS, with major negative peaks associated with D-O interstadials 10–5 and 3 (Fig. 4). A pronounced positive shift is found at H4, while H3 is less distinct. SST and SSS tend to covary on the millennial scale (H4, D-O 8, D-O 5, and H3). In contrast to the high SSS events associated with H4 and H3, H2 is characterized by a negative shift, which suggests a different underlying mechanism for the EASM precipitation.
Regional and global factors affecting the Japan Sea records
Temporal variation in the lightness (L*) of the hemipelagic sediments in the Japan Sea shows millennial-scale changes similar to the δ18O of the Greenland ice cores (Tada et al. 1995; Wang and Oba 1998; Tada et al. 1999). The L* of the sediment principally reflects organic carbon content (Corg), which is controlled by primary production and burial rate, except during the lowest sea-level stands, such as the LGM, when L* is affected by pyrite content (Tada et al. 1999). Tada et al. (1999) suggested that changes in freshwater discharge from the Changjiang and Yellow Rivers, and consequent changes in the spatial extent of the nutrient-rich, low-salinity ECS coastal water, were the main cause of changes in the nutrient influx into the Japan Sea, and in primary production there, during MIS 3. The increased influence of the ECS coastal water not only enhanced primary productivity but also reduced deep-water ventilation, leading to the development of anoxic bottom waters and the deposition of the dark layers (Tada et al. 1999; Irino et al. 2018). This hypothesis can be tested by comparing the temporal variations in the SSS in the northern ECS with the L* of the hemipelagic sediments in the Japan Sea (Watanabe et al. 2007). While the nutrient and salinity flux into the Japan Sea is regulated by the EASM, the strength of the East Asian winter monsoon (EAWM) is a crucial factor in forming a highly oxygenated deep-water mass called the Japan Sea Proper Water (JSPW) (Suda 1932; Nitani 1972; Ikehara and Fujine 2012; Gamo et al. 2014). The cold winds of the winter monsoon cool the surface water along the far eastern coast of Russia and promote the formation of sea ice, which increases the sea surface density to form the JSPW (Suda 1932; Nitani 1972). In addition to the factors described above, sea-level change controls the influx of the ECS waters into the Japan Sea, as the narrow and shallow Tsushima Strait prevents the inflow of the ECS waters at low sea-level stands. As the Japan Sea is connected to other ocean basins only through shallow (< 130 m in sill depth) and narrow (< 90 km in width) straits in addition to the Tsushima Strait, sea-level change and the resulting restriction of seawater inflow from other basins are crucial in controlling the SSS in the Japan Sea. Thus, the following three climatic/oceanic factors that potentially control the sediment color of the Japan Sea are discussed in this section: (1) the SSS in the northern ECS, (2) the EAWM strength, and (3) global sea-level change. Factors 1 and 3 regulate the nutrient influx into the Japan Sea, while all three factors are involved in deep-water formation and the decomposition of organic material at the sediment-water interface on the sea floor.
In this study, we used a high time resolution (1 cm interval) L* record of core MD01-2407 in the Japan Sea (Bassinot and Baltzer 2002), recalibrated to calendar ages using Marine13 (Additional file 5, Additional file 6: Figure S4, and Additional file 7: Table S2), for comparison among the records (Fig. 4). The temporal resolution of the 1 cm interval data is approximately 120 years. In the intervals 44–34 ka (D-O 11–7) and 30–28 ka (H3–D-O 4), the Japan Sea L* profile varies in phase with the northern ECS SSS. The dark (light) layers are correlated with low (high) SSS in the northern ECS. Since the AT tephra is found in both cores, the timing of each environmental change in the two seas can be constrained (Fig. 2). In the Japan Sea, the AT tephra is found at the base of the light layer (H3), which is correlated with one of the high SSS events in the ECS. Based on the comparison above, the changes in the SSS in the northern ECS, and the consequent changes in nutrient and salinity input into the Japan Sea, were the dominant factor altering the sediment color in the Japan Sea in these intervals. The variations in SSS in the northern ECS, and the associated surface water density in the Japan Sea, amplified the variability of the sediment color through the process of deep-water production (Tada et al. 1999). The variations in the EAWM probably also enhanced the variability of the sediment color during this period. In fact, a high-resolution quartz grain size record of loess from the Gulang section in the Chinese Loess Plateau, which is an indicator of the EAWM, shows a good correlation with the L* during the interval 44–34 ka. By contrast, the millennial-scale linkage between the northern ECS and the Japan Sea became less clear after 34 ka, which is probably related to sea-level lowering. Lambeck et al. (2014) reported a rapid sea-level fall (by ~ 40 m in less than 2000 years) at ~ 30 ka, following a gradual fall starting from ~ 35 ka. The rapid and substantial fall in sea level from ~ −80 to ~ −120 m at 30 ka would have reduced the water exchange between the northern ECS and the Japan Sea from 20% of the present value to almost zero (Additional file 8: Figure S5). Previous studies have revealed that the SSS in the Japan Sea decreased toward the LGM, which was inferred from a shift to low values in planktic foraminiferal δ18O in the Japan Sea after 30 ka (Oba et al. 1991; Kido et al. 2007; Sagawa et al. 2018). The sediment facies of the dark layers also shifted, from bioturbated layers interbedded with fine laminations to finely laminated layers, in association with the decrease in the SSS in the Japan Sea around 28 ka (Watanabe et al. 2007). This facies change is accompanied by an increase in S/Corg, suggesting euxinic conditions in the bottom water (Watanabe et al. 2007), likely caused by poor ventilation of the bottom water rather than by an increase in primary productivity, as the nutrient input from the ECS was reduced significantly (Watanabe et al. 2007). The comparison of the SSS in the ECS and the L* in this study indicates that the two records are not well correlated in this interval, suggesting that the millennial-scale variations in the ECS SSS had almost no impact on the Japan Sea at the lower sea level. Thus, there seems to be a threshold around a sea level of −80 to −120 m between a mode in which the Japan Sea responds to ECS variations and a mode in which it does not. Meanwhile, interbedded light layers, such as that at H2, can still be found after 28 ka (Watanabe et al. 2007).
These light layers might be caused by a different mechanism from those during MIS 3. We speculate that deposition of the light layers might be linked to millennial-scale sea-level rises that increased the ECS water influx and thereby the surface-water density of the Japan Sea. In this case, deep-water ventilation plays a more significant role in controlling the L*. Although the timing of the rapid sea-level rise is still debated (Yokoyama and Esat 2011), based on studies of an oxygen isotope record from the Red Sea and isotopes in corals from Papua New Guinea, it is now accepted that 15–20 m, and possibly up to 30 m, of sea-level rise is associated with Heinrich events (Broecker 1994; Siddall et al. 2003; Yokoyama and Esat 2011), or with the warm period after the termination of a Heinrich event (Arz et al. 2007). A change in sea level of 15–20 m corresponds to over 10% change in the cross-section of the Tsushima Strait, potentially leading to a substantial change in the water volume exchanged between the two ocean basins.
Summer precipitation variability and westerly jet in East Asia
The most prominent feature in our new records is that the SSS in the ECS varies in association with the D-O cycles, which can provide fundamental information on summer precipitation changes under the EASM system. Although the Chinese δ18Osp is one of the most well-known proxies from the EASM region, its interpretation is still controversial on millennial scales (e.g., Zhang et al. 2018). In the earlier stages of the studies on δ18Osp, it was interpreted as recording the amount of summer precipitation at a given cave location (Wang et al. 2001). Subsequently, other interpretations were proposed: the fraction of water vapor removed from air masses along the moisture trajectory between the tropical Indo-Pacific and the cave sites (Hu et al. 2008), or isotopic fluctuation in the moisture source region (Pausata et al. 2011). By contrast, the most recent study argued that the δ18Osp is better understood as a measure of the large-scale monsoon circulation, not reflecting the local precipitation (Liu et al. 2014). This interpretation is also put forward in a study on trace elements of speleothems, which suggests a wetter condition in H1 and the Younger Dryas during the last deglaciation (Zhang et al. 2018). In their discussion, when the δ18Osp becomes heavier, which is traditionally interpreted as "weak monsoon," the local summer rainfall increases in the Changjiang River catchment basin (Zhang et al. 2018). Our new SSS result indicates that most D-O stadials and the H4 and H3 intervals correspond to less EASM precipitation, while D-O interstadials correspond to more EASM precipitation. Our result contradicts what was claimed in Zhang et al. (2018), suggesting that the millennial-scale summer precipitation response might depend on the global climate setting, such as sea level. In fact, H2 in the ECS, characterized by decreased SSS, is different from other stadials and Heinrich events. A complex response in the summer precipitation was also suggested based on a pollen record from Lake Suigetsu, central Japan, which indicates a wetter condition during H1 compared to the following Bølling-Allerød (Nakagawa et al. 2006).
The SSS variation in the ECS and its relationship to the westerly jet give new insight into the variability of summer precipitation and its relationship to the monsoon circulation itself. Today, the EASM front migrates northward from South China starting in May and reaches the North China-Inner Mongolia regions in August. From the meteorological aspect, the northward migration of the EASM front follows the northward shift of the westerly jet over East Asia on a seasonal scale (Sampe and Xie 2010). Therefore, a proxy of the position of the westerly jet is regarded as an indicator of the EASM circulation. On an interannual time scale, a negative correlation between EASM rainfall in the Changjiang River basin and the summer southerly wind intensity in the northernmost region of the EASM was observed (Jiang et al. 2008): when the EASM circulation was stronger, the Changjiang River basin received less precipitation. A similar relationship was observed on a millennial scale during the Holocene, based on a proxy of the westerly jet and summer precipitation records from inland China and the ECS (Nagashima et al. 2013). According to Nagashima et al. (2011), the position of the westerly jet varied in harmony with the D-O cycles in MIS 3, based on the electron spin resonance (ESR) signal intensity of quartz in the fine silt fraction of the Japan Sea sediments, which is interpreted as a proxy for dust provenance, sourced either from the Taklimakan Desert or the Mongolian Gobi (Fig. 4). The higher (lower) ESR signal intensity during stadials (interstadials) suggests a southern (northern) position of the westerly jet and weaker (stronger) EASM circulation (Nagashima et al. 2011). In the interval 44–34 ka, the increased EASM precipitation in the Changjiang River basin followed the northward shift of the westerly jet position (EASM front), which is opposite to what was observed on a millennial scale during the Holocene (Nagashima et al. 2013) and on interannual to decadal scales today (e.g., Jiang et al. 2008).
Global context of East Asian monsoon variations
A comparison of the Chinese δ18Osp record and loess grain size indicates that the EASM and EAWM were coupled on a millennial time scale during the interval 60–34 ka (Sun et al. 2012). Their variations are well correlated to the North Atlantic climate variability as well (Sun et al. 2012). In the study of Sun et al. (2012), North Atlantic water-hosing experiments using the Community Climate System Model Version 3, designed to mimic Heinrich or stadial events, suggest a dynamical response of the EASM and EAWM systems to the suppressed Atlantic Meridional Overturning Circulation (AMOC) and cooling in the North Atlantic. The freshwater forcing increases the latitudinal temperature gradient and induces a stronger EAWM during winter (Sun et al. 2012). During summer, cooling in the North Atlantic results in a southward shift of the Intertropical Convergence Zone, followed by a stronger Walker circulation and a weaker EASM (Zhang and Delworth 2005). The moisture convergence along the EASM front and precipitation in the Changjiang River basin are suppressed as the North Pacific subtropical high weakens in response to the freshwater forcing during summer (Sun et al. 2012). However, an opposite summer precipitation response is suggested by a numerical simulation using the Community Earth System Model (Zhang et al. 2018). In this simulation, a stronger meridional temperature gradient, and the resultant southward-shifted and strengthened westerly jet, leads to enhanced convection along the slope of the Tibetan Plateau, increased precipitation in southern China, and decreased precipitation in northern China. Thus, the precipitation response in the East Asian monsoon region depends on the climate model and is not well constrained by numerical simulations, as depicted in Kageyama et al. (2013). Conversely, our records give a constraint on the mechanism of the EASM response to AMOC variability during MIS 3, suggesting that the decreased precipitation during Heinrich events was caused by the reduction of the moisture supply from the surrounding oceans, probably linked to the weakening of the subtropical high in the North Pacific (Sun et al. 2012). We infer that, under this mechanism, the migration pattern of the westerly jet and the associated position of the EASM front do not regulate the spatial summer precipitation distribution in the same manner as today; instead, the moisture budget in the atmosphere plays a dominant role.

However, an alternative explanation is required for the opposing precipitation trend, such as during H2 and H1, when the summer precipitation pattern is opposite to what is observed for MIS 3. A recent study points out that internal climate variability is important in the rainfall variation in East Asia's mid-latitudes on interannual and decadal scales today (Ueda et al. 2015). Internal variability is the natural variability of the climate system that occurs in the absence of external forcing, and includes processes intrinsic to the atmosphere, the ocean, and the coupled ocean-atmosphere system (Deser et al. 2010; Deser et al. 2012). Interannual and decadal scale variabilities arise from the intrinsic variability of the system due to random stochastic processes and result from dynamic and thermodynamic interactions of the coupled ocean-atmosphere system (Deser et al. 2012). Ueda et al. (2015) argue that today's interannual and interdecadal precipitation pattern over mid-latitude East Asia is caused by a combination of atmospheric internal variability and SST in the western Pacific and Indian Ocean.
On a millennial time scale, the EAWM variability after 30 ka, decoupled from the North Atlantic climate (Sun et al. 2012), appears to lack forcing from outside the East Asian climate system, perhaps resulting from the increased intrinsic variability of the EAWM. One could argue that the increased summer precipitation in southern China during H2 and H1 reflects increased convection along the slope of the Tibetan Plateau, induced by the longer placement of the EASM front in the southern position (Zhang et al. 2018). Alternatively, however, the summer precipitation response during H2 and H1 may be explained in the context of random stochastic processes in the EASM system as well as the EAWM. In the latter case, the overall monsoon system in East Asia would vary randomly, rather than responding to the North Atlantic forcing during MIS 2. Either way, the opposite millennial-scale EASM precipitation pattern during MIS 2 leads to the conclusion that the regional precipitation is not a dominant factor controlling the millennial-scale Chinese δ18Osp in this period.
We reconstructed SST, δ18Ow-local, and SSS using paired Mg/Ca and δ18Op in the northern ECS. To better estimate the SSS variability, we tested three calibration methods to reconstruct these records. The SST record derived from the Mg/Ca calibration without salinity correction shows higher variability than the one with salinity correction, and its variability depends on the sea level. The SSS reconstruction is not significantly affected by the endmember δ18Ow and salinity variability in the source regions, indicating that the modern local δ18Ow-salinity calibration is applicable to the MIS 3 case. The most prominent feature of our SSS record is the millennial-scale variation in association with the D-O cycles in Greenland and the Chinese δ18Osp, where high SSS events coincide with D-O stadials and Heinrich events, while low SSS events coincide with D-O interstadials. The comparison of the SSS in the northern ECS, the proxy of the EAWM, and the sediment color in the Japan Sea reveals that the changes in nutrient and salinity flux into the Japan Sea induced by the Changjiang River discharge, in addition to the strength of the EAWM, are likely the primary factors determining the surface productivity changes manifesting as color changes in the sediments of the Japan Sea when the sea level was higher than ca. − 80 m. The glacioeustatic sea-level changes reduced the influx of the ECS coastal water entering the Japan Sea, especially after ~ 30 ka, when the linkage between the SSS in the northern ECS and the sediment color of the Japan Sea became less clear.
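As a compact reference for the reconstruction chain summarized above, the paired Mg/Ca–δ18Op approach can be written schematically as below. The exponential coefficients shown are the commonly cited G. ruber core-top values of the type used in the cited calibrations (Dekens et al. 2002), the paleotemperature equation is that of Bemis et al. (1998), and α and β stand for the slope and intercept of the local δ18Ow–salinity relation; all values are illustrative placeholders rather than those used in this study.

```latex
\mathrm{Mg/Ca} = b\,e^{a\,\mathrm{SST}}
  \;\Rightarrow\; \mathrm{SST} = \frac{1}{a}\,\ln\!\frac{\mathrm{Mg/Ca}}{b},
  \qquad a \simeq 0.09,\; b \simeq 0.38
\mathrm{SST} = 16.5 - 4.80\left[\delta^{18}\mathrm{O_p}
  - \left(\delta^{18}\mathrm{O_w} - 0.27\right)\right]
  \;\Rightarrow\; \delta^{18}\mathrm{O_w}
\delta^{18}\mathrm{O_{w\text{-}local}} =
  \delta^{18}\mathrm{O_w} - \Delta\delta^{18}\mathrm{O_{ice}},
  \qquad \mathrm{SSS} =
  \frac{\delta^{18}\mathrm{O_{w\text{-}local}} - \beta}{\alpha}
```

Here Δδ18O_ice denotes the global ice-volume component removed, e.g., via a sea-level curve, before applying the local δ18Ow–salinity relation.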
The SSS variation in the ECS and its relationship to the westerly jet indicate that the decreased EASM precipitation in the Changjiang River basin followed the southward shift of the westerly jet position, which is opposite to what was observed on a millennial scale during the Holocene and on interannual time scales today. We infer that the moisture convergence along the EASM front and the overall moisture budget in the atmosphere, which would be altered in association with the D-O cycles and Heinrich events, play a dominant role in controlling the EASM precipitation in southern China during MIS 3. The precipitation response during MIS 2 is opposite to that of MIS 3, suggesting that an alternative mechanism operates under a different global climate background.
EASM:
East Asian summer monsoon
EAWM:
East Asian winter monsoon
ECS:
East China Sea
JSPW:
Japan Sea Proper Water
LGM:
Last Glacial Maximum
MIS:
Marine Isotope Stage
δ18O:
Oxygen isotope
δ18Op :
Oxygen isotope of planktic foraminifera
δ18Osp :
Oxygen isotope of speleothems
δ18Ow :
Oxygen isotope of water
δ18Ow-local :
Oxygen isotope of local seawater
Andres M, Park J-H, Wimbush M, Zhu X-H, Chang K-I, Ichikawa H (2008) Study of the Kuroshio/Ryukyu current system based on satellite-altimeter and in situ measurements. J Oceanogr 64:937–950. https://doi.org/10.1007/s10872-008-0077-2
Antonov JI, Seidov D, Boyer TP, Locarnini RA, Mishonov AV, Garcia HE, Baranova OK, Zweng MM, Johnson DR (2010) World ocean atlas 2009, volume 2: salinity. U.S. Government Printing Office, Washington, D.C.
Arz HW, Lamy F, Ganopolski A, Nowaczyk N, Pätzold J (2007) Dominant Northern Hemisphere climate control over millennial-scale glacial sea-level variability. Quat Sci Rev 26:312–321. https://doi.org/10.1016/j.quascirev.2006.07.016
Bassinot F, Blatzer A (2002) WEPAMA Cruise MD 122 - IMAGES VII: Leg 1, Port Hedland (Australia), 01-05-2001 to Keelung (Taiwan), 26-05-2001; Leg 2, Keelung (Taiwan), 27-05-2001 to Kochi (Japan), 18-06-2001. Institut Polaire Français Paul-Emile Victor
Bemis BE, Spero HJ, Bijma J, Lea DW (1998) Reevaluation of the oxygen isotopic composition of planktonic foraminifera: experimental results and revised paleotemperature equations. Paleoceanography 13:150–160
Bolliet T, Holbourn A, Kuhnt W, Laj C, Kissel C, Beaufort L, Kienast M, Andersen N, Garbe-Schönberg D (2011) Mindanao dome variability over the last 160 kyr: episodic glacial cooling of the West Pacific warm pool. Paleoceanography 26:1050–1018. https://doi.org/10.1029/2010PA001966
Boyle EA, Keigwin LD (1985) Comparison of Atlantic and Pacific paleochemical records for the last 215,000 years: changes in deep ocean circulation and chemical inventories. Earth Planet Sci Lett 76:135–150. https://doi.org/10.1016/0012-821X(85)90154-2
Broecker WS (1994) Massive iceberg discharges as triggers for global climate change. Nature 372:421–424. https://doi.org/10.1038/372421a0
Chen C, Beardsley RC, Limeburner R, Kim K (1994) Comparison of winter and summer hydrographic observations in the Yellow and East China Seas and adjacent Kuroshio during 1986. Cont Shelf Res 14:909–928
Chen MT, Lin XP, Chang YP, Chen YC, Lo L, Shen CC, Yokoyama Y, Oppo DW, Thompson WG, Zhang R (2010) Dynamic millennial-scale climate changes in the northwestern Pacific over the past 40,000 years. Geophys Res Lett 37:L23603. https://doi.org/10.1029/2010GL045202
Cheng H, Edwards RL, Sinha A, Spötl C, Yi L, Chen S, Kelly M, Kathayat G, Wang X, Li X, Kong X, Wang Y, Ning Y, Zhang H (2016) The Asian monsoon over the past 640,000 years and ice age terminations. Nature 534:640–646. https://doi.org/10.1038/nature18591
Cheng H, Edwards RL, Southon J, Matsumoto K, Feinberg JM, Sinha A, Zhou W, Li H, Li X, Xu Y, Chen S, Tan M, Wang Q, Wang Y, Ning Y (2018) Atmospheric 14C/12C changes during the last glacial period from Hulu Cave. Science 362:1293–1297. https://doi.org/10.1126/science.aau0747
Clemens SC, Holbourn A, Kubota Y, Lee KE, Liu Z, Chen G, Nelson A, Fox-Kemper B (2018) Precession-band variance missing from East Asian monsoon runoff. Nat Commun 9:3364. https://doi.org/10.1038/s41467-018-05814-0
Clemens SC, Prell WL, Sun Y (2010) Orbital-scale timing and mechanisms driving Late Pleistocene Indo-Asian summer monsoons: reinterpreting cave speleothem δ18O. Paleoceanography 25:PA4207. https://doi.org/10.1029/2010pa001926
Conroy JL, Thompson DM, Cobb KM, Noone D, Rea S, LeGrande AN (2017) Spatiotemporal variability in the δ18O-salinity relationship of seawater across the tropical Pacific Ocean. Paleoceanography 32:484–497. https://doi.org/10.1002/2016PA003073
Danhara T, Yamashita T, Iwano H, Kasuya M (1992) An improved system for measuring refractive index using the thermal immersion method. Quat Int 13-14:89–91
Dansgaard W, Johnsen SJ, Clausen HB, Dahl-Jensen D, Gundestrup NS, Hammer CU, Hvidberg CS, Steffensen JP, Sveinbjörnsdottir AE, Jouzel J, Bond G (1993) Evidence for general instability of past climate from a 250-kyr ice-core record. Nature 364:218–220
Dayem KE, Molnar P, Battisti DS, Roe GH (2010) Lessons learned from oxygen isotopes in modern precipitation applied to interpretation of speleothem records of paleoclimate from eastern Asia. Earth Planet Sci Lett 295:219–230. https://doi.org/10.1016/j.epsl.2010.04.003
Dekens PS, Lea DW, Pak DK, Spero HJ (2002) Core top calibration of Mg/Ca in tropical foraminifera: refining paleotemperature estimation. Geochem Geophys Geosyst 3:1–29. https://doi.org/10.1029/2001GC000200
Deser C, Knutti R, Solomon S, Phillips AS (2012) Communication of the role of natural variability in future North American climate. Nat Clim Chang 2:775–779. https://doi.org/10.1038/nclimate1562
Deser C, Phillips A, Bourdette V, Teng H (2010) Uncertainty in climate change projections: the role of internal variability. Clim Dyn 38:527–546. https://doi.org/10.1007/s00382-010-0977-x
Gamo T, Nakayama N, Takahata N, Sano Y, Zhang J, Yamazaki E, Taniyasu S, Yamashita N (2014) The Sea of Japan and its unique chemistry revealed by time-series observations over the last 30 years. Monogr Environ Earth Planets 2:1–22
Hathorne EC, Gagnon A, Felis T, Adkins J, Asami R, Boer W, Caillon N, Case D, Cobb KM, Douville E, deMenocal P, Eisenhauer A, Garbe-Schönberg D, Geibert W, Goldstein S, Hughen K, Inoue M, Kawahata H, Kölling M, Cornec FL, Linsley BK, McGregor HV, Montagna P, Nurhati IS, Quinn TM, Raddatz J, Rebaubier H, Robinson L, Sadekov A, Sherrell R, Sinclair D, Tudhope AW, Wei G, Wong H, Wu HC, You C-F (2013) Interlaboratory study for coral Sr/Ca and other element/Ca ratio measurements. Geochem Geophys Geosyst 14:3730–3750. https://doi.org/10.1002/ggge.20230
Hendy CH (1971) The isotopic geochemistry of speleothems—I. The calculation of the effects of different modes of formation on the isotopic composition of speleothems and their applicability as palaeoclimatic indicators. Geochim Cosmochim Acta 35:801–824. https://doi.org/10.1016/0016-7037(71)90127-X
Horikawa K, Kodaira T, Zhang J, Murayama M (2015) δ18Osw estimate for Globigerinoides ruber from core-top sediments in the East China Sea. Prog Earth Planet Sci 2:19. https://doi.org/10.1186/s40645-015-0048-3
Hu C, Henderson GM, Huang J, Xie S, Sun Y, Johnson KR (2008) Quantification of Holocene Asian monsoon rainfall from spatially separated cave records. Earth Planet Sci Lett 266:221–232. https://doi.org/10.1016/j.epsl.2007.10.015
Ichikawa H, Beardsley RC (1993) Temporal and spatial variability of volume transport of the Kuroshio in the East China Sea. Deep-Sea Res I Oceanogr Res Pap 40:583–605. https://doi.org/10.1016/0967-0637(93)90147-U
Ichikawa H, Beardsley RC (2002) The current system in the Yellow and East China Seas. J Oceanogr 58:77–92
Ijiri A, Wang L, Oba T, Kawahata H, Huang CY (2005) Paleoenvironmental changes in the northern area of the East China Sea during the past 42,000 years. Palaeogeogr Palaeoclimatol Palaeoecol 219:239–261
Ikehara K, Fujine K (2012) Fluctuations in the Late Quaternary East Asian winter monsoon recorded in sediment records of surface water cooling in the northern Japan Sea. J Quat Sci 27:866–872. https://doi.org/10.1002/jqs.2573
Inoue M, Yoshida K, Minakawa M, Kiyomoto Y, Kofuji H, Nagao S, Hamajima Y, Yamamoto M (2012) Spatial variations of 226Ra, 228Ra, and 228Th activities in seawater from the eastern East China Sea. Geochem J 46:429–441
Irino T, Tada R, Ikehara K, Sagawa T, Karasuda A, Kurokawa S, Seki A, Lu S (2018) Construction of perfectly continuous records of physical properties for dark-light sediment sequences collected from the Japan Sea during Integrated Ocean Drilling Program Expedition 346 and their potential utilities as paleoceanographic studies. Prog Earth Planet Sci 5:23. https://doi.org/10.1186/s40645-018-0176-7
Isobe A, Ando M, Watanabe T, Senjyu T, Sugihara S, Manda A (2002) Freshwater and temperature transports through the Tsushima-Korea Straits. J Geophys Res 107:3065. https://doi.org/10.1029/2000jc000702
Japan Oceanographic Data Center (2004) Statistical Products. https://www.jodc.go.jp/jodcweb. Accessed 1 Apr 2019
Jia Q, Li T, Xiong Z, Steinke S, Jiang F, Chang F, Qin B (2018) Hydrological variability in the western tropical Pacific over the past 700kyr and its linkage to Northern Hemisphere climatic change. Palaeogeogr Palaeoclimatol Palaeoecol 493:44–54. https://doi.org/10.1016/j.palaeo.2017.12.039
Jiang Z, Yang S, He J, Li J, Liang J (2008) Interdecadal variations of East Asian summer monsoon northward propagation and influences on summer precipitation over East China. Meteorog Atmos Phys 100:101–119. https://doi.org/10.1007/s00703-008-0298-3
Kageyama M, Merkel U, Otto-Bliesner B, Prange M, Abe-Ouchi A, Lohmann G, Ohgaito R, Roche DM, Singarayer J, Swingedouw D (2013) Climatic impacts of fresh water hosing under Last Glacial Maximum conditions: a multi-model study. Clim Past 9:935–953. https://doi.org/10.5194/cp-9-935-2013
Kido Y, Minami I, Tada R, Fujine K, Irino T, Ikehara K, Chun JH (2007) Orbital-scale stratigraphy and high-resolution analysis of biogenic components and deep-water oxygenation conditions in the Japan Sea during the last 640 kyr. Palaeogeogr Palaeoclimatol Palaeoecol 247:32–49
Kim RA, Lee KE, Bae SW (2015) Sea surface temperature proxies (alkenones, foraminiferal Mg/Ca, and planktonic foraminiferal assemblage) and their implications in the Okinawa Trough. Prog Earth Planet Sci 2:1–16. https://doi.org/10.1186/s40645-015-0074-1
Kubota Y, Kimoto K, Tada R, Oda H, Yokoyama Y, Matsuzaki H (2010) Variations of East Asian summer monsoon since the last deglaciation based on Mg/Ca and oxygen isotope of planktic foraminifera in the northern East China Sea. Paleoceanography 25:PA4205. https://doi.org/10.1029/2009PA001891
Kubota Y, Tada R, Kimoto K (2015) Changes in East Asian summer monsoon precipitation during the Holocene deduced from a freshwater flux reconstruction of the Changjiang (Yangtze River) based on the oxygen isotope mass balance in the northern East China Sea. Clim Past 11:265–281. https://doi.org/10.5194/cp-11-265-2015
Lambeck K, Rouby H, Purcell A, Sun Y, Sambridge M (2014) Sea level and global ice volumes from the Last Glacial Maximum to the Holocene. Proc Natl Acad Sci 111:15296–15303. https://doi.org/10.1073/pnas.1411762111
Lea DW, Mashiotta TA, Spero HJ (1999) Controls on magnesium and strontium uptake in planktonic foraminifera determined by live culturing. Geochim Cosmochim Acta 63:2369–2379
LeGrande AN, Schmidt GA (2011) Water isotopologues as a quantitative paleosalinity proxy. Paleoceanography 26:PA3225. https://doi.org/10.1029/2010pa002043
Liu Z, Wen X, Brady EC, Otto-Bliesner B, Yu G, Lu H, Cheng H, Wang Y, Zheng W, Ding Y, Edwards RL, Cheng J, Liu W, Yang H (2014) Chinese cave records and the East Asia summer monsoon. Quat Sci Rev 83:115–128. https://doi.org/10.1016/j.quascirev.2013.10.021
Locarnini RA, Mishonov AV, Antonov JI, Boyer TP, Garcia HE (2010) World ocean atlas 2009, volume 1: temperature. U.S. Government Printing Office, Washington, D.C.
Maher BA, Thompson R (2012) Oxygen isotopes from Chinese caves: records not of monsoon rainfall but of circulation regime. J Quat Sci 27:615–624. https://doi.org/10.1002/jqs.2553
Nagashima K, Tada R, Tani A, Sun Y, Isozaki Y, Toyoda S, Hasegawa H (2011) Millennial-scale oscillations of the westerly jet path during the last glacial period. J Asian Earth Sci 40:1214–1220
Nagashima K, Tada R, Toyoda S (2013) Westerly jet-East Asian summer monsoon connection during the Holocene. Geochem Geophys Geosyst 14:5041–5053. https://doi.org/10.1002/2013gc004931
Nakagawa T, Tarasov PE, Kitagawa H, Yasuda Y, Gotanda K (2006) Seasonally specific responses of the East Asian monsoon to deglacial climate changes. Geology 34:521–524
Nitani H (1972) On the deep and bottom waters in the Japan Sea. In: Shoji D (ed) Research in hydrography and oceanography. Hydrographic Department of Japan, pp 151–201
Oba T (1988) Comment for sea level change. The Quaternary Research (Daiyonki-Kenkyu) 26:243-250, (in Japanese with English abstract). https://doi.org/10.4116/jaqua.26.3_243
Oba T, Kato M, Kitazato H, Koizumi I, Omura A, Sakai T, Takayama T (1991) Paleoenvironmental changes in the Japan Sea during the last 85,000 years. Paleoceanography 6:499–518
Okuno M, Nakamura T, Moriwaki H, Kobayashi T (1997) AMS radiocarbon dating of the Sakurajima tephra group, southern Kyushu, Japan. Nucl Instrum Methods Phys Res, Sect B 123:470–474. https://doi.org/10.1016/S0168-583X(96)00614-3
Pausata FSR, Battisti DS, Nisancioglu KH, Bitz CM (2011) Chinese stalagmite δ18O controlled by changes in the Indian monsoon during a simulated Heinrich event. Nat Geosci 4:474–480. https://doi.org/10.1038/ngeo1169
Reimer PJ, Bard E, Bayliss A, Beck JW, Blackwell PG, Bronk Ramsey C, Buck CE, Cheng H, Edwards RL, Friedrich M, Grootes PM, Guilderson TP, Haflidason H, Hajdas I, Hatté C, Heaton TJ, Hoffmann DL, Hogg AG, Hughen KA, Kaiser KF, Kromer B, Manning SW, Niu M, Reimer RW, Richards DA, Scott EM, Southon JR, Staff RA, Turney CSM, van der Plicht J (2013) IntCal13 and Marine13 radiocarbon age calibration curves 0-50,000 years cal BP. Radiocarbon 55:1869–1887
Sagawa T, Nagahashi Y, Satoguchi Y, Holbourn A, Itaki T, Gallagher SJ, Saavedra-Pellitero M, Ikehara K, Irino T, Tada R (2018) Integrated tephrostratigraphy and stable isotope stratigraphy in the Japan Sea and East China Sea using IODP sites U1426, U1427, and U1429, expedition 346 Asian monsoon. Prog Earth Planet Sci 5:18. https://doi.org/10.1186/s40645-018-0168-7
Sampe T, Xie S-P (2010) Large-scale dynamics of the meiyu-baiu rainband: environmental forcing by the westerly jet. J Clim 23:113–134. https://doi.org/10.1175/2009jcli3128.1
Siddall M, Rohling EJ, Almogi-Labin A, Hemleben C, Meischner D, Schmelzer I, Smeed DA (2003) Sea-level fluctuations during the last glacial cycle. Nature 423:853–858. https://doi.org/10.1038/nature01690
Smith VC, Staff RA, Blockley SPE, Bronk Ramsey C, Nakagawa T, Mark DF, Takemura K, Danhara T (2013) Identification and correlation of visible tephras in the Lake Suigetsu SG06 sedimentary archive, Japan: chronostratigraphic markers for synchronising of east Asian/West Pacific palaeoclimatic records across the last 150 ka. Quat Sci Rev 67:121–137
Stott L, Poulsen C, Lund S, Thunell R (2002) Super ENSO and global climate oscillations at millennial time scales. Science 297:222–226
Stuiver M, Reimer PJ, Reimer RW (2016) CALIB radiocarbon calibration version 7.1. http://calib.org/calib/. Accessed 1 Apr 2019
Suda K (1932) On the bottom water in the Japan Sea (preliminary report). Kaiyojiho 4:221–240
Sun Y, Clemens SC, Morrill C, Lin X, Wang X, An Z (2012) Influence of Atlantic meridional overturning circulation on the East Asian winter monsoon. Nat Geosci 5:46–49. https://doi.org/10.1038/ngeo1326
Sun Y, Oppo DW, Xiang R, Liu W, Gao S (2005) Last deglaciation in the Okinawa Trough: subtropical Northwest Pacific link to Northern Hemisphere and tropical climate. Paleoceanography 20:PA4005. https://doi.org/10.1029/2004pa001061
Svensson A, Andersen KK, Bigler M, Clausen HB, Dahl-Jensen D, Davies SM, Johnsen SJ, Muscheler R, Parrenin F, Rasmussen SO, Röthlisberger R, Seierstad I, Steffensen JP, Vinther BM (2008) A 60,000 year Greenland stratigraphic ice core chronology. Clim Past 4:47–57
Tada R, Irino T, Ikehara K, Karasuda A, Sugisaki S, Xuan C, Sagawa T, Itaki T, Kubota Y, Lu S, Seki A, Murray RW, Alvarez-Zarikian C, Anderson WT, Bassetti M-A, Brace BJ, Clemens SC, da Costa Gurgel MH, Dickens GR, Dunlea AG, Gallagher SJ, Giosan L, Henderson ACG, Holbourn AE, Kinsley CW, Lee GS, Lee KE, Lofi J, Lopes CICD, Saavedra-Pellitero M, Peterson LC, Singh RK, Toucanne S, Wan S, Zheng H, Ziegler M (2018) High-resolution and high-precision correlation of dark and light layers in the Quaternary hemipelagic sediments of the Japan Sea recovered during IODP expedition 346. Prog Earth Planet Sci 5:19. https://doi.org/10.1186/s40645-018-0167-8
Tada R, Irino T, Koizumi I (1995) Possible Dansgaard-Oeschger oscillation signal recorded in the Japan Sea sediments, Global fluxes of carbon and its related substances in the coastal sea-ocean-atmosphere system, Proceedings of 1994 IGBP symposium, 1995, pp 517–522
Tanaka A, Yoneda M, Uchida M, Uehiro T, Shibata Y, Morita M (2000) Recent advances in 14C measurement at NIES-TERRA. Nucl Instrum Methods Phys Res, Sect B 172:107–111. https://doi.org/10.1016/S0168-583X(00)00346-3
Thirumalai K, Quinn TM, Marino G (2016) Constraining past seawater δ18O and temperature records developed from foraminiferal geochemistry. Paleoceanography 31:1409–1422. https://doi.org/10.1002/2016PA002970
Tierney JE, Pausata FSR, deMenocal P (2015) Deglacial Indian monsoon failure and North Atlantic stadials linked by Indian Ocean surface cooling. Nat Geosci 9:46–50. https://doi.org/10.1038/ngeo2603
Uchida M, Ohkushi K, Kimoto K, Inagaki F, Ishimura T, Tsunogai U, TuZino T, Shibata Y (2008) Radiocarbon-based carbon source quantification of anomalous isotopic foraminifera in last glacial sediments in the western North Pacific. Geochem Geophys Geosyst 9:Q04N14. https://doi.org/10.1029/2006gc001558
Uchida M, Shibata Y, Yoneda M, Kobayashi T, Morita M (2004) Technical progress in AMS microscale radiocarbon analysis. Nucl Instrum Methods Phys Res, Sect B 223–224:313–317. https://doi.org/10.1016/j.nimb.2004.04.062
Udarbe-Walker MJB, Villanoy CL (2001) Structure of potential upwelling areas in the Philippines. Deep-Sea Res I Oceanogr Res Pap 48:1499–1518. https://doi.org/10.1016/S0967-0637(00)00100-X
Ueda H, Kamae Y, Hayasaki M, Kitoh A, Watanabe S, Miki Y, Kumai A (2015) Combined effects of recent Pacific cooling and Indian Ocean warming on the Asian monsoon. Nat Commun 6:1–8. https://doi.org/10.1038/ncomms9854
Waelbroeck C, Labeyrie L, Michel E, Duplessy JC, McManus JF, Lambeck K, Balbon E, Labracherie M (2002) Sea-level and deep water temperature changes derived from benthic foraminifera isotopic records. Quat Sci Rev 21:295–305
Wang L, Oba T (1998) Tele-connections between East Asian monsoon and the high-latitude climate. Quat Res 37:211–219. https://doi.org/10.4116/jaqua.37.211
Wang Y, Cheng H, Edwards RL, Kong X, Shao X, Chen S, Wu J, Jiang X, Wang X, An Z (2008) Millennial- and orbital-scale changes in the East Asian monsoon over the past 224,000 years. Nature 451:1090–1093. https://doi.org/10.1038/nature06692
Wang YJ, Cheng H, Edwards RL, An ZS, Wu JY, Shen CC, Dorale JA (2001) A high-resolution absolute-dated late Pleistocene monsoon record from Hulu Cave, China. Science 294:2345–2348. https://doi.org/10.1126/science.1064618
Watanabe S, Tada R, Ikehara K, Fujine K, Kido Y (2007) Sediment fabrics, oxygenation history, and circulation modes of Japan Sea during the Late Quaternary. Palaeogeogr Palaeoclimatol Palaeoecol 247:50–64
Yamasaki M, Murakami T, Tsuchihashi M, Oda M (2010) Seasonal variation in living planktic foraminiferal assemblage in the northeastern part of the East China Sea. Fossils 87:35–46
Yokoyama Y, Esat TM (2011) Global climate and sea level: enduring variability and rapid fluctuations over the past 150,000 years. Oceanography 24:54–69
Yokoyama Y, Kido Y, Tada R, Minami I, Finkel RC, Matsuzaki H (2007) Japan Sea oxygen isotope stratigraphy and global sea-level changes for the last 50,000 years recorded in sediment cores from the Oki Ridge. Palaeogeogr Palaeoclimatol Palaeoecol 247:5–17
Yoneda M, Shibata Y, Tanaka A, Uehiro T, Morita M, Uchida M, Kobayashi T, Kobayashi C, Suzuki R, Miyamoto K, Hancock B, Dibden C, Edmonds JS (2004) AMS 14C measurement and preparative techniques at NIES-TERRA. Nucl Instrum Methods Phys Res, Sect B 223–224:116–123. https://doi.org/10.1016/j.nimb.2004.04.026
Yoshikawa S (1976) The volcanic ash layers of the Osaka Group. J Geol Soc Jpn 82:497–515
Zhang H, Griffiths ML, Chiang JCH, Kong W, Wu S, Atwood A, Huang J, Cheng H, Ning Y, Xie S (2018) East Asian hydroclimate modulated by the position of the westerlies during termination I. Science 362:580–583. https://doi.org/10.1126/science.aat9393
Zhang R, Delworth TL (2005) Simulated tropical response to a substantial weakening of the Atlantic thermohaline circulation. J Clim 18:1853–1860
We thank the onboard scientists and cruise staffs of KR07-12. We also thank T. Omura, H. Yamamoto, M. Takada, M. Sato, Y. Nakamura, N. Kisen, N. Nakamura, and A. Kobayashi for their assistance in our experiments. We are grateful to K. Nagashima, T. Sagawa, T. Irino, and T. Nakagawa for their helpful comment and suggestion.
This work was supported by JSPS KAKENHI Grant Number 23221022 awarded to RT, and by grants from the National Museum of Nature and Science and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT, Japan). YK was funded by a Grant-in-Aid for JSPS Fellows (grant number 10914) and by the Program for Advancing Strategic International Networks to Accelerate the Circulation of Talented Researchers (grant number R2901).
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
The data sets supporting the conclusions of this article are included in this article and its additional files.
Department of Geology and Paleontology, National Museum of Nature and Science, 4-1-1, Amakubo, Tsukuba, Ibaraki, 305-0005, Japan
Yoshimi Kubota
Research Institute for Global Change, Japan Agency for Marine-Earth Science and Technology, 2-15 Natsushima-Cho, Yokosuka, Kanagawa, 237-0061, Japan
Katsunori Kimoto
Department of Earth and Planetary Science, Graduate School of Science, The University of Tokyo, 7-3-1, Hongo, Bunkyo-Ku, Tokyo, 113-0033, Japan
National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, Ibaraki, 305-8506, Japan
Masao Uchida
Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology, 1-1-1 Higashi, Tsukuba, 305-8567, Japan
Ken Ikehara
RT proposed the topic for YK's doctoral research. YK carried out the experimental study under the guidance of RT and KK. MU and KI analyzed the 14C and tephra, respectively, and helped in their interpretation. YK wrote a draft of the manuscript, and RT collaborated with the corresponding author in the construction of the manuscript. All authors read and approved the final manuscript.
Correspondence to Yoshimi Kubota.
RT was a co-chief scientist of KR07-12, and YK and KI were on-board scientists of KR07-12. RT and KK supervised YK's doctoral research.
Figure S1. (a) Mg/Ca vs. Mn/Ca of KR07-12 PC-01 (all range) and (b) Mg/Ca vs. Mn/Ca (0–200 μmol/mol). (PDF 201 kb)
Figure S2. Comparison among δ18Ow of KR07-12 PC-01 (red) and δ18Ow of endmembers. The freshwater endmember is derived from Chinese speleothem δ18O (blue; Cheng et al. 2016), and Kuroshio Water endmember is derived from MD06–3067 (purple; Bolliet et al. 2011). The vertical axis on the left is for the freshwater endmember and on the right is for KR07-12 PC-01 and MD06-3067. (PDF 165 kb)
Figure S3. Comparison of the results of KR07-12 PC-01, U1429 (Clemens et al. 2018), and Chinese speleothem δ18O (δ18Osp) with (a) 14C-based age and (b) fine-tuned age. The top panels show comparisons between the planktic foraminiferal δ18O (δ18Op) of KR07-12 PC-01 and U1429 and δ18Osp. The middle and bottom panels show SST and local δ18Ow (δ18Ow-local), respectively. Open circles on the top of the figure represent the 14C-based datums and 2σ uncertainties. Crosses and triangles in the right panel indicate the tie points and the ages of the horizons of 14C datums with the fine-tuned age model, respectively. (PDF 276 kb)
Table S1. The tie points of the fine-tuned age model of KR07-12 PC-01. (XLS 27 kb)
Age model for MD01-2407. AMS 14C ages for core MD01-2407 published in Yokoyama et al. (2007) were recalibrated to calendar age with Marine13 (ΔR = 45) (Reimer et al. 2013). All of the calendar age and depths are listed in Table S1. Choosing a reasonable sedimentation rate, seven datums were not employed in our age model (Table S1). Before 50 ka, we refer to datums in Kido et al. (2007). (DOCX 15 kb)
Figure S4. Depth vs. calendar age for core MD01-2407. Data in blue are not used for the age model. (PDF 75 kb)
Table S2. The calendar ages and depths for core MD01-2407 and references. (XLSX 10 kb)
Figure S5. Depth in the Tsushima Strait vs. cross-section area in the Tsushima Strait (Oba 1988). (PDF 70 kb)
Kubota, Y., Kimoto, K., Tada, R. et al. Millennial-scale variability of East Asian summer monsoon inferred from sea surface salinity in the northern East China Sea (ECS) and its impact on the Japan Sea during Marine Isotope Stage (MIS) 3. Prog Earth Planet Sci 6, 39 (2019) doi:10.1186/s40645-019-0283-0
MIS 3
Mg/Ca-derived SST
D-O cycles
Evolution and variability of Asian Monsoon and its linkage with Cenozoic global cooling
Side-Effects Causing Hidden Conflicts in Software-Defined Networks
Part of a collection:
Software Technology and Its Enabling Computing Platforms
Vitalian Danciu1 &
Cuong Ngoc Tran ORCID: orcid.org/0000-0001-9092-88461
SN Computer Science volume 1, Article number: 278 (2020) Cite this article
The Software-Defined Networking (SDN) architecture facilitates the flexible deployment of network functions by detaching them from network devices to a logically centralized point, the so-called SDN controller, and maintaining a common communication interface between them. While promoting innovation on each side, this architecture also induces a higher chance of conflicts between concurrent control applications compared to existing traditional networks. We have discovered a new type of anomaly that we call hidden conflicts. They appear to occur only due to side-effects of control applications' behaviour and to be independent of and distinct from the class of conflicts between rules present in the network devices. We analyse the SDN interaction primitives susceptible to such disruptions and present experiments supporting our analysis, the results of which indicate that knowledge of the control mechanics is necessary for detecting hidden conflicts. We present a hidden conflict prediction approach that employs speculative provocation to determine the deployed applications' behaviour. The observed behaviour can be leveraged to predict undesired network states. Evaluation of our prediction prototype suggests that prediction functions should be integrated into control applications.
In the Software-Defined Networking (SDN) architecture, the network elements (SDN devices) forming the data plane lack a control plane of their own. The control functions are centralized in a logical component, the so-called SDN controller, that serves as a platform for control applications. These applications issue rules that govern the behaviour of the SDN devices in the data plane. The devices themselves retain only the essential functions for forwarding messages according to the rules stored in their flow tables and for processing instructions from the controller.
SDN offers a higher degree of flexibility in the specification of network behaviour than is achievable in traditional networks composed of autonomous network elements. The need for control protocols facilitating the negotiation between autonomous elements in traditional networks is eliminated in SDN and replaced by a central specification of network behaviour.
This architectural feature increases the flexibility in specifying network behaviour. In particular, new or experimental network behaviour can be introduced at one single point in the network (the controller) instead of requiring changes to every network element.
This same flexibility renders SDN prone to conflicts between the intents of concurrently active control applications. Different control applications may intend to specify different behaviour, possibly leading to conflicts at the policy level. In other cases, the implementation of an intent in terms of rules may include rules conflicting with each other at a technical level.
We consider a conflict to be present when the network's behaviour differs from the expected behaviour, as a result of the combined deployment of control applications. The new type of conflict demonstrated in this paper originates from side-effects and is hidden from analysis of the rules in the data plane alone.
Hidden Conflicts
An instance of a generalization conflict in our experimental setup for conflicts in SDN showed unexpected, anomalous network behaviour different from that described in the literature. The generalization conflict class [3, 21] is defined by two rules i and j differing in their action while the match expression of the rule with higher priority describes a subset of the other's: \(priority_i > priority_j, match_i \subseteq match_j, action_i \ne action_j\). The consequence of generalization conflicts has previously been assessed as minor.
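For illustration, the class definition can be instantiated by two hypothetical flow entries; all field values below are invented for this sketch and do not correspond to the rules of our experiments:

```python
# Two flow entries forming a generalization conflict: the higher-priority
# rule matches a strict subset of the broader rule's traffic, yet
# prescribes a different action.
rule_i = {"priority": 200,
          "match": {"ipv4_dst": "10.0.0.3", "udp_dst": 5001},  # subset
          "action": "output:3"}
rule_j = {"priority": 100,
          "match": {"ipv4_dst": "10.0.0.3"},                   # superset
          "action": "output:4"}
# priority_i > priority_j, match_i ⊆ match_j, action_i != action_j
```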
Scope of the rules issued by the control applications
The conflict instance caused by our two control applications conformed to the class definition: one application installed a "broader" rule with a more general match field, while the other of higher priority installed a rule matching a subset of the first. Figure 1 shows the scope of the rules introduced by the applications.
Figure 2 illustrates the case where two control applications, verified to function correctly in isolation, create a generalization conflict when executed concurrently. The upper box of the sequence diagram shows the simple reactive mechanics of App. 2: it reacts to new flows, of which it is notified, by installing new rules in the device (Device 1).
Interactions of a control application in isolation and when conflicting with another. For clarity, the controller intermediary has been omitted
The case of concurrent execution of both applications is shown in the lower box of Fig. 2, in which App. 2 is effectively disabled. Analysis of this behaviour showed the observed effect to be not a consequence of the generalization conflict between rule1 and rule1234 but contingent on the suppression of notifications (or events) issued to the applications. The presence of the broader rule rule1234 resulted in packets interesting to App. 2 being processed locally by the device, instead of being escalated to the controller. Hence, App. 2 was deprived of the notifications it requires to function as expected. A concrete experiment corresponding to this case is presented in "EpLB and TE1" section.
The expectation from the descriptions in the literature and the apparent effect of the rules would suggest, indeed, only a minor issue: normally, the broader, low-priority rule would defer to the more specific one of higher priority. In our case, the sheer presence of the broader rule causes suppression of events and thus the failure of one application, as a side-effect of its (correct) handling of incoming packets. An alternative interpretation of this effect is a conflict between the broader rule and the default behaviour of the device to escalate unknown flows to the controller.
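A minimal Ryu sketch of such a reactive application makes the event dependence explicit; the handler body and rule parameters are illustrative, not the actual code of App. 2:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls


class ReactiveApp(app_manager.RyuApp):
    """Sketch of App. 2's reactive mechanics: it only acts when a
    device escalates a packet to the controller."""

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        dp = ev.msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto
        # React to the new flow by installing a specific rule for it.
        match = parser.OFPMatch(in_port=ev.msg.match['in_port'])
        actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=200,
                                      match=match, instructions=inst))
        # If a broader rule (like rule1234) already matches the flow,
        # the switch forwards locally, this handler is never invoked,
        # and the application is silently disabled.
```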
We name this type of conflicts hidden conflicts or side-effect conflicts, as their cause cannot be discerned from analysis of the rules in the data plane's devices alone but requires insight into the mechanics of the control plane. Their discovery raises the question how sensitive the SDN control is to side-effects. To address it, we analyse the operational model for OpenFlow SDN [16] to identify potential side-effects that can cause anomalous behaviour in the network.
We first describe a new type of conflicts in SDN that we call hidden conflict. In contrast to the conflict types portrayed in literature, hidden conflicts are not detectable by rule analysis alone. The cause and mechanism of hidden conflicts appear to be orthogonal to those of conflicts between rules. Thus, hidden conflicts appear to be a different dimension of conflicts. Our initial examination of this conflict type shows it to occur due to suppression of the event mechanism as a side-effect of an otherwise conflict-free rule set. Consequently, events necessary for the function of a control application are no longer provided to it.
Aiming to identify all possible side-effect sources, we examine the interaction primitives of SDN in "Analysis" section and identify those combinations of primitives that can be influenced by the operation of a control application. Where possible, we determine the probable observable consequences of such influence. We have conducted experiments, discussed in "Empirical examination" section, that indicate the consequences of side-effects to be uncorrelated to conflicts between rules. In particular, we demonstrate identical side-effects for two different types of conflicts between rules, as well as the absence of side-effects in a situation in which the influence on the control primitives is removed.
Being deprived of rule analysis as an effective method for the detection of hidden conflicts, we introduce a speculative method to predict hidden conflicts through provocation of side-effects. By issuing surrogate, fake events to a control application, a predictor is capable of observing the application's response behaviour and determining if the state of the data plane, including the rule set, would hinder that behaviour. We describe this approach and the function of our predictor prototype in "Hidden conflict predictor" section.
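The core of this approach can be sketched in a few lines of Python for a Ryu-based controller. The helper below is a simplified illustration rather than our prototype's actual code: the handler name must be known or discovered beforehand, and the interception assumes applications emit rules via datapath.send_msg.

```python
from ryu.ofproto import ofproto_v1_3_parser as parser


def observe_reaction(app, datapath, fake_event, handler_name):
    """Deliver a surrogate event to a control application and capture
    the FlowMods it would emit, without letting them reach the data
    plane."""
    captured = []
    original_send = datapath.send_msg
    datapath.send_msg = captured.append          # intercept reactions
    try:
        getattr(app, handler_name)(fake_event)   # provoke the app
    finally:
        datapath.send_msg = original_send        # restore real sending
    # The captured rules can now be checked against the current flow
    # tables for hidden conflicts, e.g. event suppression.
    return [m for m in captured if isinstance(m, parser.OFPFlowMod)]
```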
Our prediction approach is general in that it is not application specific. However, it is initially agnostic of the behaviour of the control applications being examined. Thus, the prediction itself may cause undesired effects in the network, in response to the surrogate/fake events issued by the predictor. Similarly, run-time prediction may cause race conditions between genuine and fake events. We discuss these issues in "Discussion" section. We review related research on conflicts and their detection in "Related work" section, including race conditions and conflicts between rules.
We conclude in "Conclusion" section by highlighting properties of the hidden conflict class and propose that predictor code should be included into applications at design time. The specialization of our prediction approach to render it useful during control application development is an interesting topic for further study.
This article extends our previous work [28], which introduced the notion of hidden conflicts, by emphasizing the following points:
We elaborate the implementation details of the hidden conflict predictor (Sect. "Hidden conflict predictor"), covering the prediction mechanism, the choice of candidate traffic for generating fake events, and the interception of the methods with which control applications react upon receiving fake events.
We discuss the properties of the hidden conflict predictor, the challenges encountered during its deployment and find that they suggest the integration of predictor code in control applications to be beneficial.
We indicate interesting research directions based on the analysis of the limitations of our own work, particularly regarding dynamic topology changes and the choice of a fixed matching policy during the examination of conflicts.
We discuss the methodology to analyse the hidden conflicts described in "Introduction" section, which requires the introduction of the interaction primitives between SDN participants at different layers. We infer the disturbance factors that could lead to hidden conflicts and their possible impact.
The occurrence of the conflict instance demonstrated in "Hidden conflicts" section is contingent on an influence of one of the applications on the control mechanics of the other. Our examination therefore targets the potential influences exerted by applications and the SDN control mechanics that are susceptible to each influence.
As a starting point, we use an analytic examination method in that we decompose the operational model of OpenFlow into primitive interactions between the devices, the controller and the control applications. Such interactions are triggered by events in the network, including packets arriving at a device. Each combination of interactions is a candidate for influence. We assess each candidate with respect to its susceptibility to influence, i.e. we enumerate the conditions in which the interaction can be disturbed. For each of these susceptible candidates we attempt to assess the impact on an application whose correct function relies on it. Thus, we acquire a conflict model, that includes (i) the susceptible primitive interaction combinations, (ii) the conditions in which they may be influenced and (iii) the potential impact on the function of an application relying on a given interaction combination at a time when one of the conditions is met. We validate the model in "Empirical examination" section by documenting experiments that support our analysis.
Interaction Primitives
From the study of the OpenFlow specification we extract the basic actions of the devices and a controller. Since OpenFlow does not specify a north-bound interface, we define the interaction between controller and applications to consist of a controller interface and an event system, as it is commonly implemented in SDN controller software.
We list the trigger events along with the possible actions being triggered in Tables 1, 2 and 3.
Table 1 Device primitives
Table 2 Controller primitives
Table 3 Application primitives
Table 4 Combinations of interaction primitives
Table 5 Mock events based on the interaction primitives
Interaction Combinations
We assume that devices and applications do not interact directly and thus all interactions are relayed and translated by the controller. We note that some combinations are impossible in practice. We constrain the actions listed to those pertaining to an interaction and refrain from assumptions on the internal actions of controller and applications. Some of the combinations shown in Table 4 can be eliminated from further analysis. These include:
the items marked "device only", as these do not reflect an actual interaction;
the items where the application action is void, marked "NoP".
It is possible to generate mock events, i.e. events that have no basis in an actual state change in the network, to exploit the north-bound interface, either for productive use, e.g. to diagnose a network problem, or with malicious intent. The combinations of the interactions between applications and the controller, shown in Table 5, result from mock events being introduced at the controller level or at the application level. Note that a mock event could be any of the events from Table 4 intended for the controller or the application.
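The following sketch shows how such a mock event could be fabricated for a Ryu controller; the packet contents and the injection path are illustrative assumptions, and a real predictor would dispatch the event through the controller's event loop:

```python
from ryu.controller import ofp_event
from ryu.lib.packet import ethernet, ipv4, packet, udp


def make_mock_packet_in(datapath):
    """Fabricate a PacketIn event with no basis in real traffic.
    All field values are illustrative."""
    parser, ofp = datapath.ofproto_parser, datapath.ofproto
    pkt = packet.Packet()
    pkt.add_protocol(ethernet.ethernet(dst='00:00:00:00:00:03'))
    pkt.add_protocol(ipv4.ipv4(src='10.0.0.1', dst='10.0.0.3', proto=17))
    pkt.add_protocol(udp.udp(src_port=40000, dst_port=5001))
    pkt.serialize()
    msg = parser.OFPPacketIn(datapath, buffer_id=ofp.OFP_NO_BUFFER,
                             total_len=len(pkt.data), table_id=0,
                             reason=ofp.OFPR_NO_MATCH, cookie=0,
                             match=parser.OFPMatch(in_port=1),
                             data=pkt.data)
    return ofp_event.EventOFPPacketIn(msg)
```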
Disturbance Factors
Network behaviour can be influenced (negatively) by the disruption of the interaction between the actors. In the following, we list disturbance factors that have been observed in an experiment or that are conceivable and give an example of how they can disturb the mechanics of an application.
Event suppression by local handling A switch handles an incoming packet locally, instead of escalating it to the controller. Consequently, the application is deprived of the event notification. An illustration for this disturbance factor is shown in Fig. 2: the presence of rule 1234 results in the missing notification for App. 2 when flow 2 arrives at Device 1. Experiments 3.2.1 and 3.2.2 also reveal the same effect.
Event suppression by changes to paths Prevention of escalation by changes to paths, e.g. when a packet interesting to an application is routed around the switch holding a rule that would escalate the packet to the controller.
Action suppression by packet modification A device executing rules that modify packet fields before the packet is escalated by itself or by subsequent devices could modify the packet so that it is no longer accepted within an application's scope. For example, application A1 instals a rule on switch S1 to modify all packets to D1 by changing the destination to D2 before sending these packets out on the link S1–S2. Application A2 is interested in the traffic destined to D1 and subscribes to event E originated from switch S2. As a result, the event escalated at switch S2 is ignored by A2.
Undue trigger Conversely to Action suppression by packet modification, an application can be "tricked" into, e.g. installing or removing rules by packets modified before escalation. This can happen in the course of an attack by mock packets sent by attackers.
Tampering with event subscription This disruption is contingent on applications being able to modify each other's subscriptions. In that case, an application might cause "undue trigger" disruption to another application or simply suppress events by unsubscribing events for it. This case can also happen as a result of an attack.
Susceptible Interactions and Impact
The combinations shown in Tables 4 and 5 may be susceptible to one or more of the disturbance factors. We analyse each of them to determine which, if any, disturbances they are sensitive to; the results are shown in the Disturbance factor column of these tables. In our analysis, we have determined a combination to be sensitive if it is conceivable that one of the disruption factors may be able to disturb its process. We present only the results and the effects we consider possible consequences of a disruption. Unsurprisingly, these effects strongly relate to the purpose of the interaction set. They include missing rules, redundant rules or wrong rules in one device or more, which may cause anomalous network behaviour. We note that the suppression of handling and the suppression of events appear prevalent in our assessment of the susceptibility of interaction combinations.
In partial validation of our analytical assessment on the disruption of interaction primitive combinations, we present selected experiments in "Empirical examination" section. One of them is a detailed description of the motivating example sketched in "Introduction" section.
Empirical Examination
Table 6 Experimental settings
We conduct two experiments to demonstrate the consequences of side-effects on applications operating in reaction to events issued by the SDN controller. In another experiment, we show that side-effects do not occur at all for event-free applications.
Topology for the experiments. The numbers surrounding a switch indicate the port number assigned by the SDN controller
Experiments are deployed on the topology shown in Fig. 3. The testbed is built from virtual machines as described in [7]. We use the Ryu SDN framework as the SDN controller, with OpenFlow 1.3 as the controller southbound API. Open vSwitch [20] with OpenFlow support is employed for the SDN switches. Traffic among end-points is generated by common tools: iperf, nc and ping.
Applications for Experiments
We employ several control applications that are run concurrently in different combinations, depending on the experiment. They are described in the following.
Shortest Path First (SPF)
The SPF application uses the topology information provided by the controller to realize the shortest path first routing function for all common kinds of traffic: ARP, ICMP, TCP, UDP.
SPF can be configured to deploy rules in two manners for IP traffic, including ICMP, TCP and UDP, as illustrated in the sketch after this list:
SPF1 the rule's match field includes: source IP address, destination IP address, IP protocol number
SPF2 the rule's match field includes only destination IP address.
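Expressed as Ryu OFPMatch objects, the two configurations differ only in the breadth of their match; the addresses below are illustrative:

```python
from ryu.ofproto.ofproto_v1_3_parser import OFPMatch

# SPF1: per source/destination/protocol rule (addresses illustrative).
match_spf1 = OFPMatch(eth_type=0x0800, ipv4_src='10.0.0.1',
                      ipv4_dst='10.0.0.3', ip_proto=17)

# SPF2: destination-only rule -- a strictly broader scope that will
# also capture traffic other applications may want escalated.
match_spf2 = OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.3')
```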
End-point load balancer (EpLB)
The session-based end-point load balancer balances the TCP/UDP traffic among configurable replicas. To change the target replica transparently to the sender of the packet, EpLB modifies specific fields (e.g. destination MAC address, destination IP address) of the packets. This operation is implemented by installing rules with a set-field action, as specified for OpenFlow SDN devices.
In the experiment presented in this paper, EpLB is deployed on switch S7 to balance the UDP/TCP sessions between PC3 and PC4. The first incoming session destined to PC3 will be sent to PC3, the second session to PC3 will be redirected to PC4 by rewriting the destination information of the relevant traffic, the third will come to PC3, and so on. The balancing operation is transparent to end users in that the response traffic from PC4 to the original source will be rewritten to appear as if it was sent from PC3.
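A redirecting rule of this kind could be built with the Ryu parser as follows; the concrete addresses, ports and priority are assumptions of this sketch rather than EpLB's actual values:

```python
def eplb_redirect_rule(dp):
    """Rewrite rule sending one UDP session transparently to the
    replica PC4; addresses, ports and priority are illustrative."""
    parser, ofp = dp.ofproto_parser, dp.ofproto
    match = parser.OFPMatch(eth_type=0x0800, ip_proto=17,
                            ipv4_dst='10.0.0.3',
                            udp_src=40002, udp_dst=5001)
    actions = [parser.OFPActionSetField(eth_dst='00:00:00:00:00:04'),
               parser.OFPActionSetField(ipv4_dst='10.0.0.4'),
               parser.OFPActionOutput(4)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                         actions)]
    return parser.OFPFlowMod(datapath=dp, priority=400,
                             match=match, instructions=inst)
```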
Traffic Engineering (TE)
In the role of a network administrator, we employ the Ryu REST API (see footnote 5) to pro-actively perform traffic engineering, in two different ways in different experiments; a request sketch follows the two variants below. Note that in these experiments the TE "application" has been simulated by manual entry of the flow rules; however, an actual application for the Ryu controller performing these actions automatically in response to policy configuration is easily conceivable. Given the intended function of the application (static configuration of flows), its only benefit over manual input would lie in the automation of the task.
TE1 The traffic engineering application redirects all traffic with the same destination and port, e.g. all traffic to the web server on port 80, onto a dedicated path which is supposed to be more secure and reliable. In our experiment, all UDP traffic to PC3 with destination port 5001 is sent through the link S7–S6 by installing a flow entry on switch S7 that directs this traffic out of its port 4.
TE2 The traffic engineering application directs all TCP traffic to PC3 out of port 3 of switch S7 on the link S7–S5 and all TCP traffic to PC4 out of port 4 of switch S7 on the link S7–S6.
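As referenced above, a TE1-style rule can be installed through Ryu's ofctl_rest application (see footnote 5). The sketch below assumes the controller's REST endpoint at the default port 8080, a dpid of 7 for switch S7 and the 192.168.1.x addressing used elsewhere in this paper; treat these concrete values as assumptions.

```python
# Sketch of installing the TE1 rule via Ryu's ofctl_rest REST API.
import requests

rule = {
    "dpid": 7,                       # datapath id of switch S7 (assumed)
    "priority": 100,                 # assumed priority value
    "match": {
        "eth_type": 2048,            # IPv4
        "ip_proto": 17,              # UDP
        "ipv4_dst": "192.168.1.3",   # PC3 (assumed address)
        "udp_dst": 5001
    },
    "actions": [{"type": "OUTPUT", "port": 4}]   # towards the link S7-S6
}
requests.post("http://127.0.0.1:8080/stats/flowentry/add", json=rule)
```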
Table 6 shows the settings for the experiments according to the relevant factors identified in our earlier work [26, 27]. Each application is deployed with only one configuration in each experiment; the applications may start at the same time or one after another, and their rules may have the same or different priorities. The routing application SPF1/2 is necessary for the functioning of the network, so it affects all switches, while EpLB and TE1/2 deploy their rules on switch S7 only. In our experiments, traffic sources are PC1 and PC2, while traffic sinks are PC3 and PC4. We employ the constant bit rate (CBR) traffic profile for all related end-points. We test with UDP traffic in the first and second experiments and with mixed TCP/UDP traffic in the third.
EpLB and SPF1
Table 7 Experiment 1: switch S7's flow table after the first UDP session
In this experiment, PC1 and PC2 send traffic to PC3, and PC4 acts as a replica of PC3. Initially, the flow table of switch S7 contains only rules 1 and 7 (controller management rules). Both EpLB and SPF1 generate rules upon the first traffic (Table 7).
Observation and analysis Further UDP sessions from PC1 to PC3 are not balanced as expected: all subsequent UDP sessions from PC1 arrive at PC3, although they were meant to be handled alternately by PC3 and PC4.
The problem can be identified by comparing flow entries 2 and 6, highlighted in Table 7. EpLB identifies a UDP session by the additional information of layer-4 source and destination ports, as reflected in flow entry 2. It is supposed to install new flow entries to handle further UDP traffic from PC1 to PC3 with different combinations of layer-4 source–destination ports, when triggered by the corresponding packet-in events for this kind of traffic from the controller. However, since flow entry 6 already matches the mentioned incoming traffic, no packet-in event is generated. As a consequence, EpLB's intention cannot be achieved.
Table 8 Experiment 2: switch S7's flow table after the first UDP session and deploying TE1's rules
EpLB and TE1
SPF1 is modified to work in concert with EpLB and TE1 in this experiment, so its rules are overwritten or not deployed at all where EpLB's or TE1's rules are active. EpLB balances sessions between PC3 and PC4, where PC4 acts as a replica of PC3. TE1 installs static rules to direct all UDP traffic with the specified destination port (5001 in this case) to PC3. At the beginning of the experiment, the flow table of switch S7 contains only rules 1 and 7 (controller management rules). Rule generation happens on first traffic for both EpLB and SPF1. In the role of an administrator, we install TE1's rules later via the REST API. This experiment shows the importance of the application deployment order (Table 8).
Observation and analysis Similar to the first experiment, EpLB is completely disabled for subsequent UDP sessions having the destination port of 5001 after the TE1 rule becomes effective.
Flow entries 2 and 6, highlighted in Table 8, are identified as responsible for the problem. Again, since flow entry 6 is more general in that it matches only the destination IP address and the destination UDP port, further UDP sessions with these fields are handled by this flow entry and no packet-in event is generated to keep EpLB functioning correctly.
Table 9 Experiment 3: switch S7's flow table after establishing TCP sessions from PC1 to PC3 and PC4 and deploying TE2's rules
TE2 and SPF2
This experiment shows that side-effects do not happen at all when the application with more specific rules does not operate on the basis of the packet-in event (Table 9).
The flow table of switch S7 has only rules 1, 11 (controller management rules) in the beginning. Rule generation happens on first traffic by SPF2. TE2 rules are installed subsequently.
Observation and analysis Flow rules 2 and 8 follow the redundancy conflict pattern [3, 21], which features a similar relationship between two rules as the generalization pattern, except that their actions are the same and their priority relationship does not matter. Flow rules 3 and 10 exhibit the generalization conflict pattern. The network behaves as expected for the main effect and there is no side-effect at all: all TCP traffic to PC3 and PC4 is forwarded according to rules 2 and 3; other traffic, e.g. UDP and ICMP, is controlled by SPF2's rules.
Hidden Conflict Predictor
The hidden conflict demonstrated in the experiment described in "EpLB and SPF1" section leads to the rather severe consequence of a control application becoming ineffective. To protect the correct behaviour of the network, it is therefore necessary to detect this class of conflicts. Unfortunately, hidden conflicts cannot be detected by mere analysis of the data plane's flow tables (the collection of flow tables of all devices in the data plane). The assertion of their presence requires information on the control plane's behaviour in a certain state, in reaction to an event.
Full knowledge about the control plane's behaviour includes all combinations of control application action options, given the state of the data plane's flow tables and the incoming traffic at the data plane. In any practical case, this level of knowledge about the network is precluded by several of its properties:
We may not know exactly the control application's behaviour. This is conceivable in SDN since control applications can come from different parties.
The behaviour of a control application varies generally according to the network state while the network state also changes from time to time, and mostly in an unpredictable fashion.
The incoming traffic at the data plane is also unpredictable, e.g. end-points can generate diverse traffic types (TCP, UDP, ICMP...) with different traffic profiles (CBR, VBR, bursty...) and in different groups (unicast, multicast).
To address this issue, we have experimented with speculative provocation of conflicts as a method to predict the creation of conflicts. We rely on a conflict predictor that selects possible conflict situations ("speculative") and simulates situations in which the applications may issue rules conflicting with the existing rule set ("provocation").
Fig. 4 Application–controller communication [9]
We exploit the interaction between an SDN controller and the control applications, which is realized by the event/method mechanism [9] illustrated in Fig. 4. A control application registers as a listener for certain events, and the controller will invoke the application's callback method whenever such events occur.
Our chosen SDN controller for the experiments, the Ryu SDN framework, also complies with this model; it allows the conflict predictor, implemented as a built-in control application in the controller, to create and dispatch events to other control applications. In our experiments, the predictor generates the packet-in events associated with the candidate packets to provoke the reactions of the control applications that register for this event type; the choice of the candidate packets is elaborated in the "Choice of candidate traffic" section. In practice, the predictor can generate any type of event, e.g. those related to topology changes or SDN devices' state changes.
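For readers unfamiliar with the event/method mechanism, here is a minimal sketch of the registration side in Ryu; the handler body is a placeholder, not the authors' implementation.

```python
# Sketch of the event/method mechanism in Ryu: a control application
# registers as a listener for packet-in events, and the framework invokes
# its callback whenever such an event occurs.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls

class SomeControlApp(app_manager.RyuApp):
    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                   # the OpenFlow PacketIn message
        # React to the event, typically by computing and installing rules;
        # install_rules_for is a hypothetical application-specific method.
        self.install_rules_for(msg)
```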
Prediction Mechanism
Fig. 5 Interaction of the predictor component with the controller and control applications
The procedure has three main steps in which the predictor component interacts with the controller and control applications, as illustrated in Fig. 5:
1. The predictor analyses the rule tables and determines what type of additional rules would lead to conflict. The potential additional rules correspond to those completing one of the conflict patterns.
2. The predictor provokes control applications into generating such rules, by having the controller issue them fake events and subsequently intercepting the calls for rule installation in response to the fake events. As with the additional rules, the content of the fake events is derived from the conflict patterns describing a conflict class. Thus, the predictor attempts to provoke a specific class of conflict.
3. Each intercepted rule installation call is analysed by the predictor to determine if an actual installation of that rule would create a conflict.
These steps can be performed in parallel for several classes of conflicts in several situations. Therefore, this method allows for a trade-off between expended computing power and detection latency. In addition, it relies on the network state at the time of prediction, thus limiting the number of cases that would need to be probed.
In the following, we demonstrate how a conflict predictor can be realized by elaborating the above steps for a subset of hidden conflicts related to the generalization and redundancy conflicts identifiable in the data plane's flow tables.
Choice of Candidate Traffic
One of the necessary conditions for two rules i and j to expose a generalization or redundancy conflict is that the matching scope of one rule is "broader" than the other. The more specific rule must have higher priority for generalization conflict (cf. Sects. 1.1 and 3.2.3). Without loss of generality, we assume that: \(\mathrm{priority}_i > \mathrm{priority}_j, \mathrm{match}_i \subseteq \mathrm{match}_j\).
Fig. 6 Candidate traffic for probing hidden conflicts
The experiments in "Empirical examination" section show that hidden conflicts occur as one of the control application is deprived of the events it requires to function. These events are expected to be generated for the traffic being matched against the more general rule (rule j). This observation indicates that the candidate traffic for generating the fake event to probe hidden conflicts must belong to the set difference of the set created from the matching scope of rule i and the set from that of rule j, as illustrated in Fig. 6.
An exemplary candidate packet to probe hidden conflicts deduced from rules 2 and 6 in Table 7 would have header fields:
$$\begin{aligned} &\text{L3: } \mathit{src}=192.168.1.1,\ \mathit{dst}=192.168.1.3, \\ &\text{Prot}=\text{UDP},\quad \text{L4: } \mathit{src}=50000,\ \mathit{dst}=5001 \end{aligned}$$
Note that a packet for the fake event generation must be complete, i.e., the above candidate packet requires layer-2 headers. Assuming it is an Ethernet frame, this includes source and destination MAC addresses and the EtherType value. The MAC addresses can be set to arbitrary values or given correct values obtained from an ARP cache control program such as the one described in [13].
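The assembly of such a packet can be sketched with Ryu's packet library; the MAC addresses below are arbitrary, as discussed above.

```python
# Sketch: building the candidate packet for the fake packet-in event with
# Ryu's packet library. MAC addresses are arbitrary placeholders.
from ryu.lib.packet import packet, ethernet, ipv4, udp
from ryu.ofproto import ether

pkt = packet.Packet()
pkt.add_protocol(ethernet.ethernet(dst='00:00:00:00:00:03',   # arbitrary
                                   src='00:00:00:00:00:01',   # arbitrary
                                   ethertype=ether.ETH_TYPE_IP))
pkt.add_protocol(ipv4.ipv4(src='192.168.1.1', dst='192.168.1.3',
                           proto=17))                          # UDP
pkt.add_protocol(udp.udp(src_port=50000, dst_port=5001))
pkt.serialize()            # the raw bytes are available as pkt.data
```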
Interception of Methods
The predictor has to supervise the rule deployment of the control applications resulting from its fake event generation, in order to intercept this action and determine the presence of hidden conflicts. For this reason, we have implemented the predictor as an application integrated in the controller, providing the add-flow interface through which other control applications install their rules in data plane devices. An alternative would be to deploy the predictor as an independent program like any other control application; this appears more elegant but causes more communication overhead between the controller's rule deployment module and the predictor, and higher latency in the rule installation process. Besides, the predictor may well become part of an orchestrator that is logically situated centrally below all control applications and moderates their actions, which supports our choice.
The pseudo-code sketching the prediction procedure is shown in Algorithm 1. The predictor uses the controller interfaces to pull the data plane's flow tables every interval period and analyses them to detect conflicts based on the provided conflict patterns (e.g. redundancy, generalization). If conflicts exist, a conflict_flag is set and the predictor chooses and creates candidate packets to generate fake events associated with them. For each generated event, there may be multiple reactions from different control applications. During the conflict_flag_timeout period while the conflict_flag is set, every call to the add-flow function by a control application is checked to see whether its rule to be installed corresponds to one of the generated fake events and whether installing that rule in the data plane would cause conflicts there; if so, an alarm about the likelihood of the hidden conflicts relevant to the chosen candidate packets is raised. A rule to be installed in the data plane is asserted to correspond to a generated fake event if its matching scope covers the packet chosen for that event. Thus, the predictor can decide whether a method issuing a rule will create a conflict. This is the case if the new rule would contradict one of the existing rules in the device where it would be installed, i.e. they have overlapping matching scopes but different actions.
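Since Algorithm 1 itself is not reproduced here, the following condensed Python sketch restates the loop described above; all helper functions are placeholders for the corresponding steps, not actual predictor code.

```python
# Condensed sketch of the prediction loop (helper names are placeholders).
import time

def predictor_loop(controller, patterns, interval, conflict_flag_timeout):
    while True:
        tables = controller.pull_flow_tables()            # regular pull
        conflicts = detect_conflicts(tables, patterns)    # e.g. generalization
        if conflicts:                                     # conflict_flag set
            deadline = time.time() + conflict_flag_timeout
            for pkt in choose_candidate_packets(conflicts):
                controller.dispatch_fake_packet_in(pkt)   # provoke reactions
                # add-flow calls arriving before the deadline are intercepted
                for rule in intercepted_add_flow_calls(deadline):
                    if corresponds(rule, pkt) and would_conflict(rule, tables):
                        raise_alarm(rule, pkt)            # likely hidden conflict
        time.sleep(interval)
```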
Orthogonality between hidden conflicts and those described in literature
The experiments presented in "Empirical examination" section were selected as to show the independence of hidden conflicts from those between rules, that have been described in literature. Fig. 7 illustrates the combination of single cause–effect pairs of hidden conflicts from the experiments correlated with named conflicts between rules. The arrangement shows the same hidden conflict for two conflicts between rules, in cases 1 and 2 in Fig. 7, described in "EpLB and SPF1" and "EpLB and TE1" sections, respectively. It also shows, that the effect reverts to that of the conflict between rules if the cause for the hidden conflict is removed, as illustrated by cases 3 and 4, described in "TE2 and SPF2" section. This indicates, that hidden conflicts form a dimension of their own, i.e. they are orthogonal to the classes of conflicts between rules. The independence of the two dimensions raises interesting questions.
Challenges to Hidden Conflict Detection
One important question is whether the presence of a conflict between rules is necessary at all for applications to exhibit a hidden conflict. If so, the detection of the conflict pattern specified for the rules can be used as a starting point of the search for an associated hidden conflict, even if the effect of the conflict between rules were negligible. If not, then the abnormal behaviour can be said to represent a hidden conflict purely between the assumptions of applications about their environment, occurring depending on the type of side-effect present.
Properties of the Hidden Conflict Predictor
The detection of hidden conflicts represents a different challenge from the detection of the conflict classes hitherto described in the literature. Given that some of the mechanisms leading to abnormal behaviour in the network are hidden within the application code (hence, hidden conflict), we require a black-box analysis of the applications as a prerequisite of conflict detection. Our hidden conflict predictor described in the "Hidden conflict predictor" section follows such a black-box approach. As such, it is initially agnostic of the behaviour of the control application for which it tries to predict conflicts. This allows the approach to be employed with any control application, while also introducing inaccuracy and risks of interference.
The predictor cannot conclude the existence of a hidden conflict with absolute certainty as detection is contingent on the prospective state of the data plane. Hidden conflicts manifest only if the anticipated events and/or the chosen candidate packets (cf. Sect. "Choice of candidate traffic") occur. The results of the predictor can be improved by providing it with management information regarding the data plane. For example, to leverage hidden conflict prediction during maintenance, a predictor could make use of the maintenance schedule, the concrete maintenance activities and the target state of the data plane. It can then be used to predict the reaction of control applications in response to the planned management activities. The predictor can acquire information about end-points (necessary for the packet-in event) from the data plane's flow tables or by consulting end-point discovery applications such as an address resolution proxy [13] if available.
Discrimination of Reactions from Fake Events and Genuine Events
The predictor issues fake events to a control application and records its reaction. However, the application may concurrently receive genuine events and exhibit a reaction to them as well. Thus, the predictor must be able to differentiate between reactions to fake and genuine events in order to intercept the former and ignore the latter. As a partial solution, we have demonstrated the association of an issued rule with a previously generated fake packet-in event by matching its scope against the candidate packet for that event. A comprehensive solution to this problem remains for further study. We note that wrongly associating a rule installation request caused by a genuine event with a fake event leads to a benign effect in our current predictor prototype: that rule is checked for possible conflict consequences in the data plane and an alarm is raised if a conflict may arise.
In cases where the need for discrimination is to be avoided, the race condition between events can be eliminated by mutual exclusion of genuine and fake events.
Poisoning of Control Application State
Our prediction approach makes the tacit assumption that an application exhibits idempotent behaviour. The assumption is not unreasonable, given the reactive nature of many network functions formulated as applications. However, more sophisticated network software may be designed to hold and evaluate network state separate from the controller's. Also, it may change its behaviour in response to the frequency of certain events. In such cases, attempting to provoke reactions from the application by issuing it surrogate or "fake" events may cause the corruption of its internal state or a change in the strategy used by the application to perform its functions.
For example, a round-robin end-point load balancer balancing traffic between two servers may assume that it has directed flow to the first server in response to a fake event from the predictor. Consequently it will assign the next flow to the second server in response to the next genuine event, thus creating an imbalance. Similarly, an ARP cache application may cache the wrong association of an IP address to a MAC address present in the fake packet-in event.
Applications can avoid state poisoning by verifying that the intended change has been implemented in the data plane. For instance, the above-mentioned load balancer may check whether its rule to forward the traffic is present in the data plane's flow tables and rectify its state accordingly; similarly, the ARP cache may re-examine the existence of the end-point via its discovery mechanism before putting its information to use.
However, we cannot rely on such verifications being included in the applications' behaviour. Hence, if such issues are diagnosed, it may be prudent to record the results of the provoked reactions and re-use them in subsequent executions of the application in question.
Hidden Conflict Prediction within Control Applications
A predictor can be integrated in a control application to detect hidden conflicts possibly impacting its behaviour. Our predictor (see the "Hidden conflict predictor" section), functioning as a black-box approach, probes control applications' reactions to determine their behaviour. In contrast, a predictor integrated within the control application has the advantage of full knowledge of the application's behaviour and the state it holds. Hence, it can predict hidden conflicts with higher certainty, though in a narrower scope pertaining only to that application, and without the risk of state poisoning.
Our experiments have shown that hidden conflicts may render the control application inactive in the data plane, leading to severe network faults. At the same time, the drawbacks of external prediction pose risks for malfunction, as well. Hence, we recommend that the design of control applications includes an integrated hidden conflict prediction specific to the application. We envision that the functional primitives necessary for such prediction (conveying the fake events to internal application code, evaluating the resulting reaction) may be collected in a common library shared between application developers.
Hidden Conflict Handling
Our work focuses on detection and eschews handling strategies for the time being. Therefore, we refrain from specifying concrete measures as a reaction to positive results from the predictor. Although automated handling of conflicts is desirable, it is impossible to determine the importance of a rule being issued or the "reason" of the application for issuing it. Thus, the obvious alternatives for conflict resolution appear unsatisfactory. The conflict may be avoided if
the creation of rules is suppressed, or if the existing rules they conflict with are removed or altered. However, the effects of such changes cannot be evaluated with respect to their impact on the compliance with network management policy.
the application provoked into issuing conflicting rules is disabled. Similarly to the removal of rules, the effects on network service cannot be determined beforehand.
Alerting the network administrator delegates the problem of understanding the conflict to a human, but it does not constitute actual handling of the conflict. In addition, the handling of the conflict is relegated to a much wider time-frame, compared to an attempt at automatic handling. However, until reliable resolution strategies are available, warning network management seems the most responsible manner of reacting to a potential conflict. It is conceivable that applications will incorporate probing for potential conflict situations themselves.
The study of the dynamic and distributed aspects of conflicts within rule sets introduced in this paper aims to go beyond the existing local and static view of conflict detection. However, there remain dynamic aspects of SDN which we have not taken into account. Philosophically, conflicts occur because of different assumptions in concurrent applications. Therefore, any violation of such assumptions bears the risk of conflict. The basis for such assumptions is tied to the behaviour or the state of an application, i.e. the program being executed as an application or the information about the network that the application may hold itself.
Topology Change
It seems plausible that if switches or links are added or removed from the network or if they fail, then the resulting change in topology may either trigger new conflicts or render the existing rules conflicting, as the assumptions possibly made by the issuing application are invalidated. This may be an interesting topic for future study.
Matching Policy
OpenFlow SDN devices employ a first-match policy when choosing which rules to apply to a packet. In principle, SDN devices could employ other strategies as an alternative or as an option, e.g. exact matching or most-specific-first. To some degree, the conflict instances we study are tied to the first-match policy employed by the devices in our experimental setup. Thus, the conflict patterns we find, and the detection code created from them, depend on first-match processing of the rule tables. We consider this a minor limitation, technologically, as first-match appears to be the most common policy in rule-based packet processing.
It is an interesting question whether a different matching policy may influence the occurrence of conflicts: first, whether conflict classes can be independent of matching policy, i.e. they would occur in some form under any choice of policy; and second, if the choice of matching policy would reduce the propensity for conflict in SDN. When comparing the results of the first-match and most-specific-first policies in the cases exemplified in this text, we find the conflict occurring in both cases. One would expect that the local generalization conflict would be eliminated by using a most-specific-first policy, as that policy would ensure the application of the less general rule irrespective of the rules' priority values. However, any packet not matching the more specific rule would, under a most-specific-first policy, be handled by the general rule, leading to the same effect as under the first-match policy.
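The observation in the preceding paragraph can be replayed with a toy model (our own construction, not taken from the experiments): two rules in the generalization relationship are evaluated under both policies, and a packet outside the specific rule's scope lands on the general rule either way.

```python
# Toy illustration: under first-match and most-specific-first alike, a packet
# missing the specific rule's scope is handled by the general rule.
# Rules are (priority, match-predicate, action) triples.
specific = (10, lambda p: p["dst"] == "PC3" and p["udp_dst"] == 5001, "A")
general  = (5,  lambda p: p["dst"] == "PC3",                          "B")
rules = [specific, general]

def first_match(p):                      # highest priority checked first
    for _, m, act in sorted(rules, key=lambda r: -r[0]):
        if m(p):
            return act

def most_specific_first(p):              # specificity, not priority, decides
    for _, m, act in [specific, general]:
        if m(p):
            return act

pkt = {"dst": "PC3", "udp_dst": 4242}    # misses the specific rule
assert first_match(pkt) == most_specific_first(pkt) == "B"
```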
While this observation is hardly conclusive on its own, a more in-depth study of the influence of matching policy on conflict emergence may yield insights regarding data and control plane design as well as the improvement of prediction techniques.
Related Work
Where contention between multiple management entities exists, conflicts are possible. Research on conflicts in SDN and in traditional network environments shares certain similarities.
Al-Shaer et al. introduced notable results in their research on conflicts in security applications, specifically with firewall policies [2, 3], and generalized them to a conflict taxonomy differentiating between various conflict classes for filtering-based network security policies [10]. We refer to two of their conflict class definitions, namely generalization and redundancy, in this paper. While our experiments show conflicts falling within these classes, the side-effects, and thus the hidden conflicts that are our focus, have not been described in the taxonomy. This is unsurprising, given that the operation mechanism of firewalls in traditional networks differs from that of SDN, which encompasses interaction between applications, the SDN controller and network devices.
Conflicts in SDN have been extensively studied, e.g. [1, 4, 8, 11, 12, 19, 23–25], albeit with a focus on contradictions within the rule set in the data plane. Side-effects affecting the interactions seem to be a new, unexplored topic. Pisharody et al. extended the conflict taxonomy mentioned above [10] to SDN with a new conflict class, namely imbrication, which considers conflicts between rules whose matching fields represent different OSI layers [21, 22]. They assumed conflict effects corresponding to those stated by Al-Shaer and Hamed; e.g. for the generalization conflict class, the effect is assessed to be a "warning" since "the specific rule just makes an exception of the general rule" [3]. Conflicts were considered on the basis of rules in the data plane only, which precludes the examination of anomalies originating from side-effects.
Chowdhary et al. examined conflicts in SDN-based cloud networks [5, 6] and, similar to other research, put their focus on conflicts between rules in the data plane. Zarca et al. developed a framework relying on semantic technologies for policy-based security orchestration in SDN/NFV-enabled IoT systems [30]; some of its conflict classes are similar to those categorized by the research group of Sloman [14, 15, 18], e.g. conflict of priorities, conflict of duties, multiple managers. Their studies also appear orthogonal to the hidden conflicts presented in our work.
Similar to hidden conflicts caused by side-effects, race conditions can lead to unexpected effects in SDN and are also hard to catch. Race conditions have been studied separately in SDN in the control plane [29] and in the data plane [17]. They appear to be a problem domain disjoint from the side-effects examined in this paper, due to the necessary temporal relationships between the participants of a race condition. However, it might be interesting to learn whether the combination of concurrency in both the control and data planes of SDN might cause side-effect conflicts.
Conclusion
Hidden conflicts are a new conflict type that occurs due to side-effects or unfulfilled expectations of control plane elements. Starting from a conflict instance discovered in our experiments, we have presented a systematic analysis of the propensity of SDN interaction primitives to be disrupted so as to expose hidden conflicts. We complemented the analysis with experiments demonstrating the same side-effect cause and effect in the presence of different conflicts from existing taxonomies. This suggests that the dimension of hidden conflicts is orthogonal to that of the hitherto described patterns of conflicts between rules.
We found that hidden conflicts are contingent on the forthcoming data plane traffic and hence, cannot be detected with certainty. In response, we have developed a hidden conflict predictor that speculatively provokes action from control applications to acquire the information necessary for the detection of hidden conflicts. Our current predictor design is application independent but, as a corollary, introduces the risk of changing application state and behaviour in an undesired manner. Hence, in future research we propose to isolate prediction primitives in order to make them available to application developers for use at design time, allowing hidden conflict prediction to be an integral part of applications.
Footnotes
1. https://www.opennetworking.org/wp-content/uploads/2014/10/openflow-switch-v1.3.5.pdf
2. https://ryu.readthedocs.io/en/latest/
3. https://iperf.fr/
4. https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
5. https://ryu.readthedocs.io/en/latest/app/ofctl_rest.html
References
1. Al-Shaer E, Al-Haj S. FlowChecker: configuration analysis and verification of federated OpenFlow infrastructures. In: Proceedings of the 3rd ACM workshop on assurable and usable security configuration (SafeConfig '10). New York: Association for Computing Machinery; 2010. p. 37–44.
2. Al-Shaer E, Hamed H, Boutaba R, Hasan M. Conflict classification and analysis of distributed firewall policies. IEEE J Sel Areas Commun. 2005;23(10):2069–84.
3. Al-Shaer ES, Hamed HH. Firewall policy advisor for anomaly discovery and rule editing. In: Proceedings of the IFIP/IEEE eighth international symposium on integrated network management, Colorado Springs, CO, USA; 2003. p. 17–30.
4. AuYoung A, Ma Y, Banerjee S, Lee J, Sharma P, Turner Y, Liang C, Mogul JC. Democratic resolution of resource conflicts between SDN control programs. In: Proceedings of the 10th ACM international conference on emerging networking experiments and technologies (CoNEXT '14). New York: Association for Computing Machinery; 2014. p. 391–402.
5. Chowdhary A, Alshamrani A, Huang D. SUPC: SDN enabled universal policy checking in cloud network. In: Proceedings of the 2019 international conference on computing, networking and communications (ICNC); 2019. p. 572–6.
6. Chowdhary A, Huang D, Ahn G-J, Kang M, Kim A, Velazquez A. SDNSOC: object oriented SDN framework. In: Proceedings of the ACM international workshop on security in software defined networks and network function virtualization (SDN-NFVSec '19). New York: Association for Computing Machinery; 2019. p. 7–12.
7. Danciu V, Guggemos T, Kranzlmüller D. Schichtung virtueller Maschinen zu Labor- und Lehrinfrastruktur. In: 9. DFN-Forum Kommunikationstechnologien. Bonn: Gesellschaft für Informatik e.V.; 2016. p. 35–44.
8. Ferguson AD, Guha A, Liang C, Fonseca R, Krishnamurthi S. Hierarchical policies for software defined networks. In: Proceedings of the first workshop on hot topics in software defined networks (HotSDN '12). New York: Association for Computing Machinery; 2012. p. 37–42.
9. Göransson P, Black C. Software defined networks: a comprehensive approach. 1st ed. Morgan Kaufmann; 2014.
10. Hamed H, Al-Shaer E. Taxonomy of conflicts in network security policies. IEEE Commun Mag. 2006;44(3):134–41.
11. Kazemian P, Chang M, Zeng H, Varghese G, McKeown N, Whyte S. Real time network policy checking using header space analysis. In: Proceedings of the 10th USENIX conference on networked systems design and implementation (NSDI '13). USENIX Association; 2013. p. 99–112.
12. Khurshid A, Zhou W, Caesar M, Godfrey PB. VeriFlow: verifying network-wide invariants in real time. In: Proceedings of the first workshop on hot topics in software defined networks (HotSDN '12). New York: Association for Computing Machinery; 2012. p. 49–54.
13. Li J, Gu Z, Ren Y, Wu H, Shi SS. A software-defined address resolution proxy. In: Proceedings of the 2017 IEEE symposium on computers and communications (ISCC); 2017. p. 404–10.
14. Lupu E, Sloman M. Conflict analysis for management policies. In: Proceedings of the international symposium on integrated network management. Springer; 1997. p. 430–43.
15. Lupu EC, Sloman M. Conflicts in policy-based distributed systems management. IEEE Trans Softw Eng. 1999;25(6):852–69.
16. McKeown N, Anderson T, Balakrishnan H, Parulkar G, Peterson L, Rexford J, Shenker S, Turner J. OpenFlow: enabling innovation in campus networks. SIGCOMM Comput Commun Rev. 2008;38(2):69–74.
17. Miserez J, Bielik P, El-Hassany A, Vanbever L, Vechev M. SDNRacer: detecting concurrency violations in software-defined networks. In: Proceedings of the 1st ACM SIGCOMM symposium on software defined networking research (SOSR '15). New York: Association for Computing Machinery; 2015.
18. Moffett JD, Sloman MS. Policy conflict analysis in distributed system management. J Organ Comput Electron Commer. 1994;4(1):1–22.
19. Mogul JC, AuYoung A, Banerjee S, Popa L, Lee J, Mudigonda J, Sharma P, Turner Y. Corybantic: towards the modular composition of SDN control programs. In: Proceedings of the twelfth ACM workshop on hot topics in networks (HotNets-XII). New York: Association for Computing Machinery; 2013.
20. Pfaff B, Pettit J, Koponen T, Jackson EJ, Zhou A, Rajahalme J, Gross J, Wang A, Stringer J, Shelar P, Amidon K, Casado M. The design and implementation of Open vSwitch. In: Proceedings of the 12th USENIX conference on networked systems design and implementation (NSDI '15). USENIX Association; 2015. p. 117–30.
21. Pisharody S. Policy conflict management in distributed SDN environments. PhD thesis, Arizona State University; 2017.
22. Pisharody S, Natarajan J, Chowdhary A, Alshalan A, Huang D. Brew: a security policy analysis framework for distributed SDN-based cloud environments. IEEE Trans Dependable Secure Comput. 2019;16(6):1011–25.
23. Porras P, Shin S, Yegneswaran V, Fong M, Tyson M, Gu G. A security enforcement kernel for OpenFlow networks. In: Proceedings of the first workshop on hot topics in software defined networks (HotSDN '12). New York: Association for Computing Machinery; 2012. p. 121–6.
24. Shin S, Porras PA, Yegneswaran V, Fong MW, Gu G, Tyson M. FRESCO: modular composable security services for software-defined networks. In: Proceedings of the 20th annual network & distributed system security symposium (NDSS), San Diego, CA, USA; 2013.
25. Sun P, Mahajan R, Rexford J, Yuan L, Zhang M, Arefin A. A network-state management service. ACM SIGCOMM Comput Commun Rev. 2014;44(4):563–74.
26. Tran CN, Danciu V. On conflict handling in software-defined networks. In: Proceedings of the 2018 international conference on advanced computing and applications (ACOMP). CPS; 2018. p. 50–7.
27. Tran CN, Danciu V. A general approach to conflict detection in software-defined networks. SN Comput Sci. 2019;1(1):9.
28. Tran CN, Danciu V. Hidden conflicts in software-defined networks. In: Proceedings of the 2019 international conference on advanced computing and applications (ACOMP). IEEE; 2019. p. 127–34.
29. Xu L, Huang J, Hong S, Zhang J, Gu G. Attacking the brain: races in the SDN control plane. In: Proceedings of the 26th USENIX security symposium (SEC '17). USENIX Association; 2017. p. 451–68.
30. Zarca AM, Bagaa M, Bernabe JB, Taleb T, Skarmeta AF. Semantic-aware security orchestration in SDN/NFV-enabled IoT systems. Sensors. 2020;20(13):3622.
Acknowledgements
The authors wish to thank the members of the Munich Network Management Team (www.mnm-team.org), directed by Prof. Dr. Dieter Kranzlmüller, for valuable comments on previous versions of this paper.
Funding
Open Access funding provided by Projekt DEAL.
Author information
Ludwig-Maximilians-Universität München, Oettingenstr. 67, 80538 München, Germany
Vitalian Danciu & Cuong Ngoc Tran
Correspondence to Cuong Ngoc Tran.
This article is part of the topical collection "Software Technology and Its Enabling Computing Platforms" guest edited by Lam-Son Lê and Michel Toulouse.
Danciu, V., Tran, C.N. Side-Effects Causing Hidden Conflicts in Software-Defined Networks. SN COMPUT. SCI. 1, 278 (2020). https://doi.org/10.1007/s42979-020-00282-0
Keywords: Conflict detection · Conflict prediction · Software-defined networks · Speculative provocation
Fixed point theorems
It is surprising that fixed point theorems (FPTs) appear in so many different contexts throughout Mathematics: Applying Kakutani's FPT earned Nash a Nobel prize; I am aware of some uses in logic; and of course everyone should know Picard's Theorem in ODEs. There are also results about local and global structure OF the fixed points themselves, and quite some famous conjectures (also labeled FPT for the purpose of this question).
Many results are so far removed from my field that I am sure there are plenty of FPTs out there that I have never encountered. I know of several, and will post later if you do not beat me to them :)
Community wiki rules apply. One FPT per answer, preferably with an inspiring list of interesting applications.
Tags: big-list, fixed-point-theorems
Rodrigo A. Pérez
$\begingroup$ Not a FPT but a book: "Fixed point theory" by Granas and Dugundji. $\endgroup$ – jbc Apr 10 '13 at 9:01
$\begingroup$ Also: Journal of Fixed Point Theory and Applications, Fixed Point Theory and Applications, Fixed Point Theory, Advances in Fixed Point Theory, and JP Journal of Fixed Point Theory and Applications. $\endgroup$ – Rodrigo A. Pérez Apr 10 '13 at 13:19
$\begingroup$ Not so surprising, perhaps: whenever you want to construct an object whose definition involves the object again, you want to construct some fixed point. This is a general and natural thing to want. $\endgroup$ – Qiaochu Yuan Apr 11 '13 at 1:46
$\begingroup$ Nash actually used the Brouwer FPT. David Gale suggested to him that he could use the Kakutani FPT. $\endgroup$ – Michael Greinecker Jun 27 '13 at 6:49
$\begingroup$ Some nice order-theoretical fixed point theorems (and also related interesting problems) are discussed in B. Schröder's survey dx.doi.org/10.1016/S0304-3975(98)00273-4 (see also dx.doi.org/10.1007/s40065-012-0049-7) and his book dx.doi.org/10.1007/978-1-4612-0053-6 . $\endgroup$ – Michał Kukieła Apr 19 '14 at 17:08
The Lefschetz Fixed Point Theorem is wonderful. It generalizes the Fixed Point Theorem of Brouwer, and is an indispensable tool in topological analysis of dynamical systems.
The weakest form goes like this. For any continuous function $f:X \to X$ from a triangulable space $X$ to itself, let $H_\ast f:H_\ast X\to H_\ast X$ denote the induced endomorphism of the rational homology groups. If the alternating sum (over dimension) of the traces
$$\Lambda(f) := \sum_{d \in \mathbb{N}}(-1)^d\text{ Tr}(H_df)$$
is non-zero, then $f$ has a fixed point! Since everything is defined in terms of homology, which is a homotopy invariant, one gets to add "for free" the conclusion that any other self-map of $X$ homotopic to $f$ also has a fixed point.
When $f$ is the identity map, $\Lambda(f)$ equals the Euler characteristic of $X$.
Update: Here is a lively document written by James Heitsch as a tribute to Raoul Bott. Along with an outline of the standard proof of the LFPT, you can find a large list of interesting applications.
Vidit Nanda
$\begingroup$ One cute application is to the fundamental theorem of algebra: a linear map $f : \mathbb{C}^{n+1} \to \mathbb{C}^{n+1}$ has an eigenvector iff the induced map on projective spaces has a fixed point. $\mathbb{CP}^n$ has Euler characteristic $n+1$ and $\text{GL}_n(\mathbb{C})$ is path-connected, so the conclusion follows by the Lefschetz fixed point theorem. The corresponding calculation for real projective spaces is enlightening as to "why" FTA fails over the reals: $\mathbb{RP}^n$ has Euler characteristic $0$ if $n$ is odd and $1$ if $n$ is even... $\endgroup$ – Qiaochu Yuan Apr 11 '13 at 1:13
$\begingroup$ Another immediate corollary, of the intersection number form of the theorem (the Lefschetz invariant of f equals the intersection no. of the graph of f with the diagonal), is the injectivity of the representation of the group of holomorphic automorphisms of a compact Riemann surface X of genus > 1, as a group of linear automorphisms on homology. I.e. a non trivial automorphism of X cannot induce the identity on homology. This yields the same statement for the representation as automorphisms of the Jacobian of X. $\endgroup$ – roy smith Apr 11 '13 at 17:28
Lawvere's fixed point theorem. If $f \colon A \to Y^A$ is a surjective morphism in a Cartesian closed category, then any $t \colon Y \to Y$ has a fixed point.
(Surjectivity is a technical term, which basically means that any $g \colon A \to Y$ equals $f(a)$ on points for some point $a$ of $A$. See here)
Applications: Cantor's diagonal argument, Turing's halting problem, Russell's paradox, Gödel's incompleteness theorem, Tarski's incompleteness theorem, Rice's theorem, and many more, see here.
Chris Heunen
$\begingroup$ My favorite fixed point theorem! I'm glad that other people also call it Lawvere's fixed point theorem; I didn't know if it had a good name. An application which is not in Yanofsky's paper is to the construction of the Y combinator (en.wikipedia.org/wiki/Fixed-point_combinator#Y_combinator). $\endgroup$ – Qiaochu Yuan Apr 11 '13 at 1:04
$\begingroup$ Does this fixed point lemma have an application outside of logic and set theory? $\endgroup$ – Martin Brandenburg Apr 12 '13 at 12:04
$\begingroup$ Chris, could you please elaborate the statement a bit? What is $A$ here? Any single object of category? What is a fixed point for a morphism in a cartesian closed category? $\endgroup$ – Anton Fetisov Jun 7 '13 at 21:49
$\begingroup$ Anton: Yes, $A$ is a single object, and a fixed point for a morphism $t \colon Y \to Y$ is a map $y \colon 1 \to Y$ satisfying $t \circ y = y$. I can really recommend Lawvere's paper, it's a great read. $\endgroup$ – Chris Heunen Jun 8 '13 at 15:00
$\begingroup$ Martin: I don't know -- in Yanofski's words: "As for more instances [Lawvere's fixed point theorem], the field is wide open" ... $\endgroup$ – Chris Heunen Jun 8 '13 at 15:01
Euler's Theorem, that every non-trivial rotation $R$ of 3-space has a unique axis. It really just says that $R$, acting on the space of lines through the origin, has a unique fixed point.
(Added April 11, 2013) I just received my copy of the latest issue of The Journal of Fixed Point Theory and its Applications (Vol.12, Nos. 1--2) and starting on page 27 there is an article with the title "Chasles' fixed point theorem for Euclidean motions". Chasles' theorem is a generalization of Euler's Theorem; it says that every orientation preserving Euclidean motion of 3-space that is not a pure translation is a "twist" or "screw motion", that is, a rotation about some unique line (NOT necessarily through the origin) called the axis followed by a translation that is parallel to the axis. I really should have given this as my example rather than Euler's Theorem, since as I said it is more general. And I have no excuse for not recalling it since the authors of that paper are myself and my son Bob.
Knaster-Tarski's fixed point theorem: If $L$ is a complete lattice and $f:L \rightarrow L$ is order preserving, then the set of fixed points of $f$ form a (non-empty) complete lattice.
Nate Ackerman
$\begingroup$ More generally, if $C$ is a category with colimits of $\omega$-chains and an initial object $0$, then every functor $F : C \to C$ has an initial $F$-algebra (namely the colimit of $0 \to F(0) \to F(F(0)) \to \dotsc$). Actually this gives a neat construction of the Banach space $L^1([0,1])$, including the integral $L^1([0,1]) \to \mathbb{R}$, see mathoverflow.net/questions/23143 $\endgroup$ – Martin Brandenburg Apr 10 '13 at 13:10
$\begingroup$ @Martin: no, you also need for $F$ to preserve colimits of $\omega$-chains. (E.g., otherwise you could prove that the covariant power-set functor on $Set$ has an initial algebra, which would run counter to Cantor's theorem.) $\endgroup$ – Todd Trimble♦ Apr 10 '13 at 20:14
$\begingroup$ Sure. I wish I could edit comments. In the link the statement is correct ;). $\endgroup$ – Martin Brandenburg Apr 11 '13 at 20:43
$\begingroup$ Application: The (Cantor–)Schroeder-Bernstein Theorem. The proof is described in the second paragraph of this answer: mathoverflow.net/questions/42485/…. $\endgroup$ – Zach N May 28 '13 at 6:55
Let $p$ be a prime and let $G$ be a finite $p$-group which acts on a finite set $X$. If $p$ does not divide $|X|$, then this action has a fixed point.
This has many applications, e.g. the proof of the fact that Sylow subgroups are conjugate.
$\begingroup$ This is false. For example, let $G = C_p \times C_q$. Then $G$ acts on a set of size $p$ and on a set of size $q$, hence on a set of size $p + q$. If $p, q > 1$ then this action doesn't have a fixed point, but we can arrange to have $\gcd(pq, p + q) = 1$ (e.g. take $p = 2, q = 3$). The correct statement is that $G$ needs to be a $p$-group. $\endgroup$ – Qiaochu Yuan Apr 11 '13 at 1:02
$\begingroup$ Nice example. You can get the full Sylow theorem(s) from this fixed point theorem (not just the conjugacy), see mathoverflow.net/questions/18716/sylow-subgroups/19543#19543. $\endgroup$ – j.p. May 28 '13 at 8:17
Brouwer's FPT: Every continuous function from a closed ball in $\mathbb{R}^n$ to itself has a FP.
For applications see this question.
Ryll-Nardzewski FPT: If $K$ is a nonempty weakly compact convex subset of a Banach space, then every semigroup of affine isometries of $K$ has a common fixed point.
This implies the existence of Haar measures on compact groups.
$\begingroup$ Also used all over the place in various reaults about Banach algebras, especially in questions regarding derivations $\endgroup$ – Yemon Choi Apr 11 '13 at 2:49
$\begingroup$ Similar en.wikipedia.org/wiki/Markov–Kakutani_fixed-point_theorem also proves the existence of Haar measures on compact groups. This is explained in Rudin, for example. I wonder whether the locally compact group case corresponds to some fixed point theorem. $\endgroup$ – Fedor Petrov Dec 8 '18 at 23:19
Suppose that $S$ is a finite set with an odd number of elements. Then every involution $f:S\rightarrow S$ has a fixed point.
Application: Every prime of the form $p=4m+1$ may be written as a sum of two squares. The result above is used on p.20 here.
Also, although not of the usual fixed point theorem form, there is something I call the fixed point factor theorem: if $f:\mathbb{C}\rightarrow \mathbb{C}$ is a polynomial, then $f(x)-x$ is a factor of $f^n(x)-x$ for every natural $n>1$. This has a very obvious generalisation...
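If you want to convince yourself quickly, here is a small SymPy check of the divisibility claim for one sample polynomial (any polynomial works, since $f(x)\equiv x \pmod{f(x)-x}$ implies $f^n(x)\equiv x$):

```python
# Check that f(x) - x divides f^3(x) - x for a sample polynomial.
from sympy import symbols, rem, expand

x = symbols('x')
f = x**2 - 3*x + 1
fn = f
for _ in range(2):                  # build f^3 = f(f(f(x)))
    fn = expand(fn.subs(x, f))
print(rem(fn - x, f - x, x))        # prints 0: f(x) - x divides f^3(x) - x
```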
There is the Bruhat-Tits theorem that a group acting by isometries on a CAT(0) space with a bounded orbit has a fixed point. This is often applied to compact subgroups of groups acting on Euclidean buildings.
Benjamin Steinberg
$\begingroup$ Indeed! And thanks to Davis' proof that arbitrary buildings admit a CAT(0) realization, it generalizes to those as well. There, a fixed point corresponds to a spherical residue being stabilized. This has many useful applications, as it can often be used to reduce from a general non-spherical Kac-Moody group (with infinite Weyl group) to a spherical one -- i.e., to an algebraic group, where one can then apply all the usual tools. Very handy! $\endgroup$ – Max Horn May 27 '13 at 22:00
Kleene's Second Recursion Theorem If $F$ is a total computable function then there is an index $e$ such that $\{e\} \simeq \{F(e)\}$.
This has many applications such as effective transfinite recursion.
The Banach fixed-point theorem (or contraction mapping principle) was already mentioned by Rodrigo A. Pérez, but I would like to stress another application. The principle says that a contraction of a complete metric space $(X,d)$ (namely, a continuous function $f:X\to X$ such that $d\big(f(x),f(y)\big)\leq \rho d(x,y)$ for each $x,y\in X$ where $\rho<1$ is some positive constant depending on $f$ only) has a unique fixed point.
In his milestone 1981 paper Fractals and self similarity (Indiana Univ. Math. J., vol. 30, n. 5), J. Hutchinson axiomatized the relation between fractals and collections of contractions of $\mathbb{R}^n$. He showed that for each set $\mathscr{S}=\{S_1,\dots,S_N\}$ of contractions $S_i\colon\mathbb{R}^n\to\mathbb{R}^n$, there exists a unique closed, bounded set $K$ such that $$ K=\bigcup_{i=1}^N S_i(K)\;. $$ Such fixed closed sets are "fractals" in a very natural way. For instance, the Koch curve can be obtained in $\mathbb{R}^2$ by using two contractions (see p. 729 of Hutchinson's work), as well as the Cantor set - for this, take $\mathscr{S}=\{S_1,S_2\}$ with $$ S_1(x)=\frac{x}{3}\quad\text{and}\quad S_2(x)=\frac{x}{3}+\frac{2}{3}\;. $$ The three-line proof of the existence of $K$ is an application of the contraction mapping principle (and is Theorem 1 on p. 728 of Hutchinson's work) and goes as follows: let, as before, $n\geq 1$ and $\mathscr{S}=\{S_1,\dots,S_N\}$ be contractions of $\mathbb{R}^n$. Let $\mathscr{B}$ be the set of all closed bounded subsets of $\mathbb{R}^n$ and, for two bounded closed $A,B\in\mathscr{B}$, let $\delta(A,B)=\sup \{d(a,B),d(b,A):a\in A,b\in B\}$. This turns $(\mathscr{B},\delta)$ into a complete metric space for which $$ \mathscr{S}:A\mapsto \bigcup _{i=1}^{N}S_i(A) $$ is a contraction. Hence, there is a unique fixed point $K\in\mathscr{B}$. Needless to say, one can replace $\mathbb{R}^n$ with any other complete metric space without affecting the proof.
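As a quick numerical illustration (my own toy example, not from Hutchinson's paper), iterating the two Cantor contractions on any bounded starting set converges to the attractor:

```python
# Iterating S(A) = S1(A) ∪ S2(A) from a bounded starting set approaches
# the Cantor set in the Hausdorff metric.
def S1(x): return x / 3
def S2(x): return x / 3 + 2 / 3

A = {0.0, 1.0}                       # any closed bounded set works
for _ in range(10):
    A = {S(x) for S in (S1, S2) for x in A}
print(len(A), min(A), max(A))        # 2048 points spread over [0, 1]
```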
Filippo Alberto Edoardo
Here is a teeny tiny toy version of the Lefschetz fixed point theorem: let $f : S \to S$ be an endomorphism of a finite set and let $K[f] : K[S] \to K[S]$ be the induced linear map on free vector spaces. Then $\text{tr}(K[f])$ is the number of fixed points of $f$. This is one way to prove Burnside's lemma.
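A few-line check of this in Python (toy example):

```python
# The trace of the induced linear map K[f] counts the fixed points of f.
import numpy as np

S = range(5)
f = {0: 0, 1: 3, 2: 2, 3: 1, 4: 0}   # fixed points: 0 and 2
M = np.zeros((5, 5))
for s in S:
    M[f[s], s] = 1                    # column s carries basis vector s to f(s)
assert np.trace(M) == sum(1 for s in S if f[s] == s) == 2
```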
Qiaochu Yuan
$\begingroup$ For a finite $S$ it is presumably easier to count the fixed points of $f$ "by hand". Of course, this old-school method doesn't quite give you Burnside's Lemma... $\endgroup$ – Vidit Nanda Apr 11 '13 at 2:48
One of the most awesome fixed-point theorems I know of is due to Pataraia:
If $L$ is a poset with a bottom element and with joins of directed subsets, then every monotone function $f: L \to L$ has a (least) fixed point.
It is a strengthening of the Knaster-Tarski theorem, and is somewhat reminiscent of the Bourbaki-Witt theorem, but is entirely constructive. Related discussion at the n-Category Café here.
Todd Trimble
The infinite dimensional version of Brouwer's FPT is Schauder's FPT. If $K$ is a non-void closed convex subset of a TVS, and $f:K\rightarrow K$ is compact ($f$ is continuous and $f(K)$ is compact), then $f$ has a fixed point.
It has numerous applications in nonlinear analysis. One of the earliest was the existence of a solution to the stationary Navier-Stokes equations with Dirichlet boundary condition, proven by J. Leray.
$\begingroup$ It is worth mentioning the sensationally short proof given by Lomonosov of his theorem that every continuous linear mapping on a Banach space which commutes with a non-zero compact operator has a non-trivial invariant subspace. This was then the strongest positive result on the invariant subspace problem (and might still be for all I know) and the key ingredient was the Schauder-Tychonoff FPT. $\endgroup$ – jbc Apr 11 '13 at 15:54
Kakutani's FPT: Let $S$ be a non-empty, compact, convex subset of $\mathbb{R}^n$, and $\varphi:S \longrightarrow 2^S$ a set-valued function with a closed graph and the property that $\varphi(x)$ is non-empty and convex for all $x \in S$. Then $\varphi$ has a fixed point.
Application: Consider a game with finitely many players and finitely many strategies. If players are allowed to choose mixed strategies, there is always a Nash equilibrium; that is, a set of strategy choices for all players such that no player can do better by unilaterally switching to a different strategy. This is the theorem that resulted in J. Nash getting the 1994 Nobel Prize in Economics.
$\begingroup$ Another contribution to the theme "FPT's and Nobel Prizes in economics". The Arrow-Debreu theory of equilibrium in economics uses the Brouwer FPT and its extension by Kakutani in an essential way. Both are laureates and this theory is generally regarded as one of their most significant contributions. $\endgroup$ – jbc Apr 11 '13 at 16:08
$\begingroup$ @jbc I think McKenzie used the Kakutani FPT; but the original proof of Arrow and Debreu relied on Debreu's A Social Equilibrium Existence Theorem, which was proven with the Eilenberg-Montgomery FPT. $\endgroup$ – Michael Greinecker Jun 27 '13 at 6:57
Allow me to mention another version of the Lefschetz fixed-point theorem. If $X$ is a (say smooth projective, though this works in greater generality) variety over $\mathbb F_q$ of dimension $d$, then \begin{equation} \left|X\left(\mathbb F_{q^n}\right)\right| = q^d \sum_i (-1)^i \mathrm{tr}\left(\Phi_{q^n} : H_{et}^i(\bar X,\mathbb Q_\ell)\right) \end{equation} where $\ell$ is prime to $q$ and $\Phi_{q^n}$ is the geometric Frobenius.
As a corollary one gets the rationality of the zeta-function of $X$.
(Note that this actually is a fixed-point theorem. $X(\mathbb F_{q^n})$ is just the set of fixed points under $\Phi_{q^n}$ applied to $X$.)
I forgot who proved it, but the statement is nice and very easy to prove: A function $f:X\to X$ is fixed point free if and only if there is a partition of $X$ into three subsets s.t. $f$ maps each of the three subsets into the union of the other two. An immediate application is that if $f$ is fixed point free on a set $X$ then so is its continuous extension to a function on $\beta X$.
Banach's FPT (or contraction FPT): Every contraction in a complete metric space has a unique FP.
Application: If $f(t,y(t))$ is a real-valued function, Lipschitz continuous in $ y$ and continuous in $t$, then the initial value problem $$y'(t) = f(t,y(t)),\quad y(t_0)=y_0,\quad t \in [t_0-\varepsilon,t_0+\varepsilon]$$ has a unique solution.
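For a concrete feel of the contraction at work, here is a rough Picard-iteration sketch (a toy implementation with trapezoidal quadrature, not a robust solver) for $y'=y$, $y(0)=1$, whose solution is $e^t$:

```python
# Picard iteration: (T phi)(t) = y0 + \int_{t0}^{t} f(s, phi(s)) ds is a
# contraction for small enough intervals; its fixed point solves the IVP.
import math

def picard(f, t0, y0, t, n_iters=20, steps=200):
    h = (t - t0) / steps
    ts = [t0 + i * h for i in range(steps + 1)]
    phi = [y0] * (steps + 1)                    # initial guess: phi_0 == y0
    for _ in range(n_iters):
        new, acc = [y0], 0.0
        for i in range(steps):                  # trapezoidal quadrature
            acc += 0.5 * h * (f(ts[i], phi[i]) + f(ts[i + 1], phi[i + 1]))
            new.append(y0 + acc)
        phi = new                               # next Picard iterate
    return phi[-1]

print(picard(lambda t, y: y, 0.0, 1.0, 1.0), math.exp(1))   # both ~ 2.71828
```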
$\begingroup$ ...has a unique solution, provided ε is sufficiently small. $\endgroup$ – Dick Palais Apr 11 '13 at 5:43
$\begingroup$ Another application is a nice proof of the Inverse Function Theorem. $\endgroup$ – jbc Apr 11 '13 at 15:45
$\begingroup$ And with very little extra work you also obtain smooth dependence on initial conditions, and the vector field $f$ itself, see this 2 page article by Robbin the proceedings of the AMS (1968). $\endgroup$ – Jaap Eldering Apr 19 '14 at 19:54
The Arithmetic fixed point theorem (see also MO/30874) states that if $F$ is a formula in number theory with only one free variable $v$, then there is a sentence $A$ such that number theory can prove $A \Leftrightarrow F_v(\underline{[A]})$. An immediate application is Gödel's Theorem.
Martin Brandenburg
The Fiber contraction theorem due to Hirsch and Pugh:
Let $F: E \to E$ be a mapping on the fiber bundle $\pi: E \to B$ covering $f: B \to B$, where $B$ is a topological space and the fibers $Y$ of $E$ are complete metric spaces. Let $f$ have a globally attractive fixed point $b \in B$, let the fiber mapping be a uniform contraction in a neighborhood $\pi^{-1}(U)$, $b \in U \subset B$ (so that there exists a unique fixed point $e = (b,y) \in \pi^{-1}(b)$), and let $b \mapsto F(b,y)$ be continuous. Then $e$ is the unique, globally attracting fixed point of $F$.
This result is an extension of the Banach fixed point theorem that can be used to prove, e.g., the existence of center manifolds and normally hyperbolic invariant manifolds. It is specifically useful when one cannot find a contraction on a space of $C^k$ functions, but can construct inductively a contraction on the $k$-th jet once the $(k-1)$-jets are known to converge to a fixed point.
I thought this result was a bit interesting. Mahlon M. Day showed in the paper [1] that the amenable groups are precisely the groups for which the Markov-Kakutani theorem holds.
If $(X,\mathcal{M})$ is an algebra of sets, then a function $\mu:\mathcal{M}\rightarrow[0,1]$ is said to be a finitely additive probability measure if $\mu(\emptyset)=0$, $\mu(X)=1$, and $\mu(A\cup B)=\mu(A)+\mu(B)$ whenever $A,B\in\mathcal{M}$ and $A\cap B=\emptyset$. If $G$ is a group, then a finitely additive probability measure $\mu:P(G)\rightarrow[0,1]$ on the algebra of sets $(G,P(G))$ is said to be left-invariant if $\mu(aR)=\mu(R)$ for each $a\in G$ and $R\subseteq G$.
A group $G$ is said to be amenable if there exists a left-invariant finitely additive probability measure $\mu:P(G)\rightarrow[0,1]$. For example, every finite group is amenable, and every abelian group is amenable. Furthermore, the class of amenable groups is closed under taking quotients, subgroups, direct limits, and finite products.
Let $C$ be a convex subset of a real vector space. Then a function $f:C\rightarrow C$ is said to be an affine map if $f(\lambda x+(1-\lambda)y)=\lambda f(x)+(1-\lambda)f(y)$ for each $\lambda\in[0,1]$ and $x,y\in C$.
$\textbf{Theorem}$ (Day). Let $G$ be a group. Then the following are equivalent.
1. $G$ is amenable.
2. Let $X$ be a Hausdorff topological vector space and let $C\subseteq X$ be a compact convex subset. Let $\phi:G\rightarrow C^{C}$ be a group action such that each $\phi(g)$ is a continuous affine map. Then there is a point in $C$ fixed by every element of $G$.
3. Let $X$ be a locally convex topological vector space and let $C\subseteq X$ be a compact convex subset. Let $\phi:G\rightarrow C^{C}$ be a group action such that each $\phi(g)$ is a continuous affine map. Then there is a point in $C$ fixed by every element of $G$.
[1] M. M. Day, Fixed-point theorems for compact convex sets, Illinois J. Math., 5 (1961), 585-590.
[2] T. Ceccherini-Silberstein and M. Coornaert, Cellular Automata and Groups, Springer, Heidelberg, 2010.
Joseph Van Name
Arnold's Conjecture: A Hamiltonian diffeomorphism of a compact symplectic manifold $(M,\omega)$ has at least as many fixed points as the minimal number of critical points of a smooth function on $M$.
$\begingroup$ (the Hamiltonian and the function had better be nondegenerate) $\endgroup$ – Nathaniel Bottman Apr 12 '13 at 0:47
Alexander Abian (1923-1999) proved around 1998 the following result, which he named "the most fundamental fixed-point theorem". "Let F be a mapping from a set A into itself. Let G(0,x)=x, G(1,x)=F(x), G(2,x)=F(F(x)), ... be the iterated values of the function F at the argument x in A. Then F has a fixed point if and only if there exists an element x of A such that, for every ordinal v, G(v,x) is an element of A, and if G(v,x) is not a fixed point of F then the G(u,x) for u∈v are all distinct elements of A." Details can be found at http://us2.metamath.org:88//abianfp.html
Gérard Lang
$\begingroup$ Please read the correct source as "us2.metamath.org:88//mpegif//abianfp.html" $\endgroup$ – Gérard Lang May 28 '13 at 14:29
$\begingroup$ Now there's a name from Usenet past. $\endgroup$ – Aaron Bergman May 28 '13 at 15:02
The main theorem of Smith theory asserts that if a $p$-group $G$ acts on a mod-$p$ acyclic space $X$ (which must also be 'finitistic', a fairly weak condition), then the fixed point set $X^G$ is also mod-$p$ acyclic; in particular, it is non-empty.
This is especially useful because $X$ is not assumed to be compact, as is the case for the Lefschetz fixed point theorem, say.
HJRW
Another one, from MR0151632, Michel Hervé, Several complex variables. Local theory, published for the Tata Institute of Fundamental Research, Bombay, by Oxford University Press, London, 1963, vii+134 pp.
Let $G$ be an open and connected subset of an affine space $X$. If the image $f(G)$ under a holomorphic map $f: G \to G$ is relatively compact in $G$, then $f$ has a unique fixed point.
The proof uses Montel's theorem and the fact that every compact analytic subset of an affine space must be finite.
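A minimal illustration (my example): take $G$ the open unit disc in $\mathbb C$ and $f(z) = \frac{z}{2} + \frac{1}{4}$. Then $f(G)$ is the disc $|z - \frac{1}{4}| < \frac{1}{2}$, which is contained in $|z| \le \frac{3}{4}$ and hence relatively compact in $G$, and the unique fixed point is $z = \frac{1}{2}$.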
Margaret Friedland
The Caristi fixed point theorem is a generalisation of the Banach fixed point theorem.
Theorem. Let $(X, d)$ be a complete metric space, let $T : X \rightarrow X$ be a map, and let $f : X \rightarrow [0, +\infty)$ be lower semicontinuous. Suppose that, for all points $x$ in $X$,
$$d \big( x, T(x) \big) \leq f(x) - f \big( T(x) \big).$$
Then $T$ has a fixed point in $X$.
For a contraction $T$, take $f(x) = \sum_{k \ge 0} d(T^{k+1}(x),T^{k}(x))$ to recover the Banach fixed point theorem.
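To verify this choice of $f$ (a routine check, spelled out here for completeness): if $d(T(x),T(y)) \le q\, d(x,y)$ with $q<1$, then $d(T^{k+1}(x),T^{k}(x)) \le q^k d(T(x),x)$, so the series converges and $$f(x) \le \frac{d(x,T(x))}{1-q} < \infty;$$ moreover $f$ is continuous (hence lower semicontinuous), and the series telescopes to give $f(x) - f(T(x)) = d(x,T(x))$, which is Caristi's inequality with equality.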
coudy
I really like the following result, which allows one to drop the usual compactness assumption.
Okhezin's theorem: For a polyhedron $K$ and a continuous map $f\colon K\to K$, at least one of the following conditions is true:
$f$ has a fixed point;
$f$ is not nullhomotopic;
$K$ contains a closed subset homeomorphic to $[0,\infty)$ (a closed ray).
Since $[0,\infty)$ is an absolute retract without the fixed point property, no polyhedron containing it as a closed subset has the fixed point property. This gives the following corollary.
Corollary (Okhezin): A contractible polyhedron has the fixed point property if and only if it is rayless, i.e. contains no closed subset homeomorphic to $[0,\infty)$.
This was not noticed by Okhezin, but the following stronger result is implied.
Corollary: An acyclic polyhedron has the fixed point property if and only if it is rayless.
Proof: As noted above, the "only if" part is obvious. For the "if" part, let $f\colon K\to K$ be a self-map of an acyclic, rayless polyhedron. The suspension $SK$ is a contractible, rayless polyhedron. Thus, by the results of Okhezin, the map $\tilde{f}\colon SK\to SK$ that extends $f$ and swaps the two added cones has a fixed point, which must also be a fixed point of $f$.
Okhezin also proved some fixed point theorems that apply to other classes of rayless spaces, including some Lefschetz-type results.
Michał Kukieła
Let $X$ be a nonempty compact Hausdorff space, and $f\colon X\to X$ be continuous. Denote by $\mathcal P(X)$ the powerset of $X$. Then the function $f^+\colon\mathcal P(X)\to\mathcal P(X)$ defined by $f^+(A)=f[A]$ has a fixed point $f^+(A)=A$, where $A\subseteq X$ is nonempty and closed.
R salimi
$\begingroup$ Don't you want to assume that $X$ is not discrete? $\endgroup$ – Martin Brandenburg Apr 10 '13 at 12:55
$\begingroup$ It does work for nonempty discrete $X$. $\endgroup$ – Emil Jeřábek supports Monica Apr 10 '13 at 13:45
$\begingroup$ @R salimi: Can you explain your notation for readers from different areas of Mathematics? $\endgroup$ – Rodrigo A. Pérez Apr 10 '13 at 14:14
$\begingroup$ I have taken the liberty to clarify the notation. $\endgroup$ – Emil Jeřábek supports Monica Apr 10 '13 at 15:21
$\begingroup$ Notice that this is a special case of the Pataraia fixed-point theorem from Todd Trimble's answer ($L$ is the poset of nonempty closed subspaces of $X$ ordered by reverse inclusion). $\endgroup$ – Emil Jeřábek supports Monica Apr 12 '13 at 12:06
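For finite $X$ the fixed set can be computed directly, which may help readers from other areas see what is going on. A minimal Python sketch (names mine): iterate $A \mapsto f[A]$ starting from $A = X$; the chain $X \supseteq f[X] \supseteq f[f[X]] \supseteq \cdots$ is decreasing, so it stabilizes at a nonempty $A$ with $f[A] = A$. Compactness plays the role in the theorem that finiteness plays here.

    def eventual_image(f, X):
        # Iterate A -> f[A] from A = X; the chain is decreasing, so on a
        # finite set it stabilizes at a nonempty A with f[A] = A.
        A = set(X)
        while True:
            B = {f(x) for x in A}
            if B == A:
                return A
            A = B

    # Example: f(x) = x // 2 on {0,...,9}; the eventual image is {0}.
    print(eventual_image(lambda x: x // 2, range(10)))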
It would be a pity not to mention the work of F. Browder, in particular his study of nonlinear PDEs, the main tool being FPTs on Banach spaces. This is documented in many of his publications, perhaps most memorably in his "Nonlinear operators and nonlinear equations of evolution".
Let $p:E\rightarrow B$ be a covering map, where $B$ is locally pathwise connected. Every isomorphism $h:E\rightarrow E$ between covering spaces is called an automorphism, and the set of automorphisms of $E$ relative to $p$ forms a group, denoted $A(E,p)$. Now, if $f\in A(E,p)$ has a fixed point, then $f=I_{E}$.
March 2011, 31(1): 109-118. doi: 10.3934/dcds.2011.31.109
Strichartz estimates for Schrödinger operators with a non-smooth magnetic potential
Michael Goldberg
Department of Mathematical Sciences, University of Cincinnati, Cincinnati, OH 45221-0025, United States
Received: January 2010. Revised: March 2011. Published: June 2011.
We prove Strichartz estimates for the absolutely continuous evolution of a Schrödinger operator $H = (i\nabla + A)^2 + V$ in $R^n$, $n \ge 3$. Both the magnetic and electric potentials are time-independent and satisfy pointwise polynomial decay bounds. The vector potential $A(x)$ is assumed to be continuous but need not possess any Sobolev regularity. This work is a refinement of previous methods, which required extra conditions on ${\rm div}\,A$ or $|\nabla|^{\frac12}A$ in order to place the first order part of the perturbation within a suitable class of pseudo-differential operators.
Keywords: magnetic Schrödinger operators, resolvents, local smoothing, Strichartz estimates.
Mathematics Subject Classification: Primary: 35Q20; Secondary: 35C15, 42A1.
Citation: Michael Goldberg. Strichartz estimates for Schrödinger operators with a non-smooth magnetic potential. Discrete & Continuous Dynamical Systems - A, 2011, 31 (1) : 109-118. doi: 10.3934/dcds.2011.31.109
When does the world split in MWI?
I've been reading Eliezer Yudkowsky's blog posts regarding decoherence and many worlds, and although he is not a physicist, he is a strong proponent of MWI, and I can basically see why he feels that MWI is a "simple" explanation. However, in all the posts there it's decoherence that's the natural (and mathematically simple) approach to QM, while the step to MWI seems to take a certain "leap of faith".
Can someone tell me the practical difference of "world splitting" in MWI, and the original "wavefunction collapse"? Even if there is no such thing as an "abrupt" split, I don't see why you couldn't also argue the same for the "collapse".
One of the reasons for why one should accept the fact that these other worlds exist is:
If a spaceship goes over the cosmological horizon relative to us, so that it can no longer communicate with us, should we believe that the spaceship instantly ceases to exist?
But I fail to see the analogy here. The knowledge of the ship's existence is there at the beginning, and there is only one ship. With many worlds, you start with knowledge of only a single world.
Also, since decoherence seems to have a certain finite speed, is the splitting also something that propagates at a certain finite speed? In other words, if two far separated observers take measurements on two entangled photons at the same angle, whoever makes the first measurement (presuming they are sharing the same inertial frame) will decohere the mutual wave function of both photons, right?
quantum-mechanics quantum-entanglement quantum-interpretations decoherence
If you want to use nonrelativistic quantum mechanics you have to first start with the basics. Firstly it doesn't handle particle creation or destruction, so you need to fix how many particles you have of each type.
Then you want a function from the configuration space $\mathbb R^{3n}$ into the joint spin state $\mathbb C^{k_1}\otimes\mathbb C^{k_2}\otimes ... \otimes \mathbb C^{k_n}$, where $k_m=2s_m+1$ and $s_m$ is the spin of the $m$th particle.
Then you need a Hamiltonian which tells you how it evolves. And the evolution includes that the region in configuration space where the wavefunction is nonzero (called the support) can change in time as determined by the present values in configuration space and the Hamiltonian.
When you model an isolated system you only need to include the particles in the isolated system.
When you want to model a measurement, you need a huge number of particles to represented the thermodynamically irreversible (so in principle could be reversed, but in practice can't) interaction whereby you get the measurement results.
The state splits into a sum of two waves, each of which could be called a world. They are called worlds because they don't meaningfully interact, and it's too hard to ever cause them to meaningfully interact in the future. It's like if you had a function whose support was one blob and then it evolves into a function whose support is two disconnected blobs. If those blobs move around in ways where the supports never overlap, then each is basically following its own wave equation as if it were the only wave.
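Schematically (my notation, not part of the original answer), this is just linearity of the Schrödinger equation. If the initial wave splits as $$\psi(0) = \psi_1(0) + \psi_2(0), \qquad \mathrm{supp}\,\psi_1(0) \cap \mathrm{supp}\,\psi_2(0) = \emptyset,$$ then $\psi(t) = e^{-iHt/\hbar}\psi_1(0) + e^{-iHt/\hbar}\psi_2(0)$, and each term is itself an exact solution of the full equation. As long as the supports of the two terms stay (effectively) disjoint in configuration space, no interference term ever connects them again, and that is what licenses calling each term a world.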
It's like an amoeba that splits into two and then each baby amoeba gets placed into a different rocket and sent to different planets. At some point, each amoeba could forget about the other amoeba.
With many worlds, you start with a knowledge of only a single world.
If you want to bring up knowledge, then knowledge is a fact about a world: when a world splits, a fact is created, knowledge can be created later. When the blob that is the support of a wavefunction splits into two blobs then the wavefunction restricted to each blob could contain information about its world. But only if that blob is a world. And the location in configuration space where the blob is right now isn't the world. A world isn't a region of configuration space, it is the fully evolving independent wave that happens to equal the full wave right now when restricted to that blob in configuration space and (unlike the full wave) is zero everywhere else. And we only call it a world when it won't interact with the other worlds.
So you asked about knowledge. Knowledge has to be a fact about a world. Some facts might be the same across all worlds, such as the number of particles (since we are doing non relativistic quantum mechanics). Some of it might be localized such that a small number of particles in one world have interactions that encode a fact about the world. But this kind of knowledge can be copied and shared within the world in which it exists.
It's a very different thing. When the world splits, information is created. A kind of information that can be shared and copied within that world. And yes, the measurements you purposefully do are ultimately about distributing that information, but that happens after the world splits.
The world splitting was something that happened earlier. It happened when the support of the waves allowed distinct waves to act independently.
Also, since decoherence seems to have a certain finite speed, is the splitting also something that propagates at a certain finite speed?
The wave has a support in configuration space. It includes a huge number of different configurations, and each one corresponds to a different configuration for all $n$ particles. And the streamlines move according to the probability current so they are constrained by the Hamiltonian, but what generally happens is that two regions of support that are already far away move even farther away and the region in between develops a gap.
It's like if you had an electrically charged slinky that was stretching but was weak in the middle. It gets longer and longer and then snaps. And then you have two slinkies that move away from each other. But the two ends of the one slinky might have been light years apart, and so when it split into two, the centers of mass of the two slinkies might instantly be super far away from each other, but the smooth evolution was completely local.
The wave evolves in a continuous fashion. There isn't some place where it happens and then propagates outwards. There is one wave, and at some points you can start treating it as two waves because they are separated.
In other words, if two far separated observers take measurements on two entangled photons at the same angle, whoever makes the first measurement (presuming they are sharing the same inertial frame) will decohere the mutual wave function of both photons, right?
No. But it's also strange that you want to use non relativistic quantum mechanics and then your go-to example is a massless photon. If you have two electrons whose spins are entangled so as to be identical upon measurement, and you send them through a Stern-Gerlach device, then the state starts out with support in a square in configuration space (representing the spread of positions of each particle), and if you measured just one particle the square gets wider and splits, and the spin state continuously changes so that on the right branch it becomes up-up and on the left branch it becomes down-down. A later measurement of the spin of the other particle just deflects the left square down or deflects the right square up. If you measure the other particle, the initial square support gets longer and splits, and the spin state continuously changes so that on the top branch it's up-up and on the bottom branch it is down-down. A later measurement of the spin of the first particle just deflects the top square right or deflects the bottom square left.
If both happen at the same time, parts of the original support have streamlines that head to the top right corner and the spin state for that portion ends up becoming up-up. And other parts of the original support have streamlines that head to the bottom left corner and the spin state for that portion becomes down-down.
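In bra-ket form (a schematic sketch of my own, suppressing the spatial wavepackets), the joint measurement just entangles the devices with the spins: $$\frac{\left|\uparrow\uparrow\right\rangle + \left|\downarrow\downarrow\right\rangle}{\sqrt{2}}\,\left|A_0\right\rangle\left|B_0\right\rangle \;\longrightarrow\; \frac{\left|\uparrow\uparrow\right\rangle\left|A_\uparrow\right\rangle\left|B_\uparrow\right\rangle + \left|\downarrow\downarrow\right\rangle\left|A_\downarrow\right\rangle\left|B_\downarrow\right\rangle}{\sqrt{2}},$$ where $|A_0\rangle$ and $|B_0\rangle$ are the ready states of the two devices. For measurements along the same axis there are only these two branches (the two corners of configuration space just described), not four.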
And every other possibility unfolds exactly as the Hamiltonian says it will. A frame is irrelevant. The Hamiltonian is a function of the entire configuration space. And the configuration space includes the configuration of the measurement device, so points in the configuration space that correspond to one device being used at one event have the wave start to start to split at that event. And another configuration that corresponds to the one device being used at a different event have the wave start to split at that different event.
But a frame is irrelevant. There is a wave defined on configuration space, and it evolves according to a completely local PDE and it doesn't care which frame you picked. The Hamiltonian just evolves the wave. Nothing else.
If you watch the mathematical evolution, you'll see regions of support that are currently connected stretch and disconnect and then move around independently and you can call those independently moving things worlds.
And if you use Occam's razor then at some point you can say that each world when modelling itself can ignore the other ones since they now move independently. That's many worlds.
Or if you like magic or solipsism you can pretend that one world is somehow, somewhen, magically selected and that the others then cease to exist. That's an untestable claim; since it isn't falsifiable you won't really make errors in your predictions. It's straight up solipsism, but it doesn't make predictions. And that's Copenhagen. The magic part is that you never saw any evidence for anything other than evolution according to the Hamiltonian, yet you proposed something else anyway, and you hide the different dynamics specifically in a place that can't be tested.
Timaeus
$\begingroup$ +1 Thanks for a detailed answer. There is something that's not quite clear to me from Everett's interpretation: 1) If I got it right, if an observer was entangled with such system and it has decohered, there are two blobs (i.e. two separate "worlds"). At the same time, a distant observer hasn't yet interacted with them. So, a) is this 2nd observer only in a single world which sees the other observer as a superposition (until s/he interacts with them too)? Or b) does the first observer already split the universe for everyone else? $\endgroup$ – Lou Apr 10 '16 at 9:43
$\begingroup$ And the other thing: 2) it is said in the article that MWI removed the need for "spooky action at a distance". Apparently, two observers measuring entangled particles at a same angle both create two worlds, but when they "meet", their worlds are always the ones where particles are completely (anti)correlated. What mechanism in MWI is responsible for this, that wasn't already considered "spooky" previously? Shouldn't there be 4 worlds in total, given the fact that two observers each created their own pair of universes (this is kind of related to the first question)? $\endgroup$ – Lou Apr 10 '16 at 9:49
$\begingroup$ @LousyCoder If you want to bring a mutually distant particle in then you can have height coming towards you in addition to width and length. But the cube doesn't get any taller, it gets wider when one particle is sent through a Stern-Gerlach and it gets longer when the other one is sent through a Stern-Gerlach but neither makes it taller. The third particle is only affected when some interaction later affects it based on what happened far away. It's two worlds as soon as the blobs have separated in an effectively irreversible way. $\endgroup$ – Timaeus Apr 10 '16 at 19:29
$\begingroup$ @LousyCoder Regarding spooky action, non relativistic quantum theory has configuration space as a domain, which means a nonlocal function can have one possible support of initial configurations and another possible support of later configurations. The non relativistic theory bears no resemblance to locality at all. But in configuration space it is just a local evolution by a PDE, the evolution of the value of the wave at a point in configuration space in an interval of time is only affected by the values of nearby configurations in that interval of time. In that sense it is as local as can be. $\endgroup$ – Timaeus Apr 10 '16 at 22:29
There is no world splitting. A state that starts out as having a single value for some relevant observable gradually changes to have a non-zero amplitude for various states. As a result of information spreading between systems, different versions of the same system gradually become less able to interfere with one another. Worlds are a large scale, approximate and emergent feature of the multiverse.
Wave functions are not terribly useful for thinking about the flow of information, and not terribly useful in relativistic theories. Measuring the electromagnetic field in some region will decohere the field in that region. It will also decohere quantum information that can be obtained by measurements on the field in that region alone. However, entanglement involves locally inaccessible quantum information. It is possible for a system to instantiate quantum information that can't be accessed by measuring that system alone. Rather, the information can only be obtained by comparing the results of measurements on your system with those on another system with which it is entangled - the information is locally inaccessible. For detailed discussions, see
http://arxiv.org/abs/quant-ph/9906007
http://arxiv.org/abs/1109.6223
As a result that information does not decohere until some system can contain information about correlations between measurement outcomes on both systems. In that situation the multiverse is not divided into universes with respect to the joint observable until that comparison can take place.
You do start with knowledge of only one world, but single-world explanations of how reality works are ruled out by experiment and are incompatible with quantum theory. To take one example, the explanation of the EPR experiments given in the papers above requires that decoherent systems are represented by quantum observables. Those observables do not represent a single version of the systems involved: they represent a complex structure that includes multiple versions of the systems involved. And there is no other explanation. The standard account of that experiment runs as follows: the quantum state represents the system until it is measured, and then somehow the systems are each represented by a single number, and those numbers somehow end up correlated as predicted by quantum theory. This is not an explanation.
I haven't read the essay you linked to, but the logic of a sensible argument involving that analogy would run as follows. Our only existing explanation of cosmology requires the existence of stuff outside the horizon. For example, the rocket doesn't just vanish because that would violate conservation of energy. If you're going to deny that stuff outside the horizon exists because you can't see it, then you have no explanation of where the energy in the rocket went. More generally, the stuff that exists is the stuff entailed by explanations that solve problems and haven't been ruled out by other problems such as contradicting experiments. To understand the multiverse and the theory of knowledge better see "The Fabric of Reality" and "The Beginning of Infinity" by David Deutsch.
One last note. One commentator on this question claimed that quantum field theory somehow makes the MWI unnecessary. This is false. QFT still has unsharp observables, and interference and entanglement. QFT adds more structure on top of that, like the fact that observables in space like separated regions commute, identical particles and other stuff, but none of that gets rid of the multiverse.
alanf
$\begingroup$ But I still don't find the leap from decoherence to MWI quite clear. I agree that "(observables) represent a complex structure that includes multiple versions of the systems involved", but aren't these simply probabilities of observing a certain "version" of the system? More precisely, when you say these "numbers somehow end up correlated" (presumably Copenhagen/spooky interpretation), can you explain a bit better how MWI solves this issue (different worlds end up correlated)? And not through decoherence, but through an actual simultaneous existence of separate worlds? $\endgroup$ – Lou Apr 10 '16 at 10:06
$\begingroup$ There is no non-multiverse explanation of the EPR correlations because the probability of a correlation depends on measurements conducted at different locations, which could be space like separated. In the MWI the results can be established when the results are compared, not when each individual result is measured. There is only a world that includes the correlation after the comparison is made. And that can happen because both versions of each system continue to exist after the measurement, so no choice about which result happened is necessary at the time of the measurement. $\endgroup$ – alanf Apr 11 '16 at 10:29
$\begingroup$ There is only a world that includes the correlation after the comparison is made. - I find this statement confusing. I thought each observation creates 2 separate worlds? So now you are saying that decoherence doesn't happen at all until observers meet? $\endgroup$ – Lou Apr 11 '16 at 17:17
$\begingroup$ An observation of system A results in two distinct decohered versions of A and the observer of A. But if A is part of an entangled system AB, that observation does not magically change B. The correlation between A and B only exists after the results of measurements on them have been compared. You can't have a world with a correlation that doesn't exist, so there is no world with such a correlation until the comparison is made. You may want to read arxiv.org/abs/quant-ph/0104033 $\endgroup$ – alanf Apr 11 '16 at 20:45
$\begingroup$ Ok, so I got it the first time then. You are saying that 1) two observers each create two distinct decohered versions of themselves, but 2) only two worlds finally exist when they meet. So, first conclusion is that there were never four worlds in the first place, because otherwise two of them would vanish when they meet. But this doesn't solve anything then, because both observers must end up in "correct" worlds the moment they perform the measurement at the same angle, no matter how far away they are? $\endgroup$ – Lou Apr 12 '16 at 12:09
May 2013, 12(3): 1393-1406. doi: 10.3934/cpaa.2013.12.1393
Multiple solutions for a class of $(p_1, \ldots, p_n)$-biharmonic systems
John R. Graef 1, Shapour Heidarkhani 2 and Lingju Kong 1
Department of Mathematics, University of Tennessee at Chattanooga, Chattanooga, TN 37403, United States
Department of Mathematics, Faculty of Sciences, Razi University, Kermanshah 67149, Iran
Received April 2012 Revised July 2012 Published September 2012
In this paper, the authors prove the existence of at least three weak solutions for the $(p_{1},\ldots,p_{n})$-biharmonic system $$\begin{cases} \Delta(|\Delta u_{i}|^{p_i-2}\Delta u_{i}) = \lambda F_{u_{i}}(x,u_{1},\ldots,u_{n}), & \mbox{in} \ \Omega,\\ u_{i}=\Delta u_i=0, & \mbox{on} \ \partial\Omega. \end{cases} $$ The main tool is a recent three critical points theorem of Averna and Bonanno ({\it A three critical points theorem and its applications to the ordinary Dirichlet problem}, Topol. Methods Nonlinear Anal. 22 (2003), 93-104).
Keywords: $(p_{1},\ldots,p_{n})$-biharmonic systems, three solutions, critical points, variational methods, multiplicity results.
Mathematics Subject Classification: Primary: 35J65, 47J1.
Citation: John R. Graef, Shapour Heidarkhani, Lingju Kong. Multiple solutions for a class of $(p_1, \ldots, p_n)$-biharmonic systems. Communications on Pure & Applied Analysis, 2013, 12 (3) : 1393-1406. doi: 10.3934/cpaa.2013.12.1393
G. A. Afrouzi and S. Heidarkhani, Existence of three solutions for a class of Dirichlet quasilinear elliptic systems involving the $(p_1, \ldots, p_n)$-Laplacian, Nonlinear Anal., 70 (2009), 135. doi: 10.1016/j.na.2007.11.038.
G. A. Afrouzi and S. Heidarkhani, Multiplicity theorems for a class of Dirichlet quasilinear elliptic systems involving the $(p_1, \ldots, p_n)$-Laplacian, Nonlinear Anal., 73 (2010), 2594. doi: 10.1016/j.na.2010.06.038.
G. A. Afrouzi, S. Heidarkhani and D. O'Regan, Existence of three solutions for a doubly eigenvalue fourth-order boundary value problem, Taiwanese J. Math., (2011), 201.
G. A. Afrouzi, S. Heidarkhani and D. O'Regan, Three solutions to a class of Neumann doubly eigenvalue elliptic systems driven by a $(p_1, \ldots, p_n)$-Laplacian, Bull. Korean Math. Soc., 47 (2010), 1235. doi: 10.4134/BKMS.2010.47.6.1235.
C. Amrouche, Singular boundary conditions and regularity for the biharmonic problem in the half-space, Commun. Pure Appl. Anal., 6 (2007), 957. doi: 10.3934/cpaa.2007.6.957.
D. Averna and G. Bonanno, A three critical points theorem and its applications to the ordinary Dirichlet problem, Topol. Methods Nonlinear Anal., 22 (2003), 93.
D. Averna and G. Bonanno, A mountain pass theorem for a suitable class of functions, Rocky Mountain J. Math., 39 (2009), 707. doi: 10.1216/RMJ-2009-39-3-707.
M. B. Ayed and M. Hammami, On a fourth order elliptic equation with critical nonlinearity in dimension six, Nonlinear Anal., 64 (2006), 924. doi: 10.1016/j.na.2005.05.050.
M. Ayed and A. Selmi, Asymptotic behavior and existence results for a biharmonic equation involving the critical Sobolev exponent in a five-dimensional domain, Commun. Pure Appl. Anal., 9 (2012), 1705.
Z. Bai and H. Wang, On positive solutions of some nonlinear fourth-order beam equations, J. Math. Anal. Appl., 270 (2002), 357. doi: 10.1016/S0022-247X(02)00071-9.
L. Boccardo and D. Figueiredo, Some remarks on a system of quasilinear elliptic equations, Nonlinear Differential Equations Appl., 9 (2002), 309. doi: 10.1007/s00030-002-8130-0.
G. Bonanno and B. Di Bella, A boundary value problem for fourth-order elastic beam equations, J. Math. Anal. Appl., 343 (2008), 1166. doi: 10.1016/j.jmaa.2008.01.049.
Y. Bozhkov and E. Mitidieri, Existence of multiple solutions for quasilinear systems via fibering method, J. Differential Equations, 190 (2003), 239. doi: 10.1016/S0022-0396(02)00112-2.
A. Cabada, J. A. Cid and L. Sanchez, Positivity and lower and upper solutions for fourth-order boundary value problems, Nonlinear Anal., 67 (2007), 1599. doi: 10.1016/j.na.2006.08.002.
J. Chabrowski and J. Marcos do Ó, On some fourth-order semilinear elliptic problems in $R^N$, Nonlinear Anal., 49 (2002), 861. doi: 10.1016/S0362-546X(01)00144-4.
C. Cowan, P. Esposito and N. Ghoussoub, Regularity of extremal solutions in fourth order nonlinear eigenvalue problems on general domains, Discrete Contin. Dyn. Syst., 28 (2010), 1033. doi: 10.3934/dcds.2010.28.1033.
A. Djellit and S. Tas, On some nonlinear elliptic systems, Nonlinear Anal., 59 (2004), 695.
A. Djellit and S. Tas, Quasilinear elliptic systems with critical Sobolev exponents in $R^N$, Nonlinear Anal., 66 (2007), 1485. doi: 10.1016/j.na.2006.02.005.
P. Drábek, N. M. Stavrakakis and N. B. Zographopoulos, Multiple nonsemitrivial solutions for quasilinear elliptic systems, Differential Integral Equations, 16 (2003), 1519.
S. Federica, A biharmonic equation in $R^4$ involving nonlinearities with critical exponential growth, Commun. Pure Appl. Anal., 12 (2013), 405.
M. R. Grossinho, L. Sanchez and S. A. Tersian, On the solvability of a boundary value problem for a fourth-order ordinary differential equation, Appl. Math. Lett., 18 (2005), 439. doi: 10.1016/j.aml.2004.03.011.
G. Han and Z. Xu, Multiple solutions of some nonlinear fourth-order beam equation, Nonlinear Anal., 68 (2008), 3646. doi: 10.1016/j.na.2007.04.007.
S. Heidarkhani and Y. Tian, Multiplicity results for a class of gradient systems depending on two parameters, Nonlinear Anal., 73 (2010), 547. doi: 10.1016/j.na.2010.03.051.
A. Kristály, Existence of two non-trivial solutions for a class of quasilinear elliptic variational systems on strip-like domains, Proc. Edinb. Math. Soc., 48 (2005), 465. doi: 10.1017/S0013091504000112.
A. C. Lazer and P. J. McKenna, Large amplitude periodic oscillations in suspension bridges: Some new connections with nonlinear analysis, SIAM Rev., 32 (1990), 537. doi: 10.1137/1032120.
C. Li and C.-L. Tang, Three solutions for a class of quasilinear elliptic systems involving the $(p,q)$-Laplacian, Nonlinear Anal., 69 (2008), 3322. doi: 10.1016/j.na.2007.09.021.
C. Li and C.-L. Tang, Three solutions for a Navier boundary value problem involving the p-biharmonic, Nonlinear Anal., 72 (2010), 1339. doi: 10.1016/j.na.2009.08.011.
L. Li and C.-L. Tang, Existence of three solutions for $(p,q)$-biharmonic systems, Nonlinear Anal., 73 (2010), 796. doi: 10.1016/j.na.2010.04.018.
X.-L. Liu and W.-T. Li, Existence and multiplicity of solutions for fourth-order boundary value problems with parameters, J. Math. Anal. Appl., 327 (2007), 362. doi: 10.1016/j.jmaa.2006.04.021.
S. Liu and M. Squassina, On the existence of solutions to a fourth-order quasilinear resonant problem, Abstr. Appl. Anal., 7 (2002), 125. doi: 10.1155/S1085337502000805.
A. M. Micheletti and A. Pistoia, Multiplicity results for a fourth-order semilinear elliptic problem, Nonlinear Anal., 31 (1998), 895. doi: 10.1016/S0362-546X(97)00446-X.
S. I. Pokhozhaev, On a constructive method of the calculus of variations (Russian), Dokl. Akad. Nauk SSSR, 298 (1988), 1330.
B. Ricceri, On a three critical points theorem, Arch. Math. (Basel), 75 (2000), 220.
B. Ricceri, A three critical points theorem revisited, Nonlinear Anal., 70 (2009), 3084. doi: 10.1016/j.na.2008.04.010.
J. Simon, Régularité de la solution d'une équation non linéaire dans $R^N$, (1977), 205.
J. Su and Z. Liu, A bounded resonance problem for semilinear elliptic equations, Discrete Contin. Dyn. Syst., 19 (2007), 431. doi: 10.3934/dcds.2007.19.431.
T. Teramoto, On positive radial entire solutions of second-order quasilinear elliptic systems, J. Math. Anal. Appl., 282 (2003), 531. doi: 10.1016/S0022-247X(03)00153-7.
W. Wang and P. Zhao, Nonuniformly nonlinear elliptic equations of p-biharmonic type, J. Math. Anal. Appl., 348 (2008), 730. doi: 10.1016/j.jmaa.2008.07.068.
Z. Wang, Nonradial positive solutions for a biharmonic critical growth problem, Commun. Pure Appl. Anal., 11 (2012), 517. doi: 10.3934/cpaa.2012.11.517.
G. Warnault, Regularity of the extremal solution for a biharmonic problem with general nonlinearity, Commun. Pure Appl. Anal., 8 (2009), 1709. doi: 10.3934/cpaa.2009.8.1709.
E. Zeidler, "Nonlinear Functional Analysis and its Applications," Vol. II, Berlin-Heidelberg-New York, 1985.
G. Q. Zhang, X. P. Liu and S. Y. Liu, Remarks on a class of quasilinear elliptic systems involving the $(p,q)$-Laplacian, Electron. J. Differential Equations, 20 (2005), 1.
J. Zhang and S. Li, Multiple nontrivial solutions for some fourth-order semilinear elliptic problems, Nonlinear Anal., 60 (2005), 221.
Which grades are better, A's and C's, or all B's? Effects of variability in grades on mock college admissions decisions
Woo-kyoung Ahn, Sunnie S. Y. Kim, Kristen Kim, Peter K. McNally
Journal: Judgment and Decision Making / Volume 14 / Issue 6 / November 2019
Students may need to decide whether to invest limited resources evenly across all courses and thus end with moderate grades in all, or focus on some of the courses and thus end with variable grades. This study examined which pattern of grades is perceived more favorably. When judging competency, people give more weight to positive than negative information, in which case heterogeneous grades would be perceived more favorably as they have more positive grades than homogeneous moderate grades. Furthermore, high school students are told to demonstrate their passion in college applications. Nonetheless, people generally overweigh negative information, which can result in a preference for a student with homogeneous grades lacking extremely negative grades. The college admissions decisions in particular may also involve emphasis on long-term stable, consistent, and responsible character, which the homogeneous grades may imply. Study 1 found that laypeople, undergraduate students, and admissions officers preferred to admit a student with homogeneous grades to a college than a student with heterogeneous grades even when their GPAs were the same. Study 2 used a heterogeneous transcript signaling a stereotypic STEM or humanities student, and found that while undergraduate students were more split in their choices, laypeople and admissions officers still preferred a student with homogeneous grades. Study 3 further replicated the preference for a student with homogeneous grades by using higher or lower average GPAs and wider or narrower range of grades for the heterogeneous grades. Possible reasons and limitations of the studies are discussed.
From harmful nutrients to ultra-processed foods: exploring shifts in 'foods to limit' terminology used in national food-based dietary guidelines
Kim Anastasiou, Patricia Ribeiro De Melo, Scott Slater, Gilly A Hendrie, Michalis Hadjikakou, Phillip K Baker, Mark Andrew Lawrence
Journal: Public Health Nutrition , First View
Published online by Cambridge University Press: 02 December 2022, pp. 1-12
Objective: The choice of terms used to describe 'foods to limit' (FTL) in food-based dietary guidelines (FBDG) can impact public understanding, policy translation and research applicability. The choice of terms in FBDG has been influenced by available science, values, beliefs and historical events. This study aimed to analyse the terms used and definitions given to FTL in FBDG around the world, including changes over time and regional differences.
Design: A review of terms used to describe FTL and their definitions in all current and past FBDG for adults was conducted, using a search strategy informed by the FAO FBDG website. Data from 148 guidelines (96 countries) were extracted into a pre-defined table and terms were organised by the categories 'nutrient-based', 'food examples' or 'processing-related'.
Setting: National FBDG from all world regions.
Results: Nutrient-based terms (e.g. high-fat foods) were the most frequently used type of term in both current and past dietary guidelines (91% and 85%, respectively). However, food examples (e.g. cakes) and processing-related terms (e.g. ultra-processed foods) have increased in use over the past 20 years and are now often used in conjunction with nutrient-based terms. Regional differences were only observed for processing-related terms.
Conclusions: Diverse, and often poorly defined, terms are used to describe FTL in FBDG. Policymakers should ensure that FTL terms have clear definitions and can be integrated with other disciplines and understood by consumers. This may facilitate the inclusion of the most contemporary and potentially impactful terminology in nutrition research and policies.
Ten new insights in climate science 2022
Maria A. Martin, Emmanuel A. Boakye, Emily Boyd, Wendy Broadgate, Mercedes Bustamante, Josep G. Canadell, Edward R. Carr, Eric K. Chu, Helen Cleugh, Szilvia Csevár, Marwa Daoudy, Ariane de Bremond, Meghnath Dhimal, Kristie L. Ebi, Clea Edwards, Sabine Fuss, Martin P. Girardin, Bruce Glavovic, Sophie Hebden, Marina Hirota, Huang-Hsiung Hsu, Saleemul Huq, Karin Ingold, Ola M. Johannessen, Yasuko Kameyama, Nilushi Kumarasinghe, Gaby S. Langendijk, Tabea Lissner, Shuaib Lwasa, Catherine Machalaba, Aaron Maltais, Manu V. Mathai, Cheikh Mbow, Karen E. McNamara, Aditi Mukherji, Virginia Murray, Jaroslav Mysiak, Chukwumerije Okereke, Daniel Ospina, Friederike Otto, Anjal Prakash, Juan M. Pulhin, Emmanuel Raju, Aaron Redman, Kanta K. Rigaud, Johan Rockström, Joyashree Roy, E. Lisa F. Schipper, Peter Schlosser, Karsten A. Schulz, Kim Schumacher, Luana Schwarz, Murray Scown, Barbora Šedová, Tasneem A. Siddiqui, Chandni Singh, Giles B. Sioen, Detlef Stammer, Norman J. Steinert, Sunhee Suk, Rowan Sutton, Lisa Thalheimer, Maarten van Aalst, Kees van der Geest, Zhirong Jerry Zhao
Journal: Global Sustainability / Volume 5 / 2022
Published online by Cambridge University Press: 10 November 2022, e20
We summarize what we assess as the past year's most important findings within climate change research: limits to adaptation, vulnerability hotspots, new threats coming from the climate–health nexus, climate (im)mobility and security, sustainable practices for land use and finance, losses and damages, inclusive societal climate decisions and ways to overcome structural barriers to accelerate mitigation and limit global warming to below 2°C.
We synthesize 10 topics within climate research where there have been significant advances or emerging scientific consensus since January 2021. The selection of these insights was based on input from an international open call with broad disciplinary scope. Findings concern: (1) new aspects of soft and hard limits to adaptation; (2) the emergence of regional vulnerability hotspots from climate impacts and human vulnerability; (3) new threats on the climate–health horizon – some involving plants and animals; (4) climate (im)mobility and the need for anticipatory action; (5) security and climate; (6) sustainable land management as a prerequisite to land-based solutions; (7) sustainable finance practices in the private sector and the need for political guidance; (8) the urgent planetary imperative for addressing losses and damages; (9) inclusive societal choices for climate-resilient development and (10) how to overcome barriers to accelerate mitigation and limit global warming to below 2°C.
Social media summary
Science has evidence on barriers to mitigation and how to overcome them to avoid limits to adaptation across multiple fields.
The Evolutionary Map of the Universe Pilot Survey – ADDENDUM
Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov
Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022
Published online by Cambridge University Press: 02 November 2022, e055
Foodborne illness outbreaks linked to unpasteurised milk and relationship to changes in state laws – United States, 1998–2018
Lia Koski, Hannah Kisselburgh, Lisa Landsman, Rachel Hulkower, Mara Howard-Williams, Zainab Salah, Sunkyung Kim, Beau B. Bruce, Michael C. Bazaco, Michael B. Batz, Cary Chen Parker, Cynthia L. Leonard, Atin R. Datta, Elizabeth N. Williams, G. Sean Stapleton, Matthew Penn, Hilary K. Whitham, Megin Nichols
Journal: Epidemiology & Infection / Volume 150 / 2022
Published online by Cambridge University Press: 25 October 2022, e183
Consumption of unpasteurised milk in the United States has presented a public health challenge for decades because of the increased risk of pathogen transmission causing illness outbreaks. We analysed Foodborne Disease Outbreak Surveillance System data to characterise unpasteurised milk outbreaks. Using Poisson and negative binomial regression, we compared the number of outbreaks and outbreak-associated illnesses between jurisdictions grouped by legal status of unpasteurised milk sale, based on a May 2019 survey of state laws. During 2013–2018, 75 outbreaks linked to unpasteurised milk occurred, accounting for 675 illnesses; 325 of these illnesses (48%) were among people aged 0–19 years. Of 74 single-state outbreaks, 58 (78%) occurred in states where the sale of unpasteurised milk was expressly allowed. Compared with jurisdictions where retail sales were prohibited (n = 24), those where sales were expressly allowed (n = 27) were estimated to have 3.2 (95% CI 1.4–7.6) times the number of outbreaks; of these, jurisdictions where sale was allowed in retail stores (n = 14) had 3.6 (95% CI 1.3–9.6) times the number of outbreaks compared with those where sale was allowed on-farm only (n = 13). This study supports findings of previously published reports indicating that state laws resulting in increased availability of unpasteurised milk are associated with more outbreak-associated illnesses and outbreaks.
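For readers who want to see the shape of this analysis, here is a minimal sketch (not the authors' code) of a Poisson rate comparison between jurisdiction groups, using invented counts; statsmodels is assumed to be available, and a negative binomial family could be swapped in for overdispersed data.

```python
# Toy Poisson rate comparison between jurisdiction groups (hypothetical data).
import numpy as np
import statsmodels.api as sm

# 27 jurisdictions where sale was expressly allowed, 24 where prohibited.
allowed = np.array([1] * 27 + [0] * 24)
outbreaks = np.random.default_rng(0).poisson(np.where(allowed == 1, 2.0, 0.6))

X = sm.add_constant(allowed)
fit = sm.GLM(outbreaks, X, family=sm.families.Poisson()).fit()
# (sm.families.NegativeBinomial() would handle overdispersed counts.)

# exp(slope) is the outbreak rate ratio between the two groups -- the same
# kind of quantity as the study's "3.2 times greater" estimate.
print(np.exp(fit.params[1]))
```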
Weighty Matters: A Real-World Comparison of the Handtevy and Broselow Methods of Prehospital Weight Estimation
Chloe Knudsen-Robbins, Phung K. Pham, Kim Zaky, Shelley Brukman, Carl Schultz, Claus Hecht, Kellie Bacon, Maxwell Wickens, Theodore Heyming
Journal: Prehospital and Disaster Medicine / Volume 37 / Issue 5 / October 2022
Print publication: October 2022
The majority of pediatric medications are dosed according to weight and therefore accurate weight assessment is essential. However, this can be difficult in the unpredictable and peripatetic prehospital care setting, and medication errors are common. The Handtevy method and the Broselow tape are two systems designed to guide Emergency Medical Services (EMS) providers in both pediatric patient weight estimation and medication dosing. The accuracy of the Handtevy method of weight estimation as practiced in the field by EMS has not been previously examined.
The primary objective of this study was to examine the field performance of the Handtevy method and the Broselow tape with respect to prehospital patient weight estimation.
This was a retrospective chart review of trauma and non-trauma patients transported by EMS to the emergency department (ED) of a quaternary care children's hospital from January 1, 2021 through June 30, 2021. Demographic data, ED visit information, prehospital weight estimation, and medication dosing were collected and analyzed. Scale-based weight from the ED was used as the standard for comparison.
A total of 509 patients <13 years of age were included in this study. The EMS providers using the Broselow method estimated patient weight to within +/-10% of ED scale weight in 51.3% of patients. When using the Handtevy method, the EMS providers estimated patient weight to within +/-10% of ED scale weight in 43.7% of patients. When comparing the Handtevy versus Broselow method of prehospital weight estimation, there was no significant association between method and categorized weight discrepancy (over, under, or accurate estimates – defined as within 10% of ED scale weight; P = .25) or percent weight discrepancy (P = .75). On average, prehospital weight estimation was 6.33% lower than ED weight with use of the Handtevy method and 6.94% lower with use of the Broselow method.
This study demonstrated no statistically significant difference between the use of the Handtevy or Broselow methods with respect to prehospital weight estimation. While further research is necessary, these results suggest similar field performance of the Broselow and Handtevy methods.
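To make the accuracy criterion concrete, below is a small hypothetical helper implementing the "within +/-10% of ED scale weight" classification described above; the function name and example values are ours, not the authors'.

```python
# Classify a prehospital weight estimate against the ED scale weight.
def classify_estimate(estimated_kg: float, scale_kg: float) -> str:
    discrepancy = (estimated_kg - scale_kg) / scale_kg  # signed fraction
    if abs(discrepancy) <= 0.10:
        return "accurate"   # within +/-10% of scale weight
    return "over" if discrepancy > 0 else "under"

print(classify_estimate(18.0, 20.0))  # -10.0% -> "accurate"
print(classify_estimate(17.0, 20.0))  # -15.0% -> "under"
```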
Burkholderia cepacia complex outbreak linked to a no-rinse cleansing foam product, United States – 2017–2018
Sharon L. Seelman, Michael C. Bazaco, Allison Wellman, Cerisé Hardy, Marianne K. Fatica, Mei-Chiung Jo Huang, Anna-Marie Brown, Kimberly Garner, William C. Yang, Carla Norris, Heather Moulton-Meissner, Julie Paoline, Cara Bicking Kinsey, Janice J. Kim, Moon Kim, Dawn Terashita, Jason Mehr, Alvin J. Crosby, Stelios Viazis, Matthew B. Crist
Published online by Cambridge University Press: 04 August 2022, e154
In March 2018, the US Food and Drug Administration (FDA), US Centers for Disease Control and Prevention, California Department of Public Health, Los Angeles County Department of Public Health and Pennsylvania Department of Health initiated an investigation of an outbreak of Burkholderia cepacia complex (Bcc) infections. Sixty infections were identified in California, New Jersey, Pennsylvania, Maine, Nevada and Ohio. The infections were linked to a no-rinse cleansing foam product (NRCFP), produced by Manufacturer A, used for skin care of patients in healthcare settings. FDA inspected Manufacturer A's production facility (manufacturing site of over-the-counter drugs and cosmetics), reviewed production records and collected product and environmental samples for analysis. FDA's inspection found poor manufacturing practices. Analysis by pulsed-field gel electrophoresis confirmed a match between NRCFP samples and clinical isolates. Manufacturer A conducted extensive recalls, FDA issued a warning letter citing the manufacturer's inadequate manufacturing practices, and federal, state and local partners issued public communications to advise patients, pharmacies, other healthcare providers and healthcare facilities to stop using the recalled NRCFP. This investigation highlighted the importance of following appropriate manufacturing practices to minimize microbial contamination of cosmetic products, especially if intended for use in healthcare settings.
Quick and Correlative TOF-SIMS Analysis of Dispersoid Content in Powder Feedstock and Printed Oxide Dispersion Strengthened Alloys
Laura G. Wilson, David L. Ellis, Timothy M. Smith, John T. K. Kim, Jennifer L. W. Carter
Published online by Cambridge University Press: 22 July 2022, pp. 954-956
Determining the 3D Atomic Structure of Metallic Glass
Yao Yang, Jihan Zhou, Fan Zhu, Yakun Yuan, Dillan J. Chang, Dennis S. Kim, Minh Pham, Arjun Rana, Xuezeng Tian, Yonggang Yao, Stanley J. Osher, Andreas K. Schmid, Liangbing Hu, Peter Ercius, Jianwei Miao
Derivation and validation of risk prediction for posttraumatic stress symptoms following trauma exposure
Raphael Kim, Tina Lin, Gehao Pang, Yufeng Liu, Andrew S. Tungate, Phyllis L. Hendry, Michael C. Kurz, David A. Peak, Jeffrey Jones, Niels K. Rathlev, Robert A. Swor, Robert Domeier, Marc-Anthony Velilla, Christopher Lewandowski, Elizabeth Datner, Claire Pearson, David Lee, Patricia M. Mitchell, Samuel A. McLean, Sarah D. Linnstaedt
Journal: Psychological Medicine, First View
Published online by Cambridge University Press: 01 July 2022, pp. 1-10
Posttraumatic stress symptoms (PTSS) are common following traumatic stress exposure (TSE). Identification of individuals with PTSS risk in the early aftermath of TSE is important to enable targeted administration of preventive interventions. In this study, we used baseline survey data from two prospective cohort studies to identify the most influential predictors of substantial PTSS.
Self-identifying black and white American women and men (n = 1546) presenting to one of 16 emergency departments (EDs) within 24 h of motor vehicle collision (MVC) TSE were enrolled. Individuals with substantial PTSS (⩾33, Impact of Events Scale – Revised) 6 months after MVC were identified via follow-up questionnaire. Sociodemographic, pain, general health, event, and psychological/cognitive characteristics were collected in the ED and used in prediction modeling. Ensemble learning methods and Monte Carlo cross-validation were used for feature selection and to determine prediction accuracy. External validation was performed on a hold-out sample (30% of total sample).
Twenty-five percent (n = 394) of individuals reported PTSS 6 months following MVC. Regularized linear regression was the top performing learning method. The top 30 factors together showed good reliability in predicting PTSS in the external sample (Area under the curve = 0.79 ± 0.002). Top predictors included acute pain severity, recovery expectations, socioeconomic status, self-reported race, and psychological symptoms.
These analyses add to a growing literature indicating that influential predictors of PTSS can be identified and risk for future PTSS estimated from characteristics easily available/assessable at the time of ED presentation following TSE.
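The modeling pipeline described above can be sketched as follows; this is an illustration, not the study's code. Logistic regression with an L2 penalty stands in for the regularized regression named in the abstract, repeated random splits play the role of Monte Carlo cross-validation, the 30% hold-out mirrors the external validation, and all data are random placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ShuffleSplit, train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1546, 30))    # placeholder predictors (top 30 factors)
y = rng.integers(0, 2, size=1546)  # placeholder binary PTSS outcome

# Hold out 30% of the sample for external validation.
X_dev, X_hold, y_dev, y_hold = train_test_split(X, y, test_size=0.30, random_state=1)

# Monte Carlo cross-validation: repeated random 80/20 splits of the dev set.
aucs = []
for tr, te in ShuffleSplit(n_splits=50, test_size=0.2, random_state=1).split(X_dev):
    model = LogisticRegression(penalty="l2", max_iter=1000).fit(X_dev[tr], y_dev[tr])
    aucs.append(roc_auc_score(y_dev[te], model.predict_proba(X_dev[te])[:, 1]))

final = LogisticRegression(penalty="l2", max_iter=1000).fit(X_dev, y_dev)
print(np.mean(aucs), roc_auc_score(y_hold, final.predict_proba(X_hold)[:, 1]))
```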
Expanded Phase Model for Transformable Design in Defining Its Usage Scenarios for Merits and Demerits
H. Lee, M. Tufail, K. Kim
Journal: Proceedings of the Design Society / Volume 2 / May 2022
Published online by Cambridge University Press: 26 May 2022, pp. 2127-2136
A product's transformation is often valued for the fascination it creates, but it is seldom examined in terms of its usage scenarios. This study proposes an expanded phase model that can evaluate the usefulness of transformable products from the perspective of form, function, and user scenario. We analyzed the purposes of transformation and identified user benefits from existing transformable products. The model allows designers and teams to evaluate the usefulness of a transformable product by comparing its user benefits with the appropriateness of its form and function in a given usage scenario.
Impact of standardizing care for agitation in dementia using an integrated care pathway on an inpatient geriatric psychiatry unit
Sanjeev Kumar, Amruta Shanbhag, Amer M. Burhan, Sarah Colman, Philip Gerretsen, Ariel Graff-Guerrero, Donna Kim, Clement Ma, Benoit H. Mulsant, Bruce G. Pollock, Vincent L. Woo, Simon J.C. Davies, Tarek K. Rajji
Journal: International Psychogeriatrics / Volume 34 / Issue 10 / October 2022
Published online by Cambridge University Press: 12 May 2022, pp. 919-928
This study examined the effectiveness of an integrated care pathway (ICP), including a medication algorithm, to treat agitation associated with dementia.
Analyses of data (both prospective and retrospective) collected during routine clinical care.
Geriatric Psychiatry Inpatient Unit.
Patients with agitation associated with dementia (n = 28) who were treated as part of the implementation of the ICP and those who received treatment-as-usual (TAU) (n = 28) on the same inpatient unit before the implementation of the ICP. Two control groups of patients without dementia treated on the same unit contemporaneously to the TAU (n = 17) and ICP groups (n = 36) were included to account for any secular trends.
Intervention:
ICP.
Cohen Mansfield Agitation Inventory (CMAI), Neuropsychiatric Inventory Questionnaire (NPIQ), and assessment of motor symptoms were completed during the ICP implementation. Chart review was used to obtain length of inpatient stay and rates of psychotropic polypharmacy.
Patients in the ICP group experienced a reduction in their scores on the CMAI and NPIQ and no changes in motor symptoms. Compared to the TAU group, the ICP group had a higher chance of an earlier discharge from hospital, a lower rate of psychotropic polypharmacy, and a lower chance of having a fall during hospital stay. In contrast, these outcomes did not differ between the two control groups.
These preliminary results suggest that an ICP can be used effectively to treat agitation associated with dementia in inpatients. A larger randomized study is needed to confirm these results.
Health Technology Assessment in Support of National Health Insurance in South Africa
Maryke Wilkinson, Andrew Lofts Gray, Roger Wiseman, Tamara Kredo, Karen Cohen, Jacqui Miot, Mark Blecher, Paul Ruff, Yasmina Johnson, Mladen Poluta, Shelley McGee, Trudy D Leong, Mark Brand, Fatima Suleman, Esnath Maramba, Marc Blockman, Janine Jugathpal, Susan Cleary, Noluthando Nematswerani, Sarvashni Moodliar, Andy Parrish, Khadija K Jamaloodien, Tienie Stander, Kim MacQuilkan, Nicholas Crisp, Thomas Wilkinson
Journal: International Journal of Technology Assessment in Health Care / Volume 38 / Issue 1 / 2022
Published online by Cambridge University Press: 06 May 2022, e44
South Africa has embarked on major health policy reform to deliver universal health coverage through the establishment of National Health Insurance (NHI). The aim is to improve access, remove financial barriers to care, and enhance care quality. Health technology assessment (HTA) is explicitly identified in the proposed NHI legislation and will have a prominent role in informing decisions about adoption and access to health interventions and technologies. The specific arrangements and approach to HTA in support of this legislation are yet to be determined. Although there is currently no formal national HTA institution in South Africa, there are several processes in both the public and private healthcare sectors that use elements of HTA to varying extents to inform access and resource allocation decisions. Institutions performing HTAs or related activities in South Africa include the National and Provincial Departments of Health, National Treasury, National Health Laboratory Service, Council for Medical Schemes, medical scheme administrators, managed care organizations, academic or research institutions, clinical societies and associations, pharmaceutical and devices companies, private consultancies, and private sector hospital groups. Existing fragmented HTA processes should coordinate and conform to a standardized, fit-for-purpose process and structure that can usefully inform priority setting under NHI and for other decision makers. This transformation will require comprehensive and inclusive planning with dedicated funding and regulation, and provision of strong oversight mechanisms and leadership.
Efficacy of halosulfuron-methyl in the management of Navua sedge (Cyperus aromaticus): differential responses of plants with and without established rhizomes
Aakansha Chadha, Singarayer K. Florentine, Kunjithapatham Dhileepan, Christopher Turville, Kim Dowling
Journal: Weed Technology / Volume 36 / Issue 3 / June 2022
Navua sedge is a creeping perennial sedge commonly found in tropical environments and is currently threatening many agroecosystems and ecosystems in Pacific Island countries and northern Queensland, Australia. Pasture and crop production has been significantly affected by this weed. The efficacy of halosulfuron-methyl on Navua sedge plants with and without well-established rhizomes was evaluated under glasshouse conditions. Halosulfuron-methyl was applied to plants with established rhizomes at three growth stages: mowed, pre-flowering, and flowering; plants without established rhizomes were treated at the seedling, pre-flowering, and flowering growth stages. At each application time, halosulfuron-methyl was applied at four dose rates of 0, 38, 75, and 150 g ai ha−1. Mortality of 27.5%, 0%, and 5% was recorded in rhizomatous Navua sedge treated with 75 g ai ha−1 of halosulfuron-methyl at the mowed, pre-flowering, and flowering stages, respectively. At 10 wk after treatment (WAT), there were no tillers in surviving plants treated at any of the application times. By 16 WAT, the number of tillers increased to 15, 24, and 26 in the mowed, pre-flowering, and flowering stages, respectively. Although halosulfuron-methyl is effective in controlling aboveground growth, subsequent emergence of new growth from the rhizome confirms the failure of the herbicide to kill the rhizome. Application of 75 g ai ha−1 of halosulfuron-methyl provided 100% mortality in non-rhizomatous plants treated at the seedling and pre-flowering stages, and 98% mortality when treated at the flowering stage. A single application of halosulfuron-methyl is highly effective at controlling Navua sedge seedlings but not effective at controlling plants with established rhizomes.
Association of serum PUFA and linear growth over 12 months among 6–10 years old Ugandan children with or without HIV
Ruth A Pobee, Jenifer I Fenton, Alla Sikorskii, Sarah K Zalwango, Isabella Felzer-Kim, Ilce M Medina, Bruno Giordani, Amara E Ezeamama
Journal: Public Health Nutrition / Volume 25 / Issue 5 / May 2022
Published online by Cambridge University Press: 04 April 2022, pp. 1194-1204
To quantify PUFA-associated improvement in linear growth among children aged 6–10 years.
Serum fatty acids (FA), including essential FA (EFA) (linoleic acid (LA) and α-linolenic acid (ALA)), were quantified at baseline using GC-MS technology. FA totals by class (n-3, n-6, n-9, PUFA and SFA) and FA ratios were calculated. Height-for-age Z-scores (HAZ) relative to WHO population reference values were calculated longitudinally at baseline, 6 and 12 months. Linear regression models estimated the standardised mean difference (SMD) and 95 % CI in HAZ over 12 months associated with PUFA, HIV status and their interaction.
Community controls and children connected to community health centre in Kampala, Uganda, were enrolled.
Children perinatally HIV-infected (CPHIV, n 82), or HIV-exposed but uninfected (CHEU, n 76) and community controls (n 78).
Relative to highest FA levels, low SFA (SMD = 0·31, 95 % CI: 0·03, 0·60), low Mead acid (SMD = 0·38, 95 % CI: 0·02, 0·74), low total n-9 (SMD = 0·44, 95 % CI: 0·08, 0·80) and low triene-to-tetraene ratio (SMD = 0·42, 95 % CI: 0·07, 0·77) predicted superior growth over 12 months. Conversely, low LA (SMD = -0·47, 95 % CI: −0·82, −0·12) and low total PUFA (sum of total n-3, total n-6 and Mead acid) (SMD = -0·33 to −0·39, 95 % CI: −0·71, −0·01) predicted growth deficit over 12 months follow-up, regardless of HIV status.
Low n-3 FA (ALA, EPA and n-3 index) predicted growth deficits among community controls. EFA sufficiency may improve stature in school-aged children regardless of HIV status. Evaluating efficacy of diets low in total SFA, sufficient in EFA and enriched in n-3 FA for improving child growth is warranted.
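As a pointer for readers unfamiliar with the effect measure, here is a minimal standardized-mean-difference computation. The study derived its SMDs from adjusted regression models; this sketch instead uses the simpler Cohen's d with a pooled standard deviation, on invented HAZ values.

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    # Pooled-SD standardized mean difference between two groups.
    na, nb = len(a), len(b)
    pooled = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled)

rng = np.random.default_rng(2)
haz_low_sfa = rng.normal(-1.2, 1.0, 80)   # hypothetical HAZ, low-SFA group
haz_high_sfa = rng.normal(-1.5, 1.0, 80)  # hypothetical HAZ, high-SFA group
print(cohens_d(haz_low_sfa, haz_high_sfa))
```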
GASKAP-HI pilot survey science I: ASKAP zoom observations of Hi emission in the Small Magellanic Cloud
N. M. Pingel, J. Dempsey, N. M. McClure-Griffiths, J. M. Dickey, K. E. Jameson, H. Arce, G. Anglada, J. Bland-Hawthorn, S. L. Breen, F. Buckland-Willis, S. E. Clark, J. R. Dawson, H. Dénes, E. M. Di Teodoro, B.-Q. For, Tyler J. Foster, J. F. Gómez, H. Imai, G. Joncas, C.-G. Kim, M.-Y. Lee, C. Lynn, D. Leahy, Y. K. Ma, A. Marchal, D. McConnell, M.-A. Miville-Deschènes, V. A. Moss, C. E. Murray, D. Nidever, J. Peek, S. Stanimirović, L. Staveley-Smith, T. Tepper-Garcia, C. D. Tremblay, L. Uscanga, J. Th. van Loon, E. Vázquez-Semadeni, J. R. Allison, C. S. Anderson, Lewis Ball, M. Bell, D. C.-J. Bock, J. Bunton, F. R. Cooray, T. Cornwell, B. S. Koribalski, N. Gupta, D. B. Hayman, L. Harvey-Smith, K. Lee-Waddell, A. Ng, C. J. Phillips, M. Voronkov, T. Westmeier, M. T. Whiting
We present the most sensitive and detailed view of the neutral hydrogen ( ${\rm H\small I}$ ) emission associated with the Small Magellanic Cloud (SMC), through the combination of data from the Australian Square Kilometre Array Pathfinder (ASKAP) and Parkes (Murriyang), as part of the Galactic Australian Square Kilometre Array Pathfinder (GASKAP) pilot survey. These GASKAP-HI pilot observations, for the first time, reveal ${\rm H\small I}$ in the SMC on similar physical scales as other important tracers of the interstellar medium, such as molecular gas and dust. The resultant image cube possesses an rms noise level of 1.1 K ( $1.6\,\mathrm{mJy\ beam}^{-1}$ ) $\mathrm{per}\ 0.98\,\mathrm{km\ s}^{-1}$ spectral channel with an angular resolution of $30^{\prime\prime}$ ( ${\sim}10\,\mathrm{pc}$ ). We discuss the calibration scheme and the custom imaging pipeline that utilises a joint deconvolution approach, efficiently distributed across a computing cluster, to accurately recover the emission extending across the entire ${\sim}25\,\mathrm{deg}^2$ field-of-view. We provide an overview of the data products and characterise several aspects including the noise properties as a function of angular resolution and the represented spatial scales by deriving the global transfer function over the full spectral range. A preliminary spatial power spectrum analysis on individual spectral channels reveals that the power law nature of the density distribution extends down to scales of 10 pc. We highlight the scientific potential of these data by comparing the properties of an outflowing high-velocity cloud with previous ASKAP+Parkes ${\rm H\small I}$ test observations.
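The spatial power spectrum step mentioned at the end can be sketched in a few lines: take the 2D FFT of a single spectral channel and azimuthally average the squared modulus. This is a generic illustration on a synthetic map, not the GASKAP pipeline.

```python
import numpy as np

def radial_power_spectrum(image: np.ndarray, nbins: int = 40):
    # 2D power spectrum, then an azimuthal (radial) average per k-bin.
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx / 2, y - ny / 2)
    edges = np.linspace(0, r.max(), nbins + 1)
    idx = np.digitize(r.ravel(), edges)
    spectrum = np.array([power.ravel()[idx == i].mean() for i in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), spectrum

channel = np.random.default_rng(3).normal(size=(256, 256))  # stand-in channel map
k, p_k = radial_power_spectrum(channel)  # a power-law fit to p_k vs k would follow
```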
Barriers and solutions to developing and maintaining research networks during a pandemic: An example from the iELEVATE perinatal network
Donna A. Santillan, Debra S. Brandt, Rachel Sinkey, Sheila Scheib, Susan Peterson, Rachel LeDuke, Lisa Dimperio, Cindy Cherek, Angela Varsho, Melissa Granza, Kim Logan, Stephen K. Hunter, Boyd M. Knosp, Heather A. Davis, Joseph C. Spring, Debra Piehl, Rani Makkapati, Thomas Doering, Stacy Harris, Lyndsey Day, Milton Eder, Patricia Winokur, Mark K. Santillan
Journal: Journal of Clinical and Translational Science / Volume 6 / Issue 1 / 2022
Published online by Cambridge University Press: 17 January 2022, e56
To improve maternal health outcomes, increased diversity is needed among pregnant people in research studies and community surveillance. To expand the pool, we sought to develop a network encompassing academic and community obstetrics clinics. Typical challenges in developing a network include site identification, contracting, onboarding sites, staff engagement, participant recruitment, funding, and institutional review board approvals. While not insurmountable, these challenges became magnified as we built a research network during a global pandemic. Our objective is to describe the framework utilized to resolve pandemic-related issues.
We developed a framework for site-specific adaptation of the generalized study protocol. Twice monthly video meetings were held between the lead academic sites to identify local challenges and to generate ideas for solutions. We identified site and participant recruitment challenges and then implemented solutions tailored to the local workflow. These solutions included the use of an electronic consent and videoconferences with local clinic leadership and staff. The processes for network development and maintenance changed to address issues related to the COVID-19 pandemic. However, aspects of the sample processing/storage and data collection elements were held constant between sites.
Adapting our consenting approach enabled maintaining study enrollment during the pandemic. The pandemic amplified issues related to contracting, onboarding, and IRB approval. Maintaining continuity in sample management and clinical data collection allowed for pooling of information between sites.
Adaptability is key to maintaining network sites. Navigating the rapidly changing guidelines for beginning and continuing research during the pandemic required frequent intra- and inter-institutional communication.
P.187 Evaluating congruency between intramedullary and subdural pressure in a porcine model of acute spinal cord injury
MA Rizzuto, A Allard Brown, K Kim, K So, N Manouchehri, M Webster, S Fisk, DE Griesdale, MS Sekhon, F Streijger, BK Kwon
Journal: Canadian Journal of Neurological Sciences / Volume 48 / Issue s3 / November 2021
Published online by Cambridge University Press: 05 January 2022, p. S74
Background: Clinical guidelines recommend MAP maintenance at 85-90 mmHg to optimize spinal cord perfusion post-SCI. Recently, there has been increased interest in spinal cord perfusion pressure as a surrogate marker for spinal cord blood flow. The study aims to determine the congruency of subdural and intramedullary spinal cord pressure measurements at the site of SCI, both rostral and caudal to the epicenter of injury.
Methods: Seven Yucatan pigs underwent a T5 to L1 laminectomy with intramedullary (IM) and subdural (SD) pressure sensors placed 2 mm rostral and 2 mm caudal to the epicenter of SCI. A T10 contusion SCI was performed followed by an 8-hour period of monitoring. Axial ultrasound images were captured at the epicenter of injury pre-SCI, post-SCI, and hourly thereafter.
Results: Pigs with pre-SCI cord to dural sac ratio (CDSR) of >0.8 exhibited greater occlusion of the subdural space post-SCI, with a positive correlation between IM and SD pressure rostral to the injury and a negative correlation caudal to the epicenter. Pigs with pre-SCI CDSR <0.8 exhibited no correlation between IM and SD pressure.
Conclusions: Congruency of IM and SD pressure is dependent on compartmentalization of the spinal cord occurring secondary to swelling that occludes the subdural space.
The formation of planetary systems with SPICA
Exploring Astronomical Evolution with SPICA
I. Kamp, M. Honda, H. Nomura, M. Audard, D. Fedele, L. B. F. M. Waters, Y. Aikawa, A. Banzatti, J.E. Bowey, M. Bradford, C. Dominik, K. Furuya, E. Habart, D. Ishihara, D. Johnstone, G. Kennedy, M. Kim, Q. Kral, S.-P. Lai, B. Larsson, M. McClure, A. Miotello, M. Momose, T. Nakagawa, D. Naylor, B. Nisini, S. Notsu, T. Onaka, E. Pantin, L. Podio, P. Riviere Marichalar, W. R. M. Rocha, P. Roelfsema, T. Shimonishi, Y.-W. Tang, M. Takami, R. Tazaki, S. Wolf, M. Wyatt, N. Ysard
In this era of spatially resolved observations of planet-forming disks with Atacama Large Millimeter Array (ALMA) and large ground-based telescopes such as the Very Large Telescope (VLT), Keck, and Subaru, we still lack statistically relevant information on the quantity and composition of the material that is building the planets, such as the total disk gas mass, the ice content of dust, and the state of water in planetesimals. SPace Infrared telescope for Cosmology and Astrophysics (SPICA) is an infrared space mission concept developed jointly by Japan Aerospace Exploration Agency (JAXA) and European Space Agency (ESA) to address these questions. The key unique capabilities of SPICA that enable this research are (1) the wide spectral coverage $10{-}220\,\mu\mathrm{m}$ , (2) the high line detection sensitivity of $(1{-}2) \times 10^{-19}\,\mathrm{W\,m}^{-2}$ with $R \sim 2\,000{-}5\,000$ in the far-IR (SAFARI), and $10^{-20}\,\mathrm{W\,m}^{-2}$ with $R \sim 29\,000$ in the mid-IR (SPICA Mid-infrared Instrument (SMI), spectrally resolving line profiles), (3) the high far-IR continuum sensitivity of 0.45 mJy (SAFARI), and (4) the observing efficiency for point source surveys. This paper details how mid- to far-IR infrared spectra will be unique in measuring the gas masses and water/ice content of disks and how these quantities evolve during the planet-forming period. These observations will clarify the crucial transition when disks exhaust their primordial gas and further planet formation requires secondary gas produced from planetesimals. The high spectral resolution mid-IR is also unique for determining the location of the snowline dividing the rocky and icy mass reservoirs within the disk and how the divide evolves during the build-up of planetary systems. Infrared spectroscopy (mid- to far-IR) of key solid-state bands is crucial for assessing whether extensive radial mixing, which is part of our Solar System history, is a general process occurring in most planetary systems and whether extrasolar planetesimals are similar to our Solar System comets/asteroids. We demonstrate that the SPICA mission concept would allow us to achieve the above ambitious science goals through large surveys of several hundred disks within $\sim\!2.5$ months of observing time.
Conserving migratory waterbirds and the coastal zone: the future of South-east Asia's intertidal wetlands
Ding Li Yong, Jing Ying Kee, Pyae Phyo Aung, Anuj Jain, Chin-Aik Yeap, Nyat Jun Au, Ayuwat Jearwattanakanok, Kim Keang Lim, Yat-Tung Yu, Vivian W. K. Fu, Paul Insua-Cao, Yusuke Sawa, Mike Crosby, Simba Chan, Nicola J. Crockford
Journal: Oryx / Volume 56 / Issue 2 / March 2022
South-east Asia's diverse coastal wetlands, which span natural mudflats and mangroves to man-made salt pans, offer critical habitat for many migratory waterbird species in the East Asian–Australasian Flyway. Species dependent on these wetlands include nearly the entire population of the Critically Endangered spoon-billed sandpiper Calidris pygmaea and the Endangered spotted greenshank Tringa guttifer, and significant populations of several other globally threatened and declining species. Presently, more than 50 coastal Important Bird and Biodiversity Areas (IBAs) in the region (7.4% of all South-east Asian IBAs) support at least one threatened migratory species. However, recent studies continue to reveal major knowledge gaps on the distribution of migratory waterbirds and important wetland sites along South-east Asia's vast coastline, including undiscovered and potential IBAs. Alongside this, there are critical gaps in the representation of coastal wetlands across the protected area networks of many countries in this region (e.g. Viet Nam, Indonesia, Malaysia), hindering effective conservation. Although a better understanding of the value of coastal wetlands to people and their importance to migratory species is necessary, governments and other stakeholders need to do more to strengthen the conservation of these ecosystems by improving protected area coverage, habitat restoration, and coastal governance and management. This must be underpinned by the judicious use of evidence-based approaches, including satellite-tracking of migratory birds, ecological research and ground surveys.
June 2016, 36(6): 3339-3356. doi: 10.3934/dcds.2016.36.3339
On two-sided estimates for the nonlinear Fourier transform of KdV
Jan-Cornelius Molnar
Winterthurerstrasse 190, 8057 Zurich, Switzerland
Received April 2015; Revised October 2015; Published December 2015
The KdV-equation $u_t = -u_{xxx} + 6uu_x$ on the circle admits a global nonlinear Fourier transform, also known as Birkhoff map, linearizing the KdV flow. The regularity properties of $u$ are known to be closely related to the decay properties of the corresponding nonlinear Fourier coefficients. In this paper we obtain two-sided polynomial estimates of all integer Sobolev norms $||u||_m$, $m\ge 0$, in terms of the weighted norms of the nonlinear Fourier transform, which are linear in the highest order.
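For orientation, here is the standard linear model case (textbook background, not a claim of the paper): on the circle, Sobolev regularity is read off directly from the linear Fourier coefficients $\hat{u}(k)$,

\[
\|u\|_m^2 = \sum_{k \in \mathbb{Z}} \bigl(1 + |k|^2\bigr)^m \, |\hat{u}(k)|^2 .
\]

The theorem summarized above plays the analogous game with the nonlinear Fourier coefficients (Birkhoff coordinates): each integer Sobolev norm is bounded above and below by polynomial expressions in suitably weighted norms of those coefficients, linear in the top-order term.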
Keywords: Birkhoff coordinates, integrable PDEs, nonlinear Fourier transform, action-angle variables, Korteweg-de Vries equation.
Mathematics Subject Classification: Primary: 37K15; Secondary: 35Q53, 37K1.
Citation: Jan-Cornelius Molnar. On two-sided estimates for the nonlinear Fourier transform of KdV. Discrete & Continuous Dynamical Systems - A, 2016, 36 (6) : 3339-3356. doi: 10.3934/dcds.2016.36.3339
Eduardo Cerpa. Control of a Korteweg-de Vries equation: A tutorial. Mathematical Control & Related Fields, 2014, 4 (1) : 45-99. doi: 10.3934/mcrf.2014.4.45
M. Agrotis, S. Lafortune, P.G. Kevrekidis. On a discrete version of the Korteweg-De Vries equation. Conference Publications, 2005, 2005 (Special) : 22-29. doi: 10.3934/proc.2005.2005.22
Guolian Wang, Boling Guo. Stochastic Korteweg-de Vries equation driven by fractional Brownian motion. Discrete & Continuous Dynamical Systems - A, 2015, 35 (11) : 5255-5272. doi: 10.3934/dcds.2015.35.5255
Muhammad Usman, Bing-Yu Zhang. Forced oscillations of the Korteweg-de Vries equation on a bounded domain and their stability. Discrete & Continuous Dynamical Systems - A, 2010, 26 (4) : 1509-1523. doi: 10.3934/dcds.2010.26.1509
Eduardo Cerpa, Emmanuelle Crépeau. Rapid exponential stabilization for a linear Korteweg-de Vries equation. Discrete & Continuous Dynamical Systems - B, 2009, 11 (3) : 655-668. doi: 10.3934/dcdsb.2009.11.655
Pierre Garnier. Damping to prevent the blow-up of the korteweg-de vries equation. Communications on Pure & Applied Analysis, 2017, 16 (4) : 1455-1470. doi: 10.3934/cpaa.2017069
Ludovick Gagnon. Qualitative description of the particle trajectories for the N-solitons solution of the Korteweg-de Vries equation. Discrete & Continuous Dynamical Systems - A, 2017, 37 (3) : 1489-1507. doi: 10.3934/dcds.2017061
Arnaud Debussche, Jacques Printems. Convergence of a semi-discrete scheme for the stochastic Korteweg-de Vries equation. Discrete & Continuous Dynamical Systems - B, 2006, 6 (4) : 761-781. doi: 10.3934/dcdsb.2006.6.761
Qifan Li. Local well-posedness for the periodic Korteweg-de Vries equation in analytic Gevrey classes. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1097-1109. doi: 10.3934/cpaa.2012.11.1097
Anne de Bouard, Eric Gautier. Exit problems related to the persistence of solitons for the Korteweg-de Vries equation with small noise. Discrete & Continuous Dynamical Systems - A, 2010, 26 (3) : 857-871. doi: 10.3934/dcds.2010.26.857
Shou-Fu Tian. Initial-boundary value problems for the coupled modified Korteweg-de Vries equation on the interval. Communications on Pure & Applied Analysis, 2018, 17 (3) : 923-957. doi: 10.3934/cpaa.2018046
Roberto A. Capistrano-Filho, Shuming Sun, Bing-Yu Zhang. General boundary value problems of the Korteweg-de Vries equation on a bounded domain. Mathematical Control & Related Fields, 2018, 8 (3&4) : 583-605. doi: 10.3934/mcrf.2018024
John P. Albert. A uniqueness result for 2-soliton solutions of the Korteweg-de Vries equation. Discrete & Continuous Dynamical Systems - A, 2019, 39 (7) : 3635-3670. doi: 10.3934/dcds.2019149
Eduardo Cerpa, Emmanuelle Crépeau, Julie Valein. Boundary controllability of the Korteweg-de Vries equation on a tree-shaped network. Evolution Equations & Control Theory, 2019, 0 (0) : 0-0. doi: 10.3934/eect.2020028
Ivonne Rivas, Muhammad Usman, Bing-Yu Zhang. Global well-posedness and asymptotic behavior of a class of initial-boundary-value problem of the Korteweg-De Vries equation on a finite domain. Mathematical Control & Related Fields, 2011, 1 (1) : 61-81. doi: 10.3934/mcrf.2011.1.61
Dugan Nina, Ademir Fernando Pazoto, Lionel Rosier. Global stabilization of a coupled system of two generalized Korteweg-de Vries type equations posed on a finite domain. Mathematical Control & Related Fields, 2011, 1 (3) : 353-389. doi: 10.3934/mcrf.2011.1.353
Netra Khanal, Ramjee Sharma, Jiahong Wu, Juan-Ming Yuan. A dual-Petrov-Galerkin method for extended fifth-order Korteweg-de Vries type equations. Conference Publications, 2009, 2009 (Special) : 442-450. doi: 10.3934/proc.2009.2009.442
Brian Pigott. Polynomial-in-time upper bounds for the orbital instability of subcritical generalized Korteweg-de Vries equations. Communications on Pure & Applied Analysis, 2014, 13 (1) : 389-418. doi: 10.3934/cpaa.2014.13.389
Olivier Goubet. Asymptotic smoothing effect for weakly damped forced Korteweg-de Vries equations. Discrete & Continuous Dynamical Systems - A, 2000, 6 (3) : 625-644. doi: 10.3934/dcds.2000.6.625
Zhaosheng Feng, Yu Huang. Approximate solution of the Burgers-Korteweg-de Vries equation. Communications on Pure & Applied Analysis, 2007, 6 (2) : 429-440. doi: 10.3934/cpaa.2007.6.429
Warren Buffett's Investing Strategy: An Inside Look
Andrew Bloomenthal
Andrew Bloomenthal has 20+ years of editorial experience as a financial journalist and as a financial services marketing writer.
Suzanne Kvilhaug
Fact checked by Suzanne Kvilhaug
Suzanne is a content marketer, writer, and fact-checker. She holds a Bachelor of Science in Finance degree from Bridgewater State University and helps develop content strategies for financial brands.
A staunch believer in the value-based investing model, investment guru Warren Buffett has long held the belief that people should only buy stocks in companies that exhibit solid fundamentals, strong earnings power, and the potential for continued growth. Although these seem like simple concepts, detecting them is not always easy. Fortunately, Buffett has developed a list of tenets that help him employ his investment philosophy to maximum effect.
Warren Buffett is noted for introducing the value investing philosophy to the masses, advocating investing in companies that show robust earnings and long-term growth potential.
To drill down on his analysis, Buffett has identified several core tenets, in the categories of business, management, financial measures, and value.
Buffett favors companies that distribute dividend earnings to shareholders and is drawn to transparent companies that cop to their mistakes.
Buffett's Investing Style
Buffett's tenets fall into the following four categories:
Business
Management
Financial measures
Value
This article explores the different concepts housed within each silo.
Business Tenets
Buffett restricts his investments to businesses he can easily analyze. After all, if a company's operational philosophy is ambiguous, it's difficult to reliably project its performance. For this reason, Buffett avoided most technology stocks in the late 1990s, when the companies were new and unproven, and consequently did not suffer significant losses when the dot-com bubble burst in the early 2000s.
Management Tenets
Buffett's management tenets help him evaluate the track records of a company's higher-ups, to determine if they've historically reinvested profits back into the company, or if they've redistributed funds back to shareholders in the form of dividends. Buffett favors the latter scenario, which suggests a company is eager to maximize shareholder value, as opposed to greedily pocketing all profits.
Buffett also places high importance on transparency. After all, every company makes mistakes, but only those that disclose their errors are worthy of a shareholder's trust.
Lastly, Buffett seeks out companies that make innovative strategic decisions, rather than copying another company's tactics.
Tenets in Financial Measures
In the financial measures silo, Buffett focuses on low-levered companies with high profit margins. But above all, he prizes the importance of the economic value added (EVA) calculation, which estimates a company's profits, after the shareholders' stake is removed from the equation. In other words, EVA is the net profit, minus the expenditures involved with raising the initial capital.
At first glance, calculating the EVA metric is complex, because it potentially factors in more than 160 adjustments. In practice, however, only a few adjustments are typically made, depending on the individual company and the sector in which it operates.
\[
\text{Economic Value Added} = NOPAT - (CI \times WACC)
\]
where $NOPAT$ = net operating profit after taxes, $CI$ = capital invested, and $WACC$ = weighted average cost of capital.
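As a minimal numeric sketch of the formula above (all figures invented, not drawn from any company's filings):

```python
# Minimal EVA sketch of the formula above; figures are hypothetical.
def economic_value_added(nopat: float, capital_invested: float, wacc: float) -> float:
    """EVA = NOPAT - (capital invested * WACC)."""
    return nopat - capital_invested * wacc

# e.g. $1.2M NOPAT, $8M capital invested, 9% WACC -> $480,000 of EVA
print(economic_value_added(1_200_000, 8_000_000, 0.09))
```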
Buffett's final two financial tenets are theoretically similar to the EVA. First, he studies what he refers to as "owner's earnings." This is essentially the cash flow available to shareholders, technically known as free cash flow-to-equity (FCFE). Buffett defines this metric as net income plus depreciation, minus any capital expenditures (CAPX) and working capital (W/C) costs. The owner's earnings help Buffett evaluate a company's ability to generate cash for shareholders.
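A similarly small sketch of the owner's-earnings definition just given, net income plus depreciation, minus CAPX and working-capital costs; the amounts are invented:

```python
# Owner's earnings per the definition above; amounts are hypothetical.
def owners_earnings(net_income: float, depreciation: float,
                    capex: float, working_capital_costs: float) -> float:
    return net_income + depreciation - capex - working_capital_costs

print(owners_earnings(900_000, 150_000, 200_000, 50_000))  # -> 800000
```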
Value Tenets
In this category, Buffett seeks to establish a company's intrinsic value. He accomplishes this by projecting the future owner's earnings, then discounting them back to present-day levels. Furthermore, Buffett generally ignores short-term market moves, focusing instead on long-term returns. But on rare occasions, Buffett will act on short-term fluctuations, if a tantalizing deal presents itself. For example, if a company with strong fundamentals suddenly drops in price from $50 per share to $40 per share, Buffett might acquire a few extra shares at a discount.
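The intrinsic-value step described above can be hedged into a short sketch: project owner's earnings forward and discount them to the present. The growth rate, horizon, and discount rate below are illustrative assumptions, not Buffett's figures.

```python
# Discount projected owner's earnings back to present value (illustrative).
def present_value(cash_flows, discount_rate: float) -> float:
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

projected = [800_000 * 1.05 ** t for t in range(1, 11)]  # 5% growth, 10 years
print(round(present_value(projected, 0.09)))              # hypothetical intrinsic value
```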
Finally, Buffett famously coined the term "moat," which he describes as "something that gives a company a clear advantage over others and protects it against incursions from the competition."
Buffett realizes that not all investors possess the expertise needed to set his analytical tools in action and advises newer investors to consider low-cost index funds over individual stocks.
Buffett's tenets provide a foundation on which he rests his value investing philosophy. Applying these tenets can be difficult, given the data that must be gathered and the metrics that must be calculated. But those who can successfully employ these analytical tools can invest like Buffett and watch their portfolios thrive.
Source: Berkshire Hathaway. "1986 Annual Report."
Shareholder Value Added (SVA): Definition, Uses, Formula
Shareholder value added (SVA) is a measure of the operating profits that a company has produced in excess of its funding costs, or cost of capital.
Internal Rate of Return (IRR) Rule: Definition and Example
The internal rate of return (IRR) is a metric used in capital budgeting to estimate the return of potential investments.
Return on Invested Capital: What Is It, Formula and Calculation, and Example
Return on invested capital (ROIC) is a way to assess a company's efficiency at allocating the capital under its control to profitable investments.
Who Was Benjamin Graham?
Benjamin Graham was an influential investor who is regarded as the father of value investing.
Economic Value Added (EVA) Definition: Pros and Cons, With Formula
Economic value added (EVA) is a financial metric based on residual wealth, calculated by deducting a firm's cost of capital from operating profit.
Weighted Average Cost of Capital (WACC) Explained with Formula and Example
The weighted average cost of capital (WACC) calculates a firm's cost of capital, proportionately weighing each category of capital.
An example of a two-term asymptotics for the "counting function" of a fractal drum
Authors: Jacqueline Fleckinger-Pellé and Dmitri G. Vassiliev
Journal: Trans. Amer. Math. Soc. 337 (1993), 99-116
MSC: Primary 58G18; Secondary 58G25
DOI: https://doi.org/10.1090/S0002-9947-1993-1176086-7
MathSciNet review: 1176086
Abstract: In this paper we study the spectrum of the Dirichlet Laplacian in a bounded domain $\Omega \subset {\mathbb {R}^n}$ with fractal boundary $\partial \Omega$. We construct an open set $\mathcal {Q}$ for which we can effectively compute the second term of the asymptotics of the "counting function" $N(\lambda ,\mathcal {Q})$, the number of eigenvalues less than $\lambda$. In this example, contrary to the M. V. Berry conjecture, the second asymptotic term is proportional to a periodic function of In $\lambda$, not to a constant. We also establish some properties of the $\zeta$-function of this problem. We obtain asymptotic inequalities for more general domains and in particular for a connected open set $\mathcal {O}$ derived from $\mathcal {Q}$. Analogous periodic functions still appear in our inequalities. These results have been announced in $[{\text {FV}}]$.
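As standard background (not a result of this paper): for a bounded domain with smooth boundary, the sharp two-term Weyl asymptotics for the Dirichlet Laplacian take the form

\[
N(\lambda) = (2\pi)^{-n}\,\omega_n\,|\Omega|\,\lambda^{n/2} - \tfrac{1}{4}\,(2\pi)^{-(n-1)}\,\omega_{n-1}\,|\partial\Omega|\,\lambda^{(n-1)/2} + o\bigl(\lambda^{(n-1)/2}\bigr),
\]

where $\omega_k$ denotes the volume of the unit ball in $\mathbb{R}^k$. The point of the example above is that for the fractal domain $\mathcal{Q}$ the second term does not take this form: its coefficient oscillates as a periodic function of $\ln \lambda$ rather than being a constant.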
References
M. V. Berry, Some geometric aspects of wave motion: wavefront dislocations, diffraction catastrophes, diffractals, Geometry of the Laplace operator (Proc. Sympos. Pure Math., Univ. Hawaii, Honolulu, Hawaii, 1979) Proc. Sympos. Pure Math., XXXVI, Amer. Math. Soc., Providence, R.I., 1980, pp. 13–28. MR 573427
M. Š. Birman and M. Z. Solomjak, The principal term of the spectral asymptotics for "non-smooth" elliptic problems, Funkcional. Anal. i Priložen. 4 (1970), no. 4, 1–13 (Russian). MR 0278126
Jean Brossard and René Carmona, Can one hear the dimension of a fractal?, Comm. Math. Phys. 104 (1986), no. 1, 103–122. MR 834484
R. Courant and D. Hilbert, Methods of mathematical physics. Vol. I, Interscience Publishers, Inc., New York, N.Y., 1953. MR 0065391
J. J. Duistermaat and V. W. Guillemin, The spectrum of positive elliptic operators and periodic bicharacteristics, Invent. Math. 29 (1975), no. 1, 39–79. MR 405514, DOI https://doi.org/10.1007/BF01405172
K. J. Falconer, The geometry of fractal sets, Cambridge Tracts in Mathematics, vol. 85, Cambridge University Press, Cambridge, 1986. MR 867284
Jacqueline Fleckinger and Guy Métivier, Théorie spectrale des opérateurs uniformément elliptiques sur quelques ouverts irréguliers, C. R. Acad. Sci. Paris Sér. A-B 276 (1973), A913–A916 (French). MR 320550
Jacqueline Fleckinger and Dmitri G. Vasil′ev, Tambour fractal: exemple d'une formule asymptotique à deux termes pour la "fonction de comptage", C. R. Acad. Sci. Paris Sér. I Math. 311 (1990), no. 13, 867–872 (French, with English summary). MR 1084044
C. F. Gauss, Disquisitiones arithmeticae, Leipzig, 1801.
V. Ja. Ivriĭ, The second term of the spectral asymptotics for a Laplace-Beltrami operator on manifolds with boundary, Funktsional. Anal. i Prilozhen. 14 (1980), no. 2, 25–34 (Russian). MR 575202
Victor Ivriĭ, Precise spectral asymptotics for elliptic operators acting in fiberings over manifolds with boundary, Lecture Notes in Mathematics, vol. 1100, Springer-Verlag, Berlin, 1984. MR 771297
Michel L. Lapidus, Fractal drum, inverse spectral problems for elliptic operators and a partial resolution of the Weyl-Berry conjecture, Trans. Amer. Math. Soc. 325 (1991), no. 2, 465–529. MR 994168, DOI https://doi.org/10.1090/S0002-9947-1991-0994168-5
Michel L. Lapidus, Spectral and fractal geometry: from the Weyl-Berry conjecture for the vibrations of fractal drums to the Riemann zeta-function, Differential equations and mathematical physics (Birmingham, AL, 1990) Math. Sci. Engrg., vol. 186, Academic Press, Boston, MA, 1992, pp. 151–181. MR 1126694, DOI https://doi.org/10.1016/S0076-5392%2808%2963379-2
Michel L. Lapidus and Jacqueline Fleckinger, The vibrations of a "fractal drum", Differential equations (Xanthi, 1987) Lecture Notes in Pure and Appl. Math., vol. 118, Dekker, New York, 1989, pp. 423–436. MR 1021743
Michel L. Lapidus and Jacqueline Fleckinger-Pellé, Tambour fractal: vers une résolution de la conjecture de Weyl-Berry pour les valeurs propres du laplacien, C. R. Acad. Sci. Paris Sér. I Math. 306 (1988), no. 4, 171–175 (French, with English summary). MR 930556
Michel L. Lapidus and Carl Pomerance, Fonction zêta de Riemann et conjecture de Weyl-Berry pour les tambours fractals, C. R. Acad. Sci. Paris Sér. I Math. 310 (1990), no. 6, 343–348 (French, with English summary). MR 1046509
R. B. Melrose, Weyl's conjecture for manifolds with concave boundary, Geometry of the Laplace operator (Proc. Sympos. Pure Math., Univ. Hawaii, Honolulu, Hawaii, 1979) Proc. Sympos. Pure Math., XXXVI, Amer. Math. Soc., Providence, R.I., 1980, pp. 257–274. MR 573438
Richard Melrose, The trace of the wave group, Microlocal analysis (Boulder, Colo., 1983) Contemp. Math., vol. 27, Amer. Math. Soc., Providence, RI, 1984, pp. 127–167. MR 741046, DOI https://doi.org/10.1090/conm/027/741046
G. Métivier, Etude asymptotique des valeurs propres et de la fonction spectrale de problèmes aux limites, Thèse de Doctorat d'Etat, Mathématiques, Université de Nice, France, 1976.
Guy Métivier, Valeurs propres de problèmes aux limites elliptiques irrégulières, Bull. Soc. Math. France Suppl. Mém. 51-52 (1977), 125–219 (French). MR 473578
Yu. G. Safarov, Asymptotics of the spectrum of a boundary value problem with periodic billiard trajectories, Funktsional. Anal. i Prilozhen. 21 (1987), no. 4, 88–90 (Russian). MR 925085
---, Precise asymptotics of the spectrum of a boundary value problem and periodic billiards, Izv. Akad. Nauk SSSR Ser. Mat. 52 (1988), no. 6, 1230–1251; English transl. in Math. USSR-Izv.
D. G. Vasil′ev, Asymptotic behavior of the spectrum of a boundary value problem, Trudy Moskov. Mat. Obshch. 49 (1986), 167–237, 240 (Russian). MR 853539
---, One can hear the dimension of a connected fractal in ${\mathbb {R}^2}$, Integral Equations and Inverse Problems (V. Petkov and R. Lazarov, eds.), Longman Academic, Scientific & Technical, 1991, pp. 270–273.
H. Weyl, Über die asymptotische Verteilung der Eigenwerte, Gött. Nach. (1911), 110–117.
Hermann Weyl, Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung), Math. Ann. 71 (1912), no. 4, 441–479 (German). MR 1511670, DOI https://doi.org/10.1007/BF01456804
Predictive side decoding for human-centered multiple description image coding
Yuanyuan Xu1
Multiple description coding (MDC) provides a favorable solution for human-centered image communication, taking into account people's varying viewing situations as well as their demand for real-time image display. As an effective technique for MDC, three-description lattice vector quantization (3D-LVQ) is considered for image coding in this paper. Based on the intra- and inter-correlation in the 3D-LVQ index assignment as well as wavelet intra-subband correlation, a novel predictive decoding method for 3D-LVQ-based image coding is proposed to enhance side decoding performance: lost descriptions (sublattice points) are predicted from received and neighboring indices so that wavelet vectors (fine lattice points) are better reconstructed in side decoding. Experimental results validate the effectiveness of the proposed decoding scheme in terms of rate-distortion performance.
Revolutionary advances in computing technology have changed almost every aspect of human life [1]. However, these changes, though intended to be positive, are not always so. Because many computing technologies are designed without regard for human needs or social and cultural contexts, they are complex, difficult to use, and demanding, especially for ordinary people who do not possess specialized knowledge [1]. These issues create a need for new computing paradigms that focus on people instead of machines. Human-centered computing (HCC) [2–9] aims to bridge gaps between multiple disciplines and tries to design and implement computing systems that support human endeavor.
According to [1], HCC system and algorithm design needs to take into account individual human abilities and limitations, social and cultural awareness, and adaptability across individuals and specific situations, for example, by designing recommender systems or recommendation services that consider an individual's social and cultural context [10–17]. An interesting topic among multimedia applications of HCC is the adaptation of multimedia communication to the varying demands of different people, whose communication channels may have varying bandwidth and loss probabilities. When packet loss occurs during online image browsing, people tend to prefer viewing a degraded version of the whole image immediately rather than waiting and staring at a partially displayed fine image. Designing a human-centered image coding scheme that takes into account people's varying viewing situations as well as their demand for real-time image display is a challenging problem.
Multiple description coding (MD coding or MDC) [18] provides a favorable solution to this problem. Although the reliability of multimedia communication can be improved from the perspective of multicore real-time system design [19–21] or load balancing of cloud-edge computing [22–27], MDC offers an error-resilient source coding method that combats information loss over lossy networks without retransmission. MDC generates different encoded versions of the same source. Each version is referred to as a description and is transmitted separately over unreliable networks. Each description can independently provide a degraded version of the source, while a finer reconstruction quality can be obtained as the number of received descriptions increases. Generally, the decoding of one description or a subset of descriptions is known as side decoding, with the corresponding side distortions, while the decoding of all the descriptions is central decoding, resulting in a central distortion [28]. Using MDC, people with varying bandwidth can select different numbers of descriptions corresponding to different reconstruction qualities. During network congestion, people can access a coarsely reconstructed source immediately, instead of waiting for retransmission of all the lost packets.
Vaishampayan introduced the earliest practical MD technique known as multiple description scalar quantizer (MDSQ) [29]. MDSQ generates descriptions by performing scalar quantization, followed by an index assignment. A wavelet image coding based on MDSQ was developed in [30]. Another wavelet-based MD image coding scheme is proposed in [31] for image transmission with mixed impulse noise, where multi-objective evolutionary algorithm is used to solve the side quantization optimization problem and the parameter optimization problem of the denoising filter simultaneously.
Multiple description lattice vector quantization (multiple description LVQ or MDLVQ) was later developed in [32], and a study on optimal MDLVQ design was presented in [33]. MDLVQ generates descriptions by performing vector quantization first, and then, an index assignment maps a fine lattice point to multiple sublattice points. An image coding scheme based on two-description LVQ was developed in [34], which shows better coding performance than the corresponding MDSQ-based counterpart [30]. In [35], the design of M-description LVQ is investigated, where the MDLVQ index assignment design is translated into a transportation problem. The effectiveness of the proposed index assignment design in [35] is verified under high-resolution assumption. In [36], an analytical expression for optimal entropy-constrained asymmetric MDLVQ design is presented, which allows unequal packet-loss probabilities and side entropies. In [37], the design of symmetric MD coinciding LVQ is proposed, where the coinciding sublattices refer to sublattices with the same index but generated by different generator matrices. The developed MD coinciding LVQ scheme is applied to standard test images.
Other MD schemes include using forward error correction codes [38], MDC via polyphase transform and selective quantization [39], set partitioning in hierarchical trees (SPIHT)-based image MDC [40], and a JPEG 2000-based MD approach presented in [41]. In [42], a just noticeable difference (JND)-based MD image coding scheme is proposed utilizing the characteristics of the human visual model. In [43], an adaptive reconstruction-based MD image coding scheme is proposed with randomly offset quantizations. Deep learning approaches [44] have also been applied to MDC. In [45], a standard-compliant multiple description coding framework is proposed, where the input image is polyphase downsampled to form two descriptions for the standard codec, while during decoding deep convolutional neural networks are utilized to conduct artifact removal and image super-resolution to enhance the reconstructed image quality. In [46], MDC and convolutional autoencoders are combined for image compression to achieve high coding efficiency.
Besides traditional images, a few research works on MDC target 3D depth images or single-view and multiview video sequences. In [47], observing that 3D depth images have special characteristics and can be classified into edge blocks and smooth blocks, a two-description LVQ scheme is proposed for efficient compression of 3D depth images. In [48], a novel coding scheme has been proposed for video sequences based on the spatial-temporal masking characteristics of the human visual system. In [49], the multiview sequence is spatially polyphase subsampled and grouped by "cross-interleaved" sampling to generate two subsequences, and an MDC scheme is proposed which directly reuses the computed modes and prediction vectors of one subsequence for the other. This work is extended in [50], where one subsequence is directly coded by the joint multiview video coding (JMVC) encoder, and the other subsequence selectively reuses the prediction mode and the prediction vector of the coded subsequence to improve the rate-distortion performance. On the decoder side, the side reconstruction quality is improved using a gradient-based interpolation.
Most of the abovementioned works center on two-channel MDC, i.e., two-description coding. Compared with two-description MDC, coding with more descriptions provides better robustness against description loss, especially for networks with high loss ratios. However, redundancy increases markedly with the number of descriptions. Three-description coding may thereby be a good trade-off in some cases. On the other hand, compared with MDSQ, MDLVQ exhibits better coding efficiency and extends more easily to more descriptions. Therefore, a three-description lattice vector quantization (3D-LVQ)-based image coding scheme is considered in this paper.
The general design of 3D-LVQ is concerned with index assignment, which is discussed in [33] and [51]. Here, we consider how to take good advantage of the index assignment result for better reconstruction quality in image decoding. For vector reconstruction at the decoder side when some descriptions (i.e., sublattice points in MD-LVQ) are lost, the existing MD-LVQ coding schemes employ a simple side decoding of each vector individually, based only on the received sublattice points of that vector. We observe a strong correlation characteristic in the 3D-LVQ index assignment result, which can be exploited to enhance side decoding for a source with memory. Specifically, in the context of wavelet image coding, a predictive side decoding method is proposed accordingly to improve reconstruction quality in side decoding. Compared with the existing work in [33, 51], which only decodes the received sublattice points during description losses, the proposed scheme can predict the lost sublattice points based on index correlation.
The main contributions of this paper can be summarized as follows:
∙ The intra- and inter-correlation between sublattice points in the 3D-LVQ index assignment has been analyzed and discussed, followed by the correlation discussion of wavelet intra-subbands.
∙ Based on correlation discussion, a novel predictive decoding method for 3D-LVQ-based image coding is proposed to enhance side decoding performance. The performance of the proposed predictive decoding scheme is verified by experimental results.
The remainder of the paper is structured as follows. Section 2 provides a 3D-LVQ-based image coding scheme. Section 3 presents a novel predictive side decoding approach. Experimental settings and results are presented in Sections 4 and 5, respectively, while Section 6 concludes the paper.
Three-description LVQ-based image coding
In this section, we first provide a concise description of 3D-LVQ and then present a 3D-LVQ-based image coding scheme.
3D-LVQ
For a given lattice Λ in the L-dimensional Euclidean space, a sublattice Λ′⊆Λ is said to be geometrically similar to Λ if Λ′ can be obtained from Λ by applying a scaling, rotation, or reflection. The index number N of the sublattice Λ′ is defined as the number of elements of Λ (fine lattice points) in each Voronoi cell of Λ′. 3D-LVQ aims to map one fine lattice point λ (λ∈Λ) to three sublattice points λ1′, λ2′, and λ3′ (λ1′,λ2′,λ3′∈Λ′) based on a bijective labeling function α(⋅) (also known as an index assignment):
$$ \alpha (\lambda) = (\lambda_{1}',\lambda_{2}',\lambda_{3}') \tag{1} $$
for minimizing the side distortions when only one or two sublattice points are received. The overall 1-description side distortion \(D_{s1,\lambda}\) and the overall 2-description side distortion \(D_{s2,\lambda}\) are given as:
$$ D_{s1,\lambda} = \left\| {\lambda - \lambda_{1}'} \right\|^{2} + \left\| {\lambda - \lambda_{2}'} \right\|^{2} + \left\| {\lambda - \lambda_{3}'} \right\|^{2} \tag{2} $$
$$ D_{s2,\lambda} = \left\| {\lambda - \frac{{\lambda_{1}' + \lambda_{2}' }}{2}} \right\|^{2} + \left\| {\lambda - \frac{{\lambda_{2}' + \lambda_{3}' }}{2}} \right\|^{2} + \left\| {\lambda - \frac{{\lambda_{1}' + \lambda_{3}' }}{2}} \right\|^{2} \tag{3} $$
respectively, where the midpoint of two received sublattice points is taken as the reconstructed vector for the 2-description-based side decoding. The optimal index assignment design to minimize the side distortions or the expected distortion is a challenging task, and the index assignment based on A2 lattice can be found in [33] and [51]. Figure 1 shows an example of the labeling function obtained with the index assignment result in [33] and [51] based on A2 lattice with index number N=31, which has been shown to minimize the side distortions. For instance, the lattice point "OAB" in Fig. 1 is represented by the three sublattice points "O," "A," and "B," while another lattice point "BOO" is mapped to the three sublattice points "B," "O," and "O." In this paper, we consider the 3D-LVQ with the optimal index assignment as shown in Fig. 1.
Index assignment based on A2 lattice with N = 31. Lattice points λ, sublattice points λ′ are marked by × and ∙, respectively
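For illustration, the two side distortions can be evaluated directly from a fine lattice point and its assigned 3-tuple. The following is a minimal Python sketch of Eqs. (2) and (3), assuming point coordinates are available as arrays (this representation is ours, not part of the original scheme):

import numpy as np

def side_distortions(lam, subs):
    """Overall 1- and 2-description side distortions of Eqs. (2)-(3).
    lam  : fine lattice point, array-like of coordinates
    subs : the ordered 3-tuple (l1', l2', l3') assigned by alpha
    """
    lam = np.asarray(lam, dtype=float)
    s = [np.asarray(p, dtype=float) for p in subs]
    d_s1 = sum(np.sum((lam - p) ** 2) for p in s)
    d_s2 = sum(np.sum((lam - (s[i] + s[j]) / 2.0) ** 2)
               for i, j in [(0, 1), (1, 2), (0, 2)])
    return d_s1, d_s2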
3D-LVQ-based image coding
As in [34], a simple 3D-LVQ-based image encoding scheme is shown in Fig. 2. As a popular technique for image compression, discrete wavelet transform (DWT) can provide multiresolution representation and subband decomposition for images and capture feature information in horizontal, vertical, and diagonal directions [52]. DWT is considered for image coding in this paper. After applying a DWT to the input image, an input vector x is constructed in a subband. It is then quantized to a (fine) lattice point λ(x), which is mapped to three sublattice points λ1′(x), λ2′(x), and λ3′(x) to be transmitted in separate channels after performing arithmetic coding.
At the receiver, decoding is the exact reverse of encoding. Due to network congestion or channel errors, some channels of information (descriptions) may be lost. Therefore, three different types of 3D-LVQ decoders may be needed, that is, one-description-based and two-description-based side decoding as well as three-description-based central decoding. Denote by \(\hat {\textbf {x}}\) the reconstructed vector x. If all the three sublattice points of vector x are received, the central decoder yields α−1(λ1′(x),λ2′(x),λ3′(x))=λ(x), where α−1 is the inverse function of the labeling function α. If two sublattice points are received while one is lost, the conventional two-description-based side decoder simply takes the average of the two sublattice points λi′(x) and λj′(x) (1≤i,j≤3,i≠j) as the reconstructed vector:
$$ \hat{\textbf{x}} = (\lambda'_{i}(\textbf{x}) + \lambda'_{j}(\textbf{x}))/2. \tag{4} $$
In the case of only one sublattice point λi′(x) being received, the conventional one-description-based side decoder just uses the received sublattice point for the reconstruction:
$$ \hat{\textbf{x}} = \lambda'_{i}(\textbf{x}). \tag{5} $$
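As a minimal sketch of the three conventional decoders, assume the labeling function of Fig. 1 is available as a Python dictionary alpha mapping each fine lattice point to its ordered 3-tuple of sublattice points (this dictionary representation is our assumption):

import numpy as np

def invert_labeling(alpha):
    # alpha: {fine_point: (s1, s2, s3)}, points stored as tuples.
    # The inverse realizes alpha^{-1} for central decoding.
    return {subs: lam for lam, subs in alpha.items()}

def central_decode(inv_alpha, s1, s2, s3):
    # All three descriptions received: exact recovery of the fine point.
    return np.asarray(inv_alpha[(s1, s2, s3)], dtype=float)

def side_decode_2desc(s_i, s_j):
    # One description lost: midpoint of the two received points, Eq. (4).
    return (np.asarray(s_i, dtype=float) + np.asarray(s_j, dtype=float)) / 2.0

def side_decode_1desc(s_i):
    # Two descriptions lost: the received sublattice point itself, Eq. (5).
    return np.asarray(s_i, dtype=float)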
In the following section, we will propose a more effective vector reconstruction method to improve the side decoding performance by taking advantage of the correlation of sublattice points in the 3D-LVQ index assignment and the wavelet intra-subband correlation characteristics.
3D-LVQ-based predictive side decoding
Correlation discussion
As can be seen from Fig. 1, each fine lattice point is mapped to an ordered 3-tuple with the three sublattice points being as close as possible to the fine lattice point for minimizing side distortions [33, 51]. In this way, we can see that there is a strong intra-correlation among the three sublattice points for a fine lattice point. More importantly, there exists a substantial inter-correlation among neighboring fine lattice points in terms of their corresponding sublattice points. In other words, neighboring fine lattice points share most sublattice points in the index assignment. In Fig. 1, for instance, the fine lattice point labeled as "OOA" shares at least two sublattice points with its six closest neighbors "AOO," "OAO," "BOO," "OAB," "AOB," and "OAF," regardless of the order. Statistically, we observe from the figure that the immediately neighboring fine lattice points have the same three sublattice points (but in different order) with a probability of 78/186, while they share two sublattice points with a probability of 108/186. That is to say, these immediately neighboring fine lattice points share at least two sublattice points. As the distance between two fine lattice points increases, they have fewer sublattice points in common.
On the other hand, it is well known that a wavelet image normally exhibits strong intra-subband correlation, especially in low-frequency subbands, as the discrete wavelet transform redistributes the energy of the image into different subbands. One-dimensional DWT passes the signal through a low-pass filter and a high-pass filter simultaneously, providing approximation coefficients (low-frequency subband) and detail coefficients (high-frequency subband), respectively. For two-dimensional DWT performed on images, one level of transform generates four subbands. The subband with low-pass filters in both horizontal and vertical directions is termed the "LL" subband. Similarly, the subbands resulting from a high-pass filter in the horizontal direction and a low-pass filter in the vertical direction, a low-pass filter in the horizontal direction and a high-pass filter in the vertical direction, and high-pass filters in both directions are termed the "HL," "LH," and "HH" subbands, respectively. As an example, a two-stage wavelet decomposition of the image "Couple" is shown in Fig. 3. It can be seen that coefficients in subband "LL" exhibit high correlation in both horizontal and vertical directions, since "LL" is the low-pass filtered version of the original image in both directions. Likewise, the coefficients in the "HL" and "LH" subbands are highly correlated either vertically or horizontally. However, the coefficients in subband "HH," being high-frequency in both directions, exhibit less correlation.
Two-stage wavelet decomposed image exhibiting directional correlations in different subbands
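A multi-stage 2-D DWT (the experiments in Section 4 use a four-stage 10/18 Daubechies decomposition) can be computed, for instance, with PyWavelets. The sketch below is only illustrative: 'db2' stands in for the 10/18 Daubechies filter, and the LH/HL naming of the two detail bands is an assumed convention, since it varies between references:

import numpy as np
import pywt

img = np.random.rand(512, 512)             # stand-in for a test image
coeffs = pywt.wavedec2(img, 'db2', level=4)
LL = coeffs[0]                             # coarsest approximation subband
LH, HL, HH = coeffs[1]                     # detail subbands at the coarsest level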
In view of the concurrent correlations in the 3D-LVQ index assignment and wavelet subbands, with properly constructed vectors based on the correlation of wavelet coefficients, neighboring wavelet vectors will most likely share some sublattice points, which motivates us to develop a better side decoding approach by predicting lost descriptions (sublattice points) using neighboring information. To exploit the directional correlations in the wavelet subbands, we consider constructing a vector for the "LH" subband with two horizontally neighboring coefficients, whereas for the "HL" subband, a vector is constructed with two vertically neighboring coefficients. For simplicity, vectors for the "LL" and "HH" subbands are also constructed horizontally, as in the sketch below.
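A minimal sketch of this directional vector construction, with subbands given as NumPy arrays (the handling of odd-sized bands is our assumption):

import numpy as np

def pair_horizontally(band):
    """2x1 vectors from horizontally adjacent coefficients
    (used here for the LL, LH and HH subbands)."""
    band = np.asarray(band, dtype=float)
    band = band[:, : band.shape[1] // 2 * 2]   # drop an odd trailing column
    return band.reshape(band.shape[0], -1, 2)  # shape: (rows, pairs, 2)

def pair_vertically(band):
    """2x1 vectors from vertically adjacent coefficients (HL subband)."""
    return pair_horizontally(np.asarray(band, dtype=float).T)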
Proposed 3D-LVQ side decoding with prediction
Consider a wavelet vector x which is mapped to (λ1′(x),λ2′(x),λ3′(x)) in the 3D-LVQ coding, where λk′(x) is assigned to kth description. We will first study the two-description-based side decoding, that is, the reconstruction of the vector x if one description such as description k is lost (λk′(x) is missing). As discussed above, there is strong intra- and inter-correlation in the assignment of sublattice points for the 3D-LVQ mapping, while neighboring wavelet vectors may most likely share most or all sublattice points. Therefore, it is reasonable to predict the lost λk′(x) from those received sublattice points for the vector x as well as from its neighboring vectors. A list of sublattice point candidates can be formed for the estimation of λk′(x). Subsequently, we can reconstruct the vector x by taking each sublattice point in the list as an estimate of the missing sublattice point for decoding and finally averaging the decoded results.
As an example, we consider the vector x and its neighboring vector y labeled as (λ1′(y),λ2′(y),λ3′(y)) with description 1 being lost. Then, we receive {λ2′(x),λ3′(x)} for vector x and {λ2′(y),λ3′(y)} for vector y at the decoder side, while λ1′(x) and λ1′(y) in description 1 are missing. Based on the above discussion, the candidate list for estimating the lost λ1′(x) can be obtained as {λ2′(x),λ3′(x),λ2′(y),λ3′(y)}, in which each element may be a good prediction. Note that this list may contain duplicate sublattice points. We can thereby use all the candidates in the list one by one as an estimate of the missing sublattice point for decoding and then take the average as the reconstruction \(\hat {\textbf {x}}\). That can be represented as:
$$\begin{array}{@{}rcl@{}} \hat{\textbf{x}} &=& (\alpha^{- 1} (\lambda_{2}'(\textbf{x}),\lambda_{2}'(\textbf{x}), \lambda_{3}'(\textbf{x})) \\ &+& \alpha^{- 1} (\lambda_{3}'(\textbf{x}),\lambda_{2}'(\textbf{x}), \lambda_{3}'(\textbf{x})) \\ &+& \alpha^{- 1} (\lambda_{2}'(\textbf{y}),\lambda_{2}'(\textbf{x}), \lambda_{3}'(\textbf{x})) \\ &+& \alpha^{- 1} (\lambda_{3}'(\textbf{y}),\lambda_{2}'(\textbf{x}), \lambda_{3}'(\textbf{x})))/4. \end{array} \tag{6} $$
If there are more neighboring vectors of x, their sublattice points can be included in the candidate list. Note that there may be some invalid 3-tuple combinations with the prediction scheme, which are not decodable by the inverse mapping function. In that case, those sublattice points causing invalid combinations are removed from the candidate list. Then, all the valid combinations based on the final candidate list are decoded and averaged as the final reconstruction of x.
We now consider one-description-based side decoding, where only one description is received while the other two are missing. Assuming description 1 and description 2 are lost, only the sublattice points {λ3′(x)} and {λ3′(y)} are received for the vector x and its neighboring vector y, respectively. Similarly, we can construct a candidate list {λ3′(x),λ3′(y)}. Instead of estimating the two missing sublattice points, which are harder to predict reliably from one received sublattice point and its neighbor, we simply use the sublattice points in the list as possible reconstructions for vector x, followed by an averaging, that is \(\hat {\textbf {x}} = (\lambda _{3}'(\textbf {x}) + \lambda _{3}'(\textbf {y}))/2\). As in the two-description-based side decoding, we also need to perform a validation for each candidate in the list by checking whether the candidate point is the same as or immediately neighboring to the received sublattice point {λ3′(x)}. Invalid sublattice points are removed from the list. Then, all the valid sublattice points are averaged to obtain the final reconstruction \(\hat {\textbf {x}}\).
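A minimal Python sketch of both predictive side decoders, under the same dictionary representation of α as above; the fallback behavior when no candidate survives validation is our assumption:

import numpy as np

def predictive_decode_2desc(recv_x, recv_nbrs, inv_alpha):
    """Two received descriptions: predict the lost sublattice point from
    a candidate list and average the valid decodings, as in Eq. (6).
    Description 1 is assumed lost, so recv_x = (s2, s3)."""
    s2, s3 = recv_x
    candidates = [s2, s3] + list(recv_nbrs)        # duplicates are kept
    decoded = [np.asarray(inv_alpha[(c, s2, s3)], dtype=float)
               for c in candidates if (c, s2, s3) in inv_alpha]
    if not decoded:                                # fall back to Eq. (4)
        return (np.asarray(s2, dtype=float) + np.asarray(s3, dtype=float)) / 2.0
    return np.mean(decoded, axis=0)

def predictive_decode_1desc(s_x, s_nbrs, near):
    """One received description: average the received point with those
    neighbor points that are the same as, or immediately adjacent to, it.
    `near` maps a sublattice point to the set of itself and its immediate
    sublattice neighbours (a precomputed table assumed here)."""
    valid = [np.asarray(s_x, dtype=float)]
    valid += [np.asarray(s, dtype=float) for s in s_nbrs if s in near[s_x]]
    return np.mean(valid, axis=0)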
In the above, we have shown how to obtain the reconstruction given one neighboring vector for vector x, and this can be extended to the case of more neighboring vectors. For a two-dimensional wavelet image, each vector has four directly neighboring vectors. Denote by λ(i,j) the current vector to be decoded, while λ(i−1,j),λ(i+1,j) and λ(i,j−1),λ(i,j+1) are the adjacent vectors horizontally and vertically, respectively.
For the band "LL," in view of both horizontal and vertical correlation, prediction for the current vector λ(i,j) can utilize the four adjacent vectors. All the received sublattice points of vector λ(i,j) and these four neighboring vectors are put into the candidate list with possible duplicates. For the band "HL" exhibiting the vertical correlation, the two vertically adjacent vectors λ(i,j−1) and λ(i,j+1) are employed for the prediction. Therefore, the candidate list consists of received sublattice points for λ(i,j),λ(i,j−1), and λ(i,j+1). For band "LH" showing the horizontal correlation, we use horizontally adjacent vectors λ(i−1,j) and λ(i+1,j) in the prediction. Consequently, the candidate list comprises the received sublattice points for λ(i,j),λ(i−1,j), and λ(i+1,j). For the band "HH," no prediction is considered and the conventional MDLVQ decoding is performed, that is, the received sublattice point or the average of two received sublattice points is used as the reconstruction of the current vector for one-description-based or two-description-based side decoding. Figure 4 illustrates the predictive side decoding using neighboring vectors with respect to the different subbands.
Predictive side decoding using neighboring vectors in different wavelet subbands
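To make the per-subband rule concrete, the neighbor selection can be written compactly; the following sketch simply encodes the rules stated above:

def prediction_neighbours(subband, i, j):
    """Indices of the neighboring vectors used for prediction."""
    if subband == "LL":                 # horizontal and vertical correlation
        return [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    if subband == "HL":                 # vertical correlation only
        return [(i, j - 1), (i, j + 1)]
    if subband == "LH":                 # horizontal correlation only
        return [(i - 1, j), (i + 1, j)]
    return []                           # HH: conventional decoding, no prediction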
Experimental methods
Five standard 512×512 images, "Lena," "Couple," "Baboon," "Aerial," and "Goldhill," were tested in the experiment. A discrete wavelet transform (DWT) was applied to the input image, using a four-stage decomposition with the 10/18 Daubechies wavelet. As mentioned before, to exploit the directional correlations in the wavelet subbands, we constructed a 2×1 vector with two horizontally neighboring coefficients in the "LH" subband or two vertically neighboring coefficients in the "HL" subband, while the vectors in the "LL" and "HH" subbands could be formed horizontally or vertically (horizontally in our experiments). Such a vector x was then quantized to a (fine) lattice point λ(x), which was mapped to three sublattice points λ1′(x), λ2′(x), and λ3′(x) based on the pre-designed index assignment. Lastly, adaptive third-order arithmetic coding was applied to compress the three sequences of sublattice indexes. The three produced descriptions may be transmitted in separate channels. At the receiver, the conventional decoding method and the proposed predictive decoding method were used to reconstruct images based on the received descriptions. Note that our focus is to test the effectiveness of the proposed side decoding in terms of rate-distortion performance, as compared with the conventional side decoding [51] shown in (4) and (5). We implemented both algorithms with the sublattice index number N=31.
Experimental results and discussion
Rate-distortion curves are plotted in Fig. 5 to compare the two decoding schemes in decoding all the five testing images. It can be seen that our proposed predictive scheme consistently outperforms the conventional method in both one-description-based and two-description-based side decoding, where up to 1.68 dB (at 0.531 bpp for "Goldhill") and 1.64 dB (at 0.531 bpp for "Goldhill") gains are obtained in the cases of 2-description side decoding and 1-description side decoding, respectively. Reconstructed images for "Lena" in the case of losses of one and two descriptions are shown in Fig. 6 for a subjective visual comparison. In the figure, the proposed scheme can achieve 1.37 dB gain at 0.537 bpp in the 2-description side decoding and 1.25 dB gain at 1.012 bpp in the 1-description side decoding over the conventional method for "Lena," respectively. The coding gain tends to become more significant at lower bit rates where the side distortion is normally larger, as expected. With a higher coding bit rate, the conventional side decoding may also reconstruct a vector fairly well even with one or two received sublattice points due to a finer quantization in that case, leaving less room of improvement for the predictive side decoding.
Rate-distortion performance comparison of reconstructed images using the proposed predictive side decoding and the conventional side decoding: a "Lena," b "Couple," c "Baboon," d "Aerial," and e "Goldhill"
Comparison of reconstructed images of "Lena" by the proposed predictive side decoding and the conventional side decoding : a the conventional 2-description side decoding (PSNR = 28.31 dB) versus b the proposed predictive 2-description side decoding (PSNR = 29.69 dB) at the same total bit rate of 0.537 bpp; c the conventional 1-description side decoding (PSNR = 26.43 dB) versus d the proposed predictive 1-description side decoding (PSNR = 27.68 dB) at the same total bit rate of 1.012 bpp
In this paper, we have considered the design of a human-centered image coding scheme that can adapt to people's varying viewing situations and accounts for people's demand for real-time image display. Specifically, a novel predictive side decoding scheme for 3D-LVQ-based image coding has been proposed. In view of the strong intra- and inter-correlation in the index assignment of the 3D-LVQ mapping as well as the intra-subband correlation exhibited in the low-frequency wavelet subbands, we have developed an effective prediction approach for lost descriptions (sublattice points) to enhance side decoding performance. The prediction scheme adapts to the different subbands with their varying intra-subband correlation characteristics. Experimental results have substantiated the effectiveness of the proposed predictive side decoding in reducing side distortions significantly for both two-description-based and one-description-based cases. As compared with the conventional side decoding method, the proposed decoding scheme has shown up to 1.68 dB and 1.64 dB performance gains in the cases of 2-description side decoding and 1-description side decoding, respectively, in our experiments.
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
3D-LVQ:
Three-description lattice vector quantization
DWT:
Discrete wavelet transform
HCC:
Human-centered computing
JMVC:
Joint multiview video coding
JPEG:
Joint Photographic Experts Group
LVQ:
Lattice vector quantization
MD:
Multiple description
MDC:
Multiple description coding
MDSQ:
Multiple description scalar quantization
MDLVQ:
Multiple description lattice vector quantization
SPIHT:
Set partitioning in hierarchical trees
A. Jaimes, D. Gatica-Perez, N. Sebe, T. S. Huang, Guest editors' introduction: human-centered computing–toward a human revolution. Computer. 40(5), 30–34 (2007).
M. L. Dertouzos, The Unfinished Revolution: Human-centered Computers and What They Can Do for Us (foreword by T. Berners-Lee) (HarperInformation, 2002).
A. Jaimes, N. Sebe, D. Gatica-Perez, in Proceedings of the 14th ACM International Conference on Multimedia. Human-centered computing: a multimedia perspective, (2006), pp. 855–864. https://doi.org/10.1145/1180639.1180829.
N. Sebe, in Handbook of Ambient Intelligence and Smart Environments. Human-centered computing (Springer, 2010), pp. 349–370.
L. Bunch, J. M. Bradshaw, R. R. Hoffman, M. Johnson, Principles for human-centered interaction design, part 2: can humans and machines think together?IEEE Intell. Syst.30(3), 68–75 (2015).
P. Garcia Lopez, A. Montresor, D. Epema, A. Datta, T. Higashino, A. Iamnitchi, M. Barcellos, P. Felber, E. Riviere, Edge-centric computing: vision and challenges. ACM SIGCOMM Comput. Commun. Rev.45(5), 37–42 (2015).
S. Choi, Understanding people with human activities and social interactions for human-centered computing. Hum. Centric Comput. Inf. Sci.6(1), 9 (2016).
X. Ren, Rethinking the relationship between humans and computers. IEEE Comput.49(8), 104–108 (2016).
M. Chen, F. Herrera, K. Hwang, Cognitive computing: architecture, technologies and intelligent applications. IEEE Access. 6:, 19774–19783 (2018).
L. Qi, X. Zhang, W. Dou, Q. Ni, A distributed locality-sensitive hashing-based approach for cloud service recommendation from multi-source data. IEEE J. Sel. Areas Commun.35(11), 2616–2624 (2017).
W. Gong, L. Qi, Y. Xu, Privacy-aware multidimensional mobile service quality prediction and recommendation in distributed fog environment. Wirel. Commun. Mob. Comput.2018: (2018). https://doi.org/10.1155/2018/3075849.
S. Kumar, M. Singh, Big data analytics for healthcare industry: impact, applications, and tools. Big Data Min. Anal.2(1), 48–57 (2018).
Y. Liu, S. Wang, M. S. Khan, J. He, A novel deep hybrid recommender system based on auto-encoder with neural collaborative filtering. Big Data Min. Anal.1(3), 211–221 (2018).
L. Qi, X. Zhang, W. Dou, C. Hu, C. Yang, J. Chen, A two-stage locality-sensitive hashing based approach for privacy-preserving mobile service recommendation in cross-platform edge environment. Futur. Gener. Comput. Syst.88:, 636–643 (2018).
A. Ramlatchan, M. Yang, Q. Liu, M. Li, J. Wang, Y. Li, A survey of matrix completion methods for recommendation systems. Big Data Min. Anal.1(4), 308–323 (2018).
C. Zhang, M. Yang, J. Lv, W. Yang, An improved hybrid collaborative filtering algorithm based on tags and time factor. Big Data Min. Anal.1(2), 128–136 (2018).
H. Liu, H. Kou, C. Yan, L. Qi, Link prediction in paper citation network to construct paper correlation graph. EURASIP J. Wirel. Commun. Netw.2019(1), 1–12 (2019).
V. K. Goyal, Multiple description coding: compression meets the network. IEEE Signal Process. Mag.18(5), 74–93 (2001).
J. Zhou, J. Sun, X. Zhou, T. Wei, M. Chen, S. Hu, X. S. Hu, Resource management for improving soft-error and lifetime reliability of real-time MPSoCs. IEEE Trans. Comput. Aided Des. Integr. Circ. Syst. (2018). https://doi.org/10.1109/tcad.2018.2883993.
J. Zhou, J. Sun, P. Cong, Z. Liu, X. Zhou, T. Wei, S. Hu, Security-critical energy-aware task scheduling for heterogeneous real-time MPSoCs in IoT. IEEE Trans. Serv. Comput. (2019). https://doi.org/10.1109/tsc.2019.2963301.
J. Zhou, X. S. Hu, Y. Ma, J. Sun, T. Wei, S. Hu, Improving availability of multicore real-time systems suffering both permanent and transient faults. IEEE Trans. Comput.68(12), 1785–1801 (2019).
X. Xu, Q. Cai, G. Zhang, J. Zhang, W. Tian, X. Zhang, A. X. Liu, An incentive mechanism for crowdsourcing markets with social welfare maximization in cloud-edge computing. Concurr. Comput. Pract. Experience, 4961 (2018). https://doi.org/10.1002/cpe.4961.
X. Xu, R. Mo, F. Dai, W. Lin, S. Wan, W. Dou, Dynamic resource provisioning with fault tolerance for data-intensive meteorological workflows in cloud. IEEE Trans. Ind. Inform. (2019). https://doi.org/10.1109/tii.2019.2959258.
X. Xu, X. Liu, Z. Xu, C. Wang, S. Wan, X. Yang, Joint optimization of resource utilization and load balance with privacy preservation for edge services in 5G networks. Mob. Netw. Appl., 1–12 (2019). https://doi.org/10.1007/s11036-019-01448-8.
X. Xu, Y. Li, T. Huang, Y. Xue, K. Peng, L. Qi, W. Dou, An energy-aware computation offloading method for smart edge computing in wireless metropolitan area networks. J. Netw. Comput. Appl.133:, 75–85 (2019).
X. Xu, Q. Liu, Y. Luo, K. Peng, X. Zhang, S. Meng, L. Qi, A computation offloading method over big data for IoT-enabled cloud-edge computing. Futur. Gener. Comput. Syst.95:, 522–533 (2019).
X. Xu, S. Fu, L. Qi, X. Zhang, Q. Liu, Q. He, S. Li, An IoT-oriented data placement method with privacy preservation in cloud environment. J. Netw. Comput. Appl.124:, 148–157 (2018).
Y. Xu, C. Zhu, in 2009 Fifth International Conference on Image and Graphics. Joint multiple description coding and network coding for wireless image multicast, (2009), pp. 819–823. https://doi.org/10.1109/icig.2009.73.
V. A. Vaishampayan, Design of multiple description scalar quantizer. IEEE Trans. Inf. Theory. 39(3), 821–834 (1993).
S. D. Servetto, K. Ramchandran, V. A. Vaishampayan, K. Nahrstedt, Multiple description wavelet based image coding. IEEE Trans. Image Process.9(5), 813–826 (2000).
H. Kusetogullari, A. Yavariabdi, Evolutionary multiobjective multiple description wavelet based image coding in the presence of mixed noise in images. Appl. Soft Comput.73:, 1039–1052 (2018).
V. A. Vaishampayan, N. J. A. Sloane, S. D. Servetto, Multiple description vector quantization with lattice codebooks: design and analysis. IEEE Trans. Inf. Theory. 47(5), 1718–1734 (2001).
X. Huang, Multiple Description Lattice Vector Quantization. Master's Thesis (McMaster University, Department of Electrical & Computer Engineering, Canada, 2006).
H. Bai, C. Zhu, Y. Zhao, Optimized multiple description lattice vector quantization for wavelet image coding. IEEE Trans. Circ. Syst. Video Technol.17(7), 912–917 (2007).
M. Liu, C. Zhu, M-description lattice vector quantization: index assignment and analysis. IEEE Trans. Signal Process.57(6), 2258–2274 (2009).
J. Ostergaard, R. Heusdens, J. Jensen, n-channel asymmetric entropy-constrained multiple-description lattice vector quantization. IEEE Trans. Inf. Theory. 56(12), 6354–6375 (2010).
E. Akhtarkavan, M. F. M. Salleh, Multiple descriptions coinciding lattice vector quantizer for wavelet image coding. IEEE Trans. Image Process.21(2), 653–661 (2011).
R. Puri, K. Ramchandran, in Proc. 33rd Asilomar Conf. on Signals, Systems and Computers 1999, vol. 1. Multiple description source coding using forward error correction codes, (1999), pp. 342–346. https://doi.org/10.1109/acssc.1999.832349.
W. Jiang, A. Ortega, in Proc. SPIE, vol. 3653. Multiple description coding via polyphase transform and selective quantization, (1999), pp. 998–1008. https://doi.org/10.1109/icassp.1999.760613.
A. C. Miguel, A. E. Mohr, E. A. Riskin, in Proc. ICIP'99, vol. 3. SPIHT for generalized multiple description coding, (1999), pp. 842–846. https://doi.org/10.1109/icip.1999.817251.
T. Tillo, G. Olmo, A novel multiple description codinig scheme compatible with the JPEG 2000 decoder. IEEE Signal Process. Lett.12(4), 329–332 (2005).
J. Zong, L. Meng, H. Zhang, W. Wan, JND-based multiple description image coding. KSII Trans. Internet Inf. Syst.11(8), 3935–3949 (2017).
J. Zong, L. Meng, Y. Tan, J. Zhang, Y. Ren, H. Zhang, Adaptive reconstruction based multiple description coding with randomly offset quantizations. Multimed. Tools Appl.77(20), 26293–26313 (2018).
C. Dai, K. Zhu, R. Wang, B. Chen, Contextual multi-armed bandit for cache-aware decoupled multiple association in UDNs: a deep learning approach. IEEE Trans. Cogn. Commun. Netw.5(4), 1046–1059 (2019).
L. Zhao, H. Bai, A. Wang, Y. Zhao, Multiple description convolutional neural networks for image compression. IEEE Trans. Circ. Syst. Video Technol.29(8), 2494–2508 (2019).
H. Li, L. Meng, J. Zhang, Y. Tan, Y. Ren, H. Zhang, Multiple description coding based on convolutional auto-encoder. IEEE Access. 7:, 26013–26021 (2019).
H. Zhang, H. Bai, M. Liu, Y. Zhao, Optimized multiple description lattice vector quantization coding for 3D depth image. KSII Trans. Internet Inf. Syst.9(3) (2015).
H. Bai, W. Lin, M. Zhang, A. Wang, Y. Zhao, Multiple description video coding based on human visual system characteristics. IEEE Trans. Circ. Syst. Video Technol.24(8), 1390–1394 (2014).
J. Chen, C. Cai, X. Wang, H. Zeng, K. -K. Ma, in Proceedings of the 16th International Conference on Advanced Concepts for Intelligent Vision Systems, ACIVS 2015, vol. 9386. Multiple description coding for multi-view video (Springer, 2015), pp. 876–882. https://doi.org/10.1007/978-3-319-25903-1_75.
J. Chen, J. Liao, H. Zeng, C. Cai, K. -K. Ma, An efficient multiple description coding for multi-view video based on the correlation of spatial polyphase transformed subsequences. J. Imaging Sci. Technol.63:, 50401–1504017 (2019).
M. Liu, C. Zhu, Index assignment for 3-description lattice vector quantization based on A2 lattice. Signal Process.88(11), 2754–2763 (2008).
S. N. Talbar, A. K. Deshmane, in Proc. of 2010 International Conference on Computer Applications and Industrial Electronics. Biomedical image coding using dual tree discrete wavelet transform and noise shaping algorithm, (2010), pp. 473–476. https://doi.org/10.1109/iccaie.2010.5735126.
This work is supported by the National Natural Science Foundation of China under grant no. 61801167 and the Fundamental Research Funds for the Central Universities under grant no. B200202189.
College of Computer and Information, Hohai University, 8 Fo Cheng Road, Nanjing, China
Yuanyuan Xu
Intra- and inter-correlation in the index assignment of 3D-LVQ mapping has been analyzed, as well as the intra-subband correlation exhibited in the low-frequency wavelet subbands. In the context of wavelet image coding, a predictive side decoding method is proposed to improve reconstruction quality in side decoding. The author read and approved the final manuscript.
Correspondence to Yuanyuan Xu.
The author declares that there are no competing interests.
Xu, Y. Predictive side decoding for human-centered multiple description image coding. J Wireless Com Network 2020, 93 (2020). https://doi.org/10.1186/s13638-020-01719-z
Image coding
Comparison of machine learning algorithms applied to symptoms to determine infectious causes of death in children: national survey of 18,000 verbal autopsies in the Million Death Study in India
Susan Idicula-Thomas1,2,
Ulka Gawde1 &
Prabhat Jha2
Machine learning (ML) algorithms have been successfully employed for prediction of outcomes in clinical research. In this study, we have explored the application of ML-based algorithms to predict cause of death (CoD) from verbal autopsy records available through the Million Death Study (MDS).
From the MDS, 18,826 unique childhood deaths at ages 1–59 months during the time period 2004–13 were selected for generating the prediction models, of which over 70% of deaths were caused by six infectious diseases (pneumonia, diarrhoeal diseases, malaria, fever of unknown origin, meningitis/encephalitis, and measles). Six popular ML-based algorithms, namely support vector machine, gradient boosting modeling, C5.0, artificial neural network, k-nearest neighbor, and classification and regression tree, were used for building the CoD prediction models.
The SVM algorithm was the best performer, with a prediction accuracy of over 0.8. The highest accuracy was found for diarrhoeal diseases (accuracy = 0.97) and the lowest for meningitis/encephalitis (accuracy = 0.80). The top signs/symptoms for classification of these CoDs were also extracted for each of the diseases. A combination of signs/symptoms presented by the deceased individual can effectively lead to the CoD diagnosis.
Overall, this study affirms that verbal autopsy tools are efficient in CoD diagnosis and that automated classification parameters captured through ML could be added to verbal autopsies to improve classification of causes of death.
The ongoing COVID-19 pandemic has sharply revealed the long-standing fact that many deaths, especially in low-income countries, are not well documented, as most deaths occur at home rather than in well-regulated hospital settings. A second reason for the poor documentation of deaths is that, unlike for births, family members are not sufficiently incentivised to register a death. This gap in death records and associated data is a serious impediment to assessing the disease patterns and public health needs of a country. To address this gap, the Million Death Study (MDS) was initiated in India to quantify premature mortality through verbal autopsy (VA) [1, 2] in a nationally representative sample of homes. VA uses a set of symptoms and signs captured through a structured questionnaire to assign a cause of death (CoD) [3,4,5]. The questionnaire is administered to family members or caretakers of the deceased by non-medical surveyors. Each data record is then assigned randomly to two of the several trained physicians in the team. The physicians independently assign the CoD based on the surveyor's report. In cases where the CoD assignments of the two physicians do not match, the record is adjudicated by a third senior physician.
It would be worthwhile to study how efficiently the signs and symptoms captured by the surveyors can be used to predict the CoD using supervised machine-learning (ML) algorithms. Such a study, in addition to revealing the scope for automating VA tools, will also give insights into improving the methodology for more accurate diagnosis at a reduced cost of implementation.
Supervised ML algorithms learn from a set of input variables to predict a response variable. Many of the classification problems in biological and medical fields have been successfully solved using ML methods such as support vector machine (SVM), gradient boosting modelling (GBM), C5.0 (C5), artificial neural network (ANN), k-nearest neighbour (kNN), classification and regression tree (CART) [6, 7]. SVM and ANN algorithms have been successfully used for disease detection [8,9,10,11].
In this study, MDS dataset captured from 2004 to 2013 for ages 1–59 months has been explored for ML-based prediction of CoD for six infectious diseases viz. pneumonia, diarrhoeal diseases, malaria, meningitis/encephalitis, measles and fever of unknown origin (FOUO).
Population-based mortality data
The rationale, methodology, and efficacy of the MDS have been described elsewhere [12, 13]. The RHIME (Routine, Reliable, Representative and Re-sampled Household Investigation of Mortality with Medical Evaluation) form was used by trained surveyors to obtain information from family or caretakers of the deceased [14]. Each completed survey in the MDS was reviewed independently by two trained physicians, who were randomly assigned VAs through an online portal based on matching the language proficiency of the physician with the language in which the VA was completed. The two physicians assigned the underlying CoD according to the International Classification of Diseases, tenth revision (ICD-10) [15], and included a number of "keywords" in the record, which are signs and symptoms observed in the VA that support their diagnosis. The CoD was approved for records wherein the two physicians assigned the same CoD; for the remaining records, a third senior physician finalised the CoD based on the physicians' keywords [2, 16]. Initial differences in coding (about 30% of records) were first reconciled by the two physicians, who each anonymously received the other's keywords justifying their choice of underlying CoD. After this reconciliation stage, any outstanding differences were assigned to and adjudicated by one of 40 senior physicians (about 10% of records). The steps involved in the MDS underwent various quality assurance checks, including resampling by an independent team in 2001–2003 that yielded similar results to the original survey.
The MDS records obtained for India from 2004 to 2013 were filtered for age between 1 to 59 months and cases wherein both physicians initially agreed on the underlying CoD. These filtering criteria led to 18,826 unique records and this data was further segregated based on six infectious disease categories: pneumonia, diarrhoeal diseases, malaria, fever of unknown origin, meningitis/encephalitis, and measles (Table 1). Previous analyses by Dingra et al. and review of ICD coding by Aleksandrowicz et al. suggest 'fever of unknown origin' as predominantly infectious, thus we have included it in the infectious disease category [17, 18].
Table 1 Number of MDS 2004–13 VA records with initial physician agreement for ages 1–59 months across six infectious causes of death
These six diseases constituted ~ 70% (13,216 out of 18,826) of the total deaths across all CoDs in this age category. The remaining 30% (5610 out of 18,826) constituted the other five diseases such as tuberculosis, injury, non-communicable disease (NCD), ill defined conditions (ILDF), and communicable, perinatal and nutritional disorders (CMPND).
Physicians' keywords for each record were aggregated across both physicians and grouped into 35 symptom categories and subcategories, selected based on their medical relevance to the six CoDs included in this study. The 35 groupings as well as inclusion and exclusion terms for symptom categories and subcategories are shown in S1 Appendix. Symptom groups were coded in a binary fashion: each of the 18,826 records received a "1" if either coding physician listed keywords reflecting the symptom category, and a "0" if they did not. Four of the symptom categories (fever, breathing problems, cough, diarrhoea) also contained subcategories that were aggregated under the parent category (S1 Appendix). For example, if one of the physician keywords for a death record was "high fever," the record was coded to reflect both the "fever" and "high fever" categories. Stata version 14.2 [19] was used for the physician keyword classification; a sketch of the same coding logic is given below.
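Purely for illustration, the binary symptom coding can be sketched in Python; the keyword-to-category map below is a toy example, not the 35-category scheme of S1 Appendix:

# Toy keyword-to-category map; the actual study used 35 medically
# selected categories and subcategories (S1 Appendix).
SYMPTOM_TERMS = {
    "fever":      ["fever", "high fever"],   # parent also set by its subcategory
    "high fever": ["high fever"],
    "cough":      ["cough"],
}

def encode_record(keywords_phys1, keywords_phys2):
    """Binary symptom coding: a category is 1 if either coding physician
    listed a keyword matching it, and 0 otherwise."""
    kws = {k.strip().lower() for k in keywords_phys1 + keywords_phys2}
    return {cat: int(any(term in kws for term in terms))
            for cat, terms in SYMPTOM_TERMS.items()}

# encode_record(["high fever"], ["Cough"])
# -> {"fever": 1, "high fever": 1, "cough": 1}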
ML-based algorithms for prediction of CoD
Machine learning (ML) algorithms are popularly used for predicting an outcome or dependent variable from a pool of high-dimensional input variables. In this study, the outcome or dependent variable is the CoD assignment, and the input variables are the physicians' keywords for each record. The ML algorithms, namely support vector machine (SVM), gradient boosting modelling (GBM), C5.0 (C5), artificial neural network (ANN), k-nearest neighbour (kNN), and classification and regression tree (CART), were implemented using the e1071, rpart, gbm and caret R packages with default parameter settings [20,21,22,23]. In the case of SVM, the radial basis function (RBF) kernel was selected for transforming the input features into a high-dimensional space for hyperplane differentiation of the positive and negative classes. RBF is known to be more generalized and robust compared with the other kernel functions available for SVM [24]. The values of the cost 'C' and 'sigma' parameters were optimised for each model individually.
SVM, as a supervised machine learning algorithm, can be used for generating classification and regression models. For classification models, SVM algorithms plot each record of a dataset as a point in n-dimensional space, where n is the number of numerical features for each record and creates a hyperplane for the separation of two or more classes of datasets. The points closest to the hyperplane/separator are called support vectors as it holds the separating plane. The algorithm aims to generate a hyperplane that maximises the distance between the classes/datasets and simultaneously minimises the classification errors. In cases where data points are not linearly separable, SVM uses the kernel function [6, 25,26,27,28,29].
ANN algorithms function by mimicking the biological nervous system, which has many neurons connected in a layered manner. ANNs consist of an input layer that captures the features/variables of the dataset; one or more hidden layers which process the information; and an output layer that displays the outcome. Each variable can be denoted as a node and interactions between variables are denoted by edges. ANNs can detect non-linear relationships between variables and generate predictions based on node and edge weights. The advantages of ANNs are their tolerance to noise, capability of learning complex data, and ability to classify instances into more than one output class. For large neural networks, the interpretation of the algorithm may be difficult and training can require long processing times [6, 25, 27, 29, 30].
kNN is a supervised machine learning algorithm which is conceptually simple and non-parametric in nature. kNN works by finding the data points in a known dataset closest to the query record and then assigning the class of the query based on the majority of class votes. The input features of the dataset are used to measure the closeness between records. Here, k denotes the number of closest data points considered for the vote and is hence an important parameter for the prediction outcome. The advantages of kNN are its easy implementation, quick learning and robustness to overfitting. The disadvantages of kNN are its sensitivity to noise and its requirement of large storage space [6, 27].
CART is a decision-tree-based algorithm in which each internal node of the tree represents an input variable and the leaf nodes represent the output variable. A binary decision tree is generated at each step by splitting a node into two child nodes. It creates a set of logical rules, the responses to which determine the splits in the dataset. The advantages of the CART algorithm include fast processing of data and easy interpretation [31,32,33].
GBM is a tree-based method that combines predictions from multiple decision trees. Each of the decision trees can be considered a weak learner; these are eventually converted into a strong learner by minimising the errors of the previous decision tree. The advantages of GBM include its high predictive accuracy and its ability to handle multiclass data. The disadvantages of GBM include overfitting, sensitivity to noisy data, and long processing times [28, 34, 35].
C5 is also a tree-based algorithm, one that functions by minimising the information entropy (equivalently, maximising the information gain) at each split. The data are first split on the feature with the biggest information gain and splitting continues until no further split is possible. Features that do not contribute to the splits are removed from the final model. While C5 algorithms are easy to implement and interpret, they require a categorical (ordinal/nominal) target variable and may not work well on small datasets [31, 36].
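For orientation, rough scikit-learn counterparts of the six algorithms are sketched below; the study itself used the R implementations, and C5.0 has no direct scikit-learn port, so an entropy-based decision tree stands in for it here.

from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

models = {
    "SVM": SVC(kernel="rbf"),
    "GBM": GradientBoostingClassifier(),
    "C5": DecisionTreeClassifier(criterion="entropy"),  # stand-in for C5.0
    "ANN": MLPClassifier(hidden_layer_sizes=(16,)),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "CART": DecisionTreeClassifier(criterion="gini"),
}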
Generation of training and test datasets
Individual prediction models were generated for each of the six infectious diseases, namely pneumonia, diarrhoeal diseases, malaria, meningitis/encephalitis, measles and fever of unknown origin (FOUO). For each disease model, records belonging to the disease being predicted were marked as positive and the remaining records (not limited to the six diseases considered in the study) were marked as negative. An unbalanced dataset can be converted to a balanced dataset (with equal representation of positive and negative classes) by random resampling, either oversampling the minority class or undersampling the majority class. Here, we opted to create a balanced 2-class classifier by undersampling the majority (negative) class to match the number of records in the minority (positive) class for each of the six disease datasets. Subsequently, for each of the models, the dataset was partitioned into training and test datasets using an 80:20 random split. The robustness of each ML-based model was evaluated by performing 10-fold cross-validation with 10 iterations on the training dataset.
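A minimal sketch of this balancing and evaluation protocol, assuming a pandas DataFrame df of binary symptom columns plus a 0/1 label column "positive" for the disease being modelled (the names are hypothetical; the study worked in R):

import pandas as pd
from sklearn.model_selection import train_test_split, RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

pos = df[df["positive"] == 1]
neg = df[df["positive"] == 0].sample(n=len(pos), random_state=0)  # undersample majority class
balanced = pd.concat([pos, neg])

X = balanced.drop(columns="positive")
y = balanced["positive"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)  # 80:20 split

# 10-fold cross-validation with 10 iterations on the training data.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), X_tr, y_tr, cv=cv)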
SVM prediction models were also generated to differentiate between pairs of diseases with overlapping symptoms, namely i) pneumonia and diarrhoeal diseases, ii) malaria and meningitis/encephalitis, and iii) malaria and FOUO. In these cases, the positive and negative classes comprised the records of the first and second disease respectively, and undersampling of the majority class was used to generate a balanced classifier.
Evaluation of prediction models
The test datasets were used to evaluate the performance of each of the selected models using the performance metrics below:
$$ \text{accuracy} = \frac{TP + TN}{TP + FN + TN + FP} $$
$$ \text{recall/sensitivity} = \frac{TP}{TP + FN} $$
$$ \text{specificity} = \frac{TN}{TN + FP} $$
$$ \text{precision} = \frac{TP}{TP + FP} $$
$$ F1 = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}} $$
TP (true positives) and TN (true negatives) denote the number of outcomes where the model correctly predicts the positive and negative class respectively. FP (false positives) and FN (false negatives) denote the number of outcomes where the model incorrectly predicts the positive and negative class respectively.
Cohen's kappa evaluates a model by measuring the agreement between the predicted and the observed classes, corrected for chance:
$$ \text{Cohen's kappa} = \frac{P_o - P_e}{1 - P_e} $$
where $P_o$ is the relative observed agreement and $P_e$ is the probability of chance agreement [37].
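As a worked example, all six quantities can be computed directly from the confusion-matrix counts; the helper below is a sketch and the numbers are illustrative, not study results.

def evaluate(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    recall = tp / (tp + fn)              # sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    # Chance agreement Pe from the marginals of the confusion matrix.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (accuracy - pe) / (1 - pe)
    return accuracy, recall, specificity, precision, f1, kappa

print(evaluate(tp=90, tn=85, fp=15, fn=10))  # e.g. accuracy 0.875, recall 0.90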
Hierarchical clustering of physicians' keywords
The relationships/co-occurrences of symptoms reported for each disease were studied using ascendant hierarchical clustering with the hclustvar function of the ClustOfVar R package [38]. The number of clusters was set to six, as six diseases were being studied.
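A rough Python analogue of this variable clustering (hclustvar clusters the variables themselves; here it is approximated by agglomerative clustering on a correlation-based distance between the binary symptom columns of a records-by-symptoms array X):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_symptoms(X, n_clusters=6):
    corr = np.corrcoef(X, rowvar=False)                # symptom-by-symptom correlation
    dist = squareform(1 - np.abs(corr), checks=False)  # condensed distance matrix
    Z = linkage(dist, method="average")                # ascendant (agglomerative) clustering
    return fcluster(Z, t=n_clusters, criterion="maxclust")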
Disease-wise distribution of records
Amongst these six diseases, deaths due to pneumonia and diarrhoeal diseases were most common, respectively reflecting 43 and 37% of the deaths in the dataset, while deaths due to measles and meningitis/encephalitis were the least common, each reflecting just under 4% of the deaths in the dataset (Table 1).
Pneumonia and diarrhoeal diseases are known to be major causes of childhood mortality in India, especially in poorer communities [39], and this is reflected in the MDS data.
Distribution of symptoms across disease datasets
In VA, physicians use the questionnaire notes of non-medical surveyors to identify keywords that eventually form the basis of CoD assignment. In this study, these keywords were converted to a rule-based, non-redundant set of symptoms for ease of automation (S1 Appendix). The distribution of these symptoms across the six CoDs is visualised in Fig. 1, and record counts for each of the six CoDs can be found in S2 Appendix. It was observed that 17 of the 35 symptoms, viz. vomiting, jaundice, abdominal pain/distention (abdompain), diarrhoea, anaemia, weight loss, low birth weight (lbw), poor feeding, stiffness/body pain (stiffpain), unconsciousness (unconscious), convulsion, cough, breathing problems (breathprob), cold, fever chills, high fever and fever, were present, with varying frequencies, in all six diseases. For example, vomiting and diarrhoea were frequent in diarrhoeal diseases; fever chills were present in most malarial cases; rash was common in measles; and breathing problems were observed in most pneumonia cases. These observations were in concordance with the WHO manuals for disease diagnosis [41,42,43].
Bubble plot depicting distribution of symptoms across six infectious diseases. X-axis represents disease class and y-axis represents symptoms coded by rule-based method. The bubble size is proportional to percentage of records positive for the symptom in the disease class. The plot was generated using ggplot2 R package [40]
To evaluate whether symptoms self-cluster into distinct disease classes based on their co-occurrence, unsupervised (without CoD annotation) hierarchical clustering was performed on the 13,216 records belonging to the six diseases (Fig. 2).
Tree-based clustering of symptoms for six clusters. The vertical axis represents distance between clusters
The clustering algorithm was forced to generate six clusters and, interestingly, the six clusters represented the six CoDs, as can be deduced from the nature of the symptoms. Cluster 1 had symptoms such as breathing problems (breathprob), cough, cold, chest indrawing (indraw), fast breathing (fastbreath), grunting, respiratory distress (respdistress) and wheezing, which are attributable to pneumonia; Cluster 2 had rash and abscess, characteristic of measles; Cluster 3 had fever chills, fever and high fever, typical of malaria; Cluster 4 had cholera, dehydration, diarrhoea, vomiting, blood in stools, abdominal pain/distention (abdompain), night sweats and swelling, which are commonly observed in diarrhoeal diseases; Cluster 5 had delirium, unconsciousness, convulsion and stiffness/body pain (stiffpain), distinctive features of meningitis/encephalitis; and Cluster 6 had jaundice, low birth weight (lbw), anaemia, poor feeding and weight loss, representing fever of unknown origin (Fig. 2). Hierarchical clustering was also performed individually for each of the six infectious diseases using the symptoms present in at least 10% of the records for each disease, to gain further disease-specific insights into symptom co-occurrence and its distribution (S1 Fig).
ML-based models using symptoms for CoD prediction
To confirm whether symptoms can be used to predict the CoD for each record, ML-based classification models were built individually for each of the six diseases. Each model was built using 80% of the data for training and the remaining 20% for testing. Six ML algorithms, viz. SVM, GBM, C5, ANN, kNN and CART, were used to predict CoD. The prediction performances of these models were evaluated based on accuracy, kappa, recall/sensitivity, specificity, precision and F1 score (Table 2 and S2 Fig).
Table 2 Comparison of the prediction accuracy of ML-based algorithms for six diseases
Of the six ML-based algorithms, SVM and GBM performed better than the other four (Table 2). Published evidence also suggests that SVM models are superior for developing disease classification models [44, 45].
The SVM-based prediction model performed best for diarrhoeal diseases and worst for meningitis/encephalitis (Table 3). SVM models could classify pneumonia, diarrhoeal diseases, malaria, meningitis/encephalitis, measles and FOUO with 91, 95, 90, 83, 97 and 87% precision respectively using the associated symptom data (Table 3).
Table 3 Performance matrix of SVM models for six infectious diseases
The 10 most relevant symptoms for CoD prediction for each of the SVM-based prediction models were also extracted (S3 Fig), and these concur well with the WHO manuals for disease diagnosis [41,42,43, 46, 47]. The co-occurrence of the top 10 features of each of the six diseases was visualised using a disease-symptom network plot (Fig. 3). Of the 35 symptoms used for ML-based disease model generation, 19 were critical for classification of the six diseases, and five symptoms, viz. fever, diarrhoea, breathing problems (breathprob), cough and vomiting, were associated with all six infectious diseases (Fig. 3). Six symptoms, viz. abdominal pain/distention (abdompain), convulsion, fever chills, grunting, low birth weight (lbw) and rash, were identified as important predictors specific to a single disease (Fig. 3). For example, rash was identified as one of the top predictors specifically for measles, while grunting was an important predictor only for pneumonia.
Disease-symptom network of top 10 features obtained from SVM model. Green nodes represent symptoms and blue nodes represent diseases. The size of disease node is proportional to number of records corresponding to the disease in the dataset. Edge represents association between disease and symptom and its width is proportional to percentage of records positive for the symptom. The network was created using igraph R package [48]
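A sketch of the network construction using Python's networkx (the published figure was drawn with the igraph R package); top_features is a hypothetical mapping from each disease to its top symptoms with the percentage of records positive for each.

import networkx as nx

top_features = {"measles": {"rash": 80.0, "fever": 95.0}}  # illustrative values only

G = nx.Graph()
for disease, symptoms in top_features.items():
    G.add_node(disease, kind="disease")
    for symptom, pct in symptoms.items():
        G.add_node(symptom, kind="symptom")
        G.add_edge(disease, symptom, weight=pct)  # edge width proportional to % positive records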
SVM models for classifying diseases with overlapping symptoms
SVM models were built for classifying pairs of diseases with several overlapping symptoms. The performance measures of the SVM models for predicting CoD for pneumonia-diarrhoeal diseases, malaria-meningitis/encephalitis and malaria-fever of unknown origin are shown in Table 4, and the 10 most important features for disease prediction can be viewed in S4 Fig. The SVM model for pneumonia-diarrhoeal diseases achieved the highest accuracy (98%). The SVM models for malaria-meningitis/encephalitis and malaria-fever of unknown origin classified with 84 and 85% accuracy respectively.
Table 4 Performance matrix of SVM models for pairwise disease classification
Using ML-based algorithms, we could effectively predict CoD from the signs/symptoms captured by the VA tools. We have documented the ability of the symptoms to form disease-based clusters, in spite of being present in multiple diseases, suggesting that they can be effectively exploited as input variables to predict the corresponding CoDs.
Although all the ML algorithms (except CART) performed well for disease prediction, SVM models displayed consistently superior performance for all six diseases. In previous studies on disease prediction, SVMs using the RBF kernel have been found to perform better than other ML algorithms such as SVMs with linear or polynomial kernels, Random Forests and Decision Trees [44, 45].
Our study is, as far as we can determine, the first to systematically compare various ML-based algorithms applied to physician-coded VAs. While there has been substantial debate over whether algorithms outperform physician coding, a recent randomized trial among 10,000 deaths showed that physician coding outperformed most currently available algorithms [16]. Moreover, the worldwide clinically accepted standard for medical diagnosis, and for certification of causes of death, remains physician-based. Our paper adds to the literature suggesting that ML-assisted algorithms may help to improve and standardize physician-based coding. This is especially relevant for childhood conditions, where the major causes of death are few and reasonably similar across African and Asian countries [49].
The strengths of the study were its large size, its representative sampling of deaths in India and the standardized coding of records by physicians. Moreover, the keywords used by physicians, while variable, were amenable to binning into broader categories, which permitted reducing the input feature space and applying ML algorithms. Nonetheless, the study has some limitations. Three important parameters missing from this study are the type, duration and intensity of the illness. Hence, symptoms such as dry versus wet cough, cough for a week versus a month, or intense versus mild vomiting/diarrhoea cannot be distinguished. The study also relies on the cognitive abilities of the respondents, and in cases where the death occurred in the distant past the recollection of symptoms may not be perfect.
For the foreseeable future, national verbal autopsy studies are critical to capture rural, home deaths until a time when deaths start to occur mostly in facilities that mandate medical certification. Under these circumstances, innovations to improve verbal autopsy methods are essential. ML-algorithms applied to physician-derived keywords offer a simple, practicable way to improve the classification of causes of death in children, and should be considered as one of the strategies for advances in verbal autopsy methodology.
The MDS dataset is the property of the Government of India and cannot be shared. Requests to access the MDS data need to be approved by the Registrar General of India- https://censusindia.gov.in/AboutUs/Contactus/Contactus.html.
Abbreviations
ANN: Artificial neural network
CART: Classification and regression tree
GBM: Gradient boosting modelling
ICD: International classification of diseases
kNN: k-nearest neighbour
MDS: Million Death Study
ML: Machine learning
SVM: Support vector machine
VA: Verbal autopsy
Soleman N, Chandramohan D, Shibuya K. Verbal autopsy: current practices and challenges; 2006.
Hsiao M, Morris SK, Bassani DG, Montgomery AL, Thakur JS, Jha P. Factors associated with physician agreement on verbal autopsy of over 11500 injury deaths in India. PLoS One. 2012;7(1):e30336. https://doi.org/10.1371/journal.pone.0030336.
Byass P, Hussain-Alkhateeb L, D'Ambruoso L, Clark S, Davies J, Fottrell E, et al. An integrated approach to processing WHO-2016 verbal autopsy data: The InterVA-5 model. BMC Med. 2019;17. https://doi.org/10.1186/s12916-019-1333-6.
Nichols EK, Byass P, Chandramohan D, Clark SJ, Flaxman AD, Jakob R, et al. The WHO 2016 verbal autopsy instrument: An international standard suitable for automated analysis by InterVA, InSilicoVA, and Tariff 2.0. PLoS Med. 2018;15. https://doi.org/10.1371/journal.pmed.1002486.
McCormick TH, Li ZR, Calvert C, Crampin AC, Kahn K, Clark SJ. Probabilistic cause-of-death assignment using verbal autopsies. J Am Stat Assoc. 2016;111(515):1036–49. https://doi.org/10.1080/01621459.2016.1152191.
Uddin S, Khan A, Hossain ME, Moni MA. Comparing different supervised machine learning algorithms for disease prediction. BMC Med Inform Decis Mak. 2019;19(1):1–16. https://doi.org/10.1186/s12911-019-1004-8.
Tama BA, Im S, Lee S. Improving an intelligent detection system for coronary heart disease using a two-tier classifier ensemble. Biomed Res Int. 2020;2020:1–10. https://doi.org/10.1155/2020/9816142.
Thurston RC, Matthews KA, Hernandez J, De La Torre F. Improving the performance of physiologic hot flash measures with support vector machines. Psychophysiology. 2009;46(2):285–92. https://doi.org/10.1111/j.1469-8986.2008.00770.x.
Varrecchia T, Castiglia SF, Ranavolo A, Conte C, Tatarelli A, Coppola G, et al. An artificial neural network approach to detect presence and severity of Parkinson's disease via gait parameters. PLoS One. 2021;16. https://doi.org/10.1371/journal.pone.0244396.
Andrade A, Lopes K, Lima B, Maitelli A. Development of a methodology using artificial neural network in the detection and diagnosis of faults for pneumatic control valves. Sensors. 2021;21(3):1–21. https://doi.org/10.3390/s21030853.
Yu W, Liu T, Valdez R, Gwinn M, Khoury MJ. Application of support vector machine modeling for prediction of common diseases: the case of diabetes and pre-diabetes. BMC Med Inform Decis Mak. 2010;10(1):1–7. https://doi.org/10.1186/1472-6947-10-16.
Jha P, Gajalakshmi V, Gupta PC, Kumar R, Mony P, Dhingra N, et al. Prospective study of one million deaths in India: rationale, design, and validation results. PLoS Med. 2006;3(2):0191–200. https://doi.org/10.1371/journal.pmed.0030018.
Gomes M, Begum R, Sati P, Dikshit R, Gupta PC, Kumar R, et al. Nationwide mortality studies to quantify causes of death: relevant lessons from India's Million Death Study. Health Aff. 2017;36(11):1887–95. https://doi.org/10.1377/hlthaff.2017.0635.
Morris SK, Bassani DG, Kumar R, Awasthi S, Paul VK, Jha P. Factors associated with physician agreement on verbal autopsy of over 27000 childhood deaths in India. PLoS One. 2010;5. https://doi.org/10.1371/JOURNAL.PONE.0009583.
World Health Organization, editor. ICD-10: international statistical classification of diseases and related health problems: tenth revision. 2nd ed. World Health Organization; 2004. https://apps.who.int/iris/handle/10665/42980.
Jha P, Kumar D, Dikshit R, Budukh A, Begum R, Sati P, et al. Automated versus physician assignment of cause of death for verbal autopsies: randomized trial of 9374 deaths in 117 villages in India. BMC Med. 2019;17(1):1–11. https://doi.org/10.1186/s12916-019-1353-2.
Aleksandrowicz L, Malhotra V, Dikshit R, Gupta PC, Kumar R, Sheth J, et al. Performance criteria for verbal autopsy-based systems to estimate national causes of death: development and application to the Indian Million Death Study. BMC Med. 2014;12:1–14. https://doi.org/10.1186/1741-7015-12-21.
Dhingra N, Jha P, Sharma VP, Cohen AA, Jotkar RM, Rodriguez PS, et al. Adult and child malaria mortality in India. Lancet. 2010;376(9754):1768–74. https://doi.org/10.1016/S0140-6736(10)60831-8.
StataCorp. Stata statistical software: release 14. College Station: StataCorp LP; 2015.
Brandon G, Bradley B, Jay C, GBM Developers. Generalized Boosted Regression Models version 2.1.8 from CRAN, (n.d.). https://rdrr.io/cran/gbm/.
Terry T, Beth A. Recursive Partitioning and Regression Trees version 4.1–15 from CRAN, (n.d.). https://rdrr.io/cran/rpart/.
David M, Evgenia D, Kurt H, Andreas W, Friedrich L. Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien version 1.7–6 from R-Forge, (n.d.). https://rdrr.io/rforge/e1071/.
Kuhn M. Building predictive models in R using the caret package. J Stat Softw. 2008;28(5):1–26. https://doi.org/10.18637/jss.v028.i05.
Xu H, Caramanis C, Mannor S. Robustness and regularization of support vector machines. J Mach Learn Res. 2008;10:1485–510. http://arxiv.org/abs/0803.3490.
Kundu I, Paul G, Banerjee R. A machine learning approach towards the prediction of protein–ligand binding affinity based on fundamental molecular properties. RSC Adv. 2018;8:12127–37. https://doi.org/10.1039/C8RA00003D.
Huang S, Cai N, Pacheco PP, Narrandes S, Wang Y, Xu W. Applications of Support Vector Machine (SVM) Learning in Cancer Genomics. Cancer Genomics Proteomics. 2018;15:41–51. https://doi.org/10.21873/CGP.20063.
Tomar D, Agarwal S. A survey on data mining approaches for healthcare. Int J Bio Sci Technol. 2013;5(5):241–66. https://doi.org/10.14257/IJBSBT.2013.5.5.25.
Alsaleem F, Tesfay MK, Rafaie M, Sinkar K, Besarla D, Arunasalam P. An IoT framework for modeling and controlling thermal comfort in buildings. Front Built Environ. 2020;6:87. https://doi.org/10.3389/FBUIL.2020.00087.
Amornsamankul S, Pimpunchat B, Triampo W, Charoenpong J, Nuttavut N. A comparison of machine learning algorithms and their applications. Int J Simul Syst Sci Technol. 2019. https://doi.org/10.5013/IJSSST.A.20.04.08.
Renganathan V. Overview of artificial neural network models in the biomedical domain. Bratislavske Lekarske Listy. 2019;120:536–40. https://doi.org/10.4149/BLL_2019_087.
Patil N, Lathi R, Chitre V. Comparison of C5.0 & CART classification algorithms using pruning technique. 2012.
Aguiar FS, Almeida LL, Ruffino-Netto A, Kritski AL, Mello FC, Werneck GL. Classification and regression tree (CART) model to predict pulmonary tuberculosis in hospitalized patients. BMC Pulm Med. 2012;12(1):40. https://doi.org/10.1186/1471-2466-12-40.
Arifuzzaman M, Gazder U, Alam MS, Sirin O, Al Mamun A. Modelling of Asphalt's adhesive behaviour using classification and regression tree (CART) analysis. Comput Intell Neurosci. 2019;2019:1–7. https://doi.org/10.1155/2019/3183050.
Natekin A, Knoll A. Gradient boosting machines, a tutorial. Front Neurorobot. 2013;7:21. https://doi.org/10.3389/FNBOT.2013.00021.
Zhang Z, Zhao Y, Canes A, Steinberg D, Lyashevska O, Written on behalf of A.B.-D.C.T.C. Group. Predictive analytics with gradient boosting in clinical medicine. Ann Transl Med. 2019;7:152. https://doi.org/10.21037/ATM.2019.03.29.
Elsayad AM, Nassef AM, Al-Dhaifallah M, Elsayad KA. Classification of biodegradable substances using balanced random trees and boosted C5.0 Decision Trees. Int J Environ Res Public Health. 2020;17:1–22. https://doi.org/10.3390/IJERPH17249322.
Ogura K, Sato T, Yuki H, Honma T. Support vector machine model for hERG inhibitory activities based on the integrated hERG database using descriptor selection by NSGA-II. Sci Rep. 2019;9(1):1–12. https://doi.org/10.1038/s41598-019-47536-3.
Chavent M, Kuentz V, Liquet B, Saracco J. Clustering of Variables [R package ClustOfVar version 1.1]. 2017. https://cran.r-project.org/package=ClustOfVar.
Million Death Study Collaborators. Causes of neonatal and child mortality in India: A nationally representative mortality survey. Lancet. 2010;376:1853–60. https://doi.org/10.1016/S0140-6736(10)61461-4.
Wickham H. ggplot2. New York: Springer; 2009. https://doi.org/10.1007/978-0-387-98141-3.
World Health Organization (WHO). Diarrhoeal disease: WHO Fact Sheets; 2017. https://www.who.int/en/news-room/fact-sheets/detail/diarrhoeal-disease.
World Health Organization (WHO). Malaria: WHO Fact Sheets; 2021. https://www.who.int/en/news-room/fact-sheets/detail/malaria.
World Health Organization (WHO). Pneumonia: WHO Fact Sheets; 2019. https://www.who.int/news-room/fact-sheets/detail/pneumonia.
Harimoorthy K, Thangavelu M. Multi-disease prediction model using improved SVM-radial bias technique in healthcare monitoring system. J Ambient Intell Humaniz Comput. 2021;12(3):3715–23. https://doi.org/10.1007/s12652-019-01652-0.
Tapak L, Mahjub H, Hamidi O, Poorolajal J. Real-data comparison of data mining methods in prediction of diabetes in Iran. Healthc Inform Res. 2013;19(3):177–85. https://doi.org/10.4258/hir.2013.19.3.177.
World Health Organization (WHO). Measles: WHO Fact Sheets; 2019. https://www.who.int/news-room/fact-sheets/detail/measles.
World Health Organization (WHO). Meningococcal meningitis: WHO Fact Sheets; 2018. https://www.who.int/news-room/fact-sheets/detail/meningococcal-meningitis.
Csardi G, Nepusz T. The igraph software package for complex network research. InterJournal Complex Systems. 2006;1695. https://igraph.org/.
Black RE, Cousens S, Johnson HL, Lawn JE, Rudan I, Bassani DG, et al. Global, regional, and national causes of child mortality in 2008: a systematic analysis. Lancet. 2010;375(9730):1969–87. https://doi.org/10.1016/S0140-6736(10)60549-1.
We thank various staff at the Centre for Global Health Research who assisted during the study's initial phases and/or with the keyword extraction: Lade Adeusi, Rehana Begum, Shaza Fadel, Peter Rodriguez, and Leah Watson.
S I-T acknowledges funding from Queen Elizabeth Scholars award, Indian Council of Medical Research (ICMR) [RA/1076/05–2021], and Department of Biotechnology (DBT), India [BT/PR40165/BTIS/137/12/2021]. The funders had no role in study design, data collection, and analysis, decision to publish, or preparation of the manuscript.
Biomedical Informatics Centre, Indian Council of Medical Research-National Institute for Research in Reproductive Health, Mumbai, 400012, India
Susan Idicula-Thomas & Ulka Gawde
Centre for Global Health Research, St. Michael's Hospital, Unity Health Toronto, and Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
Susan Idicula-Thomas & Prabhat Jha
Susan Idicula-Thomas
Ulka Gawde
Prabhat Jha
SIT and PJ conceptualized the study and methodology. SIT and UG performed the data analysis, implementation of algorithms, data visualization and wrote the manuscript. PJ supervised the work and reviewed the manuscript. All authors contributed to data interpretation. All authors read and approved the final manuscript.
Correspondence to Susan Idicula-Thomas or Prabhat Jha.
Ethics approval for the Million Death Study was obtained from the Post Graduate Institute of Medical Research, St. John's Research Institute and St. Michael's Hospital, Toronto, Ontario, Canada. The MDS was conducted in accordance with local guidelines and Institutional Review Board (IRB) approvals, including from the Health Ministry's Screening Committee. As per procedures of the Registrar General of India, informed consent was obtained from all the MDS participants and, in the case of minors/children, informed consent was obtained from the parents or legal guardians.
Idicula-Thomas, S., Gawde, U. & Jha, P. Comparison of machine learning algorithms applied to symptoms to determine infectious causes of death in children: national survey of 18,000 verbal autopsies in the Million Death Study in India. BMC Public Health 21, 1787 (2021). https://doi.org/10.1186/s12889-021-11829-y
Keywords: Prediction model, Child mortality
How to play Nimbers?
Published January 7, 2011 by lievenlb
Nimbers is a 2-person game, winnable only if you understand the arithmetic of the finite fields $\mathbb{F}_{2^{2^n}} $ associated to Fermat 2-powers.
It is played on a rectangular array (say a portion of a Go-board, for practical purposes) having a finite number of stones at distinct intersections. Here's a typical position
The players alternate making a move, which is either
removing one stone, or
moving a stone to a spot on the same row (resp. the same column) strictly to the left (resp. strictly lower), and if there's already a stone on this spot, both stones are removed, or
adding stones to the empty corners of a rectangle having as its top-right hand corner a chosen stone and removing stones at the occupied corners
Here we illustrate two possible moves from the above position: in the first we add two new stones and remove two existing stones; in the second we add three new stones and remove only the top right-hand stone.
As always, the last player able to move wins the game!
Note that Nimbers is equivalent to Lenstra's 'turning corners'-game (as introduced in his paper Nim-multiplication or mentioned in Winning Ways Chapter 14, page 473).
If all stones are placed on the left-most column (or on the bottom row) one quickly realizes that this game reduces to classical Nim with Nim-heap sizes corresponding to the stones (for example, the left-most stone corresponds to a heap of size 3).
Nim-addition $n \oplus m $ is defined inductively by
$n \oplus m = mex(n' \oplus m,n \oplus m') $
where $n' $ is any element of $\{ 0,1,\ldots,n-1 \} $ and $m' $ any element of $\{ 0,1,\ldots,m-1 \} $ and where 'mex' stands for Minimal EXcluded number, that is, the smallest natural number which isn't included in the set. Alternatively, one can compute $n \oplus m $ by writing $n $ and $m $ in binary and adding these binary numbers without carrying-over. It is well known that a winning strategy for Nim tries to shorten one Nim-heap such that the Nim-addition of the heap-sizes equals zero.
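For the finite case one can check the mex-definition against the XOR shortcut with a few lines of Python (a naive sketch, in no way optimized):

from functools import lru_cache

def mex(s):
    # smallest natural number not contained in the set s
    k = 0
    while k in s:
        k += 1
    return k

@lru_cache(maxsize=None)
def nim_add(n, m):
    options = {nim_add(a, m) for a in range(n)} | {nim_add(n, b) for b in range(m)}
    return mex(options)

assert nim_add(13, 7) == (13 ^ 7) == 10  # binary addition without carrying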
This allows us to play Nimber-endgames, that is, when all the stones have been moved to the left-column or the bottom row.
To evaluate general Nimber-positions it is best to add another row and column, the coordinate axes of the array
and so our stones lie at positions (1,3), (4,7), (6,4), (10,3) and (14,8). In this way all legal moves follow the rectangle-rule when we allow rectangles to contain corners on the added coordinate axes. For example, removing a stone is achieved by taking a rectangle with two sides on the added axes, and moving a stone to the left (or the bottom) is done by taking a rectangle with one side on the x-axis (resp. the y-axis)
However, the added stones on the coordinate axes are considered dead and may be removed from the game. This observation allows us to compute the Grundy number of a stone at position (m,n) to be
$G(m,n)=mex(G(m',n') \oplus G(m',n) \oplus G(m,n')~:~0 \leq m' < m, 0 \leq n' < n) $
and so by induction these Grundy numbers are equal to the Nim-multiplication $G(m,n) = m \otimes n $ where
$m \otimes n = mex(m' \otimes n' \oplus m' \otimes n \oplus m \otimes n'~:~0 \leq m' < m, 0 \leq n' < n) $
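Continuing the sketch above, the mex-recursion for nim-multiplication translates directly into Python (slow, but enough to check small values such as the stone at (14,8) mentioned below):

@lru_cache(maxsize=None)
def nim_mul(n, m):
    options = {nim_add(nim_add(nim_mul(a, m), nim_mul(n, b)), nim_mul(a, b))
               for a in range(n) for b in range(m)}
    return mex(options)

assert nim_mul(14, 8) == 10  # the Grundy value of a stone at position (14,8)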
Thus, we can evaluate any Nimbers-position with stone-coordinates smaller than $2^{2^n} $ by calculating in a finite field using the identification (as for example in the odd Knights of the round table-post) $\mathbb{F}_{2^{2^n}} = \{ 0,1,2,\ldots,2^{2^n}-1 \} $
For example, when all stones lie in a 15×15 grid (as in the example above), all calculations can be performed using
Here, we've identified the non-zero elements of $\mathbb{F}_{16} $ with 15-th roots of unity, allowing us to multiply, and we've paired up couples $(n,n \oplus 1) $, allowing us to reduce nim-addition to nim-multiplication via
$n \oplus m = ((n \otimes \frac{1}{m}) \oplus 1) \otimes m $
In particular, the stone at position (14,8) is equivalent to a Nim-heap of size $14 \otimes 8=10 $. The nim-value of the original position is equal to 8
Suppose your opponent lets you add one extra stone along the diagonal if you allow her to start the game, where would you have to place it and be certain you will win the game?
Seating the first few thousand Knights
Published February 3, 2010 by lievenlb
The odd Knights of the round table-problem asks for a specific one-to-one correspondence between two realizations of 'the' algebraic closure $\overline{\mathbb{F}_2} $ of the field of two elements.
The first identifies the multiplicative group of its non-zero elements with the group of all odd complex roots of unity, under complex multiplication. The addition on $\overline{\mathbb{F}_2} $ is then recovered by inducing an involution on the odd roots, pairing the one corresponding to x to the one corresponding to x+1.
The second uses Conway's 'simplicity rules' to define an addition and multiplication on the set of all ordinal numbers. Conway proves in ONAG that this becomes an algebraically closed field of characteristic two and that $\overline{\mathbb{F}_2} $ is the subfield of all ordinals smaller than $\omega^{\omega^{\omega}} $. The finite ordinals (the natural numbers) form the quadratic closure of $\mathbb{F}_2 $.
On the natural numbers the Conway-addition is binary addition without carrying and Conway-multiplication is defined by the properties that two different Fermat-powers $N=2^{2^i} $ multiply as they do in the natural numbers, and Fermat-powers square to their sesquimultiple, that is $N^2=\frac{3}{2}N $. Moreover, all natural numbers smaller than $N=2^{2^{i}} $ form a finite field $\mathbb{F}_{2^{2^i}} $. Using distributivity, one can write down a multiplication table for all 2-powers.
The Knight-seating problem asks for a consistent placing of the n-th Knight $K_n $ at an odd root of unity, compatible with the two different realizations of $\overline{\mathbb{F}_2} $. Last time, we were able to place the first 15 Knights as below, and asked where you would seat $K_{16} $
$K_4 $ was placed at $e^{2\pi i/15} $ as 4 was the smallest number generating the 'Fermat'-field $\mathbb{F}_{2^{2^2}} $ (with multiplicative group of order 15), subject to the compatibility relation with the generator 2 of the smaller Fermat-field $\mathbb{F}_{2^{2^1}} $ (with multiplicative group of order 3) that $4^5=2 $.
To include the next Fermat-field $\mathbb{F}_{2^{2^3}} $ (with multiplicative group of order 255) consistently, we need to find the smallest number n generating the multiplicative group and satisfying the compatibility condition $n^{17}=4 $. Let's first concentrate on finding the smallest generator : as 2 is a generator for 1st Fermat-field $\mathbb{F}_{2^{2^1}} $ and 4 a generator for the 2-nd Fermat-field $\mathbb{F}_{2^{2^2}} $ a natural conjecture might be that 16 is a generator for the 3-rd Fermat-field $\mathbb{F}_{2^{2^3}} $ and, more generally, that $2^{2^i} $ would be a generator for the next field $\mathbb{F}_{2^{2^{i+1}}} $.
However, an "exercise" in the 1978-paper by Hendrik Lenstra Nim multiplication asks : "Prove that $2^{2^i} $ is a primitive root in the field $\mathbb{F}_{2^{2^{i+1}}} $ if and only if i=0 or 1."
I've struggled with several of the 'exercises' in Lenstra's paper to the extend I feared Alzheimer was setting in, only to find out, after taking pen and paper and spending a considerable amount of time calculating, that they are indeed merely exercises, when looked at properly… (Spoiler-warning : stop reading now if you want to go through this exercise yourself).
In the picture above I've added in red the number $x(x+1)=x^2+1 $ to each of the involutions. Clearly, for each pair these numbers are all distinct and we see that for the indicated pairing they make up all numbers strictly less than 8.
By Conway's simplicity rules (or by checking) the pair (16,17) gives the number 8. In other words,
$x^2+x+8 $ is an irreducible polynomial over $\mathbb{F}_{16} $ having as its roots in $\mathbb{F}_{256} $ the numbers 16 and 17. But then, 16 and 17 are conjugated under the Galois-involution (the Frobenius $y \mapsto y^{16} $). That is, we have $16^{16}=17 $ and $17^{16}=16 $ and hence $16^{17}=8 $. Now, use the multiplication table in $\mathbb{F}_{16} $ given in the previous post (or compute!) to see that 8 is of order 5 (and NOT a generator). As a consequence, the multiplicative order of 16 is 5×17=85 and so 16 cannot be a generator in $\mathbb{F}_{256} $.
For general i one uses the fact that $2^{2^i} $ and $2^{2^i}+1 $ are the roots of the polynomial $x^2+x+\prod_{j<i} 2^{2^j} $ over $\mathbb{F}_{2^{2^i}} $ and argues as before.
Right, but then what is the minimal generator satisfying $n^{17}=4 $? By computing we see that the pairings of all numbers in the range 16…31 give us all numbers in the range 8…15 and by the above argument this implies that the 17-th powers of all numbers smaller than 32 must be different from 4. But then, the smallest candidate is 32 and one verifies that indeed $32^{17}=4 $ (use the multiplication table given before).
Hence, we must place Knight $K_{32} $ at root $e^{2 \pi i/255} $ and place the other Knights prior to the 256-th at the corresponding power of 32. I forgot the argument I used to find-by-hand the requested place for Knight 16, but one can verify that $32^{171}=16 $ so we seat $K_{16} $ at root $e^{342 \pi i/255} $.
But what about Knight $K_{256} $? Well, by this time I was quite good at squaring and binary representations of integers, but also rather tired, and decided to leave that task to the computer.
If we denote Nim-addition and multiplication by $\oplus $ and $\otimes $, then Conway's simplicity results in ONAG establish a field-isomorphism between $~(\mathbb{N},\oplus,\otimes) $ and the field $\mathbb{F}_2(x_0,x_1,x_2,\ldots ) $ where the $x_i $ satisfy the Artin-Schreier equations
$x_i^2+x_i+\prod_{j < i} x_j = 0 $
and the i-th Fermat-field $\mathbb{F}_{2^{2^i}} $ corresponds to $\mathbb{F}_2(x_0,x_1,\ldots,x_{i-1}) $. The correspondence between numbers and elements from these fields is given by taking $x_i \mapsto 2^{2^i} $. But then, we can write every 2-power as a product of the $x_i $ and use the binary representation of numbers to perform all Nim-calculations with numbers in these fields.
Therefore, a quick and dirty way (and by no means the most efficient) to do Nim-calculations in the next Fermat-field consisting of all numbers smaller than 65536, is to use sage and set up the field $\mathbb{F}_2(x_0,x_1,x_2,x_3) $ by
R.<x,y,z,t> = GF(2)[]
S.<a,b,c,d> = R.quotient((x^2+x+1, y^2+y+x, z^2+z+x*y, t^2+t+x*y*z))
To find the smallest number generating the multiplicative group and satisfying the additional compatibility condition $n^{257}=32 $ we have to find the smallest binary number $i_1i_2 \ldots i_{16} $ (larger than 255) satisfying
(i1*a*b*c*t+i2*b*c*t+i3*a*c*t+i4*c*t+i5*a*b*t+i6*b*t+
i7*a*t+i8*t+i9*a*b*c+i10*b*c+i11*a*c+i12*c+i13*a*b+
i14*b+i15*a+i16)^257=a*c
It takes a 2.4GHz 2Gb-RAM MacBook not that long to decide that the requested generator is 1051 (killing another optimistic conjecture that these generators might be 2-powers). So, we seat Knight
$K_{1051} $ at root $e^{2 \pi i/65535} $ and can then arrange seatings for all Knight queued up until we reach the 65536-th! In particular, the first Knight we couldn't place before, that is Knight $K_{256} $, will be seated at root $e^{65826 \pi i/65535} $.
If you're lucky enough to own a computer with more RAM, or have the patience to make the search more efficient and get the seating arrangement for the next Fermat-field, please drop a comment.
I'll leave you with another Lenstra-exercise which shouldn't be too difficult for you to solve now : "Prove that $x^3=2^{2^i} $ has three solutions in $\mathbb{N} $ for each $i \geq 2 $."
can -oids save group-theory 101?
Published February 15, 2009 by lievenlb
Two questions from my last group-theory 101 exam:
(a) : What are the Jordan-Holder components of the Abelian group $\mathbb{Z}/20 \mathbb{Z} $?
(b) : Determine the number of order 7 elements in a simple group of order 168.
Give these to any group of working mathematicians, and, I guess all of them will solve (a), whereas the number of correct solutions to (b) will be (substantially) smaller.
Guess what? All(!) my students solved (b) correctly, whereas almost none of them had anything sensible to say about (a). A partial explanation is that they had more drill-exercises applying the Sylow-theorems than ones concerning the Jordan-Holder theorem.
A more fundamental explanation is that (b) has to do with sub-structures whereas (a) concerns quotients. Over the years I've tried numerous methods to convey the quotient-idea : putting things in bags, dividing a big group-table into smaller squares, additional lessons on relations, counting modulo numbers … No method appears to have an effect lasting until the examination.
At the moment I'm seriously considering to rewrite the entire course, ditching quotients and using them only in disguise via groupoids. Before you start bombarding me with comments, I'm well aware of the problems inherent in this approach.
Before you do groupoids, students have to know some basic category theory. But that's ok with me. Since last year it has been decided that I should sacrifice the first three weeks of the course telling students the basics of sets, maps and relations. After this, the formal definition of a category will appear more natural to them than the definition of a group, not? Besides, most puzzle-problems I use to introduce groups are actually examples of groupoids…
But then, what are the main theorems on finite groupoids? Well, I can see the groupoid cardinality result, giving you in one stroke Lagrange's theorem as well as the orbit-counting method. From this one can then prove the remaining classical group-results such as Cauchy and the Sylows, but perhaps there are more elegant approaches?
Have you seen a first-year group-theory course starting off with groupoids? Do you know an elegant way to prove a classical group-result using groupoids?
On2 : extending Lenstra's list
Published January 27, 2009 by lievenlb
We have seen that John Conway defined a nim-addition and nim-multiplication on the ordinal numbers in such a way that the subfield $[\omega^{\omega^{\omega}}] \simeq \overline{\mathbb{F}}_2 $ is the algebraic closure of the field on two elements. We've also seen how to do actual calculations in that field provided we can determine the mystery elements $\alpha_p $, which are the smallest ordinals not being a p-th power of ordinals lesser than $[\omega^{\omega^{k-1}}] $ if $p $ is the $k+1 $-th prime number.
Hendrik Lenstra came up with an effective method to compute these elements $\alpha_p $ requiring a few computations in certain finite fields. I'll give a rundown of his method and refer to his 1977-paper "On the algebraic closure of two" for full details.
For any ordinal $\alpha < \omega^{\omega^{\omega}} $ define its degree $d(\alpha) $ to be the degree of minimal polynomial for $\alpha $ over $\mathbb{F}_2 = [2] $ and for each prime number $p $ let $f(p) $ be the smallest number $h $ such that $p $ is a divisor of $2^h-1 $ (clearly $f(p) $ is a divisor of $p-1 $).
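In particular, $f(p) $ is just the multiplicative order of 2 modulo p, so it is easy to tabulate; here is a small Python helper (for odd primes p), with assertions matching Lenstra's table below:

def f(p):
    # least h with p | 2^h - 1, i.e. the multiplicative order of 2 modulo the odd prime p
    h, r = 1, 2 % p
    while r != 1:
        r = (2 * r) % p
        h += 1
    return h

assert f(7) == 3 and f(23) == 11 and f(47) == 23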
In the previous post we have already defined ordinals $\kappa_{p^n}=[\omega^{\omega^{k-1}.p^{n-1}}] $ for prime-power indices, but we now need to extend this definition to allow for all indices. So, let $h $ be a natural number, $p $ the smallest prime number dividing $h $ and $q $ the highest power of $p $ dividing $h $. Let $g=[h/q] $; then Lenstra defines
$\kappa_h = \begin{cases} \kappa_q & \text{if } q \text{ divides } d(\kappa_q), \text{ and} \\ \kappa_g + \kappa_q = [\kappa_g + \kappa_q] & \text{otherwise} \end{cases} $
With these notations, the main result asserts the existence of natural numbers $m,m' $ such that
$\alpha_p = [\kappa_{f(p)} + m] = [\kappa_{f(p)}] + m' $
Now, assume by induction that we have already determined the mystery numbers $\alpha_r $ for all odd primes $r < p $; then by the argument of last time we can effectively compute in the field $[\kappa_p] $. In particular, we can compute for every element its multiplicative order $ord(\beta) $ and therefore also its degree $d(\beta) $, which has to be the smallest number $h $ such that $ord(\beta) $ divides $[2^h-1] $.
Then, by the main result we only have to determine the smallest number m such that $\beta = [\kappa_{f(p)} +m] $ is not a p-th power in $\kappa_p $ which is equivalent to the condition that
$\beta^{(2^{d(\beta)}-1)/p} \not= 1 $ if $p $ divides $[2^{d(\beta)}-1] $
All these conditions can be verified within suitable finite fields and hence are effective. In this manner, Lenstra could extend Conway's calculations (probably using a home-made finite field program running on a slow 1977 machine) :
$$\begin{array}{c|c|c} p & f(p) & \alpha_p \\ \hline 3 & 2 & [2] \\ 5 & 4 & [4] \\ 7 & 3 & [\omega]+1 \\ 11 & 10 & [\omega^{\omega}]+1 \\ 13 & 12 & [\omega]+4 \\ 17 & 8 & [16] \\ 19 & 18 & [\omega^3]+4 \\ 23 & 11 & [\omega^{\omega^3}]+1 \\ 29 & 28 & [\omega^{\omega^2}]+4 \\ 31 & 5 & [\omega^{\omega}]+1 \\ 37 & 36 & [\omega^3]+4 \\ 41 & 20 & [\omega^{\omega}]+1 \\ 43 & 14 & [\omega^{\omega^2}]+1 \end{array}$$
Right, so let's try the case $p=47 $. To begin, $f(47)=23 $, whence we have to determine the smallest field containing $\kappa_{23} $. By induction (Lenstra's table) we know already that
$\kappa_{23}^{23} = \kappa_{11} + 1 = [\omega^{\omega^3}]+1 $ and $\kappa_{11}^{11} = \kappa_5 + 1 = [\omega^{\omega}]+1 $ and $\kappa_5^5=[4] $
Because the smallest field containing $4 $ is $[16]=\mathbb{F}_{2^4} $ we have that $\mathbb{F}_2(4,\kappa_5,\kappa_{11}) \simeq \mathbb{F}_{2^{220}} $. We can construct this finite field, together with a generator $a $ of its multiplicative group, in Sage via
sage: f1.<a> = GF(2^220)
In this field we have to pinpoint the elements $4,\kappa_5 $ and $\kappa_{11} $. As $4 $ has order $15 $ in $\mathbb{F}_{2^4} $ we know that $\kappa_5 $ has order $75 $. Hence we can take $\kappa_5 = a^{(2^{220}-1)/75} $ and then $4=\kappa_5^5 $.
If we denote $\kappa_5 $ by x5 we can obtain $\kappa_{11} $ as x11 by the following sage-commands
sage: c=x5+1
sage: x11=c.nth_root(11)
It takes about 7 minutes to find x11 on a 2.4 GHz MacBook. Next, we have to set up the field extension determined by $\kappa_{23} $ (which we will call x in sage). This is done as follows
sage: p1.<x> = PolynomialRing(f1)
sage: f=x^23-x11-1
sage: F2=f1.extension(f,'u')
The MacBook needed 8 minutes to set up this field, which is isomorphic to $\mathbb{F}_{2^{5060}} $. The relevant number is therefore $n=\frac{2^{5060}-1}{47} $ which is the gruesome
34648162040462867047623719793206539850507437636617898959901744136581259144476645069239839415478030722644334257066691823210120548345667203443
It remains 'only' to take x, x+1, etc. to the n-th power and verify which is the first to be unequal to 1. For this it is best to implement the usual powering trick (via the binary expansion of the exponent) in the field F2, something like
sage: def power(e,n):
...:     # square-and-multiply using the binary digits of n
...:     le=n.nbits()
...:     v=n.digits(2)
...:     mn=F2(e)
...:     out=F2(1)
...:     i=0
...:     while i< le :
...:         if v[i]==1 : out=F2(out*mn)
...:         m=F2(mn*mn)
...:         mn=F2(m)
...:         i=i+1
...:     return(out)
...:
then it takes about 20 seconds to verify that power(x,n)=1 but that power(x+1,n) is NOT! That is, we just checked that $\alpha_{47}=\kappa_{23}+1 $.
It turns out that 47 is the hardest nut to crack, the following primes are easier. Here's the data (if I didn't make mistakes…)
$$\begin{array}{c|c|c} p & f(p) & \alpha_p \\ \hline 47 & 23 & [\omega^{\omega^{7}}]+1 \\ 53 & 52 & [\omega^{\omega^4}]+1 \\ 59 & 58 & [\omega^{\omega^8}]+1 \\ 61 & 60 & [\omega^{\omega}]+[\omega] \\ 67 & 66 & [\omega^{\omega^3}]+[\omega] \end{array}$$
It seems that Magma is substantially better at finite field arithmetic, so if you are lucky enough to have it you'll have no problem finding $\alpha_p $ for all primes less than 100 by the end of the day. If you do, please drop a comment with the results…
On2 : Conway's nim-arithmetics
Last time we recalled Cantor's addition and multiplication on ordinal numbers. Note that we can identify an ordinal number $\alpha $ with (the order type of) the set of all strictly smaller ordinals, that is, $\alpha = \{ \alpha'~:~\alpha' < \alpha \} $. Given two ordinals $\alpha $ and $\beta $ we will denote their Cantor-sums and products as $[ \alpha + \beta] $ and $[\alpha . \beta] $.
The reason for these square brackets is that John Conway constructed a well behaved nim-addition and nim-multiplication on all ordinals $\mathbf{On}_2 $ by imposing the 'simplest' rules which make $\mathbf{On}_2 $ into a field. By this we mean that, in order to define the addition $\alpha + \beta $ we must have constructed before all sums $\alpha' + \beta $ and $\alpha + \beta' $ with $\alpha' < \alpha $ and $\beta' < \beta $. If + is going to be a well-defined addition on $\mathbf{On}_2 $ clearly $\alpha + \beta $ cannot be equal to one of these previously constructed sums and the 'simplicity rule' asserts that we should take $\alpha+\beta $ the least ordinal different from all these sums $\alpha'+\beta $ and $\alpha+\beta' $. In symbols, we define
$\alpha+ \beta = \mathbf{mex} \{ \alpha'+\beta,\alpha+ \beta'~|~\alpha' < \alpha, \beta' < \beta \} $
where $\mathbf{mex} $ stands for 'minimal excluded value'. If you've ever played the game of Nim you will recognize this as Nim-addition, at least when $\alpha $ and $\beta $ are finite ordinals (that is, natural numbers): to nim-add two numbers n and m, write them out in binary digits and add without carrying over. Alternatively, the nim-sum n+m can be found by applying the following two rules :
the nim-sum of a number of distinct 2-powers is their ordinary sum (e.g. $8+4+1=13 $), and,
the nim-sum of two equal numbers is 0.
So, all we have to do is write the numbers n and m as sums of 2-powers, scratch equal terms and add normally. For example, $13+7=(8+4+1)+(4+2+1)=8+2=10 $ (of course this is just digital sum without carrying in disguise).
Here's the beginning of the nim-addition table on ordinals. For example, to define $13+7 $ we have to look at all values in the first 7 entries of the row of 13 (that is, ${ 13,12,15,14,9,8,11 } $) and the first 13 entries in the column of 7 (that is, ${ 7,6,5,4,3,2,1,0,15,14,13,12,11 } $) and find the first number not included in these two sets (which is indeed $10 $).
In fact, the above two rules allow us to compute the nim-sum of any two ordinals. Recall from last time that every ordinal can be written uniquely as a finite sum of (ordinal) 2-powers :
$\alpha = [2^{\alpha_0} + 2^{\alpha_1} + \ldots + 2^{\alpha_k}] $, so to determine the nim-sum $\alpha+\beta $ we write both ordinals as sums of ordinal 2-powers, delete the powers appearing twice and take the Cantor ordinal sum of the remaining terms.
Nim-multiplication of ordinals is a bit more complicated. Here's the definition as a minimal excluded value
$\alpha.\beta = \mathbf{mex} \{ \alpha'.\beta + \alpha.\beta' - \alpha'.\beta' \} $
for all $\alpha' < \alpha, \beta' < \beta $. The rationale behind this being that both $\alpha-\alpha' $ and $\beta – \beta' $ are non-zero elements, so if $\mathbf{On}_2 $ is going to be a field under nim-multiplication, their product should be non-zero (and hence strictly greater than 0), that is, $~(\alpha-\alpha').(\beta-\beta') > 0 $. Rewriting this we get $\alpha.\beta > \alpha'.\beta+\alpha.\beta'-\alpha'.\beta' $ and again the 'simplicity rule' asserts that $\alpha.\beta $ should be the least ordinal satisfying all these inequalities, leading to the $\mathbf{mex} $-definition above. The table gives the beginning of the nim-multiplication table for ordinals. For finite ordinals n and m there is a simple 2 line procedure to compute their nim-product, similar to the addition-rules mentioned before :
the nim-product of a number of distinct Fermat 2-powers (that is, numbers of the form $2^{2^n} $) is their ordinary product (for example, $16.4.2=128 $), and,
the square of a Fermat 2-power is its sesquimultiple (that is, the number obtained by multiplying with $1\frac{1}{2} $ in the ordinary sense). That is, $2^2=3,4^2=6,16^2=24,… $
Using these rules, associativity and distributivity and our addition rules it is now easy to work out the nim-multiplication $n.m $ : write out n and m as sums of (multiplications by 2-powers) of Fermat 2-powers and apply the rules. Here's an example
$5.9=(4+1).(4.2+1)=4^2.2+4.2+4+1=6.2+8+4+1=(4+2).2+13=4.2+2^2+13=8+3+13=6 $
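Both finite rules, and this worked product, can be checked against the naive nim_mul sketch from the Nimbers post above:

assert nim_mul(2, 2) == 3 and nim_mul(4, 4) == 6 and nim_mul(16, 16) == 24  # sesquimultiples
assert nim_mul(16, nim_mul(4, 2)) == 128  # distinct Fermat 2-powers multiply ordinarily
assert nim_mul(5, 9) == 6                 # the example above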
Clearly, we'd love to have a similar procedure to calculate the nim-product $\alpha.\beta $ of arbitrary ordinals, or at least those smaller than $\omega^{\omega^{\omega}} $ (recall that Conway proved that this ordinal is isomorphic to the algebraic closure $\overline{\mathbb{F}}_2 $ of the field of two elements). From now on we restrict to such 'small' ordinals and we introduce the following special elements :
$\kappa_{2^n} = [2^{2^{n-1}}] $ (these are the Fermat 2-powers) and for all primes $p > 2 $ we define
$\kappa_{p^n} = [\omega^{\omega^{k-1}.p^{n-1}}] $ where $k $ is the number of primes strictly smaller than $p $ (that is, for p=3 we have k=1, for p=5, k=2 etc.).
Again by associativity and distributivity we will be able to multiply two ordinals $< \omega^{\omega^{\omega}} $ if we know how to multiply a product
$[\omega^{\alpha}.2^{n_0}].[\omega^{\beta}.2^{m_0}] $ with $\alpha,\beta < [\omega^{\omega}] $ and $n_0,m_0 \in \mathbb{N} $.
Now, $\alpha $ can be written uniquely as $[\omega^t.n_t+\omega^{t-1}.n_{t-1}+\ldots+\omega.n_1 + n_0] $ with t and all $n_i $ natural numbers. Write each $n_k $ in base $p $ where $p $ is the $k+1 $-th prime number; that is, we have for $n_0,n_1,\ldots,n_t $ an expression
$n_k=[\sum_j p^j.m(j,k)] $ with $0 \leq m(j,k) < p $
The point of all this is that any of the special elements we want to multiply can be written as a unique expression as a decreasing product
$[\omega^{\alpha}.2^{n_0}] = [ \prod_q \kappa_q^{m(q)} ] $
where $q $ runs over all prime powers. The crucial fact now is that for this decreasing product we have a rule similar to addition of 2-powers, that is Conway-products coincide with the Cantor-products
$[ \prod_q \kappa_q^{m(q)} ] = \prod_q \kappa_q^{m(q)} $
But then, using associativity and commutativity of the Conway-product we can 'nearly' describe all products $[\omega^{\alpha}.2^{n_0}].[\omega^{\beta}.2^{m_0}] $. The remaining problem being that it may happen that for some q we will end up with an exponent $m(q)+m(q')>p $. But this can be solved if we know how to take p-powers. The rules for this are as follows
$~(\kappa_{2^n})^2 = \kappa_{2^n} + \prod_{1 \leq i < n} \kappa_{2^i} $, for 2-powers, and,
$~(\kappa_{p^n})^p = \kappa_{p^{n-1}} $ for a prime $p > 2 $ and for $n \geq 2 $, and finally
$~(\kappa_p)^p = \alpha_p $ for a prime $p > 2 $, where $\alpha_p $ is the smallest ordinal $< \kappa_p $ which cannot be written as a p-power $\beta^p $ with $\beta < \kappa_p $. Summarizing : if we will be able to find these mysterious elements $\alpha_p $ for all prime numbers p, we are able to multiply in $[\omega^{\omega^{\omega}}]=\overline{\mathbb{F}}_2 $.
Let us determine the first one. We have that $\kappa_3 = \omega $ so we are looking for the smallest natural number $n < \omega $ which cannot be written in nim-multiplication as $n=m^3 $ for $m < \omega $ (that is, also $m $ a natural number). Clearly $1=1^3 $, but what about 2? Can 2 be a third root of a natural number wrt. nim-multiplication? From the table above we see that 2 has order 3, whence its cube root must be an element of order 9. Now, the only finite ordinals that are subfields of $\mathbf{On}_2 $ are precisely the Fermat 2-powers, so if there is a finite cube root of 2, it must be contained in one of the finite fields $[2^{2^n}] $ (of which the multiplicative group has order $2^{2^n}-1 $), and one easily shows that 9 cannot be a divisor of any of the numbers $2^{2^n}-1 $. That is, 2 doesn't have a finite cube root in nim! Phrased differently, we found our first mystery number $\alpha_3 = 2 $. That is, we have the marvelous identity in nim-arithmetic
$\omega^3 = 2 $
Okay, so what is $\alpha_5 $? Well, we have $\kappa_5 = [\omega^{\omega}] $ and we have to look for the smallest ordinal which cannot be written as a 5-th power. By inspection of the finite nim-table we see that 1, 2 and 3 have 5-th roots in $\omega $ but 4 does not! The reason is that 4 has order 15 (check in the finite field [16]) and 25 cannot divide any number of the form $2^{2^n}-1 $. That is, $\alpha_5=4 $, giving another crazy nim-identity
$(\omega^{\omega})^5 = 4 $
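Both of these finite verifications are easy to automate. Below is a minimal Python sketch (the function names are mine) of Conway's recursive definition of nim-multiplication, $a \otimes b = \text{mex}\{ (a' \otimes b) \oplus (a \otimes b') \oplus (a' \otimes b') : a' < a, b' < b \} $ with $\oplus $ ordinary XOR; it reproduces the orders of 2 and 4 quoted above and confirms that [16] contains no cube root of 2 and no fifth root of 4:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def nim_mul(a, b):
    """Conway's nim-multiplication via the mex of a'b + ab' + a'b' (+ = XOR)."""
    options = {nim_mul(x, b) ^ nim_mul(a, y) ^ nim_mul(x, y)
               for x in range(a) for y in range(b)}
    m = 0
    while m in options:      # mex: least natural number not among the options
        m += 1
    return m

def nim_pow(a, n):
    r = 1
    for _ in range(n):
        r = nim_mul(r, a)
    return r

assert nim_mul(2, 2) == 3 and nim_pow(2, 3) == 1                 # 2 has order 3
assert min(n for n in range(1, 16) if nim_pow(4, n) == 1) == 15  # 4 has order 15
print(sorted({nim_pow(x, 3) for x in range(1, 16)}))  # the cubes in [16]: 2 is absent
print(sorted({nim_pow(x, 5) for x in range(1, 16)}))  # [1, 2, 3]: no fifth root of 4
```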
And, surprises continue to pop up… Conway showed that $\alpha_7 = \omega+1 $, giving the nim-identity $(\omega^{\omega^2})^7 = \omega+1 $. The proof of this already uses some clever finite field arguments. Because 7 doesn't divide any number $2^{2^n}-1 $, none of the finite subfields $[2^{2^n}] $ contains a 7-th root of unity, so the 7-power map is injective, whence surjective, so all finite ordinals have finite 7-th roots! That is, $\alpha_7 \geq \omega $. Because $\omega $ lies in a cubic extension of the finite field [4], the field generated by $\omega $ has 64 elements and so its multiplicative group is cyclic of order 63, and as $\omega $ has order 9, it must be a 7-th power in this field. But the only 7-th powers in that field are precisely the powers of $\omega $, and by inspection $\omega+1 $ is not a 7-th power in that field (and hence also not in any field extension obtained by adjoining square, cube and fifth roots), so $\alpha_7=\omega +1 $.
Conway did stop at $\alpha_7 $ but I've always been intrigued by that one line in ONAG p.61 : "Hendrik Lenstra has computed $\alpha_p $ for $p \leq 43 $". Next time we will see how Lenstra managed to do this and we will use sage to extend his list a bit further, including the first open case : $\alpha_{47}= \omega^{\omega^7}+1 $.
For an enjoyable video on all of this, see Conway's MSRI lecture on Infinite Games. The nim-arithmetic part is towards the end of the lecture but watching the whole video is a genuine treat!
March 2014, 19(2): 523-541. doi: 10.3934/dcdsb.2014.19.523
Expanding Baker Maps as models for the dynamics emerging from 3D-homoclinic bifurcations
Antonio Pumariño 1, José Ángel Rodríguez 2, Joan Carles Tatjer 3, and Enrique Vigil 2
Departamento de Matemáticas, Universidad de Oviedo, Calvo Sotelo s/n, 33007 Oviedo, Spain
Departamento de Matemáticas, Universidad de Oviedo, Calvo Sotelo s/n, 33007 Oviedo, Spain
Departament de Matemàtica Aplicada i Anàlisi, Universitat de Barcelona, Gran Via, 585, 08080 Barcelona, Spain
Received: March 2013; Revised: December 2013; Published: February 2014.
For certain 3D-homoclinic tangencies where the unstable manifold of the saddle point involved in the homoclinic tangency has dimension two, many different strange attractors have been numerically observed for the corresponding family of limit return maps. Moreover, for some special value of the parameter, the respective limit return map is conjugate to what was called the bidimensional tent map. This piecewise affine map is an example of what we now call an Expanding Baker Map, and the main objective of this paper is to show how many of the different attractors exhibited by the limit return maps resemble those observed for Expanding Baker Maps.
Keywords: Expanding Baker Maps, strange attractors, limit return maps.
Mathematics Subject Classification: Primary: 37C70, 37D45; Secondary: 37G35.
Citation: Antonio Pumariño, José Ángel Rodríguez, Joan Carles Tatjer, Enrique Vigil. Expanding Baker Maps as models for the dynamics emerging from 3D-homoclinic bifurcations. Discrete & Continuous Dynamical Systems - B, 2014, 19 (2) : 523-541. doi: 10.3934/dcdsb.2014.19.523
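The specific piecewise affine families studied in the paper above are not reproduced here, but the flavour of "numerically observing" an attractor for an expanding piecewise affine map can be conveyed with a toy experiment. The Python sketch below iterates a product of two expanding tent maps on the unit square and records a long orbit; this map is purely my own illustrative stand-in, not the limit return maps or the Expanding Baker Maps of the paper:

```python
import numpy as np

def tent(u, s):
    """Piecewise affine tent with slopes +s and -s; expanding for s in (1, 2]."""
    return s * min(u, 1.0 - u)

def F(x, y, s=1.9, t=1.7):
    """Toy piecewise affine expanding map of the unit square (illustrative only)."""
    return tent(x, s), tent(y, t)

x, y = 0.1234, 0.5678
for _ in range(1_000):           # discard the transient
    x, y = F(x, y)

pts = []
for _ in range(20_000):          # points accumulating on the attracting region
    x, y = F(x, y)
    pts.append((x, y))
pts = np.array(pts)
print("orbit bounding box:", pts.min(axis=0), pts.max(axis=0))
```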
Population pharmacokinetics and pharmacodynamics of the artesunate–mefloquine fixed dose combination for the treatment of uncomplicated falciparum malaria in African children
Monia Guidi1,2,
Thomas Mercier2,
Manel Aouri2,
Laurent A. Decosterd2,
Chantal Csajka1,2,
Bernhards Ogutu3,
Gwénaëlle Carn4 &
Jean-René Kiechel4
The World Health Organization (WHO) recommends combinations of an artemisinin derivative plus an anti-malarial drug of longer half-life as treatment options for uncomplicated Plasmodium falciparum infections. In Africa, artesunate–mefloquine (ASMQ) is an infrequently used artemisinin-based combination therapy (ACT) because of perceived poor tolerance to mefloquine. However, the WHO has recommended reconsideration of the use of ASMQ in Africa. In this large clinical study, the pharmacokinetics (PK) of a fixed dose combination of ASMQ was investigated in an African paediatric population to support dosing recommendations used in Southeast Asia and South America.
Among the 472 paediatric patients aged 6–59 months from six African centres included in the large clinical trial, a subset of 50 Kenyan children underwent intensive sampling to develop AS, its metabolite dihydroartemisinin (DHA) and MQ PK models. The final MQ PK model was validated using sparse data collected in the remaining participants (NONMEM®). The doses were one or two tablets containing 25/55 mg AS/MQ administered once a day for 3 days according to patients' age. A sensitive LC–MS/MS method was used to quantify AS, DHA and MQ concentrations in plasma. An attempt was made to investigate the relationship between the absence/presence of malaria recrudescence and MQ area under the curve (AUC) using logistic regression.
AS/DHA concentration–time profiles were best described using a one-compartment model for both compounds with irreversible AS conversion into DHA. AS/DHA PK were characterized by a significant degree of variability. Body weight affected DHA PK parameters. MQ PK was characterized by a two-compartment model and a large degree of variability. Allometric scaling of MQ clearances and volumes of distribution was used to depict the relationship between MQ PK and body weight. No association was found between the model predicted AUC and appearance of recrudescence.
The population pharmacokinetic models developed for both AS/DHA and MQ showed a large variability in drug exposure in the investigated African paediatric population. The largest contributor to this variability was body weight, which is accommodated for by the ASMQ fixed dose combination (FDC) dosing recommendation. Besides body weight considerations, there is no indication that the dosage should be modified in children with malaria compared to adults.
Trial registration Pan African Clinical Trials Registry PACTR201202000278282 registration date 2011/02/16
The World Health Organization (WHO) estimates a significant 18% reduction in the incidence of malaria along with a considerable 28% decrease in the malaria mortality rate between 2010 and 2017 [1]. Despite this substantial progress, the disease still caused an estimated 435,000 deaths worldwide, mostly in Africa (93%) and in children under 5 years of age (61%) [1]. Artemisinin-based combination therapy (ACT) is the first-line treatment for uncomplicated Plasmodium falciparum infection, the predominant cause of malaria in Africa, recommended by the WHO since 2001 [2]. These combinations involve a rapidly eliminated and fast-acting artemisinin derivative together with a much more slowly eliminated drug that kills the remaining parasites. One of the five WHO recommended artemisinin-based combinations is artesunate (AS) associated with mefloquine (MQ), extensively used in Asia and Latin America for the last 20 years [3]. This combination is less commonly selected in Africa, because of the availability of other affordable and already registered artemisinin-based combinations [4], as well as existing concerns about MQ tolerability [5, 6]. However, the WHO has recommended reconsideration of the use of ASMQ in Africa in order to increase the number of artemisinin-based combinations available, with the consequent reduction of the risk of developing drug resistance [4].
The development of a fixed-dose combination (FDC) of AS and MQ was begun in 2002 by the Drugs for Neglected Diseases initiative (DNDi) with the fixed-dose artesunate-based combination therapy (FACT) Consortium [3]. This combination has been demonstrated to be efficacious and safe in Asia and Latin America [7,8,9], but there is still limited experience with its use in Africa. Therefore, an open-label, prospective, randomized, controlled, multi-centre, non-inferiority clinical trial evaluating the efficacy, safety and pharmacokinetics of the ASMQ FDC versus artemether–lumefantrine (AMLF) in children aged 6–59 months was conducted in Africa by DNDi (Pan African Clinical Trials Registry number PACTR201202000278282). Because MQ dose splitting into three equal daily doses has been shown to optimize treatment compliance and to improve MQ tolerability [10, 11], FDC ASMQ dispersible tablets were administered over three consecutive days based on the patients' age. The efficacy of ASMQ was found to be non-inferior to the efficacy of AMLF and the safety of the two treatments was found to be similar with low risk of repeated early vomiting, indicating that ASMQ is a valuable treatment option for children younger than 5 years with uncomplicated falciparum malaria in Africa [12]. Within the framework of this previous study, a pharmacokinetic study was conducted to characterize ASMQ FDC pharmacokinetics in the African paediatric patient population, to compare it to data gathered in adult patients and volunteers, to validate the recommended treatment regimen, and to explore the relationships between drug exposure and treatment outcomes.
Study design and participants
The clinical trial was carried out in six African centres: three in Tanzania, two in Burkina Faso and one in Kenya. Children younger than 5 years were enrolled, after written informed consent from a parent/guardian, if they were infected by P. falciparum, as confirmed by microscopy (density between 2000 and 200,000 asexual parasites/µL), and had fever equal to or higher than 37.5 °C. Exclusion criteria were body weight less than 5 kg, signs of severe/complicated malaria, febrile conditions caused by diseases other than malaria, a known hypersensitivity to the study drugs, a mixed plasmodium infection, a history of anti-malarial treatment in the 2 weeks preceding the trial (4 weeks in the case of mefloquine and piperaquine), prior participation in a therapeutic trial within 3 months or inability to tolerate oral medication. Patients were followed up to day 63 after the start of treatment or to the first recurrence of infection. The study protocol was reviewed and approved by the national and independent ethics committees of all participating centres.
Of the 945 patients enrolled in the trial, 473 were randomized to the ASMQ arm (one of them was never dosed) and 472 were randomized to the AMLF arm. The pharmacokinetic analysis described here was performed on the 472 patients who received ASMQ.
Administered doses for these patients were one or two dispersible tablets containing 25 mg AS and 55 mg MQ once a day for three consecutive days to children aged from 6 to 11 months and from 12 to 59 months, respectively. Clinical and parasitological examinations were scheduled at baseline, i.e. before drug administration, at day 0 (D0), D1, D2, D3, D7, D14, D21, D28, D35, D42, D49, D56 and D63 and on any other day if the patient spontaneously returned and parasitological reassessment was required (as per protocol). A margin of ± 2 days to the assigned day of visit was allowed from D7 onward. In case of recurrence of parasitaemia on D7, D14, D21, D28, D35, D42, D49, and D56 the date was recorded and the type of recurrence was determined by PCR (appearance of new infection, malaria recrudescence, missing PCR information or undetermined type).
According to the study protocol, the first fifty children from Kenya enrolled in the ASMQ arm underwent intensive blood sampling: at baseline, on D0 after drug administration (until 6 h after first dosing), D2 (until 6 h after the third dose), D3 (72 h after first dose), D7 and on one other occasion on day 28, 35, 42, 49, 56 or 63. Two blood samples, at baseline and on D7, were collected for all the other participants. Additionally, for all patients with recurrence of parasitaemia, a blood sample was taken on the day of failure.
The mass spectrometry assay for AS, DHA and MQ used for the analysis of study samples is an adaptation of a previously published validated multiplex method [13]. The assay has been further improved by the use of stable isotopically labelled internal standards for MQ (mefloquine-d9) and DHA (DHA-13Cd4) to circumvent the potential matrix effect that may affect the accuracy of mass detection.
The mobile phase was delivered at a flow rate of 0.3 mL/min on a 2.1 mm × 75 mm XSelect HSS 3.5 μm column (Waters, Milford, MA, USA), using solvent A (2 mM ammonium acetate + 0.1% FA) and solvent B (MeCN + 0.1% FA) distributed according to the following stepwise gradient program: 98% A: 0 min; 98% A → 15% A: from 0.0 min → 13.0 min followed by a re-equilibration step to the initial solvent proportions. The retention time of mefloquine/mefloquine-d9, DHA/DHA-13Cd4 and artesunate is 7.4 min, 8.2 min and 9.2 min, respectively. The chromatographic system was coupled to a triple stage quadrupole (TSQ) Quantum Ion mass spectrometer (MS) from Thermo Fischer Scientific (Waltham, MA, USA) equipped with an Ion Max electrospray ionization (ESI) interface. The limits of quantification (LOQ) of the method are 2.5 ng/mL for MQ and 2 ng/mL for AS and DHA.
Plasma samples were isolated by centrifugation and stored at − 20 °C until batch analysis. Briefly, 100 μL of plasma sample were mixed with 50 µL internal standard (DHA-13Cd4 at 130 ng/mL; mefloquine-d9 at 43 ng/mL) and extracted with 600 µL of acetonitrile. The supernatant (700 µL) was evaporated under nitrogen at room temperature and was reconstituted in 150 µL of MeOH/ammonium acetate 2 mM (1:1) adjusted with formic acid at 0.1%, vortex-mixed and centrifuged again. The samples were maintained at +5 °C in autosampler racks throughout the analytical series. The injection volume was 20 μL.
The method is precise (with mean inter-day CV % < 10%), and accurate (inter-day deviation from nominal values < 5%). Since its initiation, the laboratory has participated in the Pharmacology Proficiency Testing Programme for anti-malarial drugs (http://www.wwarn.org/toolkit/qaqc) organized by the World Wide Antimalarial Resistance Network WWARN (http://www.wwarn.org/).
Pharmacokinetics analysis
The non-linear mixed effects modelling program NONMEM® (version 7.3) [14] with the Perl-Speaks-NONMEM® (PsN) toolkit (version 3.7.6) [15] was used to estimate average population pharmacokinetic parameters and their associated between-subject variability (BSV) and to identify factors that influence them. MQ and AS/DHA pharmacokinetic models were developed on the data collected from the 50 Kenyan patients with extensive sampling. Molar units were used for the AS/DHA pharmacokinetic analyses. Because of the very fast elimination of AS and DHA and the selection of the trial sampling times, an external model validation could only be performed for MQ, on the clinical trial data not used for model-building. Graphical exploration and statistical analyses were performed by means of the R environment (version 2.15.1, R Development Core Team, http://www.r-project.org/).
Structural and statistical model
A stepwise modelling approach was undertaken to identify the models that best described the MQ and AS/DHA pharmacokinetics. Multi-compartment dispositions with first-order absorption and elimination processes were compared for MQ. Due to the restricted amount of AS and DHA data, drug and metabolite pharmacokinetics were modelled simultaneously and directly described by means of one-compartment models with linear absorption and elimination. Moreover, since AS is rapidly and almost completely hydrolysed to DHA, its elimination was assumed to occur exclusively via irreversible conversion to DHA [16, 17]. An adequate estimation of the AS absorption rate constant (Ka) could not be made because of the small number of samples collected right after dose intake (at most one sample for each enrolled child on the first and third treatment days). Ka was thus fixed to 3.2 h−1, the mean of previously published estimates retrieved from papers using a first-order process to depict AS absorption [17, 18].
Parameterization was performed in terms of clearances (CL for drugs and CLM for metabolite), inter-compartmental clearance (Q), central (VC for drugs and VM for metabolite) and peripheral (VP) volumes of distribution and Ka. The metabolic conversion rate from AS to DHA was estimated by CL/Vc as previously discussed. AS and MQ relative bioavailability (F1, fixed to 100% and with estimated BSV) were also tested for AS/DHA and MQ to account for dose variation with respect to the nominal value due to the administration of water dispersible tablets. Since the ASMQ combination is administered orally, the pharmacokinetic parameter estimates represent apparent values.
Exponential errors were assumed to capture BSV in all the pharmacokinetic parameters. Proportional, additive and combined proportional-additive error models were compared to describe drugs and metabolite intra-patient (residual) variability. Finally, the correlation between AS and DHA concentration measurements was tested using the L2 function in NONMEM®.
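To visualise the structural model just described, the sketch below integrates the corresponding differential equations in Python. It is only a rough illustration: the parameter values are the base-model point estimates quoted later in the Results (CL = 180 L/h, VC = 166 L, CLM = 12.5 L/h, VM = 13.8 L, Ka fixed at 3.2 h−1), and the mg units and 50 mg once-daily dosing are simplifications of the trial design (the actual analysis used molar units in NONMEM®):

```python
import numpy as np
from scipy.integrate import solve_ivp

# base-model point estimates quoted in the Results (apparent oral values)
KA, CL, VC, CLM, VM = 3.2, 180.0, 166.0, 12.5, 13.8   # 1/h, L/h, L, L/h, L

def rhs(t, a):
    """a = [gut, AS, DHA] amounts; AS is eliminated only by conversion to DHA."""
    gut, amt_as, amt_dha = a
    k_conv = CL / VC            # AS -> DHA conversion rate constant
    return [-KA * gut,
            KA * gut - k_conv * amt_as,
            k_conv * amt_as - (CLM / VM) * amt_dha]

a0 = np.zeros(3)
segments = []
for day in range(3):            # 50 mg AS once daily for three days
    a0[0] += 50.0               # the dose enters the gut compartment
    seg = solve_ivp(rhs, (0.0, 24.0), a0, t_eval=np.linspace(0.0, 24.0, 241))
    segments.append(seg.y)
    a0 = seg.y[:, -1]

amounts = np.concatenate(segments, axis=1)
c_as, c_dha = amounts[1] / VC, amounts[2] / VM        # concentrations, mg/L
print(f"AS Cmax ~ {1000 * c_as.max():.0f} ng/mL, "
      f"DHA Cmax ~ {1000 * c_dha.max():.0f} ng/mL")
```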
Covariate analysis
Available covariates were: body weight (BW), height/length, age, sex, creatinine, total bilirubin (BIL), aspartate (AST) and alanine (ALT) aminotransferases, haemoglobin (Hb), haematocrit (Ht), total parasitaemia and co-medications categorized as CYP3A4 inducers. Visual inspection of the correlation between post hoc individual estimates of the pharmacokinetic parameters and the available patients' characteristics was initially conducted to identify potential physiologically plausible relationships. Creatinine clearance was not evaluated since MQ elimination occurs mainly through non-renal processes and AS is completely converted into DHA, which is eliminated via glucuronidation [16]. A stepwise forward insertion/backward deletion approach was then undertaken. Potential covariates influencing the kinetic parameters were first incorporated one at a time and tested for significance (univariate analysis). Sequential multivariate combinations of the identified factors were investigated to discard redundancies and to build an intermediate model with all the most important covariates (multivariate analysis). Finally, backward deletion consisted of removing covariates one at a time from the intermediate model, starting from the most insignificant until no further deterioration of the model was observed.
The influence of body weight on all MQ and DHA pharmacokinetic parameters (PAR) was tested using allometric scaling:
$$ PAR = \theta *\left( {\frac{BW}{MBW}} \right)^{PWR} $$
with θ the population estimate of PAR, MBW the median population body weight and PWR the allometric power, fixed to 0.75 for clearances and 1 for volumes of distribution [19]. A linear relationship between the typical value of a parameter and all the other covariates (continuous covariates centered on the population median; dichotomous covariates coded as 0 and 1) was used. Additionally, AST, ALT and BIL were implemented in the model as dichotomous variables, by introducing a boundary condition, i.e. below or exceeding 1.5 times the upper limit of normal (ULN). Children's age was used to investigate the impact of organ maturation on MQ and DHA clearances, using the following equations, in addition to the simple linear one:
$$ CL = \theta *\frac{1}{{1 + \left( {\frac{AGE}{{TM_{50} }}} \right)^{ - Hill} }} $$
$$ CL = \theta *\left( {MAT_{mag} + \left( {1 - MAT_{mag} } \right)*\left( {1 - e^{{ - AGE*K_{mat} }} } \right)} \right) $$
where Hill is the sigmoid power, TM50 the AGE at 50% of maturation, MATmag, the maturation magnitude for age, and Kmat the age maturation rate constant [20, 21]. The population median covariate value was assigned to patients with missing information.
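For readers less used to NONMEM notation, Eqs. 1–3 translate directly into code; the following is a plain Python transcription (function names are mine) acting on a typical parameter value, using the 12.2 kg median body weight quoted later for the simulations:

```python
import math

def allometric(theta, bw, mbw, pwr):
    """Eq. 1: allometric scaling; pwr = 0.75 for clearances, 1 for volumes."""
    return theta * (bw / mbw) ** pwr

def maturation_sigmoid(theta, age, tm50, hill):
    """Eq. 2: sigmoid (Hill-type) maturation of clearance with age."""
    return theta / (1.0 + (age / tm50) ** (-hill))

def maturation_exponential(theta, age, mat_mag, k_mat):
    """Eq. 3: exponential maturation with magnitude MATmag and rate Kmat."""
    return theta * (mat_mag + (1.0 - mat_mag) * (1.0 - math.exp(-k_mat * age)))

# e.g. a 9 kg child vs the 12.2 kg median: clearance scales by (9/12.2)**0.75
print(allometric(1.0, 9.0, 12.2, 0.75))   # ~0.80, i.e. about 20% lower clearance
```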
The acute phase of malaria is associated with altered gastrointestinal motility and an increased likelihood of vomiting. In the three-daily dose ASMQ regimen, the second dose is administered when the patient is in an improved state of health, thanks to the first dose of AS, that kills most of the parasites [22]. The potential impact of parasitaemia on AS and MQ F1 was studied using a linear model of log-transformed (base 10) parasite counts measured at baseline of each ASMQ administration day. Missing parasitaemia information was imputed at the median value of the specific study day. Treatment day (0 vs. 1 and 2), considered as a surrogate marker of the rapid improvement in health due to the first AS dose, was also evaluated on AS and MQ F1. Since parasite counts and treatment day are correlated, differences in individual day 0 F1 due to parasitaemia at enrolment were explored, i.e. baseline parasite counts recorded at the first treatment day, by combining these two covariates. Furthermore, it was hypothesized that a patient's clinical condition affects MQ Ka and this was tested by integrating the effect of the treatment day (0 vs. 1 and 2) on Ka.
Terminal half-lives (t1/2), maximum concentrations (Cmax), and times to achieve Cmax (tmax) for all three compounds, the MQ area under the curve extrapolated to infinity (AUC0–inf) and the AS and DHA AUC0–24 after the first and the third ASMQ intake were computed using the final pharmacokinetic parameter estimates and classic pharmacokinetic equations or NONMEM integration, as appropriate.
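For the one-compartment AS/DHA sub-models, the "classic pharmacokinetic equations" referred to here are the standard first-order input formulas; a minimal sketch (my notation) is shown below, with the AS base-model estimates reported later as a numerical check (the resulting half-life of roughly 38 min falls inside the 22–72 min range cited in the Discussion):

```python
import math

def secondary_params(ka, cl, v):
    """Half-life, tmax and dose-normalised Cmax for a one-compartment model
    with first-order input (rate ka) and elimination (rate ke = cl/v)."""
    ke = cl / v
    t_half = math.log(2.0) / ke
    tmax = math.log(ka / ke) / (ka - ke)              # valid for ka != ke
    cmax = (ka / (v * (ka - ke))) * (math.exp(-ke * tmax) - math.exp(-ka * tmax))
    return t_half, tmax, cmax                          # cmax is per unit dose

# AS base-model values (Ka = 3.2 1/h, CL = 180 L/h, VC = 166 L)
t_half, tmax, cmax = secondary_params(3.2, 180.0, 166.0)
print(f"t1/2 = {60 * t_half:.0f} min, tmax = {tmax:.2f} h")   # ~38 min, ~0.5 h
```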
Parameter estimation, model selection and exclusion criteria
MQ and AS/DHA concentrations were fitted using the first-order conditional estimation (FOCE) method with interaction. AS and DHA non-zero concentrations measured more than a week after the last drug intake were considered unreliable and thus omitted from the analysis. Records with other missing information (unreported concentration measurements, dose intake or sampling times, inconsistent date/time of dose intake and sampling) were also omitted. Data below the quantification limit (BQL) of the assays were handled by setting the first of a series of BQL samples at LOQ/2 and treating all the others as missing (M6 method) [23].
Diagnostic goodness-of-fit plots, along with differences in the NONMEM® objective function value (ΔOFV), were employed to discriminate between nested models. Since a ΔOFV between any two hierarchical models approximates a χ2 distribution, a change of more than 3.84 (p < 0.05) and 6.63 (p < 0.01) points was considered statistically significant for one additional parameter in model-building or forward insertion and backward-deletion covariate steps, respectively. Akaike's information criterion (AIC) was used for non-hierarchical models. Shrinkage was also evaluated. Sensitivity analyses removing outlying data with absolute conditional weighted residuals (CWRES) greater than 4 or potentially unreliable covariate values and concentration measurements were finally performed to avoid any potential bias in parameter estimation and covariate exploration.
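Two of the conventions above are easy to make concrete. The ΔOFV thresholds are simply χ² quantiles, and the M6 rule can be written as a small helper; both snippets below are my own sketches of the stated conventions, not production code:

```python
from scipy.stats import chi2

# likelihood-ratio thresholds for nested models differing by one parameter
print(chi2.ppf(0.95, df=1))   # 3.84, model-building / forward insertion (p < 0.05)
print(chi2.ppf(0.99, df=1))   # 6.63, backward deletion (p < 0.01)

def m6_impute(concs, loq):
    """M6-style BQL handling: the first BQL value in a consecutive run is set
    to LOQ/2, subsequent ones in the run are flagged as missing (None)."""
    out, in_run = [], False
    for c in concs:
        if c is not None and c >= loq:
            out.append(c)
            in_run = False
        else:
            out.append(loq / 2 if not in_run else None)
            in_run = True
    return out

print(m6_impute([5.0, 1.0, 0.5, 4.0, 1.2], loq=2.0))  # [5.0, 1.0, None, 4.0, 1.0]
```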
Model validation and assessment
The stability of the final MQ and AS/DHA models was assessed by means of the bootstrap method implemented in PsN-Toolkit [15]. Median parameter values with their 95% confidence interval (CI95%) were derived from 2000 replicates of the initial datasets and compared with the original estimates. Prediction-corrected visual predictive checks (pcVPC) were also performed using the PsN-Toolkit and the R package Xpose4 by simulations based on the final pharmacokinetic models with variability using 1000 children [15, 24]. Moreover, the final MQ pharmacokinetic model was validated using concentrations collected from participants not used in initial model development. The accuracy and precision of the model were estimated by means of prediction error (MPE) and root mean square error (RMSE), using log-transformed concentrations, for the entire dataset and also for each study site [25].
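As a sketch of the validation metrics, the helper below implements one common formulation of MPE (bias) and RMSE (precision) on log-transformed concentrations, consistent with the approach cited above [25]; the observed/predicted pairs are hypothetical:

```python
import numpy as np

def bias_precision(observed, predicted):
    """MPE and RMSE computed on log-transformed concentrations."""
    err = np.log(np.asarray(predicted)) - np.log(np.asarray(observed))
    mpe = err.mean()                      # mean prediction error (bias)
    rmse = np.sqrt((err ** 2).mean())     # root mean square error (precision)
    return mpe, rmse

# hypothetical observed vs. individual-predicted MQ concentrations (ng/mL)
obs = [120.0, 340.0, 90.0, 55.0]
pred = [110.0, 360.0, 100.0, 50.0]
print(bias_precision(obs, pred))
```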
Comparison between mefloquine exposures in children and adult volunteers and patients
Median and 90% prediction interval (PI90%) of children and adult concentration–time profiles were obtained through simulations (n = 1000) using the final pharmacokinetic model described above and published MQ pharmacokinetic models, including BSV and intra-individual variability. A literature search allowed the identification of two pharmacokinetic models developed in adults receiving the same fixed dose formulation of ASMQ as the one administered to the children enrolled in this clinical trial [26, 27]. The investigated populations consisted of Indian adult patients and of Thai adult patients and volunteers, administered 400 mg of MQ once per day over three consecutive days. MQ disposition was described by a two-compartment model with linear elimination in both analyses. A first-order absorption model in Julien et al. [26] and a single transit compartment model in Reuter et al. [27] characterized the absorption phase. The two models were implemented in NONMEM®, fixing simulated individuals' body weight to the corresponding median population value. Administered MQ doses were 110 mg and 400 mg over three consecutive days for children and adults, respectively. MQ drug exposure was quantified by computing the median and PI95% AUC over the whole study period (AUC0–day63) by NONMEM integration for each simulated population/model.
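To give a feel for the magnitude of these simulated exposures, the sketch below integrates a two-compartment oral model with the MQ base-model point estimates reported later (CL = 0.48 L/h, VC = 88 L, Q = 0.41 L/h, VP = 69 L, Ka = 0.15 h−1) for the paediatric regimen; it is a typical-value calculation only, without the between-subject variability behind the published prediction intervals:

```python
import numpy as np
from scipy.integrate import solve_ivp

# MQ base-model point estimates from the Results (apparent oral values)
KA, CL, VC, Q, VP = 0.15, 0.48, 88.0, 0.41, 69.0

def rhs(t, a):
    """Two-compartment disposition with first-order absorption (amounts in mg)."""
    gut, cen, per = a
    return [-KA * gut,
            KA * gut - (CL / VC) * cen - (Q / VC) * cen + (Q / VP) * per,
            (Q / VC) * cen - (Q / VP) * per]

a0 = np.zeros(3)
t_all, c_all = [], []
for day in range(3):                     # 110 mg MQ once daily for three days
    a0[0] += 110.0
    horizon = 24.0 if day < 2 else 63 * 24 - 48.0   # run the last segment to day 63
    seg = solve_ivp(rhs, (0.0, horizon), a0, t_eval=np.linspace(0.0, horizon, 2000))
    t_all.append(seg.t + day * 24.0)
    c_all.append(seg.y[1] / VC)          # central-compartment concentration, mg/L
    a0 = seg.y[:, -1]

t, c = np.concatenate(t_all), np.concatenate(c_all)
# ~690 mg.h/L for the typical subject; cf. the median of 725 mg.h/L reported
# below, which also reflects the estimated between-subject variability
print(f"typical AUC0-day63 ~ {np.trapz(c, t):.0f} mg.h/L")
```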
Mefloquine pharmacokinetic–pharmacodynamic analysis
This exploratory analysis was carried out on MQ data collected from all children participating in the trial with complete dosing history information who did not drop out in the early days of the study. Model-predicted MQ cumulative AUC (AUC0–dayx) on study days 7, 28, 42, and 63 were calculated by concentration integration in NONMEM®. The relationship between recrudescence of infection (response variable, coded as 0/1) and model-predicted AUC0–dayx (independent variable) on study days 7, 28, 42, and 63 was inspected by means of logistic regression using STATA (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, TX: StataCorp LP). The independent variable was log-transformed (using base 2) and centred on its median value. The level of significance was set at 0.05.
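The analysis was run in STATA; an equivalent sketch in Python (with hypothetical exposure and outcome data, since the trial dataset is not reproduced here) would be:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical rows: model-predicted cumulative AUC (mg.h/L) and recrudescence flag
df = pd.DataFrame({"auc_day7": [281.0, 310.0, 170.0, 286.0, 220.0, 350.0],
                   "recrudescence": [0, 0, 0, 1, 0, 1]})

# log2-transform the exposure metric and centre it on its median, as in the analysis
x = np.log2(df["auc_day7"])
x = x - np.median(x)

fit = sm.Logit(df["recrudescence"], sm.add_constant(x)).fit(disp=0)
print(fit.summary())   # the slope p-value tests the AUC-recrudescence association
```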
Of the 472 children enrolled in the trial and randomized in the ASMQ arm, 21 were removed according to the exclusion criteria of the pharmacokinetic analysis. MQ and AS/DHA pharmacokinetic model development was carried out on 48 patients and MQ model validation on 378 patients, after removal of subjects with unreliable data. The characteristics of the patients used in the MQ and AS/DHA model-building, as well as the final MQ model validation and MQ pharmacokinetic–pharmacodynamic analysis datasets, are listed in Table 1.
Table 1 Characteristics of the children enrolled in the trial for mefloquine and artesunate/dihydroartemisinin model development, mefloquine model validation and pharmacokinetic-pharmacodynamic analysis
Population pharmacokinetic analysis
A total of 216 MQ, 117 AS and 134 DHA (including BQL) concentrations were available for the 48 Kenyan patients selected for the pharmacokinetic model development. Of note, none of the MQ concentrations was BQL, while 71% and 57% of AS and DHA samples were BQL data. Median (range) treatment duration per study subject was 3 days (1–3) and the number of available non-BQL samples was 5 (1–7) for MQ, 1 (1–2) for AS and 2 (1–3) for DHA. MQ concentrations ranged between 0.17 ng/mL and 6552.51 ng/mL, AS (> BQL) between 2.1 and 8469.8 ng/mL and DHA (> BQL) between 2.9 and 2400.9 ng/mL.
Artesunate and dihydroartemisinin
As previously described, a two-compartment structure (one compartment each for drug and metabolite) was used to simultaneously fit AS and DHA data, with first-order absorption, exclusive drug elimination via irreversible conversion to DHA and first-order elimination of the metabolite. Initially, BSV was assigned only to CL and a mixed error model was assumed for the intra-patient variability of both drug and metabolite. Model stability was achieved by integrating a correlation between AS and DHA concentration measurements (ΔOFV = − 25, p < 0.001). BSV on VC did not improve the data description (ΔOFV = 0, p > 0.05), whilst assignment of BSV to CLM (ΔOFV = − 7.3, p < 0.01) and to VM (ΔOFV = − 8.0, p < 0.01) yielded a better fit of the data. Inclusion of relative F1 (fixed to 100% with estimated BSV) explained all the BSV on AS and DHA clearance and significantly decreased the OFV (ΔOFV = − 17.7, p < 0.01). The estimates and variability (CV%) of the pharmacokinetic parameters obtained by the base population model were a relative F1 of 100% (67%), a CL of 180 L/h, a VC of 166 L, a CLM of 12.5 L/h and a VM of 13.8 L (57%).
Age, sex and BIL, as well as the hepatic liver tests ALT and AST, had a significant impact on F1 (ΔOFV < − 9.6, p < 0.01). Because of poor effect estimation (relative standard error, RSE = 155%), BIL was not kept for further covariate analyses. Sensitivity analyses revealed that the effect of ALT and AST on F1 was due solely to a single patient having the highest values for both hepatic enzyme tests. Whether this finding was a true or an incidental effect could not be validated and these covariates were thus not retained in the model. F1 was found to increase with the parasite counts (ΔOFV = − 13.2, p < 0.01), and to be higher at day 0 compared to days 1 and 2 of treatment (ΔOFV = − 13.7, p < 0.01). As shown in Table 1, baseline parasite counts were extremely high before starting the anti-malarial treatment and dropped to 0 before administration of the third ASMQ dose, a consequence of the important and immediate AS effect. Differences in F1 at day 0 related to parasite counts were investigated but did not improve the fit with respect to the model including only the treatment day or the parasite counts as covariate (ΔOFV < 3.8, p > 0.05). Because of the correlation between the two factors and the absence of fit improvement by combining the parasite information and the treatment day, only the latter was kept in the model. BW allometric scaling on CLM and VM markedly decreased the objective function (AIC difference of − 22 with respect to the basic structural model). Maturation on CLM was adequately described using Eq. 3 and improved the model fit (ΔOFV = − 18.9, p < 0.01). VM was significantly impacted by sex (ΔOFV = − 8.8, p < 0.01). Complete multivariate analyses allowed the effects of sex on VM and F1 to be discarded, as well as that of maturation on DHA clearance. These results show that F1 is reduced by 68% upon doubling child age with respect to the population median (2.6 years), and is 29% higher on the first day of therapy than in the subsequent treatment days. The effect of BW on CLM and VM was also retained.
Model evaluation and assessment
The final model parameter estimates, together with their bootstrap estimations, are shown in Table 2 and the goodness-of-fit plots in Additional file 1. Model-predicted secondary parameters are presented in Table 4. Shrinkage was lower than 30% for BSV and 10% for residual variabilities. The model was considered reliable since the parameter estimates were within the bootstrap CI95% and differed less than 15% from their bootstrap estimations. Prediction-corrected VPCs, shown in Fig. 1, evidence some model misspecification; however, the model was judged acceptable because of the paucity of available AS/DHA data.
Table 2 Final population parameter estimates of artesunate and dihydroartemisinin with their bootstrap evaluations in 2000 replicates
Prediction corrected visual predictive check of the final model of a artesunate and b dihydroartemisinin. Open circles represent prediction corrected observed plasma concentration; black solid and dashed lines the median and PI90% of the observed data; shaded magenta and grey surfaces the model predicted 90% confidence interval of the simulated median and PI90%, respectively; horizontal black lines are the LOQ of AS (0.005 nmol/mL) and DHA (0.005 nmol/mL). The lower panels show the fraction of observed (open circles) with the PI95% of the simulated (shaded magenta surface) BQL data
Mefloquine
A two-compartment model with first-order absorption and elimination described MQ data better than a one-compartment model (ΔOFV = − 64, p < 0.001). No additional benefit was observed using three compartments (ΔOFV = − 0.9, p > 0.05). BSV on VC (ΔOFV = − 22, p < 0.001) in addition to CL yielded a better fit of the data, which was further enhanced by inclusion of BSV on Ka (ΔOFV = − 19, p < 0.001). No improvement of the model fit was observed when associating BSV with Q or VP (ΔOFV = 0, p > 0.05). The inclusion of MQ F1 fixed to 100% with an estimated BSV significantly decreased the OFV whilst explaining all the BSV associated with VC (ΔOFV = − 9.4, p < 0.01). Finally, a proportional model was retained to describe the intra-patient variability. The estimates and variability (CV%) of the pharmacokinetic parameters obtained by the base population model were an F1 of 100% (39%), a CL of 0.48 L/h (40%), a VC of 88 L, a Q of 0.41 L/h, a VP of 69 L, and a Ka of 0.15 h−1 (87%).
The univariate analyses showed no association between the covariates tested and MQ bioavailability, clearances and volumes of distribution (ΔOFV ≥ − 3.2, p > 0.05; AIC difference of 2 points with respect to the structural model for BW on all the PK parameters). However, a sensitivity analysis performed after removing the patient with extremely low concentrations after the second and third ASMQ doses revealed that this outlier masked the real impact of BW on clearances and volumes of distribution and of age on F1 (AIC difference of − 5 and ΔOFV = − 5.4, p < 0.05, respectively), without inducing any modification in the MQ basic model. Sex and age were found to significantly influence Ka (ΔOFV ≤ − 7.0, p < 0.05). A decrease of 74% in Ka was observed upon doubling the age with respect to the population median (2.6 years), and female children had a 55% lower Ka than male children. Multivariate analysis showed that age accounted for the effect of sex on Ka and allowed the impact of age on F1 to be discarded. Finally, significantly different Ka at day 0 and days 1/2 of ASMQ treatment were identified, due to the improvement in patient health following the first intake of AS (ΔOFV = − 39.2, p < 0.001). Multivariate and backward deletion step analyses performed using the reduced dataset, obtained by removal of the outlying patient, showed that the BW effect on clearances and volumes of distribution, as well as the age and treatment day effects on Ka, should be retained in the final MQ pharmacokinetic model.
The final model parameters, together with their bootstrap estimations, are displayed in Table 3 and the goodness-of-fit plots presented in Additional file 2. Model predicted secondary parameters are shown in Table 4. Shrinkage was 28% for residual variability and lower than 15% for BSV. The model was considered reliable since the parameters were within the bootstrap CI95% and differed less than 5% from the bootstrap estimations. The results of the pcVPC (Fig. 2) support the predictive performance of the model. Moreover, the external validation done using the remaining 538 concentrations from 378 children enrolled in the trial showed a negligible bias of 0% (CI95% − 2 to 1%) with a precision of 16% at an individual level. A small bias of 18% (CI95% 13–24%) with a precision of 81% was calculated for population predictions. Non-significant or small (absolute values ≤ 6%) biases were calculated at each study site on an individual level (Table 5). Furthermore, the precision of drug predictions was close to the estimated residual intra-patient variability, which strongly supports the predictive performance of the model (Table 5).
Table 3 Final population parameter estimates of mefloquine with their bootstrap evaluations in 2000 replicates
Table 4 AS, DHA and MQ final model-predicted secondary pharmacokinetic parameters
Prediction corrected visual predictive check of the final model with MQ prediction corrected plasma concentration (open circles) and quartiles (black solid and dashed lines) with model-based percentiles 90% confidence interval (shaded magenta and grey surfaces for the median and low/high percentiles, respectively). Horizontal black line represents the MQ LOQ (2.5 ng/mL)
Table 5 Final model accuracy and precision per study site at individual level
Figure 3 compares the model-predicted AUC0–day63 for children and for adult volunteers and patients. A median (PI95%) AUC0–day63 of 725 mg·h/L (310–1718) was computed through simulations of the final pharmacokinetic model for children weighing 12.2 kg and taking 110 mg of MQ once per day over three consecutive days. Adult patients had a median (PI95%) AUC0–day63 of 1080 mg·h/L (599–1911) and 936 mg·h/L (570–1413), calculated using the models of Julien et al. and Reuter et al., respectively, while adult volunteers had 865 mg·h/L (555–1211) under the dosage regimen of MQ 400 mg once per day over three consecutive days. Median (PI90%) concentration–time profiles for adult and paediatric patients are shown in Fig. 4.
Model predicted AUC0–day63 for children and adult patients and volunteers obtained by simulating 1000 individuals with the present (children), the Julien et al. (adult patients) and Reuter et al. (adult volunteers and patients) models, respectively [26, 27]
Median and 90% prediction intervals of MQ concentration–time profiles for children and adult patients receiving 110 mg and 400 mg of MQ once per day over three consecutive days obtained with this study (children, magenta solid line and shaded surface), the Julien et al. (adult, light grey line and shaded surface), and Reuter et al. (adult, dark grey line and shaded surface) models, respectively [26, 27]
Mefloquine pharmacokinetic-pharmacodynamic analysis
Treatment failure was reported for 212 (56%) of the children enrolled in the study; of these failures, 81% (n = 171) were due to new infections and 7% (n = 15) to recrudescence during the 63 days of follow-up. In 2% of the enrolled individuals PCR information was missing and in 10% it was not possible to determine the nature of the treatment failure. Median (range) model-predicted AUC0–day7 was estimated to be 281 mg·h/L (70–854) in children with reported treatment success within the follow-up period, and 286 mg·h/L (167–378) and 286 mg·h/L (70–579) for children with or without malaria recrudescence, respectively. No significant association was found through logistic regression between model-predicted AUC0–dayx at days 7, 28, 42 or 63 and the appearance/absence of recrudescence (p > 0.05) (data not shown for days > 7).
The present analysis describes the pharmacokinetics of fixed-dose ASMQ in African children under the age of 5 years, with the aim of identifying the sources of the significant variability in drug exposure and validating the recommended weight-for-age dosage regimen. The very short half-lives estimated for AS and DHA are in good agreement with reported values ranging from 22 to 72 min for the drug and from 30 to 186 min for the metabolite [28, 29]. The calculated tmax values for AS and DHA agree with reported peak AS and DHA plasma concentrations within the first hour and first 2 h post-dose, also supporting the appropriateness of the value chosen for AS Ka in this work [28, 29].
A two-compartment model was used to describe mefloquine pharmacokinetics as already shown in previous analyses [26, 27]. Drug clearance and central and peripheral volumes of distribution were found to be markedly lower than the values estimated in adult patients of African, Caucasian or Asian origin [26, 27, 30]. However, estimated median terminal half-life and mean absorption times are comparable to those obtained for adults [26, 27, 30,31,32]. In addition, simulations performed using the final model in children and previously published pharmacokinetic models in adult patients and volunteers show that these populations have comparable exposure under the specific recommended dosing regimen.
Considerable between-subject variability characterized the pharmacokinetics of both anti-malarial drugs. Such variability remained largely unexplained by the inclusion of the available covariates. Body weight was associated with all the MQ and DHA pharmacokinetic parameters. The association between this demographic characteristic and AS and DHA disposition has already been described [17, 33]. Reported discrepancies in MQ disposition and elimination between adults and children may be ascribed to differences in patients' body weight. These results illustrate the association between body weight and AS/DHA and MQ disposition after ASMQ FDC administration in African children and thus support the recommended dose adjustments according to weight-for-age, in order to obtain similar exposures in adults and children.
Twenty-one percent of the initial AS relative F1 variability was explained by age and treatment day. It is worth noting that F1 is intrinsically connected to the AS and DHA pharmacokinetic parameters, which are apparent values because of the oral administration of ASMQ. The decrease in F1 observed with age implies an increase in drug and metabolite elimination. This effect might thus be related to organ maturation in the study population. F1 was significantly higher at day 0 than at days 1 and 2 of treatment. This is a consequence of the rapid and efficacious therapeutic AS effect observed already after the first AS dose intake. Recently, the relationship between malaria disease and AS bioavailability has been described using parasitaemia variation [34, 35]. An increase of AS F1, resulting in augmented drug exposure, was reported with increasing parasite counts. The same trend was found in univariate analysis in the study population but was not retained after multivariate combination with treatment day. These two covariates are indeed correlated. However, it was not possible to identify differences in the first dose F1 due to variations in parasite counts in the study population. This suggests that the general health improvement, and not only the disappearance of the parasite after the first ASMQ dose, affects AS pharmacokinetics.
Age was found to markedly decrease MQ drug absorption rate. A significantly higher tmax has been reported in healthy fasting volunteers taking an MQ dose compared to those having a high-fat breakfast (36 vs. 17 h), meaning that food would increase MQ Ka [36]. This is consistent with the hypothesis that the younger children in the African paediatric population investigated were breastfed and thus received a more appropriate amount of food compared to the older ones. Under this hypothesis, and according to the results of the previously cited study, younger children are expected to have higher MQ Ka than older ones. Of note, the impact of food on MQ Ka remains controversial [37, 38]. Finally, the rapid and significant therapeutic AS effect, captured in the analysis by treatment day, induced a significant increase in MQ absorption rate after the first ASMQ FDC administration. It is possible that the dramatic decrease of parasite load following the first intake of AS improves patient state of health resulting in the disappearance of gastrointestinal tract disturbances [22]. The PK of the second and third MQ doses thus might benefit from the AS treatment effect with a favourable modification of drug absorption rate.
As already described in studies performed in Tanzania and Cambodia, more than half of the African paediatric participants had a residual concentration of at least one anti-malarial drug above the limit of quantification at baseline (lumefantrine was measured in 64% of the patients, sulphadoxine in 11%, amodiaquine/desethylamodiaquine in 16%, pyrimethamine in 2% and quinine in 6%) [39, 40]. These findings are worrying since they indicate that parasites have been exposed to sub-therapeutic concentrations of anti-malarials for some time in a population presenting an elevated risk of developing drug resistance [22]. This might contribute to the dangerous spread of anti-malarial drug resistance.
The MQ model developed in Kenyan children using intensive sampling was applied to data collected from children from Burkina Faso, Tanzania and Kenya. Similar non-significant or small biases and precision per study centre were estimated, suggesting comparable drug exposure among different populations. The relationships between therapeutic response and pharmacokinetics of MQ as monotherapy and in combination with AS have been previously compared in a Thai population [41]. Recrudescence of infection in 24% and 2% of patients was reported in cases of MQ administered alone and with AS, respectively, indicating that the addition of the artemisinin derivative improved the cure rates considerably. Furthermore, no significant association could be found between MQ pharmacokinetics and treatment response. In line with these results, only 3% of the African paediatric population studied presented recrudescence of infection, which could not be related to MQ exposure within the study period. The low number of cases of malaria recrudescence might have limited the likelihood of detecting such an association.
The study described provides the pharmacokinetic parameters for MQ and AS, administered as a FDC of AS/MQ, in African children under the age of 5 years with acute P. falciparum malaria. The considerable variability characterizing the pharmacokinetics of these two anti-malarial drugs can be partly explained by children's body weight, justifying the current dosing recommendations based on weight-for-age considerations, to ensure similar exposure in children and adults.
ACT:
artemisinin-based combination therapy
AIC:
Akaike's information criterion
ALT:
alanine aminotransferase
AMLF:
artemether–lumefantrine
ASMQ:
artesunate–mefloquine
AST:
aspartate aminotransferase
AUC:
area under the curve
BIL:
total bilirubin
BQL:
below the quantification limit
BSV:
between-subject variability
BW:
body weight
CL:
clearance (CL for drug, CLM for metabolite)
Cmax:
maximum concentration
CWRES:
conditional weighted residuals
DHA:
dihydroartemisinin
DNDi:
Drugs for Neglected Diseases initiative
ESI:
electrospray ionization
F1:
relative bioavailability
FACT:
fixed-dose artesunate-based combination therapy consortium
FDC:
fixed dose combination
FOCE:
first-order conditional estimation method
Hb:
haemoglobin
Ht:
haematocrit
Ka:
absorption rate constant
Kmat:
age maturation rate constant
LOQ:
limit of quantification
MATmag:
maturation magnitude for age
MPE:
mean prediction error
MQ:
mefloquine
OFV:
objective function value
PAR:
pharmacokinetic parameter
pcVPC:
prediction-corrected visual predictive check
PI90%:
90% prediction interval
PK:
pharmacokinetics
Q:
inter-compartmental clearance
RMSE:
root mean square error
t1/2:
terminal half-life
TM50:
age at 50% of maturation
tmax:
time to achieve Cmax
TSQ:
triple stage quadrupole
ULN:
upper limit of normal
VC:
central volume of distribution (VC for drug, VM for metabolite)
VP:
peripheral volume of distribution
WHO:
World Health Organization
WWARN:
World Wide Antimalarial Resistance Network
WHO. World Malaria Report 2018. Geneva: World Health Organization; 2018.
WHO. Antimalarial drug combination therapy. Report of a WHO technical consultation. Geneva: Roll Back Malaria/World Health Organization; 2001.
Wells S, Diap G, Kiechel JR. The story of artesunate–mefloquine (ASMQ), innovative partnerships in drug development: case study. Malar J. 2013;12:68.
WHO. Guidelines for the treatment of malaria. Geneva: World Health Organization; 2015.
Luxemburger C, Price RN, Nosten F, Ter Kuile FO, Chongsuphajaisiddhi T, White NJ. Mefloquine in infants and young children. Ann Trop Paediatr. 1996;16:281–6.
ter Kuile FO, Nosten F, Luxemburger C, Kyle D, Teja-Isavatharm P, Phaipun L, et al. Mefloquine treatment of acute falciparum malaria: a prospective study of non-serious adverse effects in 3673 patients. Bull World Health Organ. 1995;73:631–42.
Leang R, Ros S, Duong S, Navaratnam V, Lim P, Ariey F, et al. Therapeutic efficacy of fixed dose artesunate–mefloquine for the treatment of acute, uncomplicated Plasmodium falciparum malaria in Kampong Speu, Cambodia. Malar J. 2013;12:343.
Valecha N, Srivastava B, Dubhashi NG, Rao BHK, Kumar A, Ghosh SK, et al. Safety, efficacy and population pharmacokinetics of fixed-dose combination of artesunate–mefloquine in the treatment of acute uncomplicated Plasmodium falciparum malaria in India. J Vector Borne Dis. 2013;50:258–64.
Santelli AC, Ribeiro I, Daher A, Boulos M, Marchesini PB, Santos R, et al. Effect of artesunate–mefloquine fixed-dose combination in malaria transmission in Amazon basin communities. Malar J. 2012;11:286.
Ashley EA, Stepniewska K, Lindegårdh N, McGready R, Hutagalung R, Hae R, et al. Population pharmacokinetic assessment of a new regimen of mefloquine used in combination treatment of uncomplicated falciparum malaria. Antimicrob Agents Chemother. 2006;50:2281–5.
Krudsood S, Looareesuwan S, Silachamroon U, Chalermrut K, Pittrow D, Cambon N, et al. Artesunate and mefloquine given simultaneously for three days via a prepacked blister is equally effective and tolerated as a standard sequential treatment of uncomplicated acute Plasmodium falciparum malaria: randomized, double-blind study in Thailand. Am J Trop Med Hyg. 2002;67:465–72.
Sirima SB, Ogutu B, Lusingu J, Mtoro A, Mrango Z, Ouedraogo A, et al. Comparison of artesunate–mefloquine and artemether–lumefantrine fixed-dose combinations for treatment of uncomplicated Plasmodium falciparum malaria in children younger than 5 years in sub-Saharan Africa: a randomised, multicentre, phase 4 trial. Lancet Infect Dis. 2016;16:1123–33.
Hodel EM, Zanolari B, Mercier T, Biollaz J, Keiser J, Olliaro P, et al. A single LC-tandem mass spectrometry method for the simultaneous determination of 14 antimalarial drugs and their metabolites in human plasma. J Chromatogr B Analyt Technol Biomed Life Sci. 2009;877:867–86.
Beal SL, Boeckmann A, Sheiner L. NONMEM User's Guides (1989–2009). Ellicott City: Icon Development Solutions.
Lindbom L, Pihlgren P, Jonsson EN. PsN-Toolkit–a collection of computer intensive statistical methods for non-linear mixed effect modeling using NONMEM. Comput Methods Programs Biomed. 2005;79:241–57.
Ilett KF, Ethell BT, Maggs JL, Davis TME, Batty KT, Burchell B, et al. Glucuronidation of dihydroartemisinin in vivo and by human liver microsomes and expressed UDP-glucuronosyltransferases. Drug Metab Dispos. 2002;30:1005–12.
Tan B, Naik H, Jang IJ, Yu KS, Kirsch LE, Shin CS, et al. Population pharmacokinetics of artesunate and dihydroartemisinin following single- and multiple-dosing of oral artesunate in healthy subjects. Malar J. 2009;8:304.
Morris CA, Tan B, Duparc S, Borghini-Fuhrer I, Jung D, Shin CS, et al. Effects of body size and gender on the population pharmacokinetics of artesunate and its active metabolite dihydroartemisinin in pediatric malaria patients. Antimicrob Agents Chemother. 2013;57:5889–900.
Holford NH. A size standard for pharmacokinetics. Clin Pharmacokinet. 1996;30:329–32.
Holford N, Heo YA, Anderson B. A pharmacokinetic standard for babies and adults. J Pharm Sci. 2013;102:2941–52.
Savic RM, Cowan MJ, Dvorak CC, Pai SY, Pereira L, Bartelink IH, et al. Effect of weight and maturation on busulfan clearance in infants and small children undergoing hematopoietic cell transplantation. Biol Blood Marrow Transplant. 2013;19:1608–14.
White NJ. Assessment of the pharmacodynamic properties of antimalarial drugs in vivo. Antimicrob Agents Chemother. 1997;41:1413–22.
Ahn JE, Karlsson MO, Dunne A, Ludden TM. Likelihood based approaches to handling data below the quantification limit using NONMEM VI. J Pharmacokinet Pharmacodyn. 2008;35:401–21.
Jonsson EN, Karlsson MO. Xpose–an S-PLUS based population pharmacokinetic/pharmacodynamic model building aid for NONMEM. Comput Methods Programs Biomed. 1999;58:51–64.
Sheiner LB, Beal SL. Some suggestions for measuring predictive performance. J Pharmacokinet Pharmacodyn. 1981;9:503–12.
Jullien V, Valecha N, Srivastava B, Sharma B, Kiechel JR. Population pharmacokinetics of mefloquine, administered as a fixed-dose combination of artesunate–mefloquine in Indian patients for the treatment of acute uncomplicated Plasmodium falciparum malaria. Malar J. 2014;13:187.
Reuter SE, Upton RN, Evans AM, Navaratnam V, Olliaro PL. Population pharmacokinetics of orally administered mefloquine in healthy volunteers and patients with uncomplicated Plasmodium falciparum malaria. J Antimicrob Chemother. 2015;70:868–76.
Benakis A, Paris M, Loutan L, Plessas CT, Plessas ST. Pharmacokinetics of artemisinin and artesunate after oral administration in healthy volunteers. Am J Trop Med Hyg. 1997;56:17–23.
Morris CA, Duparc S, Borghini-Fuhrer I, Jung D, Shin CS, Fleckenstein L. Review of the clinical pharmacokinetics of artesunate and its active metabolite dihydroartemisinin following intravenous, intramuscular, oral or rectal administration. Malar J. 2011;10:263.
Staehli Hodel EM, Guidi M, Zanolari B, Mercier T, Duong S, Kabanywanyi AM, et al. Population pharmacokinetics of mefloquine, piperaquine and artemether–lumefantrine in Cambodian and Tanzanian malaria patients. Malar J. 2013;12:235.
Gutman J, Green M, Durand S, Rojas OV, Ganguly B, Quezada WM, et al. Mefloquine pharmacokinetics and mefloquine-artesunate effectiveness in Peruvian patients with uncomplicated Plasmodium falciparum malaria. Malar J. 2009;8:58.
Krudsood S, Looareesuwan S, Tangpukdee N, Wilairatana P, Phumratanaprapin W, Leowattana W, et al. New fixed-dose artesunate–mefloquine formulation against multidrug-resistant Plasmodium falciparum in adults: a comparative phase IIb safety and pharmacokinetic study with standard-dose nonfixed artesunate plus mefloquine. Antimicrob Agents Chemother. 2010;54:3730–7.
Stepniewska K, Taylor W, Sirima SB, Ouedraogo EB, Ouedraogo A, Gansané A, et al. Population pharmacokinetics of artesunate and amodiaquine in African children. Malar J. 2009;8:200.
Lohy Das J, Dondorp AM, Nosten F, Phyo AP, Hanpithakpong W, Ringwald P, et al. Population pharmacokinetic and pharmacodynamic modeling of artemisinin resistance in Southeast Asia. AAPS J. 2017;19:1842–54.
Lohy Das JP, Kyaw MP, Nyunt MH, Chit K, Aye KH, Moe M, et al. Population pharmacokinetic and pharmacodynamic properties of artesunate in patients with artemisinin sensitive and resistant infections in Southern Myanmar. Malar J. 2018;17:126.
Crevoisier C, Handschin J, Barré J, Roumenov D, Kleinbloesem C. Food increases the bioavailability of mefloquine. Eur J Clin Pharmacol. 1997;53:135–9.
Dao NVH, Quoc NP, Ngoa ND, Thuy LT, The ND, Dai B, et al. Fatty food does not alter blood mefloquine concentrations in the treatment of falciparum malaria. Trans R Soc Trop Med Hyg. 2005;99:927–31.
Price R, Simpson JA, Teja-Isavatharm P, Than MM, Luxemburger C, Heppner DG, et al. Pharmacokinetics of mefloquine combined with artesunate in children with acute falciparum malaria. Antimicrob Agents Chemother. 1999;43:341–6.
Hodel EM, Genton B, Zanolari B, Mercier T, Duong S, Beck HP, et al. Residual antimalarial concentrations before treatment in patients with malaria from Cambodia: indication of drug pressure. J Infect Dis. 2010;202:1088–94.
Hodel EM, Kabanywanyi AM, Malila A, Zanolari B, Mercier T, Beck HP, et al. Residual antimalarials in malaria patients from Tanzania–implications on drug efficacy assessment and spread of parasite resistance. PLoS ONE. 2009;4:e8184.
Simpson JA, Price R, ter Kuile F, Teja-Isavatharm P, Nosten F, Chongsuphajaisiddhi T, et al. Population pharmacokinetics of mefloquine in patients with acute falciparum malaria. Clin Pharmacol Ther. 1999;66:472–84.
JRK was the originator of the study proposal, coordinated the study, and reviewed the manuscript; MG performed the mathematical evaluation and drafted the manuscript; TM performed the bioanalytical data analysis; LD managed the work at CHUV, analysed and interpreted the data; CC designed the methodology and supervised the population PK work; BO performed and supervised the clinical work; GC coordinated all aspects related to the clinical study; MA contributed to the mathematical evaluation and preparation of the manuscript. All authors read and approved the final manuscript.
We would like to thank Joelle Vanraes for her continuous and thorough managerial support of the study and Louise Burrows for help with editing the manuscript. We would also like to thank the authors of the large clinical therapeutic study: Sirima BS, Ogutu B, Lusingu JPA, Mtoro A, Mrango Z, Ouedraogo A, Yaro JB, Onyango KO, Gesase S, Mnkande E, Ngocho JS, Ackermann I, Aubin F, Vanraes J, Strub N, and Carn G.
The datasets used and/or analysed during the current study are available from the corresponding author, according to DNDi's institutional guidelines.
The study protocol was reviewed and approved by national and independent ethics committees of all participating centers: Comité d'éthique pour la Recherche en Santé, Burkina Faso; KEMRI/National Ethics Review Committee, Kenya; Ministry of Health and Social Welfare, Tanzania. Written informed consent from a parent/guardian was obtained to enroll children younger than 5 years in the trial.
Agence Française de Développement, France; Department for International Development, UK; Dutch Ministry of Foreign Affairs, Netherlands; European and Developing Countries Clinical Trials Partnership; Fondation Arpe, Switzerland; Médecins Sans Frontières; Swiss Agency for Development and Cooperation, Switzerland.
School of Pharmaceutical Sciences, University of Geneva, University of Lausanne, Geneva, Switzerland
Monia Guidi & Chantal Csajka
Laboratory and Service of Clinical Pharmacology, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
Monia Guidi, Thomas Mercier, Manel Aouri, Laurent A. Decosterd & Chantal Csajka
Kenya Medical Research Institute, Kisumu, Kenya
Bernhards Ogutu
Drugs for Neglected Diseases initiative, Geneva, Switzerland
Gwénaëlle Carn & Jean-René Kiechel
Correspondence to Jean-René Kiechel.
Artesunate (upper panel) and dihydroartemisinin (lower panel) goodness-of-fit plots of observed vs. individual and population predicted concentrations, and conditional weighted residuals (CWRES) vs. population predicted concentrations and time after dose.
Mefloquine goodness-of-fit plots of observed vs. individual and population predicted concentrations, and conditional weighted residuals (CWRES) vs. population predicted concentrations and time after dose.
Guidi, M., Mercier, T., Aouri, M. et al. Population pharmacokinetics and pharmacodynamics of the artesunate–mefloquine fixed dose combination for the treatment of uncomplicated falciparum malaria in African children. Malar J 18, 139 (2019). https://doi.org/10.1186/s12936-019-2754-6
Population pharmacokinetics
|
CommonCrawl
|
abstractmath.org 2.0
help with abstract math
Produced by Charles Wells Revised 2017-03-28
MATHEMATICAL STRUCTURES
A mathematical structure is a set (or sometimes several sets) with various associated mathematical objects such as subsets, sets of subsets, operations and relations, all of which must satisfy various requirements (axioms). The collection of associated mathematical objects is called the structure and the set is called the underlying set.
This definition of mathematical structure is not a mathematical definition. The word "structure" is usually used in the definition or discussion of a particular kind of mathematical structure, without any general definition of the phrase "mathematical structure" being given.
In recent times it has become common to define a mathematical structure using either category theory or type theory. Math structures in practice are most commonly defined in terms of (mathematical) spaces, and category theory and type theory make it easier to give definitions directly in terms of spaces rather than sets. For example, there is a category that is the definition of "group", and a certain kind of functor from it to the category of sets gives a discrete group, a functor to the category of topological spaces gives a topological group, and so on. See Toposes, Triples and Theories, page 125. For a general introduction to this idea, see Shulman, Homotopy type theory: the logic of space.
Examples of mathematical structures
In this article, I give some simple examples in detail. Although they are simple, they all have uses in one or another branch of math. After those examples, I describe (with links) some mathematical structures of major importance that an undergraduate math student will meet.
$\mathbb{N}$ is the set of all positive integers, $\mathbb{Z}$ is the set of all integers and $\mathbb{R}$ is the set of all real numbers.
Pointed sets
A pointed set is a set together with a particular element of the set.
The set $\{1,2,3\}$ together with $2$ is a pointed set. It would normally be written as $(\{1,2,3\},2)$.
$(\mathbb{R},0)$ is a pointed set.
$(\mathbb{R},\pi)$ is a pointed set. It is not the same pointed set as $(\mathbb{R},0)$.
$(\mathbb{Z},\pi)$ is not a pointed set because $\pi\notin\mathbb{Z}$.
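These examples are easy to check by machine. Here is a minimal Python sketch (the function names are mine, not standard); an infinite set such as $\mathbb{Z}$ has to be represented by a membership test rather than listed:

```python
import math

def is_pointed_set(underlying_set, point):
    """(S, p) is a pointed set exactly when p is an element of S."""
    return point in underlying_set

print(is_pointed_set({1, 2, 3}, 2))   # True: ({1,2,3}, 2) is a pointed set
print(is_pointed_set({1, 2, 3}, 5))   # False

# An infinite set like Z can be represented by a membership predicate:
def is_integer(x):
    return isinstance(x, int)

print(is_integer(math.pi))            # False, so (Z, pi) is not a pointed set
```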
Relations
A relation is a set $S$ together with a set of ordered pairs of elements of the set.
I will sometimes denote the set of ordered pairs by a Greek letter, for example $\alpha$ (pronounced "alpha"), so that $\alpha$ must be a subset of $S\times S$. Then, if $(s,s')$ is an ordered pair in the set $\alpha$, you could write "$s\,\alpha\, s'$", pronounced "$s$ is related by alpha to $s'$" or "$s$ alpha $s'$".
Small examples
The set \[\alpha:=\{(1,2),(2,3),(1,3)\}\] is a relation on the set $S=\{1,2,3\}$. It is in fact the familiar relation "$\lt$" on $S$, because if $m,n$ are in $S$, then $m\lt n$ if and only if $(m,n)$ is one of the pairs $(1,2)$, $(2,3)$, or $(1,3)$.
The concept that "$\lt$" is a set of ordered pairs takes a bit of getting used to.
The set $S:=\{1,2,3\}$ together with the set of ordered pairs \[\{(1,1),(1,2),(2,3),(3,1)\}\] is a relation. It is not a familiar relation. It is an arbitrary relation. (Click on "arbitrary". You might learn something.) If you call it "$\beta$" (pronounced "bay-ta" in the USA and "bee-ta" in Britain), then "$1\,\beta\,2$" is true but "$1\,\beta\,3$" is false.
$\mathbb{Z}$ together with the set $\{(m,m)\,|\,m\in\mathbb{Z}\}$ (see setbuilder notation) is the equals relation on the set of all integers.
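In code, a finite relation is literally a set of ordered pairs, and "related" is just membership. A short Python sketch using the two examples above:

```python
S = {1, 2, 3}
alpha = {(1, 2), (2, 3), (1, 3)}         # the "<" relation on S
beta = {(1, 1), (1, 2), (2, 3), (3, 1)}  # the arbitrary relation above

def related(rel, s, t):
    """s rel t holds exactly when the pair (s, t) belongs to rel."""
    return (s, t) in rel

# alpha really is "<" on S:
print(all(related(alpha, m, n) == (m < n) for m in S for n in S))  # True
print(related(beta, 1, 2))   # True
print(related(beta, 1, 3))   # False
```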
There are more examples of relations in the section on Maps between math structures.
Congruence relations
Congruence relations are defined by requiring that two integers be related if they have the same remainder when divided by a particular integer. A congruence relation is an example of an equivalence relation.
For example, every integer $n$ leaves one of these remainders when divided by $3$: $0$, $1$ or $2$. I will use the notation "$\underset{3}\sim$" for this relation and similarly for integers other than $3$. (The standard notation is described in the Wikipedia article.) So $5\underset{3}\sim17$ is true because both $5$ and $17$ leave a remainder of $2$ when divided by $3$, but $5\underset{3}\sim10$ is false.
Notice that for integers $m$ and $n$, $m\underset{2}\sim n$ is true if and only if $m$ and $n$ are both even or both odd.
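Since a congruence relation only compares remainders, it is one line of code. A sketch checking the statements above (the function name is mine):

```python
def congruent(m, n, k):
    """m ~k n: m and n leave the same remainder on division by k."""
    return m % k == n % k

print(congruent(5, 17, 3))   # True: both leave remainder 2
print(congruent(5, 10, 3))   # False
# m ~2 n holds exactly when m and n are both even or both odd:
print(all(congruent(m, n, 2) == ((m - n) % 2 == 0)
          for m in range(1, 30) for n in range(1, 30)))  # True
```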
Partitions
A partition is a set together with a set of subsets (called "blocks") with the property that every element of the set is in exactly one block.
The number of partitions of a finite set is given by the Bell Numbers.
The set $\{1,2,3\}$ together with the set of subsets $\{\{1,2\},\{3\}\}$ is a partition. Note that $1\in\{1,2\}$ and not in $\{3\}$, similarly for $2$, and $3\in\{3\}$ and not in $\{1,2\}$, so this structure fits the definition of "partition".
The set $\{1,2,3,4,5,6,7\}$ together with the set of subsets \[\{\{1,4,7\},\{2,5\},\{3,6\}\}\] is a partition. It groups numbers together that have the same remainder when divided by $3$.
The set $\{1,2,3\}$ together with the set of subsets $\{\{1,2\},\{1,3\}\}$ is not a partition because $1$ is in two different blocks.
The set $\{1,2,3,4\}$ together with the set of subsets $\{\{2,4\},\{1\}\}$ is not a partition because $3$ is not in any block.
Partition induced by a congruence relation
You can group the set $\mathbb{N}$ of positive integers into three blocks depending on what their remainder when divided by $3$ is. This produces three infinite blocks: $\{1,4,7,10,13,\ldots\}$, $\{2,5,8,11,14,\ldots\}$ and $\{3,6,9,12,15,\ldots\}$.
If you do this for $2$ instead of $3$ you get two blocks: the set of even positive integers and the set of odd positive integers.
There is a similar partition of $\mathbb{N}$ induced by any positive integer $n$, giving $n$ blocks according to remainders.
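Both the definition of a partition and the remainder construction are easy to express directly. A Python sketch (the function names are mine):

```python
def is_partition(underlying_set, blocks):
    """Every element of the set must lie in exactly one block."""
    return all(sum(x in b for b in blocks) == 1 for x in underlying_set)

print(is_partition({1, 2, 3}, [{1, 2}, {3}]))        # True
print(is_partition({1, 2, 3}, [{1, 2}, {1, 3}]))     # False: 1 in two blocks
print(is_partition({1, 2, 3, 4}, [{2, 4}, {1}]))     # False: 3 in no block

def remainder_partition(n, k):
    """Partition {1, ..., n} into blocks by remainder on division by k."""
    return [{x for x in range(1, n + 1) if x % k == r} for r in range(k)]

print(remainder_partition(7, 3))   # [{3, 6}, {1, 4, 7}, {2, 5}]
```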
Cartesian product
If $S$ and $T$ are sets, then the cartesian product "$S\times T$" is the set of all ordered pairs whose first entry is in $S$ and whose second entry is in $T$. In this section we only consider cartesian products with $S=T$.
For example, if $S=\{2,5,7\}$, then \[S\times S=\{(2,2),(2,5),(2,7),(5,2),(5,5),(5,7),(7,2),(7,5),(7,7)\}\]
Binary operations
A binary operation on a set $S$ is a function $b:S\times S\to S$.
The operation of adding two real numbers is a binary operation $+:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$.
Like many common binary operations, it is written between the two numbers; you write "$2+5$" instead of "$+(2,5)$". See infix notation.
Other familiar binary operations on the real numbers are multiplication and subtraction.
Many more examples of binary operations are given in the chapter on Examples of Functions.
Monoids
A monoid is a binary operation $\Delta:S\times S\to S$ for some set $S$ with two properties given below, using infix notation.
The usual notation for a monoid $\Delta:S\times S\to S$ is "$(S,\Delta)$". $S$ is called the underlying set of the monoid $(S,\Delta)$.
It would be perfectly OK to say simply that $\Delta$ is a monoid, not mentioning $S$. That's because $\Delta$ is a function, and by definition it has to have a domain of the form $S\times S$ for some set $S$. The notation "$(S,\Delta)$" is useful because it gives a name for the underlying set.
Axioms for monoids
Axiom 1: Associativity For all elements $r$, $s$ and $t$ of $S$, \[(r\Delta s)\Delta t=r\Delta(s\Delta t)\]
Axiom 2: Identity There is an element $e\in S$ with the property that for all $s\in S$, \[e\Delta s=s\Delta e=s\]
If several monoids are being considered, you can use $e_S$ for the identity of $(S,\Delta)$, $e_T$ for the identity of $(T,\nabla)$, and so on. But many authors simply use "$e$" (or "$1$") for the identity of any monoid. See overloaded notation.
It is customary in math to use juxtaposition for the binary operation of a monoid, so that the requirement for associativity is that $(rs)t=r(st)$ and for the identity element that $es=se=s$. However, I will use $\Delta$ here except in examples where some other notation is customary (usually the plus sign, the multiplication sign, or juxtaposition).
The concept of monoid is a particularly simple example of an algebraic structure. The most important algebraic structures include groups, rings, fields and vector spaces. All of them involve one or two binary operations that are monoids, mostly with additional properties.
Examples of monoids
Addition on the nonnegative integers is a monoid; in other words, $(\mathbb{N}\cup \{0\},+)$ is a monoid. Addition is associative, and the number $0$ is the identity element.
$(\mathbb{Z},+)$, $(\mathbb{Q},+)$ and $(\mathbb{R},+)$ are also monoids, and so are the sets of nonnegative rational numbers and nonnegative real numbers with addition as operation.
Multiplication on the positive integers is a monoid; in other words, $(\mathbb{N},\times)$ is a monoid. The identity is the number $1$.
Let $S$ be a set. The set of all functions from $S$ to $S$ is denoted by $S\to S$ (this is one of several common notations). $(S\to S,\circ)$ is a monoid, where "$\circ$" denotes functional composition. The identity function is the identity of the monoid, and functional composition is always associative.
Let $S$ be a set. $\mathcal{P}(S)$ is the set of all subsets of $S$ (that is fairly common notation). Then $(\mathcal{P}(S),\cup)$ is a monoid: union is associative and the empty set is the identity.
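For a finite set the monoid axioms can be verified by brute force. A sketch checking that union on the power set of $\{1,2\}$ is a monoid, while subtraction on a set of integers is not (the function name is mine):

```python
from itertools import product

def is_monoid(elements, op):
    """Brute-force check of associativity and a two-sided identity."""
    elements = list(elements)
    assoc = all(op(op(r, s), t) == op(r, op(s, t))
                for r, s, t in product(elements, repeat=3))
    has_identity = any(all(op(e, s) == op(s, e) == s for s in elements)
                       for e in elements)
    return assoc and has_identity

# The power set of {1, 2}; frozenset lets sets be elements of a list.
P = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
print(is_monoid(P, frozenset.union))             # True: empty set is identity

print(is_monoid(range(5), lambda a, b: a - b))   # False: not associative
```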
Much more detail for monoids is given in the Wikipedia article on monoids. It includes many more examples besides the ones given here.
Some mathematical structures of major importance in math
A math major in a college in the USA is likely to meet with all three of these examples.
Groups
A group is an abstraction of symmetry in all of its meanings. The definition of a group says it is a monoid in which every element has an inverse, but the importance of groups comes from their association with symmetry. Most math majors in the USA learn about groups from some course or other.
The Wikipedia articles on groups, symmetries, symmetry groups and group actions include many examples and theorems concerning groups of symmetries.
Vector space
A vector space is a set, whose elements are called vectors, with an operation of addition on the vectors and an operation called scalar multiplication allowing a vector to be multiplied by a number (or more generally by an element of a particular field).
When you first meet with vectors as a student, typically (in the USA) in second or third semester calculus, they are drawn as arrows; the idea is that a vector is determined by a direction and a length. In that guise they have a basic role to play in physics. In fact, there are vector spaces whose elements are functions; then they are called function spaces and are a major tool in analysis.
The Wikipedia article on vector spaces describes the definition and uses of vector spaces in considerable detail. It is one of the better written math articles in Wikipedia.
Metric space
A metric space is a set $S$ together with a function $d:S\times S\to\mathbb{R}$ that satisfies a list of axioms that are all true for the distance function on the real line. So a metric space is an abstraction of the behavior of the distance function on the reals. In particular, in a metric space, you can define convergence to a limit.
A metric space is an example of a topological space, although topological spaces are a more general concept.
The Wikipedia article on metric spaces defines metric spaces and gives many examples of them.
Maps between math structures
Structures of a given type may have maps between them that preserve the structure. Such maps are often called morphisms, but particular types of structures may have their own names for structure-preserving functions. I give some examples here. More examples of structure-preserving maps are given in the article Functions: Images and Metaphors.
Morphisms of pointed sets
If $(S,s)$ and $(T,t)$ are pointed sets, then a function $m:S\to T$ is a morphism of pointed sets if and only if $m(s)=t$.
$(\mathbb{Z},42)$ and $(\mathbb{R},\pi)$ are pointed sets. The function $f$ defined by $f(n):=(n-41)\pi$ is a morphism of pointed sets, because $f(42)=\pi$.
The constant function that takes every integer in $\mathbb{Z}$ to $\pi$ is also a morphism from $(\mathbb{Z},42)$ to $(\mathbb{R},\pi)$.
The inclusion function $\text{inc}:\mathbb{Z}\to\mathbb{R}$ defined by $\text{inc}(n)=n$ is not a morphism from $(\mathbb{Z},42)$ to $(\mathbb{R},\pi)$ because $\text{inc}(42)\neq\pi$.
Morphisms of relations
Suppose $\alpha$ is a relation on $S$ and $\beta$ is a relation on $T$. Then a morphism of relations from $\alpha$ to $\beta$ is a function $f:S\to T$ satisfying the following requirement:
If $s\,\alpha\, s'$ then $f(s)\,\beta\,f(s')$.
Increasing function
Earlier I mentioned the relation \[\alpha:=\{(1,2),(2,3),(1,3)\}\] on the set $S=\{1,2,3\}$, which is the "less-than" relation "$\lt$" on $S$. Now let $T:=\{1,2,3,4\}$. Then "$\lt$" is also a relation on $T$, namely the relation \[\beta:=\{(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)\}\] Then the function $f:S\to T$ defined by $f(1)=1$, $f(2)=2$ and $f(3)=4$ is a morphism of relations from $(S,\alpha)$ to $(T,\beta)$. The easiest way to check this is by drawing a picture.
In such a picture, the less-than relation in each of the sets is simply "above", and you can see that if one number is above another number on the left, then the numbers that the arrows take them to have the same relationship.
A function that preserves the "less-than" relation is called a strictly increasing function. The function that takes $1\to 1$, $2\to 4$, $3\to 4$ does not preserve "$\lt$" since $2\lt3$ but $4$ is not less than $4$. However, that function does preserve the "$\leq$" relation, since $4\leq4$. As you might guess, such a function is called an increasing function but not a strictly increasing function.
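The defining condition is directly checkable for finite relations. A Python sketch verifying that the $f$ above is a morphism, while the map sending $2$ and $3$ to $4$ is not:

```python
def is_relation_morphism(f, alpha, beta):
    """f is a morphism when s alpha s' always implies f(s) beta f(s')."""
    return all((f[s], f[t]) in beta for (s, t) in alpha)

alpha = {(1, 2), (2, 3), (1, 3)}                         # "<" on {1, 2, 3}
beta = {(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)}  # "<" on {1, 2, 3, 4}

f = {1: 1, 2: 2, 3: 4}
print(is_relation_morphism(f, alpha, beta))   # True: strictly increasing

g = {1: 1, 2: 4, 3: 4}
print(is_relation_morphism(g, alpha, beta))   # False: 2 < 3 but not g(2) < g(3)
```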
Maps between congruence relations
Let's look at the doubling function $d:\mathbb{N}\to\mathbb{N}$ defined by $d(n)=2n$.
The function $d$ is a morphism of relations from $\underset{3}\sim$ to $\underset{2}\sim$. To prove this requires showing that if $m\underset{3}\sim n$, then $2m\underset{2}\sim 2n$. Now, by definition, for any integers $r$ and $s$, $r\underset{2}\sim s$ means that $r$ and $s$ are both even or both odd. But for any integer $r$, $2r$ is always even! So the statement "If $m\underset{3}\sim n$, then $2m\underset{2}\sim 2n$" is true because "$2m\underset{2}\sim 2n$" is always true!
If you are not sure you understand this, read about the truth table for conditional assertions: If $Q$ is true, then "If $P$ then $Q$" is always true.
On the other hand, $d$ is not a morphism of relations from $\underset{2}\sim$ to $\underset{3}\sim$. This is false by counterexample: $4\underset{2}\sim 8$ because they are both even, but $8\underset{3}\sim 16$ is false, because when $8$ is divided by $3$ the remainder is $2$, whereas when $16$ is divided by $3$ the remainder is $1$.
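Since $\mathbb{N}$ is infinite, these claims cannot be checked exhaustively, but a brute-force search over an initial segment agrees with the proofs: no counterexample from $\underset{3}\sim$ to $\underset{2}\sim$, and an immediate one in the other direction. A sketch:

```python
def counterexamples(f, j, k, limit=50):
    """Pairs with m ~j n whose images fail f(m) ~k f(n), for m, n <= limit."""
    return [(m, n) for m in range(1, limit + 1) for n in range(1, limit + 1)
            if m % j == n % j and f(m) % k != f(n) % k]

double = lambda n: 2 * n
print(counterexamples(double, 3, 2))       # []: doubling maps ~3 into ~2
print(counterexamples(double, 2, 3)[:3])   # e.g. [(1, 3), ...]: fails ~2 to ~3
```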
Morphisms of partitions
Let $S$ and $T$ be sets and $P_S$ be a partition of $S$ and $P_T$ a partition of $T$. A function $f:S\to T$ is a morphism of partitions if $f$ takes every element of a block of $P_S$ into one particular block of $P_T$.
We looked previously at these two partitions:
The partition $P:=\{\{1,2\},\{3\}\}$ of the set $\{1,2,3\}$.
The partition $Q:=\{\{1,4,7\},\{2,5\},\{3,6\}\}$ of the set $\{1,2,3,4,5,6,7\}$.
Any constant function from $\{1,2,3\}$ to $\{1,2,3,4,5,6,7\}$ is a morphism of partitions from $P$ to $Q$, since it takes every block of $P$ into the single block of $Q$ containing its value. The inclusion function is not a morphism of partitions in this case, because it takes the block $\{1,2\}$ of $P$ into two different blocks of $Q$: $1\in\{1,4,7\}$ but $2\in\{2,5\}$. Both claims are checked computationally in the sketch below.
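A Python sketch checking both claims (the function name is mine):

```python
def is_partition_morphism(f, blocks_P, blocks_Q):
    """f must send each block of the source inside a single target block."""
    return all(any({f[x] for x in b} <= c for c in blocks_Q) for b in blocks_P)

P = [{1, 2}, {3}]
Q = [{1, 4, 7}, {2, 5}, {3, 6}]

const5 = {1: 5, 2: 5, 3: 5}                  # a constant function
print(is_partition_morphism(const5, P, Q))   # True

inc = {1: 1, 2: 2, 3: 3}                     # the inclusion function
print(is_partition_morphism(inc, P, Q))      # False: {1,2} lands in two blocks
```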
Morphisms of binary operations
Suppose $*$ is a binary operation on a set $S$ and $\#$ is a binary operation on a set $T$. Then a function $f:S\to T$ is a morphism of binary operations if for all $s,s'\in S$, $f(s* s')=f(s)\# f(s')$. The customary way of saying this is: "$f:(S,*)\to(T,\#)$ is a homomorphism from $(S,*)$ to $(T,\#)$".
Many examples of morphisms of binary operations are described in the abstractmath.org article Functions: Images and Metaphors. I will give one example here.
Exponentiation
Both addition and multiplication are binary operations on the set $\mathbb{R}$ of real numbers. Let $E:\mathbb{R}\to\mathbb{R}$ be the function defined by $E(r):=2^r$. Then $E:(\mathbb{R},+)\to(\mathbb{R},\times)$ is a homomorphism.
This follows from the law of exponents: \[E(r+s)=2^{r+s}=2^r\times 2^s=E(r)\times E(s)\]
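The law of exponents can also be spot-checked numerically, up to floating-point rounding:

```python
import math
import random

E = lambda r: 2.0 ** r
for _ in range(1000):
    r, s = random.uniform(-5, 5), random.uniform(-5, 5)
    assert math.isclose(E(r + s), E(r) * E(s))
print("E(r + s) == E(r) * E(s) on all sampled pairs")
```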
Morphisms of monoids
Suppose $(S,\Delta)$ and $(T,\nabla)$ are monoids with identities $e_S$ and $e_{T}$. Then a function $h:S\to T$ is a homomorphism if \[\text{(ME)}\,\,\,\,\,\,\,\,h(e_S)=e_T\] and for all elements $s, s'$ of $S$, \[\text{(MM)}\,\,\,\,\,\,\,\,h(s\Delta s')=h(s)\nabla h(s')\] This is described by saying "$h$ preserves the identity and the binary operation."
$(\mathbb{Z},+)$ is a monoid ($0$ is the identity). Let $h:\mathbb{Z}\to\mathbb{Z}$ be multiplication by $42$. Then $h$ is a homomorphism from $(\mathbb{Z},+)$ to $(\mathbb{Z},+)$.
$h$ preserves the identity: \[h(0)=42\times 0=0\]
$h$ preserves addition: \[h(m+n)=42(m+n)=42m+42n=h(m)+h(n)\]
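Both conditions can be checked mechanically on a sample of integers:

```python
h = lambda n: 42 * n
print(h(0) == 0)                    # (ME): h preserves the identity of (Z, +)
print(all(h(m + n) == h(m) + h(n)   # (MM): h preserves addition
          for m in range(-50, 51) for n in range(-50, 51)))
```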
$(\mathbb{R},+)$ and $(\mathbb{R},\times)$ are not only binary operations, they are monoids: Both "$+$" and "$\times$" are associative, and the identities are $0$ and $1$ respectively. We saw previously that the exponential map is a morphism of the binary operations, but it also preserves the identities, since $2^0=1$. So exponentiation from $(\mathbb{R},+)$ to $(\mathbb{R},\times)$ is a homomorphism of monoids.
How to think about mathematical structures
Many structures on one set
The same set can have many different structures on it. For example, a two-element set has two different partition structures on it and sixteen different binary operations on it.
Canonical structures
Widely-used mathematical objects generally have "canonical structures" (or "standard" structures) of various types on them. For example, the set of integers can be ordered in many ways, but it has a particular ordering (the familiar one) that is referred to as "the ordering of the integers". This is the ordering that begins $1\lt2\lt3\lt4\ldots$
In the same way, the set of real numbers has a canonical ordering ($r\lt s$ means that there is a positive real number $t$ for which $r+t=s$), a canonical algebraic structure, a canonical metric space structure and a canonical topology.
Minimality
Presenting a complex mathematical idea as a mathematical structure involves finding a minimal or nearly minimal set of associated objects (a structure) and a minimal or nearly minimal set of conditions on those objects from which the theorems about the structure follow. The ingredients of the structure are kept (nearly) non-redundant so that it is easier to verify that some object is an example of that kind of structure. This is essentially the main use of the axiomatic method.
This small set of objects and conditions may not be the most important aspects of the structure for applications or for one's mental representation of the structure.
All this is discussed in more detail in the article on Definition.
Different definitions for the same structure
The same kind of structure can often be defined by two or more very different kinds of minimal ingredients. A mathematical structure of a given type has lots of structure implied by the minimal definition, and when you think of a structure it is best to think of it as containing all that information, not just the stuff in the definition.
"Equivalence relation" and "partition" are two different ways of defining exactly the same structure on a set. This is explained in the Wikipedia article on The Fundamental Theorem of Equivalence Relations.
|
CommonCrawl
|
Mathematics Institute
Number Theory seminar abstracts, Term 1 2017-18
Congruences and the local Jacquet-Langlands correspondence, by Shaun Stevens (joint work with Vincent Sécherre)
The local Jacquet–Langlands correspondence is a bijection between certain irreducible complex representations of a general linear group over a p-adic field and an inner form of such a group, defined by a character relation; equivalently, it is the bijection between these representations induced via the local Langlands correspondence. While the existence of the Jacquet–Langlands correspondence has been known since the 1980s, before the local Langlands correspondence, it is not yet known how to make it explicit in general, even though there are classifications of the irreducible representations on both sides (and more); moreover, all results so far (mostly due to Bushnell–Henniart) have concentrated on the "cuspidal" case, where the character relation is more amenable to computation.
As well as trying to explain what these words mean, I will report on work where we bring mod-l "congruences" between representations (for l a prime different from p) to bear on this question, reducing most of the problem to the cuspidal case. Subsequent work of Dotto, using the same ideas, has made all but the "unramified part" of the correspondence explicit.
Congruent Numbers in Totally Real Fields, by Sam Edis
In 1983 Tunnell showed (assuming the BSD conjecture) that there is a finite time method to determine whether a given number is congruent. We will discuss issues with extending this to the case of totally real number fields, and give some extensions of his work and methods to this case.
Densities in the hyperelliptic ensemble, by Alexandra Florea
I will talk about the 1-level density and the pair correlation of zeros of quadratic Dirichlet L-functions over function fields. I will explain how one can obtain several lower order terms for both of these statistics, when the Fourier transform of the test function is sufficiently restricted. This is joint work with Hung Bui.
Equidistribution of rational points on the sphere, Sarnak's Conjecture, and the twisted Linnik Conjecture, by Raphael Steiner
It is a classical theorem in the theory of modular forms that the points $\boldsymbol{x}/\sqrt{N}$, where $\boldsymbol{x} \in \mathbb{Z}^n$ runs over all the solutions to $\sum_{i=1}^n x_i^2=N$, equidistribute on $S^{n-1}$ for $n \ge 4$ as $N$ (odd) tends to infinity. The rate of equidistribution poses, however, a more challenging problem. Due to their Diophantine nature the points inherit a repulsion property, which opposes equidistribution on small sets. Sarnak conjectures that this Diophantine repulsion is the only obstruction to the rate of equidistribution. Using the smooth delta-symbol circle method developed by Heath-Brown, Sardari was able to show that the conjecture is true for $n\ge 5$ and to recover Sarnak's progress towards the conjecture for $n=4$. Building on Sardari's work, Browning, Kumaraswamy, and I were able to reduce the conjecture to correlation sums of Kloosterman sums of the following type:
$$ \sum_{q \le Q} \frac{1}{q}S(m,n;q)\exp\!\left(\frac{4 \pi i \alpha \sqrt{mn}}{q}\right). $$
Assuming the twisted Linnik conjecture, which states that the above sum is $O((Qmn)^{\epsilon})$ for $|\alpha|\le 2$, we are able to verify Sarnak's Conjecture. I will further discuss progress towards the twisted Linnik conjecture and the techniques involved. If time permits, I will say a few words about the differences between the automorphic and the circle method approaches and how they may be combined.
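For readers who want to experiment numerically, the Kloosterman sums $S(m,n;q)$ appearing above can be computed directly from the definition $S(m,n;q)=\sum_{x \bmod q,\, \gcd(x,q)=1} e^{2\pi i(mx+n\bar{x})/q}$, where $\bar{x}$ is the inverse of $x$ modulo $q$. A naive Python sketch (illustrative only, not the delta-symbol machinery of the talk) evaluating a truncation of the correlation sum:

```python
import cmath
from math import gcd

def kloosterman(m, n, q):
    """S(m, n; q) by direct summation over the units mod q."""
    total = 0.0
    for x in range(1, q):
        if gcd(x, q) == 1:
            xinv = pow(x, -1, q)  # modular inverse (Python 3.8+)
            total += cmath.exp(2j * cmath.pi * (m * x + n * xinv) / q)
    return total

def correlation_sum(m, n, alpha, Q):
    """Truncation of the sum in the abstract; starts at q = 2 because the
    naive loop above is empty for q = 1 (conventionally S(m,n;1) = 1)."""
    return sum(kloosterman(m, n, q) / q *
               cmath.exp(4j * cmath.pi * alpha * (m * n) ** 0.5 / q)
               for q in range(2, Q + 1))

print(kloosterman(1, 1, 7).real)    # imaginary part vanishes up to rounding
print(abs(correlation_sum(1, 1, 1.0, 200)))
```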
Growth of torsion of elliptic curves in extensions of the rationals, by Enrique Gonzalez Jimenez
One of the main goals in the theory of elliptic curves is to characterize the possible torsion structures over a given number field, or over all number fields of a given degree. One of the milestone in this subject was the characterization of the rational case given by Mazur in 1978. Later, the quadratic case was obtained by Kamienny, Kenku and Momose in 1992. For greater degree a complete answer for this problem is still open, although there have been some advances in the last years.
The purpose of this talk is to shed light on how the torsion group of an elliptic curve defined over the rationals grows upon base change.
This is an ongoing project partially joint with Á. Lozano-Robledo, F. Najman and J. M. Tornero.
Quartic orders of D4-type with monogenic cubic resolvent, by Stanley Xiao
In a seminal paper, M. Bhargava showed that quartic orders are parametrized by $\operatorname{GL}_2(\mathbb{Z}) \times \operatorname{SL}_3(\mathbb{Z})$-orbits of pairs of ternary quadratic forms. Later, M. Wood showed that $\operatorname{GL}_2(\mathbb{Z})$-orbits of integral binary quartic forms parametrize pairs $(Q,C)$ where $Q$ is a quartic order and $C$ a monogenic cubic resolvent ring of $Q$. A quartic order $Q$ is said to be $D_4$-type if the Galois closure of the field of fractions of $Q$ has Galois group isomorphic to $D_4$. In a recent paper, Altug, Shankar, Varma, and Wilson enumerated quartic orders of $D_4$-type when counting by conductor. In this talk, we give a report on recent progress made on counting maximal pairs $(Q,C)$ with $Q$ a quartic order of $D_4$-type and $C$ a monogenic cubic resolvent ring of $Q$. This is joint work with C. Tsang.
Recent Progress on the Lind-Lehmer Problem for p-groups, by Chris Pinner
In 2005 Doug Lind generalized the concept of Mahler measure to an arbitrary compact abelian group. As in Lehmer's problem for the classical Mahler measure one can ask for the minimal non-trivial measure. For a finite abelian group this corresponds to the smallest non-trivial integral group determinant. After a brief survey of existing results I will present some new congruences satisfied by the Lind Mahler measure for p-groups. These enable us to determine the minimal measure when the p-group has one particularly large component and to compute the minimal measures for many new families of small p-groups.
This is joint work with Mike Mossinghoff of Davidson College.
If there is time I will also mention some 3-group results from a summer undergraduate research project with Stian Clem which may suggest what is going on in general.
Recent progress on the generalized Gauss Circle Problem, and related topics, by David Lowry-Duda
The Gauss Circle problem concerns estimating the number of integer points contained within a circle of radius R centered at the origin. For large R, the number of points is very nearly the area of the circle, but the error term appears to be much smaller than expected. The generalized Gauss Circle problem refers to the analogous problem in dimension 3 or more. Using the theory of modular forms and theta functions, it is possible to tackle these problems. In this talk, I describe ideas and techniques leading to improved understanding of these error terms, as well as related topics concerning sums of coefficients of modular forms. This talk includes some joint work with Chan Ieong Kuan, Thomas Hulse of Morgan State, and Alex Walker of Brown University.
Some algebras associated to genus one curves, by Tom Fisher
Haile, Han and Kuo have studied certain non-commutative algebras associated to a binary quartic or ternary cubic form. These give an explicit realisation of an isomorphism relating the Weil-Chatelet and Brauer groups of an elliptic curve. I will describe how I expect their constructions to generalise to other genus one curves.
The Kakeya conjecture and number theory, by Ben Green
The Kakeya conjecture asserts that every subset of R^n containing a unit line in every direction has dimension n. Whilst the Kakeya conjecture itself does not have any number-theoretic implications (so far as I know), many arithmetic questions reside nearby. For example, there is a purely arithmetic (and very simple-to-state) conjecture which would imply the Kakeya conjecture. In another direction, it is my belief that questions about the distribution of arithmetic functions in progressions, such as the Elliott-Halberstam conjecture, are strictly harder than the Kakeya conjecture. I will discuss these issues.
|
CommonCrawl
|
A provisional conclusion about the effects of stimulants on learning is that they do help with the consolidation of declarative learning, with effect sizes varying widely from small to large depending on the task and individual study. Indeed, as a practical matter, stimulants may be more helpful than many of the laboratory tasks indicate, given the apparent dependence of enhancement on length of delay before testing. Although, as a matter of convenience, experimenters tend to test memory for learned material soon after the learning, this method has not generally demonstrated stimulant-enhanced learning. However, when longer periods intervene between learning and test, a more robust enhancement effect can be seen. Note that the persistence of the enhancement effect well past the time of drug action implies that state-dependent learning is not responsible. In general, long-term effects on learning are of greater practical value to people. Even students cramming for exams need to retain information for more than an hour or two. We therefore conclude that stimulant medication does enhance learning in ways that may be useful in the real world.
Yes, according to a new policy at Duke University, which says that the "unauthorized use of prescription medicine to enhance academic performance" should be treated as cheating. And no, according to law professor Nita Farahany, herself based at Duke University, who has called the policy "ill-conceived," arguing that "banning smart drugs disempowers students from making educated choices for themselves."
Furthermore, there is no certain way to know whether you'll have an adverse reaction to a particular substance, even if it's natural. This risk is heightened when stacking multiple substances because substances can have synergistic effects, meaning one substance can heighten the effects of another. However, using nootropic stacks that are known to have been frequently used can reduce the chances of any negative side effects.
With subtle effects, we need a lot of data, so we want at least half a year (6 blocks) or better yet, a year (12 blocks); this requires 180 actives and 180 placebos. This is easily covered by $11 for Doctor's Best Best Lithium Orotate (5mg), 200-Count (more precisely, Lithium 5mg (from 125mg of lithium orotate)) and $14 for 1000x1g empty capsules (purchased February 2012). For convenience I settled on 168 lithium & 168 placebos (7 pill-machine batches, 14 batches total); I can use them in 24 paired blocks of 7-days/1-week each (48 total blocks/48 weeks). The lithium expiration date is October 2014, so that is not a problem.
Take at 10 AM; seem a bit more active but that could just be the pressure of the holiday season combined with my nice clean desk. I do the chores without too much issue and make progress on other things, but nothing major; I survive going to The Sitter without too much tiredness, so ultimately I decide to give the palm to it being active, but only with 60% confidence. I check the next day, and it was placebo. Oops.
How much of the nonmedical use of prescription stimulants documented by these studies was for cognitive enhancement? Prescription stimulants could be used for purposes other than cognitive enhancement, including for feelings of euphoria or energy, to stay awake, or to curb appetite. Were they being used by students as smart pills or as "fun pills," "awake pills," or "diet pills"? Of course, some of these categories are not entirely distinct. For example, by increasing the wakefulness of a sleep-deprived person or by lifting the mood or boosting the motivation of an apathetic person, stimulants are likely to have the secondary effect of improving cognitive performance. Whether and when such effects should be classified as cognitive enhancement is a question to which different answers are possible, and none of the studies reviewed here presupposed an answer. Instead, they show how the respondents themselves classified their reasons for nonmedical stimulant use.
Historically used to help people with epilepsy, piracetam is used in some cases of myoclonus, or muscle twitching. Its actual mechanism of action is unclear: It doesn't act exactly as a sedative or stimulant, but still influences cognitive function, and is believed to act on receptors for acetylcholine in the brain. Piracetam is used off-label as a 'smart drug' to help focus and concentration or sometimes as a way to allegedly boost your mood. Again, piracetam is a prescription-only drug - any supply to people without a prescription is illegal, and supplying it may result in a fine or prison sentence.
The term "smart pills" refers to miniature electronic devices that are shaped and designed in the mold of pharmaceutical capsules but perform highly advanced functions such as sensing, imaging and drug delivery. They may include biosensors or image, pH or chemical sensors. Once they are swallowed, they travel along the gastrointestinal tract to capture information that is otherwise difficult to obtain, and then are easily eliminated from the system. Their classification as ingestible sensors makes them distinct from implantable or wearable sensors.
This mental stimulation is what increases focus and attention span in the user. The FDA-approved indications for Modafinil include excessive sleepiness and shift work disorder. It can also be prescribed for narcolepsy and obstructive sleep apnea. Modafinil is not FDA approved for the treatment of ADHD. Yet, many medical professionals feel it is a suitable Adderall alternative.
Most research on these nootropics suggest they have some benefits, sure, but as Barbara Sahakian and Sharon Morein-Zamir explain in the journal Nature, nobody knows their long-term effects. And we don't know how extended use might change your brain chemistry in the long run. Researchers are getting closer to what makes these substances do what they do, but very little is certain right now. If you're looking to live out your own Limitless fantasy, do your research first, and proceed with caution.
Nootropics are a responsible way of using smart drugs to enhance productivity. As defined by Giurgea in the 1960s, nootropics should have little to no side-effects. With nootropics, there should be no dependency. And while the effects of nootropics may be smaller than those of, for instance, Adderall, you still improve your productivity without risking your life. This is what separates nootropics from other drugs.
Herbal supplements have been used for centuries to treat a wide range of medical conditions. Studies have shown that certain herbs may improve memory and cognition, and they can be used to help fight the effects of dementia and Alzheimer's disease. These herbs are considered safe when taken in normal doses, but care should be taken as they may interfere with other medications.
"You know how they say that we can only access 20% of our brain?" says the man who offers stressed-out writer Eddie Morra a fateful pill in the 2011 film Limitless. "Well, what this does, it lets you access all of it." Morra is instantly transformed into a superhuman by the fictitious drug NZT-48. Granted access to all cognitive areas, he learns to play the piano in three days, finishes writing his book in four, and swiftly makes himself a millionaire.
Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer and can be tricky to use right (for how one should use stimulants, see How To Take Ritalin Correctly). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from seller, the package arrived. It was a harmless looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggy (reasonable, it's not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.)
Bacopa is a supplement herb often used for memory or stress adaptation. Its chronic effects reportedly take many weeks to manifest, with no important acute effects. Out of curiosity, I bought 2 bottles of Bacognize Bacopa pills and ran a non-randomized non-blinded ABABA quasi-self-experiment from June 2014 to September 2015, measuring effects on my memory performance, sleep, and daily self-ratings of mood/productivity. Because of the very slow onset, small effective sample size, definite temporal trends probably unrelated to Bacopa, and noise in the variables, the results were as expected, ambiguous, and do not strongly support any correlation between Bacopa and memory/sleep/self-rating (+/-/- respectively).
Taken together, these considerations suggest that the cognitive effects of stimulants for any individual in any task will vary based on dosage and will not easily be predicted on the basis of data from other individuals or other tasks. Optimizing the cognitive effects of a stimulant would therefore require, in effect, a search through a high-dimensional space whose dimensions are dose; individual characteristics such as genetic, personality, and ability levels; and task characteristics. The mixed results in the current literature may be due to the lack of systematic optimization.
More recently, the drug modafinil (brand name: Provigil) has become the brain-booster of choice for a growing number of Americans. According to the FDA, modafinil is intended to bolster "wakefulness" in people with narcolepsy, obstructive sleep apnea or shift work disorder. But when people without those conditions take it, it has been linked with improvements in alertness, energy, focus and decision-making. A 2017 study found evidence that modafinil may enhance some aspects of brain connectivity, which could explain these benefits.
These are the most highly studied ingredients and must be combined together to achieve effective results. If any one ingredient is missing in the formula, you may not get the full cognitive benefits of the pill. It is important to go with a company that has these critical ingredients as well as a complete array of supporting ingredients to improve their absorption and effectiveness. Anything less than the correct mix will not work effectively.
Flow diagram of cognitive neuroscience literature search completed July 2, 2010. Search terms were dextroamphetamine, Adderall, methylphenidate, or Ritalin, and cognitive, cognition, learning, memory, or executive function, and healthy or normal. Stages of subsequent review used the information contained in the titles, abstracts, and articles to determine whether articles reported studies meeting the inclusion criteria stated in the text.
Sometimes called smart drugs, brain boosters, or memory-enhancing drugs, the term "nootropics" was coined by scientist Dr. Corneliu E. Giurgea, who developed the compound piracetam as a brain enhancer, according to The Atlantic. The word is derived from the Greek noo, meaning mind, and trope, which means "change" in French. In essence, all nootropics aim to change your mind by enhancing functions like memory or attention.
Let's start with the basics of what smart drugs are and what they aren't. The field of cosmetic psychopharmacology is still in its infancy, but the use of smart drugs is primed to explode during our lifetimes, as researchers gain increasing understanding of which substances affect the brain and how they do so. For many people, the movie Limitless was a first glimpse into the possibility of "a pill that can make you smarter," and while that fiction is a long way from reality, the possibilities - in fact, present-day certainties visible in the daily news - are nevertheless extremely exciting.
Our 2nd choice for a Brain and Memory supplement is Clari-T by Life Seasons. We were pleased to see that their formula included 3 of the 5 necessary ingredients Huperzine A, Phosphatidylserine and Bacopin. In addition, we liked that their product came in a vegetable capsule. The product contains silica and rice bran, though, which we are not sure is necessary.
Before you try nootropics, I suggest you start with the basics: get rid of the things in your diet and life that reduce cognitive performance first. That is easiest. Then, add in energizers like Brain Octane and clean up your diet. Then, go for the herbals and the natural nootropics. Use the pharmaceuticals selectively only after you've figured out your basics.
Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I'll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <$820; even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days (0.2 × 48 × 7.25 = $70), it's still a clear profit to run a convincing experiment.
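To make that arithmetic explicit, here is a small sketch; the 5% discount rate and 100-year horizon are my own assumptions for illustration, not necessarily the ones used above:

```python
def npv_of_annual_saving(saving, rate, years=100):
    """Present value of a recurring annual saving at a fixed discount rate."""
    return sum(saving / (1 + rate) ** t for t in range(1, years + 1))

print(round(npv_of_annual_saving(40, 0.05)))  # ~794, consistent with "<$820"
time_cost = 0.2 * 48 * 7.25                   # hours x days x $/hour, as above
print(round(time_cost))                       # ~70
```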
There are also premade 'stacks' (or formulas) of cognitive enhancing superfoods, herbals or proteins, which pre-package several beneficial extracts for a greater impact. These types of cognitive enhancers are more 'subtle' than the pharmaceutical alternative with regards to effects, but they work all the same. In fact, for many people, they work better than smart drugs as they are gentler on the brain and produce fewer side-effects.
Gamma-aminobutyric acid, also known as GABA, naturally produced in the brain from glutamate, is a neurotransmitter that helps in the communication between the nervous system and brain. The primary function of this GABA Nootropic is to reduce the excess activity of the nerve cells and help calm the mind. Thus, it helps to improve various conditions, like stress, anxiety, and depression by decreasing the beta brain waves and increasing the alpha brain waves. It is one of the best nootropics for anxiety that you can find on the market today. As a result, cognitive abilities like memory power, attention, and alertness also improve. GABA helps drug addicts recover from addiction by normalizing the brain's GABA receptors which reduce anxiety and craving levels in the absence of addictive substances.
Taurine (Examine.com) was another gamble on my part, based mostly on its inclusion in energy drinks. I didn't do as much research as I should have: it came as a shock to me when I read in Wikipedia that taurine has been shown to prevent oxidative stress induced by exercise and was an antioxidant - oxidative stress is a key part of how exercise creates health benefits and antioxidants inhibit those benefits.
First off, overwhelming evidence suggests that smart drugs actually work. A meta-analysis by researchers at Harvard Medical School and Oxford showed that Modafinil has significant cognitive benefits for those who do not suffer from sleep deprivation. The drug improves their ability to plan and make decisions and has a positive effect on learning and creativity. Another study, by researchers at Imperial College London, showed that Modafinil helped sleep-deprived surgeons become better at planning, redirecting their attention, and being less impulsive when making decisions.
Going back to the 1960s, although it was a Romanian chemist who is credited with discovering nootropics, a substantial amount of research on racetams was conducted in the Soviet Union. This resulted in the birth of another category of substances entirely: adaptogens, which, in addition to benefiting cognitive function were thought to allow the body to better adapt to stress.
Because executive functions tend to work in concert with one another, these three categories are somewhat overlapping. For example, tasks that require working memory also require a degree of cognitive control to prevent current stimuli from interfering with the contents of working memory, and tasks that require planning, fluency, and reasoning require working memory to hold the task goals in mind. The assignment of studies to sections was based on best fit, according to the aspects of executive function most heavily taxed by the task, rather than exclusive category membership. Within each section, studies are further grouped according to the type of task and specific type of learning, working memory, cognitive control, or other executive function being assessed.
The U.S. Centers for Disease Control and Prevention estimates that gastrointestinal diseases affect between 60 and 70 million Americans every year. This translates into tens of millions of endoscopy procedures. Millions of colonoscopy procedures are also performed to diagnose or screen for colorectal cancers. Conventional, rigid scopes used for these procedures are uncomfortable for patients and may cause internal bruising or lead to infection because of reuse on different patients. Smart pills eliminate the need for invasive procedures: wireless communication allows the transmission of real-time information; advances in batteries and on-board memory make them useful for long-term sensing from within the body. The key application areas of smart pills are discussed below.
Two variants of the Towers of London task were used by Elliott et al. (1997) to study the effects of MPH on planning. The object of this task is for subjects to move game pieces from one position to another while adhering to rules that constrain the ways in which they can move the pieces, thus requiring subjects to plan their moves several steps ahead. Neither version of the task revealed overall effects of the drug, but one version showed impairment for the group that received the drug first, and the other version showed enhancement for the group that received the placebo first.
You'll find several supplements that can enhance focus, energy, creativity, and mood. These brain enhancers can work very well, and their benefits often increase over time. Again, nootropics won't dress you in a suit and carry you to Wall Street. That is a decision you'll have to make on your own. But, smart drugs can provide the motivation boost you need to make positive life changes.
As mentioned earlier, cognitive control is needed not only for inhibiting actions, but also for shifting from one kind of action or mental set to another. The WCST taxes cognitive control by requiring the subject to shift from sorting cards by one dimension (e.g., shape) to another (e.g., color); failures of cognitive control in this task are manifest as perseverative errors in which subjects continue sorting by the previously successful dimension. Three studies included the WCST in their investigations of the effects of d-AMP on cognition (Fleming et al., 1995; Mattay et al., 1996, 2003), and none revealed overall effects of facilitation. However, Mattay et al. (2003) subdivided their subjects according to COMT genotype and found differences in both placebo performance and effects of the drug. Subjects who were homozygous for the val allele (associated with lower prefrontal dopamine activity) made more perseverative errors on placebo than other subjects and improved significantly with d-AMP. Subjects who were homozygous for the met allele performed best on placebo and made more errors on d-AMP.
A 100mg dose of caffeine (half of a No-Doz or one cup of strong coffee) with 200mg of L-theanine is what the nootropics subreddit recommends in their beginner's FAQ, and many nootropic sellers, like Peak Nootropics, suggest the same. In my own experiments, I used a pre-packaged combination from Nootrobox called Go Cubes. They're essentially chewable coffee cubes (not as gross as it sounds) filled with that same beginner dose of caffeine, L-theanine, as well as a few B vitamins thrown into the mix. After eating an entire box of them (12 separate servings—not all at once), I can say eating them made me feel more alert and energetic, but less jittery than my usual three cups of coffee every day. I noticed enough of a difference in the past two weeks that I'll be looking into getting some L-theanine supplements to take with my daily coffee.
The hormone testosterone (Examine.com; FDA adverse events) needs no introduction. This is one of the scariest substances I have considered using: it affects so many bodily systems in so many ways that it seems almost impossible to come up with a net summary, either positive or negative. With testosterone, the problem is not the usual nootropics problem that there is a lack of human research; the problem is that the summary constitutes a textbook - or two. That said, the 2011 review The role of testosterone in social interaction (excerpts) gives me the impression that testosterone does indeed play into risk-taking, motivation, and social status-seeking; some useful links and a representative anecdote:
The blood half-life is 12-36 hours; hence two or three days ought to be enough to build up and wash out. A week-long block is reasonable since that gives 5 days for effects to manifest, although month-long blocks would not be a bad choice either. (I prefer blocks which fit in round periods because it makes self-experiments easier to run if the blocks fit in normal time-cycles like day/week/month. The most useless self-experiment is the one abandoned halfway.)
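As a quick sanity check on that arithmetic, the residual fraction after a washout of d days is 0.5^(24d / half-life); a few illustrative values for the stated 12-36 hour range (the day counts are just the ones mentioned above):

```python
# Residual fraction of a drug after a washout period, given its half-life.
for half_life_h in (12, 24, 36):
    for days in (2, 3):
        remaining = 0.5 ** (days * 24 / half_life_h)
        print(f"t1/2 = {half_life_h:2d} h, {days} d washout: {remaining:.0%} remains")
```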
Nootropics (/noʊ.əˈtrɒpɪks/ noh-ə-TROP-iks) (colloquial: smart drugs and cognitive enhancers) are drugs, supplements, and other substances that may improve cognitive function, particularly executive functions, memory, creativity, or motivation, in healthy individuals.[1] While many substances are purported to improve cognition, research is at a preliminary stage as of 2018, and the effects of the majority of these agents are not fully determined.
Online ISSN 1534-7486; Print ISSN 1056-3911
The defect of Fano $3$-folds
Author: Anne-Sophie Kaloghiros
Journal: J. Algebraic Geom. 20 (2011), 127-149
Published electronically: October 7, 2009
Erratum: J. Algebraic Geom. 21 (2012), 397-399.
Abstract: This paper studies the rank of the divisor class group of terminal Gorenstein Fano $3$-folds. If $Y$ is not $\mathbb {Q}$-factorial, there is a small modification of $Y$ with a second extremal ray; Cutkosky, following Mori, gave an explicit geometric description of contractions of extremal rays on terminal Gorenstein $3$-folds. I introduce the category of weak-star Fanos, which allows one to run the Minimal Model Program (MMP) in the category of Gorenstein weak Fano $3$-folds. If $Y$ does not contain a plane, the rank of its divisor class group can be bounded by running an MMP on a weak-star Fano small modification of $Y$. These methods yield more precise bounds on the rank of $\operatorname {Cl} Y$ depending on the Weil divisors lying on $Y$. I then study in detail quartic $3$-folds that contain a plane and give a general bound on the rank of the divisor class group of quartic $3$-folds. Finally, I indicate how to bound the rank of the divisor class group of higher genus terminal Gorenstein Fano $3$-folds with Picard rank $1$ that contain a plane.
References
X. Benveniste, Sur le cone des $1$-cycles effectifs en dimension $3$, Math. Ann. 272 (1985), no. 2, 257–265 (French). MR 796252, DOI https://doi.org/10.1007/BF01450570
Ivan Cheltsov, Nonrational nodal quartic threefolds, Pacific J. Math. 226 (2006), no. 1, 65–81. MR 2247856, DOI https://doi.org/10.2140/pjm.2006.226.65
Cinzia Casagrande, Priska Jahnke, and Ivo Radloff, On the Picard number of almost Fano threefolds with pseudo-index $>1$, Internat. J. Math. 19 (2008), no. 2, 173–191. MR 2384898, DOI https://doi.org/10.1142/S0129167X08004625
C. Herbert Clemens, Double solids, Adv. in Math. 47 (1983), no. 2, 107–230. MR 690465, DOI https://doi.org/10.1016/0001-8708(83)90025-7
Alessio Corti, Del Pezzo surfaces over Dedekind schemes, Ann. of Math. (2) 144 (1996), no. 3, 641–683. MR 1426888, DOI https://doi.org/10.2307/2118567
Steven Cutkosky, Elementary contractions of Gorenstein threefolds, Math. Ann. 280 (1988), no. 3, 521–525. MR 936328, DOI https://doi.org/10.1007/BF01456342
Sławomir Cynk, Defect of a nodal hypersurface, Manuscripta Math. 104 (2001), no. 3, 325–331. MR 1828878, DOI https://doi.org/10.1007/s002290170030
Pierre Deligne, Théorie de Hodge. III, Inst. Hautes Études Sci. Publ. Math. 44 (1974), 5–77 (French). MR 498552
Alexandru Dimca, Betti numbers of hypersurfaces and defects of linear systems, Duke Math. J. 60 (1990), no. 1, 285–298. MR 1047124, DOI https://doi.org/10.1215/S0012-7094-90-06010-7
A. J. de Jong, N. I. Shepherd-Barron, and A. Van de Ven, On the Burkhardt quartic, Math. Ann. 286 (1990), no. 1-3, 309–328. MR 1032936, DOI https://doi.org/10.1007/BF01453578
Stephan Endraß, On the divisor class group of double solids, Manuscripta Math. 99 (1999), no. 3, 341–358. MR 1702593, DOI https://doi.org/10.1007/s002290050177
Robert Friedman, Simultaneous resolution of threefold double points, Math. Ann. 274 (1986), no. 4, 671–689. MR 848512, DOI https://doi.org/10.1007/BF01458602
Phillip A. Griffiths, On the periods of certain rational integrals. I, II, Ann. of Math. (2) 90 (1969), 460-495; ibid. (2) 90 (1969), 496–541. MR 0260733, DOI https://doi.org/10.2307/1970746
J. William Hoffman and Steven H. Weintraub, The Siegel modular variety of degree two and level three, Trans. Amer. Math. Soc. 353 (2001), no. 8, 3267–3305. MR 1828606, DOI https://doi.org/10.1090/S0002-9947-00-02675-1
V. A. Iskovskih, Fano threefolds. I, Izv. Akad. Nauk SSSR Ser. Mat. 41 (1977), no. 3, 516–562, 717 (Russian). MR 463151
V. A. Iskovskih, Fano threefolds. II, Izv. Akad. Nauk SSSR Ser. Mat. 42 (1978), no. 3, 506–549 (Russian). MR 503430
Anne-Sophie Kaloghiros. A classification of terminal quartic $3$-folds and applications to rationality questions. Preprint, arxiv:0908.0289.
Anne-Sophie Kaloghiros. The topology of terminal quartic $3$-folds. University of Cambridge PhD Thesis, arXiv:0707.1852.
Yujiro Kawamata, Crepant blowing-up of $3$-dimensional canonical singularities and its application to degenerations of surfaces, Ann. of Math. (2) 127 (1988), no. 1, 93–163. MR 924674, DOI https://doi.org/10.2307/1971417
Yujiro Kawamata, Katsumi Matsuda, and Kenji Matsuki, Introduction to the minimal model problem, Algebraic geometry, Sendai, 1985, Adv. Stud. Pure Math., vol. 10, North-Holland, Amsterdam, 1987, pp. 283–360. MR 946243, DOI https://doi.org/10.2969/aspm/01010283
Joseph M. Landsberg and Laurent Manivel, On the projective geometry of rational homogeneous varieties, Comment. Math. Helv. 78 (2003), no. 1, 65–100. MR 1966752, DOI https://doi.org/10.1007/s000140300003
Massimiliano Mella, Birational geometry of quartic 3-folds. II. The importance of being $\Bbb Q$-factorial, Math. Ann. 330 (2004), no. 1, 107–126. MR 2091681, DOI https://doi.org/10.1007/s00208-004-0542-1
Shigefumi Mori and Shigeru Mukai, Classification of Fano $3$-folds with $B_2\geq 2$. I, Algebraic and topological theories (Kinosaki, 1984) Kinokuniya, Tokyo, 1986, pp. 496–545. MR 1102273
Shigefumi Mori and Shigeru Mukai, Classification of Fano $3$-folds with $B_{2}\geq 2$, Manuscripta Math. 36 (1981/82), no. 2, 147–162. MR 641971, DOI https://doi.org/10.1007/BF01170131
Shigeru Mukai, New developments in Fano manifold theory related to the vector bundle method and moduli problems, Sūgaku 47 (1995), no. 2, 125–144 (Japanese). MR 1364825
Yoshinori Namikawa, Smoothing Fano $3$-folds, J. Algebraic Geom. 6 (1997), no. 2, 307–324. MR 1489117
Yoshinori Namikawa and J. H. M. Steenbrink, Global smoothing of Calabi-Yau threefolds, Invent. Math. 122 (1995), no. 2, 403–419. MR 1358982, DOI https://doi.org/10.1007/BF01231450
Yu. G. Prokhorov, The degree of Fano threefolds with canonical Gorenstein singularities, Mat. Sb. 196 (2005), no. 1, 81–122 (Russian, with Russian summary); English transl., Sb. Math. 196 (2005), no. 1-2, 77–114. MR 2141325, DOI https://doi.org/10.1070/SM2005v196n01ABEH000873
Miles Reid. Projective morphisms according to Kawamata. Warwick online preprint, 1983.
B. Saint-Donat, Projective models of $K-3$ surfaces, Amer. J. Math. 96 (1974), 602–639. MR 364263, DOI https://doi.org/10.2307/2373709
Kil-Ho Shin, $3$-dimensional Fano varieties with canonical singularities, Tokyo J. Math. 12 (1989), no. 2, 375–385. MR 1030501, DOI https://doi.org/10.3836/tjm/1270133187
Hiromichi Takagi, Classification of primary $\Bbb Q$-Fano threefolds with anti-canonical Du Val $K3$ surfaces. I, J. Algebraic Geom. 15 (2006), no. 1, 31–85. MR 2177195, DOI https://doi.org/10.1090/S1056-3911-05-00416-9
A. N. Varchenko, Semicontinuity of the spectrum and an upper bound for the number of singular points of the projective hypersurface, Dokl. Akad. Nauk SSSR 270 (1983), no. 6, 1294–1297 (Russian). MR 712934
Jonathan Wahl, Nodes on sextic hypersurfaces in ${\bf P}^3$, J. Differential Geom. 48 (1998), no. 3, 439–444. MR 1638049
Anne-Sophie Kaloghiros
Affiliation: Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, United Kingdom
Email: [email protected]
Received by editor(s): August 5, 2008
Received by editor(s) in revised form: February 24, 2009
Additional Notes: This work was partially supported by Trinity Hall, Cambridge
An implementation of normal distribution based segmentation and entropy controlled features selection for skin lesion detection and classification
M. Attique Khan1,
Tallha Akram2,
Muhammad Sharif1,
Aamir Shahzad ORCID: orcid.org/0000-0003-4480-226X3,
Khursheed Aurangzeb4,5,
Musaed Alhussein4,
Syed Irtaza Haider4 &
Abdualziz Altamrah4
Melanoma is the deadliest type of skin cancer, with the highest mortality rate. However, its eradication at an early stage implies a high survival rate, so early diagnosis is essential. Conventional diagnosis methods are costly and cumbersome because they require experienced experts and a highly equipped environment. Recent advancements in computerized solutions for this diagnosis are highly promising, with improved accuracy and efficiency.
In this article, a method for the identification and classification of skin lesions based on probabilistic distributions and best-features selection is proposed. Probabilistic distributions, namely the normal and uniform distributions, are used to segment the lesion in the dermoscopic images. Multi-level features are then extracted and fused with a parallel strategy. A novel entropy-based method combining the Bhattacharyya distance and variance is applied to select the best features. Only the selected features are classified, using a multi-class support vector machine as the base classifier.
The proposed method is validated on three publicly available datasets, PH2, ISIC (i.e. ISIC MSK-2 and ISIC UDA), and Combined (ISBI 2016 and ISBI 2017), comprising multi-resolution RGB images, and achieves accuracies of 97.5%, 97.75%, and 93.2%, respectively.
With the proposed feature fusion and selection method, the base classifier performs significantly better than other methods in terms of sensitivity, specificity, and accuracy. Furthermore, the presented method achieves satisfactory segmentation results on the selected datasets.
Skin cancer is reported to be one of the most rapidly spreading cancers. It is broadly classified into two primary classes, melanoma and benign. Melanoma is the deadliest type, with the highest mortality rate worldwide [1]. In the US alone, an astonishing 75% of skin cancer mortality is attributed to melanoma [2]. The occurrence of melanoma is reported to have doubled in the last two decades (increasing 2 to 3% per year), faster than for any other type of cancer. The American Cancer Society (ACS) estimated that 87,110 new cases of melanoma would be diagnosed and 9,730 people would die in the US in 2017 alone [3]. Malignant melanoma can be cured if detected at an early stage: if diagnosed at stage I, the expected survival rate is 96%, compared with 5% at stage IV [4, 5]. However, early detection is difficult because of melanoma's high resemblance to benign lesions; even an expert dermatologist can misdiagnose it. Dermatologists mostly rely on the specialized technique of dermatoscopy to diagnose melanoma. In a clinical examination, the most commonly adopted methods of visual feature inspection are the Menzies method [6], the ABCD rule [7], and the 7-point checklist [8]; of these, the ABCD (asymmetry, border, color, diameter) rule and pattern analysis are used most often. It is reported that traditional dermoscopy can increase the detection rate by 10 to 27% [9]. These methods distinctly increase the detection rate compared with conventional examination, but they still depend on the dermatologist's skill and training [10]. To assist experts, numerous computerized analysis systems have been proposed recently [11, 12], referred to as pattern analysis or computerized dermoscopic analysis systems; these are non-invasive, image-analysis-based techniques for diagnosing melanoma.
In the last decade, several non-invasive methods were introduced for the diagnosis of melanoma, including optical imaging systems (OIS) [13], optical coherence tomography (OCT) [14], light scattering (LS) [15], spectropolarimetric imaging systems (SIM) [16, 17], Fourier polarimetry (FP) [18], polarimetric imaging [19], reflectance confocal microscopy (RCM) [20, 21], photo-acoustic microscopy [22], and optical transfer diagnosis (OTD) [23]. All of the above methods have the potential to diagnose skin lesions and are accurate enough to distinguish melanoma from benign lesions. Optical methods are mostly utilized during clinical tests to evaluate the presurgical boundaries of basal cell carcinoma, and can help in drawing boundaries around the region of interest (ROI) in dermoscopic images. LS methods give information about the micro-architecture, represented with small pieces of pigskin and mineral elements, and help to determine the extent of various types of skin cancers. The SIM method evaluates the polarimetric contrast of the region of interest, such as a melanoma, against the background healthy region. In the FP method, the skin is observed under laser scattering and differences are identified optically to distinguish melanoma from benign lesions.
Malignant melanoma is a lethal skin cancer that is especially prevalent among people aged 15 and above [24]. Recent research shows a high rate of failure to detect and diagnose this type of cancer at an early stage [25]. A computerized analysis pipeline generally consists of four major steps: preprocessing (including hair removal and contrast enhancement), segmentation, feature extraction, and finally classification. The most challenging task in dermoscopy is the accurate detection of the lesion's boundary, because of artifacts such as hairs, illumination effects, low lesion contrast, asymmetrical and irregular borders, and notched edges. Therefore, for early detection of melanoma, shape analysis is particularly important. In the feature extraction step, several types of features are extracted, such as shape, color, texture, and local features, but there is no clear knowledge of which features are salient for classification.
In this article, we propose a new method of lesion detection and classification by implementing a probabilistic-distribution-based segmentation method and conditional-entropy-controlled feature selection. The proposed technique is an amalgamation of five major steps: a) contrast stretching; b) lesion extraction; c) multi-level feature extraction; d) feature selection; and e) classification of malignant and benign lesions. The results are tested on three publicly available datasets, PH2, ISIC (i.e. ISIC MSK-2 and ISIC UDA), and Combined (ISBI 2016 and ISBI 2017), containing RGB images of different resolutions, which are later normalized in our proposed technique. Our main contributions are enumerated below:
We enhance the contrast of the lesion area with a novel contrast stretching technique, in which we first calculate the global minimum and maximum of the input image and then use low and high threshold values to enhance the lesion.
We implement a novel segmentation method based on the normal and uniform distributions. The mean of the uniform distribution is calculated from the enhanced image and inserted into an activation function introduced for segmentation. Similarly, the mean deviation of the normal distribution is calculated from the enhanced image and inserted into a second activation function for segmentation.
The segmented images are fused by utilizing the additive law of probability.
We implement a novel feature selection method, which first calculates the Bhattacharyya distance between features of the fused vector and then applies an entropy-variance criterion. Only the most discriminant features are later passed to a multi-class support vector machine for classification.
The remainder of this article is organized as follows: related work on skin cancer detection and classification is described in the "Related work" section. The "Methods" section explains the proposed method, which consists of several sub-steps including contrast stretching, segmentation, feature extraction, feature fusion, and classification. The experimental results and conclusions are presented in the "Results" and "Discussion" sections.
In the last few decades, advanced techniques in medical image processing, machine learning, and related domains have brought tremendous improvements to computer-aided diagnostic (CAD) systems. Similarly, improvements in dermatological examination tools have revolutionized prognostic and diagnostic practice. Computerized feature extraction from cutaneous lesion images and feature analysis by machine learning have the potential to shift diagnosis from conventional surgical excision methods towards CAD systems.
In the literature, several methods have been proposed for automated detection and classification of skin cancer from dermoscopic images. Omer et al. [26] introduced an automated system for early detection of skin lesions. They utilized color features prior to global thresholding for lesion segmentation. The enhanced image was then subjected to the 2D Discrete Cosine Transform (DCT) and the 2D Fast Fourier Transform (FFT) for feature extraction prior to the classification step, and the results were tested on the publicly available PH2 dataset. Barata et al. [27] described the importance of color features for skin lesion detection; a color sampling method was utilized with a Harris detector and its performance compared with grayscale sampling. They also compared color-SIFT (scale-invariant feature transform) and SIFT features, concluding that color-SIFT features perform better than SIFT. Yanyang et al. [28] introduced a novel method for melanoma detection based on Mahalanobis distance learning and graph-regularized non-negative matrix factorization. The introduced method is treated as a supervised learning method that reduces the dimensionality of the extracted features and improves the classification rate; it was evaluated on the PH2 dataset and achieved improved performance. Catarina et al. [29] described a strategy that combines global and local features. Local features (Bag-of-Features) and global features (shape and geometric) were extracted from the original image and fused by early fusion and late fusion. The authors claim that late fusion had not previously been used in this context and that it gives better results than early fusion.
Ebtihal et al. [30] introduced a hybrid method for lesion classification using color and texture features. Four moments, namely the mean, standard deviation, degree of asymmetry, and variance, were calculated for each channel and treated as features. Local binary patterns (LBP) and gray-level co-occurrence matrices (GLCM) were extracted as texture features. Finally, the combined features were classified using a support vector machine (SVM). Agn et al. [31] introduced a saliency detection technique for accurate lesion detection. The introduced method resolves the problems that arise when the lesion border is vague and the contrast between the lesion and the surrounding skin is low; the saliency map is computed with a sparse representation method, and a Bayesian network is introduced that better explains the shape and boundary of the lesion. Euijoon et al. [38] introduced a saliency-based segmentation technique in which the background of the original image is detected from the spatial layout, including boundary and color information, with a Bayesian framework implemented to minimize detection errors. Similarly, Lei et al. [32] introduced a new method for lesion detection and classification based on multi-scale lesion-biased representation (MLR). This method has the advantage of detecting the lesion at different rotations and scales, compared with conventional single-rotation methods.
From the recent studies above, we notice that color information and contrast stretching are important factors for accurate lesion detection in dermoscopic images, since contrast stretching improves the visual quality of the lesion area and thereby the segmentation accuracy. Additionally, several features have been used in the literature for improved classification but, to the best of our knowledge, serial-based feature fusion has not yet been utilized. In our case, only salient features are retained, and these are subjected to fusion for improved classification.
A new method is proposed for lesion detection and classification using a probabilistic-distribution-based segmentation method and conditional-entropy-controlled feature selection. The proposed method consists of two major steps: a) lesion identification; b) lesion classification. For lesion identification, we first enhance the contrast of the input image and then segment the lesion by implementing the novel probabilistic-distribution approach (uniform distribution, normal distribution). Lesion classification is based on multi-level feature extraction and entropy-controlled selection of the most prominent features. The detailed flow diagram of the proposed method is shown in Fig. 1.
Proposed architecture of skin lesion detection and classification
Contrast stretching
There are numerous contrast stretching or normalization techniques [34], which attempt to improve image contrast by stretching a specific range of pixel intensities to a different level. Most of the available options take a gray image as input and generate an improved gray output image. In our work, the input is a three-channel RGB image of dimensions m×n×3; since the proposed technique operates on a single channel of size m×n, we process the red, green, and blue channels separately.
In RGB dermoscopic images, the content is mostly distinguishable into a foreground, which is the infected region, and a background. This distinctness is also evident in each gray channel, as shown in Fig. 2.
Information of original image and their respective channels: a original image; b red channel; c green channel; d blue channel
Considering the fact [35] that detail is concentrated in higher-gradient regions (the foreground) and is low in the background due to low gradient values, we first divide the image into equal-sized blocks and then compute weights for all regions and for each channel. The procedure for a single channel is given below.
The gray channel is preprocessed using a Sobel edge filter with a 3×3 kernel to compute gradients.
Gradients are calculated for each equal-sized block and sorted in ascending order. Each block is then assigned a weight according to its gradient magnitude.
$$ \Gamma\zeta(x,y) = \left\{\begin{array}{ll} \varsigma_{w}^{b1} & if \ \upsilon_{c}(x,y)\leq T_{1}; \\ \varsigma_{w}^{b2} & T_{1} < \upsilon_{c}(x,y)\leq T_{2}; \\ \varsigma_{w}^{b3} & T_{2} < \upsilon_{c}(x,y) \leq T_{3}; \\ \varsigma_{w}^{b4} & otherwise \\ \end{array}\right. $$
where \(\varsigma_{w}^{bi}~(i\leq 4)\) are statistical weight coefficients and \(T_{i}\) are the gradient interval thresholds.
Cumulative weighted gray value is calculated for each block using:
$$ N_{g}(z)=\sum\limits_{i=1}^{4}\varsigma_{w}^{bi} n_{i}(z) $$
where n i (z) represents cumulative number of gray level pixels for each block i.
The red, green, and blue channels are concatenated to produce the enhanced RGB image.
For each channel, three basic requirements are considered for an optimized solution: I) extraction of the regions with maximum information; II) selection of a block size; III) an improved weighting criterion. In most dermoscopic images, the maximally informative regions lie within the range of 25−75% of the image. Therefore, taking the minimum value of 25%, the number of blocks is selected to be 12 as an optimal number, with an aspect ratio of 8.3%. The blocks are then selected according to the criterion of maximal information retained (the cumulative number of pixels per block). The Laplacian of Gaussian (LOG) method [36] is used with a sigma value of two for edge detection. Weights are assigned according to the number of edge points \(E_{pi}\) in each block:
$$ B_{wi}=\frac{E_{pi}}{E^{b}_{max}} $$
where \(E^{b}_{max}\) is the block with maximum edges. Finally, adjust the intensity levels of enhance image and perform log operation to improved lesion region as compare to original.
$$ \varphi(AI)=\zeta (B_{wi}) $$
$$ \varphi(t)=C \times log(\beta + \varphi(AI)) $$
where β is a constant (β≤10), selected to be 3 as it produces the best results; ζ denotes the intensity adjustment operation, φ(AI) is the enhanced image after the ζ operation, and φ(t) is the final enhanced image. The final contrast stretching results are shown in Fig. 3.
Proposed contrast stretching results
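A minimal sketch of this block-weighted enhancement in Python (NumPy/SciPy), assuming a float image in [0, 1]; the LoG binarisation rule and the way block weights modulate intensities are our own reading of the steps above, not the authors' exact implementation:

```python
import numpy as np
from scipy import ndimage


def stretch_channel(ch, beta=3.0, grid=(3, 4)):
    """Block-weighted log stretch of one gray channel with values in [0, 1]."""
    h, w = ch.shape
    bh, bw = h // grid[0], w // grid[1]

    # Edge map from a Laplacian-of-Gaussian response with sigma = 2.
    log_resp = np.abs(ndimage.gaussian_laplace(ch, sigma=2.0))
    edges = log_resp > log_resp.mean() + log_resp.std()

    # Edge points E_pi per block, normalised by the densest block: B_wi.
    counts = np.array([[edges[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].sum()
                        for j in range(grid[1])] for i in range(grid[0])],
                      dtype=float)
    weights = counts / max(counts.max(), 1.0)

    # Expand block weights to pixel resolution (border pixels fall in the
    # last block of each row/column), then phi(t) = C * log(beta + phi(AI)).
    bi = np.minimum(np.arange(h) // bh, grid[0] - 1)
    bj = np.minimum(np.arange(w) // bw, grid[1] - 1)
    wmap = weights[np.ix_(bi, bj)]
    adjusted = ch * (0.5 + 0.5 * wmap)          # the zeta intensity adjustment
    out = np.log(beta + adjusted)
    return (out - out.min()) / (out.max() - out.min())


def stretch_rgb(img):
    """Process the R, G, and B channels separately, as described above."""
    return np.dstack([stretch_channel(img[..., c]) for c in range(3)])
```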
Lesion segmentation
Segmentation is an important task in the analysis of skin lesions due to several problems such as color variation, presence of hairs, irregular lesion shape, and notched edges. Accurate segmentation provides important cues for accurate border detection. In this article, a novel method based on probabilistic distributions is implemented, consisting of two major steps: a) uniform-distribution-based mean segmentation; b) normal-distribution-based segmentation.
Mean segmentation
The mean of the uniform distribution is calculated from the enhanced image φ(t), and a threshold function is then applied for lesion extraction. The mean segmentation is described in detail below. Let t denote the enhanced dermoscopic image and f(t) the uniform distribution density, \(f(t)=\frac {1}{y-x}\), where y and x denote the maximum and minimum pixel values of φ(t). The mean value is then calculated as follows:
$$ \mu = \int_{x}^{y}t \ f(t)\ dt $$
$$ \quad=\int_{x}^{y}t \ \frac{1}{y-x} \ dt $$
$$ \quad=\frac{1}{y-x}\left [ \frac{t^{2}}{2} \right ]^{y}_{x} $$
$$ \quad=\frac{1}{2(y-x)}\left [(y+x)(y-x) \right ] $$
$$ \mu=\frac{1}{2}\left [(y+x) \right] $$
An activation function is then applied, defined as follows:
$$ A(\mu)=\frac{1}{\left (1+\left (\frac{\mu}{\varphi(t)} \right) \right)^{\alpha}}+\frac{1}{2\mu}+ C $$
$$ F(\mu)=\left\{\begin{array}{ll} 1 & if\ A(\mu)\geq \delta_{thresh}\\ 0 & if\ A(\mu)<\delta_{thresh} \end{array}\right. $$
where \(\delta_{thresh}\) is Otsu's threshold and α is a scaling factor which controls the lesion area; its value was selected on the basis of simulations with α≤10, and α=7 proved optimal. C is a constant, randomly initialized in the range 0 to 1. The segmentation results are shown in Fig. 4.
Proposed uniform distribution based mean segmentation results. a original image; b enhanced image; c proposed uniform based mean segmentation; d 2D contour image; e Contour plot; f 3D contour plot; g lesion area
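A compact sketch of this step (Otsu's threshold from scikit-image; the fixed value of C below replaces the paper's random draw in (0, 1), and the clipping of φ(t) away from zero is our own guard):

```python
import numpy as np
from skimage.filters import threshold_otsu


def mean_segment(phi_t, alpha=7, c=0.5):
    """Uniform-distribution mean segmentation of an enhanced channel in (0, 1].

    mu = (x + y) / 2 from the channel extremes, then the activation
    A(mu) = 1 / (1 + mu / phi_t)^alpha + 1 / (2 mu) + C, binarised at Otsu.
    """
    x, y = float(phi_t.min()), float(phi_t.max())
    mu = 0.5 * (x + y)
    a = 1.0 / (1.0 + mu / np.clip(phi_t, 1e-6, None)) ** alpha \
        + 1.0 / (2.0 * mu) + c
    return a >= threshold_otsu(a)
```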
Mean deviation based segmentation
The mean deviation (M.D) of the normal distribution with parameters μ and σ is calculated from φ(t). The value of M.D is then used by an activation function to extract the lesion from the dermoscopic image. Let t denote the enhanced dermoscopic image and f(t) the normal density, \(f(t)=\frac {1}{\sqrt {2\pi }\sigma }e^{-\frac {1}{2}(\frac {t-\mu }{\sigma })^{2}}\). The M.D is then computed as:
$$ M.D=\int_{-\infty}^{+\infty}\left | t-\mu \right |f(t) $$
$$ \qquad=\int_{-\infty}^{+\infty}\left | t-\mu \right | \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}\left(\frac{t-\mu}{\sigma}\right)^{2}} dt $$
Substituting \(g=\frac {t- \mu }{\sigma }\) into the integral above gives:
$$ M.D=\frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^{+\infty}\left | \sigma g \right | e^{\frac{-g^{2}}{2}} dg $$
$$ \qquad=\frac{\sigma}{\sqrt{2\pi}}\left [ \int_{0}^{\infty}g \ e^{\frac{-g^{2}}{2}} dg + \int_{0}^{\infty}g \ e^{\frac{-g^{2}}{2}} dg \right ] $$
$$ M.D=\frac{2\sigma}{\sqrt{2\pi}} \int_{0}^{\infty}g \ e^{\frac{-g^{2}}{2}} dg $$
Substituting \(\frac {g^{2}}{2}=l\), this becomes:
$$ M.D=\frac{2\sigma}{\sqrt{2\pi}} \int_{0}^{\infty}\sqrt{2l} \ e^{-l} \ \frac{dl}{\sqrt{2l}} $$
$$ \qquad=\frac{2\sigma}{\sqrt{2\pi}} \int_{0}^{\infty} e^{-l} \ dl $$
$$ \qquad=\sqrt{\frac{2}{\pi}}\sigma \left [ \frac{e^{-l}}{-1} \right ]^{\infty}_{0} $$
$$ \qquad=-\sqrt{\frac{2}{\pi}}\sigma \left [ \frac{1}{e^{l}} \right ]^{\infty}_{0} $$
$$ \qquad=-\sqrt{\frac{2}{\pi}}\sigma (-1) $$
$$ M.D=0.7979 \sigma $$
Then perform an activation function to utilize M.D as:
$$ AC(M.D)=\frac{1}{\left (1+\left (\frac{M.D}{\varphi(t)} \right) \right)^{\alpha}}+\frac{1}{2 \ M.D}+ C $$
$$ F(M.D)=\left\{\begin{array}{ll} 1 & if\ AC(M.D)\geq \delta_{thresh}\\ 0 & if\ AC(M.D)< \delta_{thresh} \end{array}\right. $$
The M.D segmentation results are shown in Fig. 5.
Proposed normal distribution based M.D segmentation results. a original image; b enhanced image; c proposed M.D based segmentation; d 2D contour image; e Contour plot; f 3D contour plot; g lesion area
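The M.D variant changes only the statistic fed to the activation; a matching sketch (same assumptions as the mean-segmentation sketch above):

```python
import numpy as np
from skimage.filters import threshold_otsu


def md_segment(phi_t, alpha=7, c=0.5):
    """Normal-distribution segmentation: M.D = 0.7979 * sigma of the channel,
    pushed through the same activation and Otsu binarisation as mean_segment."""
    md = 0.7979 * float(phi_t.std())
    a = 1.0 / (1.0 + md / np.clip(phi_t, 1e-6, None)) ** alpha \
        + 1.0 / (2.0 * md) + c
    return a >= threshold_otsu(a)
```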
Image fusion
Image fusion means combining the information of two or more images into one resultant image, which contains more information than any individual source. Image fusion reduces the redundancy between images and increases the clinical applicability for diagnosis. In this work, we implement a union-based fusion of the two segmented images into one image; the resultant image is more accurate and carries more information than either one alone. Suppose N denotes the sample space, which contains 200 dermoscopic images. Let X1∈F(μ) be the mean-segmented image and X2∈F(M.D) the M.D-based segmented image. Let i denote the pixel values of X1, j the pixel values of X2, and S the set of pixels whose value is 1 in both i and j; that is, all pixels with value 1 fall in S. Then X1∪X2 is written as:
$$ X_{1}\cup X_{2}=(X_{1} \cup X_{2})\cap \phi $$
$$ P(X_{1}\cup X_{2})=P((X_{1} \cup X_{2}))\cap P(\phi) $$
$$ =\left\{\begin{array}{lll} \xi((X_{1}, X_{2})==1) & if &(i,j) \in z_{1} \\ \xi((X_{1}, X_{2})==0) & if & (i,j) \in z_{2} \end{array}\right. $$
where z1 is given by the ground truth in Table 1.
Table 1 Ground truth table for z1
$$ \varrho (t)=\left\{\begin{array}{ll} 1 & if \ \ \sum\left[i, j\right]>1 \\ 0 & \ \ Otherwise \end{array}\right. $$
$$ P(X_{1}\cup X_{2})=P(X_{1})+ P(X_{2})-P(\phi) $$
where P(ϕ) denotes the 0-valued pixels, representing the background, while 1 denotes the lesion. The graphical results after fusion are shown in Fig. 6.
Proposed fusion results. a original image; b fused segmented image; c mapped on fused image; d ground truth image
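The union fusion itself is a one-liner; below is a sketch of the whole identification chain, assuming the helper functions from the previous sketches:

```python
import numpy as np


def fuse_masks(m1, m2):
    """Additive-law union: a pixel is lesion if either segmentation marks it."""
    return np.logical_or(m1, m2)


# End-to-end identification sketch for one RGB image `img` in [0, 1]:
# gray = stretch_rgb(img).mean(axis=2)            # enhanced image phi(t)
# lesion = fuse_masks(mean_segment(gray), md_segment(gray))
```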
In this section, we analyze our segmentation results in terms of accuracy, or similarity index, against the given ground truth values. We randomly select images from the PH2 dataset and show their results in tabular and graphical form. The proposed segmentation results are compared directly with the ground truth images, as shown in Fig. 7. The testing accuracy for each selected dermoscopic image is given in Table 2: the accuracy of each image is above 90%, and the maximum similarity rate is 98.10%. From our analysis, the proposed segmentation performs well compared with existing methods [31, 37–39] in terms of border detection rate.
Proposed fusion results. a original image; b proposed segmented image; c mapped on proposed image; d ground truth image; e border on proposed segmented image
Table 2 Lesion detection accuracy as compared to ground truth values
Image representation
In this step, features are extracted to represent the input image. The basic purpose of feature extraction is to find a combination of the most efficient features for classification, since performance on dermoscopic images depends largely on the quality and consistency of the selected features. In this work, three types of features are extracted, namely color, texture, and HOG features, for the classification of skin lesions.
HOG features
The Histogram of Oriented Gradients (HOG) features were originally introduced by Dalal [40] in 2005 for human detection. HOG features are also called shape-based features because they describe the shape of the object. In our case, the HOG features are extracted from the segmented skin lesion and work efficiently because every segmented lesion has its own shape. As shown in Fig. 8, the HOG features are extracted from the segmented lesion, yielding a feature vector of size 1×3780, given a segmented image of size 96×128 and a bin size of 8×8. This number of features is too high and affects the classification accuracy. For this reason, we apply a weighted conditional entropy with PCA (principal component analysis) to the extracted feature vector: PCA returns a score for each feature, and the weighted entropy is then used to reduce the feature space by selecting the 200 highest-scoring features. The weighted conditional entropy is defined as:
$$ E_{W}=\sum\limits_{i=1}^{K}\sum\limits_{j=1}^{K}W_{i,j}.\ P(i,j)log\frac{P(i)}{P(i,j)} $$
A system architecture of multiple features fusion and selection
where i and j denote the current and next feature, respectively, \(W_{i,j}\) denotes the weights of the selected features, chosen between 0 and 1 (\(0\leq W_{ij}\leq 1\)), and \( P(i,j)=\frac {W_{ij}. \ n_{ij}}{\sum _{ij=1}^{K}W_{ij}. \ n_{ij}}\). The new reduced vector size is 1×200.
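A sketch of the HOG stage follows. To reproduce the 1×3780 length of the standard Dalal descriptor, we resize to a 128×64 window (the 96×128 size quoted above gives a different length with 8×8 cells), and the PCA-loading score is one plausible stand-in for the weighted conditional entropy E_W:

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.decomposition import PCA


def hog_vector(lesion_gray):
    """3780-D HOG descriptor: 128 x 64 window, 8 x 8 cells, 2 x 2 blocks,
    9 orientations (15 * 7 blocks * 36 values = 3780)."""
    img = resize(lesion_gray, (128, 64))
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))


def select_top_hog(X, keep=200):
    """Keep the 200 columns of an (n_samples, 3780) HOG matrix with the
    largest variance-weighted squared PCA loadings (a proxy for E_W)."""
    pca = PCA(n_components=min(10, len(X) - 1)).fit(X)
    score = (pca.explained_variance_[:, None] * pca.components_ ** 2).sum(0)
    idx = np.argsort(score)[-keep:]
    return X[:, idx], idx
```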
Haralick features
Texture information is an important component of an input image and is used to identify the region of interest, here the lesion. For the texture information of the lesion, we extract the Haralick features [41] from the segmented image, as shown in Fig. 8. In total, 14 texture features are implemented (autocorrelation, contrast, cluster prominence, cluster shade, dissimilarity, energy, entropy, homogeneity 1, homogeneity 2, maximum probability, average, variance, inverse difference normalized, and inverse difference moment normalized), giving a feature vector of size 1×14. After calculating the mean, range, and variance of each feature, the final vector has size 1×42.
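For a concrete starting point, here is a sketch using the mahotas library, which implements 13 of the 14 Haralick statistics (the maximal correlation coefficient is omitted), so the vector below has length 39 rather than the paper's 42:

```python
import numpy as np
import mahotas


def texture_vector(lesion_gray_u8):
    """Mean, range, and variance of the 13 mahotas Haralick features,
    taken across the four standard GLCM directions (shape (4, 13))."""
    h = mahotas.features.haralick(lesion_gray_u8)
    return np.concatenate([h.mean(axis=0), np.ptp(h, axis=0), h.var(axis=0)])
```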
Color features
Color information of the region of interest is widely used to classify lesions as malignant or benign. Color features are quick to compute and highly robust to geometric variations of lesion patterns. Three color spaces are used for color feature extraction: RGB, HSI, and LAB. As shown in Fig. 9, the mean, variance, skewness, and kurtosis are calculated for each selected channel; 1×12 features are thus extracted from each color space, so the total feature vector for the three color spaces has dimension 1×36.
Selected channels for color features extraction
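A sketch of the moment computation; scikit-image offers no HSI conversion, so HSV stands in here for the HSI space named above:

```python
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.color import rgb2hsv, rgb2lab


def color_vector(rgb):
    """Mean, variance, skewness, kurtosis per channel in three color spaces:
    3 spaces x 3 channels x 4 moments = 36 features."""
    feats = []
    for img in (rgb, rgb2hsv(rgb), rgb2lab(rgb)):
        for c in range(3):
            ch = img[..., c].ravel()
            feats += [ch.mean(), ch.var(), skew(ch), kurtosis(ch)]
    return np.asarray(feats)
```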
Features fusion
The goal of feature fusion is to create a new feature vector that contains more information than any individual feature vector. Different types of features extracted from the same image capture distinct characteristics of that image; combining them effectively discriminates the information of the extracted features and eliminates the redundant information between them, which in turn improves classification performance. In this work, we implemented a parallel feature fusion technique that efficiently fuses all extracted features while removing redundancy. The fusion process is detailed as follows. Suppose C1, C2, and C3 are the known lesion classes (melanoma, atypical nevi, and benign). Let \(\Theta =\left \{ \psi \ | \ \psi \in \mathbb {R}^{K} \right \}\) denote the testing images, and let three feature sets be given: \(D=\left \{ \alpha \ | \ \alpha \in \mathbb {R}^{h} \right \}\), \(E=\left \{ j \ | \ j \in \mathbb {R}^{t} \right \}\), and \(F=\left \{ o \ | \ o \in \mathbb {R}^{c} \right \}\), where α, j, and o are the three feature vectors (HOG, texture, and color, respectively). The parallel fusion is then defined as:
$$ F\big(P^{//}\big)=(\alpha_{1}, \alpha_{2}, \ldots \alpha_{d})(j_{1}, j_{2},\ldots j_{d})(o_{1},o_{2},\ldots o_{d}) $$
where d denotes the dimension of each extracted feature set. The dimensions of the extracted feature vectors are known (HOG: 1×200, texture: 1×42, color: 1×36). The fused vector is then defined as:
$$ \Upsilon \big(F_{s}^{//}\big)=\left (\alpha + \iota \ j, \alpha + \iota \ o \ | \ \alpha \in D, \ j \in E, \ o \in F\right) $$
This is an n-dimensional complex vector, where n=max(d(D),d(E),d(F)). From the previous expression, HOG has the maximum dimension, 1×200; hence the E and F feature vectors are made equal in size to the D vector by appending zeros. For example, consider the following three feature vectors:
$$ \left\{\begin{array}{l} D= (0.2 \ \ 0.7 \ \ 0.9 \ \ 0.11 \ \ 0.10 \ \ 0.56 \ \ \ldots \ \ 0.90)\\ E=(0.1 \ \ 0.3 \ \ 0.5 \ \ 0.17 \ \ 0.15)\\ F=(0.3 \ \ 0.17 \ \ 0.93 \ \ 0.15) \end{array}\right. $$
The vectors are then made the same size by appending zeros:
$$ \left\{\begin{array}{l} D= (0.2 \ \ 0.7 \ \ 0.9 \ \ 0.11 \ \ 0.10 \ \ 0.56 \ \ ... \ \ 0.90)\\ E=(0.1 \ \ 0.3 \ \ 0.5 \ \ 0.17 \ \ 0.15 \ \ 0.0 \ \ ... \ \ 0.0)\\ F=(0.3 \ \ 0.17 \ \ 0.93 \ \ 0.15 \ \ 0.0 \ \ 0.0 \ \ ... \ \ 0.0) \end{array}\right. $$
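A small sketch of this padding-and-pairing step (NumPy; treating D as the longest block, per the dimensions above):

```python
import numpy as np


def fuse_parallel(d, e, f):
    """Zero-pad e and f to the length of the longest vector (here d, the
    200-D HOG block) and form the complex pairs (d + i*e, d + i*f) of
    Upsilon(F_s); returns a (2, n) complex array."""
    n = max(len(d), len(e), len(f))
    pad = lambda v: np.pad(np.asarray(v, dtype=float), (0, n - len(v)))
    dp, ep, fp = pad(d), pad(e), pad(f)
    return np.stack([dp + 1j * ep, dp + 1j * fp])
```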
Finally, a novel feature selection technique is applied to the fused feature vector to select the most prominent features for classification.
Features selection
The motivation behind the feature selection technique is to select the most prominent features, improving the accuracy while also making the system faster in terms of execution time. The main reasons for feature selection are: a) using only a selected group of prominent features increases the classification accuracy by eliminating irrelevant features; b) a small group of features is discovered that maximally improves the performance of the proposed method; c) a group of features is selected from the high-dimensional feature set for a dense and detailed data representation. In this work, a novel entropy-variance based feature selection method is implemented, which operates in two steps. First, it calculates the Bhattacharyya distance over the fused feature vector; the Bhattacharyya distance measures the closeness between two features, is used for the classification of lesion classes, and is considered more reliable than the Euclidean distance. Second, it applies an entropy-variance criterion to the closeness features and selects the most prominent features based on their maximum values. Entropy, in a nutshell, is the uncertainty measure associated with the initialization of the closeness features. The base classifier depends strongly on its initial conditions for fast convergence and accurate approximation, so the selected closeness features should have maximum entropy. To the best of our knowledge, entropy, especially in conjunction with the Bhattacharyya distance and variance, has never been adopted for the selection of the most prominent features. Let \(f_{i}\) and \(f_{i+1}\) be two features of the fused vector \(\Upsilon \left (F_{s}^{//}\right)\). The Bhattacharyya distance is calculated as:
$$ \vec{ B_{d}}=-ln \left(\sum\limits_{u \in \Upsilon \big(F_{s}^{//}\big)}\sqrt{\left(f_{i}(u). f_{i+1}(u)\right)}\right) $$
Entropy-variance is then applied to the closeness vector to find the best features based on their maximum entropy values.
$$ \begin{aligned} {}E_{V}\left(\vec{ B_{d}}\right) &= -\frac{ln\left(f_{(i+1)}+ \sigma^{2}\right)}{ln\left(f_{i}+\sigma^{2}\right)+ ln\left(f_{i}-\sigma^{2}\right)}\\ &\quad \sum\limits_{f=1}^{\Upsilon} \left(H_{f_{i}}^{0} / \delta H \right) \; log_{2} \left(H_{f_{i}}^{0} / \delta H\right) \end{aligned} $$
$$ \delta H = \sum\limits_{f=0}^{\Upsilon -1}H_{0}^{i} $$
where \(H_{i}^{j}\) denotes the closeness set of features. The size of the selected feature vector is 1×172. The selected vector is fed to a multi-class SVM for lesion classification (melanoma vs. benign); the one-against-all multi-class SVM [42] is used as the classifier.
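The combination rule in \(E_{V}\) leaves some freedom; the sketch below is one plausible reading: a histogram-based Bhattacharyya distance between neighbouring columns, an entropy score per column damped by that distance, and the 172 top-scored columns kept:

```python
import numpy as np


def bhattacharyya(u, v, bins=32):
    """-ln(sum sqrt(p * q)) over shared-range histograms of two feature columns."""
    lo, hi = min(u.min(), v.min()), max(u.max(), v.max())
    hi = hi if hi > lo else lo + 1e-9           # guard for constant columns
    p, _ = np.histogram(u, bins=bins, range=(lo, hi))
    q, _ = np.histogram(v, bins=bins, range=(lo, hi))
    p, q = p / p.sum(), q / q.sum()
    return -np.log(np.sqrt(p * q).sum() + 1e-12)


def select_features(X, keep=172, bins=32):
    """Score column j by its histogram entropy damped by the Bhattacharyya
    distance to column j+1; keep the `keep` highest-scoring columns."""
    n = X.shape[1]
    scores = np.empty(n)
    for j in range(n):
        p, _ = np.histogram(X[:, j], bins=bins)
        p = p / max(p.sum(), 1)
        nz = p[p > 0]
        entropy = -(nz * np.log2(nz)).sum()
        scores[j] = entropy / (1.0 + bhattacharyya(X[:, j], X[:, (j + 1) % n]))
    return np.argsort(scores)[-keep:]
```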
Evaluation protocol
The proposed method is evaluated on four publicly available datasets: PH2, ISIC, and the collective ISBI set (ISBI 2016 and ISBI 2017). The proposed method is a conjunction of two primary steps: a) lesion identification; b) lesion classification (melanoma, benign, atypical nevi). The lesion identification results are discussed in their own section; here we discuss the lesion classification results. For classification, three types of features are extracted (texture, HOG, and color). Experimental results are obtained on each feature set individually and then compared with the proposed (fused) feature vector. The multi-class SVM is selected as the base classifier, and its results are compared with nine classification methods: decision tree (DT), quadratic discriminant analysis (QDA), quadratic SVM (Q-SVM), logistic regression (LR), Naive Bayes, weighted K-Nearest Neighbor (w-KNN), ensemble boosted tree (EBT), ensemble subspace discriminant analysis (ESDA), and cubic KNN (C-KNN). Seven measures are calculated to assess the performance of the proposed method: sensitivity, specificity, precision, false negative rate (FNR), false positive rate (FPR), accuracy, and the execution time per image. The proposed method is implemented in MATLAB 2017a on a personal computer with a Core i7 CPU and 16GB of RAM.
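For reference, the one-against-all SVM and the reported measures map directly onto scikit-learn; `X_sel` and `y` below are placeholders for the selected feature matrix and labels, not names from the paper:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict


def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, precision, FNR, FPR, accuracy, with the
    melanoma class taken as positive."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return {"sensitivity": sens, "specificity": spec,
            "precision": tp / (tp + fp), "FNR": 1 - sens, "FPR": 1 - spec,
            "accuracy": (tp + tn) / (tp + tn + fp + fn)}

# clf = OneVsRestClassifier(SVC(kernel="rbf"))        # one-against-all SVM
# y_hat = cross_val_predict(clf, X_sel, y, cv=10)     # 10-fold, as for ISBI
# print(binary_metrics(y, y_hat))
```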
Datasets & results
PH2 Dataset
The PH2 dataset [51] consists of 200 RGB dermoscopic images of resolution 768×560. This dataset has three main divisions: a) melanoma; b) benign; c) common nevi, with 40 melanoma, 80 benign, and 80 common nevi images. For validation, a 50:50 training/testing split is used. Four experiments are conducted on different feature sets (Haralick features, color features, HOG features, and the proposed feature fusion and selection method) to compare the individual feature sets against the proposed one. The results of the proposed feature fusion and selection with the entropy-variance method are reported in Table 3: the proposed method obtains a maximum accuracy of 97.06%, sensitivity of 96.67%, specificity of 98.74%, precision of 97.06%, and an FPR of 0.01. The results for the individual feature sets, without the feature selection algorithm, are reported in Table 4. The results of Tables 3 and 4 are confirmed by the confusion matrix in Table 5, which shows that the proposed feature fusion and selection method performs efficiently on the base classifier compared with the other classification methods. A comparison of the proposed method with existing methods on the PH2 dataset is also given in Table 6, which confirms its effectiveness.
Table 3 Proposed features fusion and selection results on PH2 dataset
Table 4 Results of individual extracted set of features using PH2 dataset
Table 5 Confusion matrix for PH2 dataset
Table 6 PH2 dataset: Comparison of proposed algorithm with existing methods
ISIC dataset
The ISIC dataset [52] is an institutional database often used in skin cancer research. It is an open-source database of high-quality RGB dermoscopic images of resolution 1022×1022. ISIC incorporates many sub-datasets, of which we selected two: a) ISIC MSK-2 and b) ISIC-UDA. From the ISIC MSK-2 dataset we collected 290 images, 130 melanoma and 160 benign. For validation of the proposed algorithm, we performed four experiments on different types of features (Haralick features, color features, HOG features, and the proposed feature fusion and selection vector). Four different classification methods are compared with the base classifier (multi-class SVM). The proposed feature fusion and selection results are shown in Table 7, with maximum accuracy 97.2%, sensitivity 96.60%, and specificity 98.30% on the base classifier. The individual feature set results are reported in Table 8, where the base classifier again performs well compared with the other methods; these results are confirmed by the confusion matrix in Table 9. From the ISIC UDA dataset we selected 233 images in total, 93 melanoma and 140 benign. The proposed method's results are reported in Table 10, with maximum accuracy 98.3% and specificity 100% on the base classifier. The results on the individual feature sets are reported in Table 11, showing that the proposed feature fusion and selection method performs significantly better. The base classifier results are confirmed by the confusion matrix in Table 12, which supports the validity of the proposed method.
Table 7 Proposed features fusion and selection results on ISIC-MSK dataset
Table 8 Results for individual extracted set of features using ISIC-MSK dataset
Table 9 Confusion matrix for all set of extracted features using ISIC-MSK dataset
Table 10 Proposed features fusion and feature selection results on ISIC-UDA dataset
Table 11 Results for individual extracted set of features using ISIC-UDA dataset
Table 12 Confusion matrix for all set of extracted features using ISIC-UDA dataset
ISBI - 2016 & 17
These datasets - ISBI 2016 [52] and ISBI 2017 [53] - are based on the ISIC archive, the largest publicly available collection of quality-controlled dermoscopic images of skin lesions. It contains separate training and testing RGB samples of different resolutions: ISBI 2016 contains 1279 images (273 melanoma and 1006 benign), with 900 images for training and 350 for testing; ISBI 2017 contains 2750 images in total (517 melanoma and 2233 benign), including 2000 training and 750 testing images. Experiments were first run on each dataset separately, obtaining classification accuracies of 83.2% and 88.2% on ISBI 2016 and ISBI 2017, respectively. The classification results are given in Tables 13 and 14 and confirmed by their confusion matrices in Table 16. The two datasets were then combined and 10-fold cross-validation was performed: the maximum classification accuracy of 93.2% was achieved with the multi-class SVM, as presented in Table 15 and confirmed by the confusion matrix in Table 16. The proposed method is also compared with [54], which achieved maximum classification accuracy of 85.5%, AUC 0.826, sensitivity 0.853, and specificity 0.993 on the ISBI 2016 dataset; with our method, the achieved classification accuracy is 93.2%, AUC 0.96, sensitivity 0.930, and specificity 0.970 on the combined dataset, which confirms the effectiveness of our algorithm. Moreover, [55] reported a maximum AUC of 0.94 for skin cancer classification on 130 melanoma images, whereas our method achieves AUC 0.96 on 315 melanoma images. In [56] and [57], the classification accuracies achieved on the ISBI 2016 dataset are 85.0% and 81.33%. Compared with [54–57], the proposed method performs significantly better on both ISBI datasets.
Table 13 Classification results on ISBI 2016 dataset
Table 14 Classification results on ISBI 2017 dataset
Table 15 Classification results for challenge ISBI 2016 & ISBI 2017 dataset
Table 16 Confusion matrix for ISBI 2016, ISBI 2017, and Combined images dataset
In this section, we summarize our proposed method with tabular and visual results. The proposed method consists of two major steps: a) lesion identification; b) lesion classification, as shown in Fig. 1. The lesion identification phase has two major parts, enhancement and segmentation. The lesion enhancement results are shown in Fig. 3, which demonstrates the efficiency of the introduced technique. The lesion segmentation method is then performed, and its quantitative and qualitative results are given in Table 2 and Figs. 4, 5, 6, and 7. After this, multi-level features are extracted and fused with the parallel strategy, and a novel feature selection technique is applied to the fused feature vector to select the best features, as shown in Fig. 8. Finally, the selected features are passed to a multi-class SVM, which is selected as the base classifier. The purpose of feature fusion and selection is to improve the classification accuracy while also making the system more efficient. Three publicly available datasets are used for classification: PH2, ISIC, and the Combined dataset (ISBI 2016 and ISBI 2017). The individual feature results on the selected datasets are presented in Tables 4, 8, and 11 and compared with the proposed feature fusion and selection results in Tables 3, 7, and 10, which show that the proposed method performs significantly better in terms of classification accuracy and execution time. The base classifier results are confirmed by their confusion matrices in Tables 5, 9, and 12. The comparison with existing methods on the PH2 dataset is presented in Table 6, which shows the efficiency of the proposed method. Moreover, the proposed method is evaluated on the combination of ISBI 2016 and ISBI 2017 and achieves a classification accuracy of 93.2%, as presented in Table 15 and confirmed by the confusion matrix in Table 16.
In this work, we have implemented a novel method for the identification and classification of skin lesions. The proposed framework incorporates two primary phases: a) lesion identification; b) lesion classification. In the identification step, a novel probabilistic method is introduced prior to feature extraction. An entropy-controlled, variance-based feature selection method, combined with the Bhattacharyya distance, is also implemented with the aim of retaining only discriminant features. The selected features are later utilized for classification in the final step using a multi-class SVM. The proposed method is tested on three publicly available datasets (i.e., PH2, ISBI 2016 & 17, and ISIC), and it is concluded that the base classifier performs significantly better with the proposed feature fusion and selection method, compared to other existing techniques, in terms of sensitivity, specificity, and accuracy. Furthermore, the presented method achieved satisfactory segmentation results on the selected datasets.
ABCD:
Asymmetry, border, color, diameter
ACS:
C-KNN:
Cubic KNN
DCT:
Discrete Cosine Transform
DT:
Decision tree
EBT:
Ensemble boosted tree
ESDA:
Ensemble subspace discriminant analysis
FFT:
Fast Fourier Transform
FNR:
False negative rate
GLCM:
Gray level co-occurrence matrices
HOG:
Histogram of Oriented Gradients
LBP:
Local binary pattern
LoG:
Laplacian of Gaussian
M.D:
Mean Deviation
MLR:
Multi-scale lesion biased representation
PCA:
Principal component analysis
QDA:
Quadratic discriminant analysis
Q-SVM:
Quadratic SVM
RGB:
Red, Green, Blue
SIFT:
Scale-invariant feature transform
SVM:
Support vector machine
W-KNN:
Weighted K-Nearest Neighbor
Rigel DS, Friedman RJ, Kopf AW. The incidence of malignant melanoma in the United States: issues as we approach the 21st century. J Am Acad Dermatol. 1996; 34(5):839–47.
Altekruse SF, Kosary CL, Krapcho M, Neyman N, Aminou R, Waldron W, Ruhl J, et al. SEER cancer statistics review, 1975–2007. Bethesda: National Cancer Institute 7; 2010.
Abuzaghleh O, Barkana BD, Faezipour M. Automated skin lesion analysis based on color and shape geometry feature set for melanoma early detection and prevention. In: Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island. IEEE: 2014. p. 1–6.
Freedberg KA, Geller AC, Miller DR, Lew RA, Koh HK. Screening for malignant melanoma: a cost-effectiveness analysis. J Am Acad Dermatol. 1999; 41(5):738–45.
Barata C, Ruela M, Francisco M, Mendonça T, Marques JS. Two systems for the detection of melanomas in dermoscopy images using texture and color features. IEEE Syst J. 2014; 8(3):965–79.
Menzies SW, Ingvar C, Crotty KA, McCarthy WH. Frequency and morphologic characteristics of invasive melanomas lacking specific surface microscopic features. Arch Dermatol. 1996; 132(10):1178–82.
Stolz W, Riemann A, Cognetta AB, Pillet L, Abmayr W, Holzel D, Bilek P, Nachbar F, Landthaler M. Abcd rule of dermatoscopy-a new practical method for early recognition of malignant-melanoma. Eur J Dermatol. 1994; 4(7):521–7.
Argenziano G, Fabbrocini G, Carli P, De Giorgi V, Sammarco E, Delfino M. Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions: comparison of the ABCD rule of dermatoscopy and a new 7-point checklist based on pattern analysis. Arch Dermatol. 1998; 134(12):1563–70.
Mayer J. Systematic review of the diagnostic accuracy of dermatoscopy in detecting malignant melanoma. Med J Aust. 1997; 167(4):206–10.
Braun RP, Rabinovitz H, Tzu JE, Marghoob AA. Dermoscopy research—An update. In: Seminars in cutaneous medicine and surgery, vol. 28, no. 3. Frontline Medical Communications: 2009. p. 165–71.
Katapadi AB, Celebi ME, Trotter SC, Gurcan MN. Evolving strategies for the development and evaluation of a computerised melanoma image analysis system. Comput Methods Biomech Biomed Eng Imaging Vis. 2017;:1–8.
Jaworek-Korjakowska J. Computer-aided diagnosis of micro-malignant melanoma lesions applying support vector machines. BioMed Res Int. 2016;:2016.
Safrani A, Aharon O, Mor S, Arnon O, Rosenberg L, Abdulhalim I. Skin biomedical optical imaging system using dual-wavelength polarimetric control with liquid crystals. J Biomed Opt. 2010; 15(2):026024.
Patalay R, Craythorne E, Mallipeddi R, Coleman A. An integrated skin marking tool for use with optical coherence tomography (OCT). In: Proc SPIE, vol. 10037. 2017. p. 100370Y.
Rajaram N, Nguyen TH, Tunnell JW. Lookup table–based inverse model for determining optical properties of turbid media. J Biomed Opt. 2008; 13(5):050501.
Aharon O, Abdulhalim I, Arnon O, Rosenberg L, Dyomin V, Silberstein E. Differential optical spectropolarimetric imaging system assisted by liquid crystal devices for skin imaging. J Biomed Opt. 2011; 16(8):086008.
Graham L, Yitzhaky Y, Abdulhalim I. Classification of skin moles from optical spectropolarimetric images: a pilot study. J Biomed Opt. 2013; 18(11):111403.
Ushenko AG, Dubolazov OV, Ushenko VA, Yu Novakovskaya O, Olar OV. Fourier polarimetry of human skin in the tasks of differentiation of benign and malignant formations. Appl Opt. 2016; 55(12):B56–B60.
Ávila FJ, Stanciu SG, Costache M, Bueno JM. Local enhancement of multiphoton images of skin cancer tissues using polarimetry. In: Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC, 2017 Conference on). IEEE: 2017. p. 1–1.
Stamnes JJ, Ryzhikov G, Biryulina M, Hamre B, Zhao L, Stamnes K. Optical detection and monitoring of pigmented skin lesions. Biomed Opt Express. 2017; 8(6):2946–64.
Pellacani G, Cesinaro AM, Seidenari S. Reflectance-mode confocal microscopy of pigmented skin lesions–improvement in melanoma diagnostic specificity. J Am Acad Dermatol. 2005; 53(6):979–85.
Oh J-T, Li M-L, Zhang HF, Maslov K, Stoica G, Wang LV. Three-dimensional imaging of skin melanoma in vivo by dual-wavelength photoacoustic microscopy. J Biomed Opt. 2006; 11(3):034032.
Swanson DL, Laman SD, Biryulina M, Ryzhikov G, Stamnes JJ, Hamre B, Zhao L, Sommersten E, Castellana FS, Stamnes K. Optical transfer diagnosis of pigmented lesions. Dermatol Surg. 2010; 36(12):1979–86.
Rademaker M, Oakley A. Digital monitoring by whole body photography and sequential digital dermoscopy detects thinner melanomas. J Prim Health Care. 2010; 2(4):268–72.
Moncrieff M, Cotton S, Hall P, Schiffner R, Lepski U, Claridge E. SIAscopy assists in the diagnosis of melanoma by utilizing computer vision techniques to visualise the internal structure of the skin. Med Image Underst Anal. 2001;:53–6.
Barata C, Marques JS, Rozeira J. Evaluation of color based keypoints and features for the classification of melanomas using the bag-of-features model. In: International Symposium on Visual Computing. Berlin, Heidelberg: Springer: 2013. p. 40–49.
Gu Y, Zhou J, Qian B. Melanoma Detection Based on Mahalanobis Distance Learning and Constrained Graph Regularized Nonnegative Matrix Factorization. In: Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on. IEEE: 2017. p. 797–805.
Barata C, Celebi ME, Marques JS. Melanoma detection algorithm based on feature fusion. In: Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE. IEEE: 2015. p. 2653–6.
Almansour E, Jaffar MA. Classification of Dermoscopic Skin Cancer Images Using Color and Hybrid Texture Features. IJCSNS Int J Comput Sci Netw Secur. 2016; 16(4):135–9.
Ahn E, Kim J, Bi L, Kumar A, Li C, Fulham M, Feng DD. Saliency-based Lesion Segmentation via Background Detection in Dermoscopic Images. IEEE J Biomed Health Inform. 2017; 21(6):1685–93.
Bi L, Kim J, Ahn E, Feng D, Fulham M. Automatic melanoma detection via multi-scale lesion-biased representation and joint reverse classification. In: Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on. IEEE: 2016. p. 1055–8.
Wong A, Scharcanski J, Fieguth P. Automatic skin lesion segmentation via iterative stochastic region merging. IEEE Trans Inf Technol Biomed. 2011; 15(6):929–36.
Mokhtar N, Harun N, Mashor M, Roseline H, Mustafa N, Adollah R, Adilah H, Nashrul MN. Image Enhancement Techniques Using Local, Global, Bright, Dark and Partial Contrast Stretching For Acute Leukemia Images. Lect Notes Eng Comput Sci. 2009;:2176.
Duan Q, Akram T, Duan P, Wang X. Visual saliency detection using information contents weighting. Optik. 2016; 127(19):7418–30.
Akram T, Naqvi SR, Ali Haider S, Kamran M. Towards real-time crops surveillance for disease classification: exploiting parallelism in computer vision. Comput Electr Eng. 2017; 59:15–26.
Barata C, Celebi ME, Marques JS. Melanoma detection algorithm based on feature fusion. In: Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE. IEEE: 2015. p. 2653–56.
Ahn E, Bi L, Jung YH, Kim J, Li C, Fulham M, Feng DD. Automated saliency-based lesion segmentation in dermoscopic images. In: Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE. IEEE: 2015. p. 3009–12.
Bozorgtabar B, Abedini M, Garnavi R. Sparse Coding Based Skin Lesion Segmentation Using Dynamic Rule-Based Refinement. In: MLMI@ MICCAI.2016. p. 254–61.
Dalal N, Triggs B. Histograms of oriented gradients for human detection. In: Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1. IEEE: 2005. p. 886–93.
Haralick RM, Shanmugam K. Textural features for image classification. IEEE Trans Syst Man CYbernetics. 1973; 6:610–21.
Liu Y, Zheng YF. One-against-all multi-class SVM classification using reliability measures. In: Neural Networks, 2005. IJCNN'05. Proceedings 2005 IEEE International Joint Conference on, vol. 2. IEEE: 2005. p. 849–54.
Abuzaghleh O, Barkana BD, Faezipour M. Noninvasive real-time automated skin lesion analysis system for melanoma early detection and prevention. IEEE J Trans Eng Health Med. 2015; 3:1–12.
Kruk M, Świderski B, Osowski S, Kurek J, Sowińska M, Walecka I. Melanoma recognition using extended set of descriptors and classifiers. Eurasip J Image Video Process. 2015; 2015(1):43.
Ruela M, Barata C, Marques JS, Rozeira J. A system for the detection of melanomas in dermoscopy images using shape and symmetry features. Comput Methods Biomech and Biomed Eng Imaging Vis. 2017; 5(2):127–37.
Waheed Z, Waheed A, Zafar M, Riaz F. An efficient machine learning approach for the detection of melanoma using dermoscopic images. In: Communication, Computing and Digital Systems (C-CODE), International Conference on. IEEE: 2017. p. 316–9.
Satheesha TY, Satyanarayana D, Prasad MNG, Dhruve KD. Melanoma is Skin Deep: A 3D reconstruction technique for computerized dermoscopic skin lesion classification. IEEE J Trans Eng Health Med. 2017; 5:1–17.
Gu Y, Zhou J, Qian B. Melanoma Detection Based on Mahalanobis Distance Learning and Constrained Graph Regularized Nonnegative Matrix Factorization. In: Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on. IEEE: 2017. p. 797–805.
Rastgoo M, Morel O, Marzani F, Garcia R. Ensemble approach for differentiation of malignant melanoma. In: The International Conference on Quality Control by Artificial Vision 2015. International Society for Optics and Photonics: 2015. p. 953415.
Mendonça T, Ferreira PM, Marques JS, Marcal ARS, Rozeira J. PH 2-A dermoscopic image database for research and benchmarking. In: Engineering in Medicine and Biology Society (EMBC) 2013 35th Annual International Conference of the IEEE. IEEE: 2013. p. 5437–40.
Gutman D, Codella NCF, Celebi E, Helba B, Marchetti M, Mishra N, Halpern A. Skin lesion analysis toward melanoma detection: A challenge at the international symposium on biomedical imaging (ISBI) 2016, hosted by the international skin imaging collaboration (ISIC). arXiv preprint arXiv:1605.01397. 2016.
Codella NCF, Gutman D, Celebi ME, Helba B, Marchetti MA, Dusza SW, Kalloo A, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (isbi), hosted by the international skin imaging collaboration (isic). arXiv preprint arXiv:1710.05006. 2017.
Yu L, Chen H, Dou Q, Qin J, Heng P-A. Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE Trans Med Imaging. 2017; 36(4):994–1004.
Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017; 542(7639):115–8.
Ge Z, Demyanov S, Bozorgtabar B, Abedini M, Chakravorty R, Bowling A, Garnavi R. Exploiting local and generic features for accurate skin lesions classification using clinical and dermoscopy imaging. In: Biomedical Imaging (ISBI 2017), 2017 IEEE 14th International Symposium on. IEEE: 2017. p. 986–90.
Lopez AR, Giro-i-Nieto X, Burdick J, Marques O. Skin lesion classification from dermoscopic images using deep learning techniques. In: Biomedical Engineering (BioMed) 2017 13th IASTED International Conference on. IEEE: 2017. p. 49–54.
The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through research group under grant# (RG-1438-034) and Higher Education Commission, Pakistan - Startup Research Grant #: 21-260/SRGP/R&O/HEC/2014.
Department of Computer Science, COMSATS Institute of Information Technology, Wah, Pakistan
M. Attique Khan & Muhammad Sharif
Department of Electrical Engineering, COMSATS Institute of Information Technology, Wah, Pakistan
Tallha Akram
Department of Electrical Engineering, COMSATS Institute of Information Technology, Abbottabad, Pakistan
Aamir Shahzad
College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
Khursheed Aurangzeb, Musaed Alhussein, Syed Irtaza Haider & Abdualziz Altamrah
Department of Electrical Engineering, COMSATS Institute of Information Technology, Attock, Pakistan
Khursheed Aurangzeb
M. Attique Khan
Musaed Alhussein
Syed Irtaza Haider
Abdualziz Altamrah
MAK, TA, MS and AS conceived the study and participated in its design and coordination and helped to draft the manuscript. KA, MA, SIA and AA provided guidance and support in every part of this work and assisted in the writing and editing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Tallha Akram or Aamir Shahzad.
The datasets analysed during the current study are openly available at the following links.
1. AADI project repository at the web link: http://www.fc.up.pt/addi/ph2/%20database.html
2. ISIC UDA archive. https://isic-archive.com/
3. ISBI 2016. https://challenge.kitware.com/#challenge/n/ISBI_2016/%3A_Skin_Lesion_Analysis_Towards_Melanoma_Detection
Khan, M., Akram, T., Sharif, M. et al. An implementation of normal distribution based segmentation and entropy controlled features selection for skin lesion detection and classification. BMC Cancer 18, 638 (2018). https://doi.org/10.1186/s12885-018-4465-8
Performance analysis of feedback-free collision resolution NDMA protocol
S. Lagen ORCID: orcid.org/0000-0003-4986-51041,2,
A. Agustin1,
J. Vidal1 &
J. Garcia1
EURASIP Journal on Wireless Communications and Networking volume 2018, Article number: 45 (2018) Cite this article
To support communications of a large number of deployed devices while guaranteeing limited signaling load, low energy consumption, and high reliability, future cellular systems require efficient random access protocols. However, collision resolution at the receiver is still the main bottleneck of these protocols. The network-assisted diversity multiple access (NDMA) protocol addresses this issue and attains the highest potential throughput, at the cost of keeping devices active to acquire feedback and repeating transmissions until successful decoding. In contrast, another potential approach is the feedback-free NDMA (FF-NDMA) protocol, in which devices repeat packets in a pre-defined number of consecutive time slots without waiting for feedback associated with repetitions. Here, we investigate the FF-NDMA protocol from a cellular network perspective in order to elucidate under what circumstances this scheme is more energy efficient than NDMA. We characterize the FF-NDMA protocol analytically by means of the multipacket reception model and a finite Markov chain. Analytic expressions for throughput, delay, capture probability, energy, and energy efficiency are derived. Then, clues for system design are established according to the different trade-offs studied. Simulation results show that FF-NDMA is more energy efficient than classical NDMA and HARQ-NDMA at low signal-to-noise ratio (SNR), and at medium SNR when the load increases.
The fifth generation (5G) of cellular networks, set for availability around 2020, is expected to enable a fully mobile and connected society, characterized by a massive growth in connectivity and an increased density and volume of traffic. Hence, a wide range of requirements arise, such as scalability, rapid programmability, high capacity, security, reliability, availability, low latency, and long-life battery for devices [1]. All these requirements pave the way for machine-type communications (MTC), which enable the implementation of the Internet of Things (IoT) [2]. Unlike typical human-to-human communications, MTC devices are equipped with batteries of finite lifetime and generate bursty and automatic data without or with low human intervention, so that traffic in the uplink direction is accentuated [3]. MTC systems consider different use cases that range from massive MTC, where the number of deployed devices is very high, to mission-critical MTC, where real-time and high-reliability communication needs have to be satisfied [4].
To address such a massive number of low-powered devices generating bursty traffic with low latency requirements, simple medium access control (MAC)-layer random access protocols of ALOHA-type are preferred because they offer a relatively straightforward implementation and can accommodate bursty devices in a shared communication channel [4, 5]. They are indeed used in today's most advanced cellular networks (as the random access channel (RACH) in LTE) [6] and are being considered in different MTC systems, such as LoRa [7], SigFox, enhanced MTC [8], narrowband (NB) LTE-M [9, 10], and NB-IoT [11–13].
Basic ALOHA-type protocols are based on the collision model: a packet is received error-free only when a single device transmits. Thus, the MAC layer and the physical (PHY) layer are fully decoupled. In [14], Ghez et al. made a fundamental change in the collision model and introduced the multipacket reception (MPR) model: when there are simultaneous transmissions, instead of associating collisions with deterministic failures, reception is described by conditional probabilities. Therefore, signal processing techniques enable a receiver to decode simultaneous signals from different devices, and hence collisions can be resolved at the PHY layer. As a result, a tighter interaction between PHY and MAC layers is achieved [14–16].
MPR can be realized through many techniques, which are classified according to three different perspectives: transmitter, trans-receiver, and receiver (see [17] for details). Among them, a promising trans-receiver approach based on random access for different 5G services is the network-assisted diversity multiple access (NDMA) protocol. NDMA was initially presented in [18] for flat-fading channels and afterwards extended to multi-path time-dispersive channels in [19]. The basic idea of NDMA is that the signals received in collided transmissions are stored in memory and then combined with future repetitions at the receiver so as to extract all collided packets with a linear detector. In the single-antenna case and under the assumption of perfect reception, NDMA only requires the number of repetitions to be equal to the number of collided packets [18]. Thus, NDMA dramatically enhances throughput and delay performance as compared to ALOHA-type protocols, but an estimation of the number of devices involved in a collision (e.g., P devices) and a proper adjustment of the number of repetitions (i.e., P−1) are required every time a collision occurs.
Many NDMA protocols, and variations of it, have been proposed and analyzed in the literature, including different ways to determine the number of devices involved in a collision [18–22], interference cancellation receivers [23–25], and modified protocols that use channel knowledge at the transmitter side [26, 27]. Stability analysis of NDMA was addressed in [28–30]. Finally, the hybrid automatic repeat request (HARQ) concept was applied to NDMA in [31] (named H-NDMA) in order to deal with reception errors at low/medium SNR by forcing devices involved in a collision of P devices to transmit repetitions more than P−1 times. This way, packet reception was significantly improved at low SNR with H-NDMA as compared to classical NDMA.
One of the main drawbacks of NDMA protocols is, however, the overhead required to identify collisions and adjust the number of repetitions accordingly every time a collision occurs (which implies communicating it to all the devices involved in the collision) [18]. Indeed, devices need to decode control signaling at every time slot to know if the subsequent time slot is reserved for repetitions or not, hence increasing the energy consumption. This aspect is critical for MTC devices with finite battery lifetime.
To cope with these issues, the authors in [32] proposed a non-centralized procedure for NDMA, coined feedback-free NDMA (FF-NDMA), in which the number of time slots for repetitions is kept constant and equal to R (forming a contention period (CP)) for all devices and transmissions. See Fig. 1 for R = 3. Accordingly, devices are only allowed to start transmission at the beginning of the CP and will do so R times. This way, collisions of up to R devices can be resolved in the single-antenna case without requiring the receiver to communicate the collision multiplicity to the devices every time a collision occurs, and avoiding the signaling related to the state (reserved for repetitions or not) of the subsequent time slot. The joint PHY-MAC performance analysis of the FF-NDMA protocol was performed in [33] for the general case of MIMO systems with orthogonal space-time block coding (OSTBC). Significant throughput and energy gains as compared to ALOHA-based schemes were reported with a non-centralized protocol that requires low overhead. Nevertheless, it was assumed in [32, 33] that whenever a packet was received in error at the receiver it was lost, since FF-NDMA was initially designed for broadcast protocols in ad hoc networks where no feedback is available.
Although NDMA and FF-NDMA were initially proposed a decade ago, the emerging MTC systems (with different requirements than those of conventional human-based cellular networks) suggest revisiting random access protocols with MPR and analyzing their applicability to the uplink communication in cellular networks [3], especially for scenarios characterized by a large number of devices, limited signaling load, low energy consumption, and high reliability. In particular, NDMA-based protocols are highly attractive for massive MTC. NDMA has been deeply analyzed in the recent literature with different protocols (e.g., H-NDMA [31]). However, FF-NDMA lacks such a comprehensive analysis, even though it is well suited to massive MTC scenarios due to its low associated signaling load and reduced implementation complexity. Indeed, it is worth mentioning that NB-IoT [11] and the new radio (NR) access technology design for 3GPP 5G systems [34] already consider a contention-based transmission mode with a predefined number of packet repetitions (known as uplink grant-free access, in which devices contend for resources and multiple predefined repetitions are allowed, as specified in [34]). Such uplink grant-free access in NR targets at least massive MTC and would allow the implementation of FF-NDMA.
In this paper, we analyze the FF-NDMA protocol with MIMO configurations and OSTBC from a cellular network perspective, in which multiple devices intend to communicate with a base station (BS), as shown in Fig. 1. The MIMO system defined and analyzed in the sequel carries over to a multi-cell scenario where cell-edge terminals experience a similar average SNR to an x number of BSs that are able to receive and decode packets in a distributed way with an x-fold number of receive antennas. Differently from [32, 33], in which no feedback was considered and any packet received in error was discarded, we use a general model in which packets are not discarded. To do so, we consider a finite-user slotted random access system where devices can be either transmitting, thinking (i.e., there is no packet to transmit), decoding, or backlogged (i.e., packet transmission was erroneous and the device is waiting for a new transmission opportunity), and we assume that each device is equipped with a single-packet buffer. Therefore, FF-NDMA is feedback-free in the sense that no broadcast of information related to the number of repetitions and to the state of the forthcoming time slots is needed (as in NDMA or H-NDMA) but, in contrast to [32, 33], ACK feedback acknowledging the correct detection of the devices' packets per CP is assumed.
In this context, the main contributions of this paper are summarized as follows:
we develop a joint PHY-MAC analysis of the FF-NDMA protocol by using the MPR model and, then, characterize the system through a finite Markov chain, for which the system state probabilities and the transition probabilities among them are obtained in closed-form.
we characterize analytically the FF-NDMA protocol in terms of throughput, delay, capture probability (i.e., probability of a successful transmission or, equivalently, reliability of the protocol), energy, and energy efficiency (i.e., efficiency of the protocol, measured through a throughput-energy ratio). Also, we propose two criteria to analyze the stability of finite-user random access with a single-packet buffer.
we investigate the system performance of FF-NDMA as a function of the CP length (R), for different SNR and load conditions, and we compare FF-NDMA with S-ALOHA, classical NDMA [18], and H-NDMA [31]. As we will see, the energy consumption is reduced with FF-NDMA as compared to H-NDMA in certain situations due to the lower control signaling to be decoded. To address the throughput-energy trade-off, we use the energy-efficiency metric and focus on determining the circumstances in which FF-NDMA is more energy efficient than H-NDMA.
Organization: The paper is organized as follows: in Section 2, we assess the differences between FF-NDMA and other NDMA-based protocols (including NDMA and H-NDMA) and then we present the system model and the main features of the FF-NDMA protocol. Section 3 establishes the MPR model and characterizes the system by using a finite Markov chain, for which the system state probabilities (related to the backlog state) are derived. Then, in Section 4, based on the obtained system state probabilities, expressions for throughput, delay, capture probability, energy, and energy efficiency are developed and two stability conditions are set. Section 5 presents the simulation results by using different SNR and different offered loads and system design clues are extracted. Finally, conclusions are drawn in Section 6.
Notation: In this paper, scalars are denoted by italic letters. Boldface lower-case and upper-case letters denote vectors and matrices, respectively. For given real-valued scalars a and b, \(\text {Pr}\left (a {\le } b \right)\), \(\text {Pr}\left (a {=} b \right)\), \(\text {Pr}\left (a {=} b | \mathcal {C} \right)\), ⌈a⌉, and \(\log _{2} (a)\) denote the probability of a being smaller than b, the probability of a being equal to b, the probability of a being equal to b given condition \(\mathcal {C}\), the ceiling function of a, and the base-2 logarithm of a, respectively. For given positive integer scalars a and b, \(\left ({\begin {array}{c} a\\ b \end {array}} \right)\) refers to the binomial coefficient and a! denotes the factorial of a. For a given vector a, \(\mathbf{a}^{T}\) stands for the vector transpose. \(\mathcal {Q}(.)\) refers to the Q-function (i.e., the integral of a Gaussian density). \(\mathbb {R}^{m\times n}\), \(\mathbb {R}_{+}^{m\times n}\), and \(\mathbb {C}^{m\times n}\) denote an m by n dimensional real space, real positive space, and complex space, respectively.
In this section, we first compare FF-NDMA protocol with classical NDMA [18] and H-NDMA [31], and then present the system model for FF-NDMA.
Comparison of NDMA protocols
Figure 2 shows the protocol differences between NDMA and FF-NDMA with R=3. To perform a fair protocol comparison, we assume that each time slot contains a data part for data transmission and a control part for feedback from the BS (which is not always used in FF-NDMA).
In FF-NDMA, transmissions are attempted at the CP start and the number of repetitions is fixed to the CP length (R repetitions) independently of the number of devices that collide. In contrast, in NDMA, transmissions are attempted at the time slot scale and the number of repetitions is dynamically adapted according to the number of collided packets. H-NDMA follows the classical NDMA operation but, at low/medium SNR, the BS might ask for additional repetitions on a HARQ basis to improve packet reception. For these reasons, the throughput of FF-NDMA cannot be as large as that of classical NDMA at high SNR, nor as that of H-NDMA over any SNR range. However, signaling load, implementation complexity, and energy consumption are reduced with FF-NDMA.
Under NDMA, receiving and decoding control signaling from the BS is required at every time slot for different purposes: to know if the subsequent time slot is either busy or free (i.e., reserved for repetitions of collided packets or not), to receive ACK in case a packet was transmitted, and to know the number of repetitions to be performed in case a packet was transmitted but not successfully decoded due to collision [18]. H-NDMA requires extra signaling load from the BS towards devices to request additional repetitions on a HARQ basis [31], once the repetitions of NDMA have been completed. On the other hand, in FF-NDMA, control signaling is only needed to receive ACK at those CPs in which a packet was transmitted. This makes the application of FF-NDMA to MTC systems highly attractive because the energy consumption for control signaling decoding is reduced. The difference in the control signaling to be decoded with FF-NDMA and NDMA is illustrated in Fig. 2 in orange color.
To summarize, the throughput of FF-NDMA is going to be lower than the throughput of H-NDMA, but the energy consumption can be reduced with FF-NDMA. In this line, in Section 5.2, we use the energy efficiency as a suitable metric to address the throughput-energy trade-offs between FF-NDMA and H-NDMA and, hence, determine which protocol is more energy efficient under different circumstances.
In addition, due to the lower control signaling to be decoded with FF-NDMA, its implementation complexity is also significantly reduced as compared to NDMA or H-NDMA, because devices do not need to decode control signaling from the BS at every time slot and can enter a sleep mode. With FF-NDMA, decoding a single control message per CP in which transmission was attempted is required. With NDMA or H-NDMA, decoding control signaling at every time slot while data is in the buffer is needed to know if transmission can be attempted and to get the feedback.
Finally, it is important to emphasize that NDMA and H-NDMA require a self-contained time slot, as shown in Fig. 2, in which the feedback for repetitions is received just after the packet transmission and devices can attempt a repetition in the subsequent time slot. However, conventional repetition processes (e.g., HARQ) might take some time slots between obtaining the feedback and retransmitting again [35]. In this situation, FF-NDMA avoids the additional delay that appears in NDMA and H-NDMA under non-ideal repetition processes, owing to the fact that FF-NDMA does not rely on feedback to perform repetitions. Both the energy savings (due to lower control signaling to decode) and the delay reductions (under non-ideal repetition processes) are evaluated in Section 5.1.
System model for FF-NDMA
Consider a wireless cellular system composed of one BS with N receive antennas and a deployment of K devices that will transmit packets to the BS through a slotted random access network, as shown in Fig. 1. Every device is equipped with M transmit antennas and has a single-packet buffer.
A frame composed of time slots is adopted. Each time slot contains a data part for data transmission from devices to the BS and a control part for feedback from the BS to devices (which is not always used); see Fig. 2. Time slots are grouped into contention periods (CPs) of R time slots. We assume that each device is CP- and slot-synchronous with the BS. Devices transmit whenever they have a packet in their buffer at the beginning of the CP, and packet repetitions are performed during the CP, so that devices transmit their packets R times using the data plane. After the R repetitions, the BS acknowledges reception of the correctly received packets through the control channel, so that devices know if the transmission was successful or not. Note that the maximum number of packets that can be simultaneously decoded at a BS with N antennas and R repetitions is \(\tilde {R} = NR\).
In this scenario, collisions arise and every device can be in one of four different device states: thinking, transmitting, decoding, or backlogged. The device state diagram is shown in Fig. 3. In the thinking state, the device does not have a packet in its buffer and does not participate in any scheduling activity. In this state, a device generates a packet with probability σ. Once a packet is generated, its transmission is attempted at the beginning of the next CP and repeated during R time slots (which corresponds to the transmitting state). After transmission, the device decodes an acknowledgment-of-receipt message from the BS. If the transmission succeeds (i.e., ACK feedback is received), the device returns to the thinking state. Otherwise, the device moves into the backlogged state and retransmits the packet with probability υ. When the packet is finally successfully decoded at the BS, the device moves back to the thinking state and the process restarts.
State diagram of the device operation
We follow the assumption of classical NDMA [18] and H-NDMA [31] that uniform average power from every device is received at the BS. This is possible thanks to the uplink slow power control mechanism [36]. Accordingly, all devices are received at the BS with the same average SNR (γ). The use of uplink power control has the benefit that the scenario is terminal-wise symmetric (in terms of average SNR) and the MPR model can thus be applied, as will be shown in Section 3.
Signal model
To exploit transmit diversity with no channel knowledge at the terminal side, transmission of each device is done through an OSTBC with Q complex symbols that are spread in time and space over T channel uses and M transmit antennas. Therefore, the transmitted signal matrix for the kth device, \(\mathbf {X}_{k}{\in }\mathbb {C}^{M\times T}\), is expressed as [37]
$$ {\mathbf{X}}_{k}=\sum_{q=1}^{Q} \left(\alpha_{k,q}{\mathbf{A}}_{q}+j\beta_{k,q}\mathbf{B}_{q}\right), $$
where αk,q and βk,q refer to the real and imaginary parts of the qth complex symbol at the kth device, respectively, and \(\mathbf {A}_{q},\mathbf {B}_{q} \in \mathbb {R}^{M\times T}\) denote the pair of real-valued code matrices that define the OSTBC [38]. We assume that the transmitted symbols are m-QAM.
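For concreteness, one standard instance of this parameterization (our illustrative choice; the analysis does not single out a specific OSTBC) is the Alamouti code with M=T=2 and Q=2, whose real-valued code matrices are
$$ \mathbf{A}_{1}=\left[{\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}}\right], \ \mathbf{B}_{1}=\left[{\begin{array}{cc} 1 & 0\\ 0 & -1 \end{array}}\right], \ \mathbf{A}_{2}=\left[{\begin{array}{cc} 0 & -1\\ 1 & 0 \end{array}}\right], \ \mathbf{B}_{2}=\left[{\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}}\right], $$
so that (1) yields \(\mathbf{X}_{k}=\left[{\begin{array}{cc} s_{k,1} & -s_{k,2}^{*}\\ s_{k,2} & s_{k,1}^{*} \end{array}}\right]\) with \(s_{k,q}=\alpha_{k,q}+j\beta_{k,q}\).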
Considering a flat fading channel constant over the time slot and that \(\tilde {k}\) devices are transmitting, the received signal at the N antennas of the BS over T channel uses in the rth time slot, \(\mathbf {Y}_{r}{\in }\mathbb {C}^{N\times T}\), is given by [39]
$$ {\mathbf{Y}}_{r}=\sum\limits_{k=1}^{\tilde{k}} \sqrt{\frac{P_{k}}{ML_{k}}}\mathbf{H}_{k,r}\mathbf{X}_{k}+\mathbf{W}_{r}, $$
where \(P_{k}\) stands for the transmitted power of the kth device, \(L_{k}\) refers to the slow propagation losses (including pathloss and shadowing) between the kth device and the BS, \(\mathbf {H}_{k,r}{\in }\mathbb {C}^{N\times M}\) is the Rayleigh flat-fading channel matrix between the antennas of the kth device and the BS during the rth time slot, which contains zero-mean complex Gaussian components, and \(\mathbf {W}_{r}{\in }\mathbb {C}^{N\times T}\) denotes the received noise, composed of zero-mean complex Gaussian components with variance \(\sigma _{\mathrm {w}}^{2}\). The average received SNR is given by \(\gamma {=}\frac {P_{k}}{L_{k}\sigma _{\mathrm {w}}^{2}}\) and is uniform among devices due to the uplink slow power control mechanism (which adjusts the uplink power \(P_{k}\) according to the slow propagation losses \(L_{k}\) at every device).
The BS combines the received signals in a CP of R time slots to perform multi-user detection. We assume that the channel is constant within one time slot but uncorrelated between time slots (fast-fading channel assumption). Accordingly, assuming that \(\tilde {k}\) devices are present, the received signal in a CP can be arranged in vector form by separating the real and imaginary parts as (see [37], Section 7.1):
$$\begin{array}{*{20}l} \mathbf{y}&=\left[{\begin{array}{c} \mathbf{y}_{1}\\ \vdots \\ \mathbf{y}_{R} \end{array}} \right]=\sqrt{\frac{\gamma\sigma_{\mathrm{w}}^{2}}{2M}}\bar{\mathbf{H}}\mathbf{x}+\mathbf{w} \\ & =\sqrt{\frac{\gamma\sigma_{\mathrm{w}}^{2}}{2M}}\left[ {\begin{array}{ccc} \bar{\mathbf{H}}_{1,1} & \dots & \bar{\mathbf{H}}_{\tilde{k},1}\\ \vdots & \ddots & \vdots \\ \bar{\mathbf{H}}_{1,R} & \dots & \bar{\mathbf{H}}_{\tilde{k},R} \end{array}} \right]\left[ {\begin{array}{c} \mathbf{x}_{1}\\ \vdots \\ \mathbf{x}_{\tilde{k}} \end{array}} \right]+\left[ {\begin{array}{c} \mathbf{w}_{1}\\ \vdots \\ \mathbf{w}_{R} \end{array}} \right], \end{array} $$
where \(\mathbf {y}_{r}{\in }\mathbb {R}^{2NT\times 1}\) and \(\mathbf {w}_{r}{\in }\mathbb {R}^{2NT\times 1}\) contain the real and imaginary parts of the received signal and the noise samples in the rth time slot (see (2)), \(\mathbf {x}_{k}=[\alpha _{k,1}\dots \alpha _{k,Q} \ \beta _{k,1}\dots \beta _{k,Q}]^{T}{\in }\mathbb {R}^{2Q\times 1}\) contains the 2Q real and imaginary parts of the complex symbols transmitted by the kth device (see (1)), and \(\bar {\mathbf {H}}_{k,r}{\in }\mathbb {R}^{2NT\times 2Q}\) denotes the equivalent channel matrix for the kth device during the rth time slot. The equivalent channel matrix \(\bar {\mathbf {H}}_{k,r}\) depends on the Rayleigh flat-fading channel matrix (Hk,r in (2)) and the pair of real-valued code matrices (A q ,B q in (1)) (see details in [33], Appendix). According to this, \(\mathbf {y},\mathbf {w}{\in }\mathbb {R}^{2NTR\times 1}\), \(\mathbf {x}{\in }\mathbb {R}^{2Q\tilde {k}\times 1}\), and \(\bar {\mathbf {H}}{\in }\mathbb {R}^{2NTR\times 2Q\tilde {k}}\).
Note that to perform decoding of the contending signals, the receiver (BS) has to get the identity of the contending devices to estimate the channel matrices from them. In this regard, we assume that all devices have orthogonal pilot signals and that channels are perfectly acquired at the receiver side. The effect of a limited number of orthogonal pilot signals, non-orthogonal pilot signals, and imperfectly acquired channels is out of the scope of the paper and is left as interesting future work.
Packet error rate
By using a decorrelating receiver at the BS that combines the repetitions of the devices attempting transmission within a CP of R time slots (see (3)), the multiple-access interference vanishes and the bit error rate (BER) is invariant to the amplitudes of the interfering signals [33]. Therefore, for m-QAM, the BER of device k given that \(\tilde {k}\) devices are transmitting is given by [40, 41]
$$ \text{BER}_{\tilde{k},k}=\frac{4\left(1{-}\frac{1}{\sqrt{m}}\right)}{\log_{2} (m)}\mathcal{Q}\left(\sqrt{\frac{3\chi_{\tilde{k},k}\gamma}{2M(m-1)}}\right), \ \forall \tilde{k}{\le} K, $$
where \(\mathcal {Q}(.)\) refers to the Q-function (the integral of a Gaussian density) and \(\chi _{\tilde {k},k}\) is a chi-square distributed random variable with \(\text {dof}_{\tilde {k}}\) degrees of freedom for any OSTBC with M=T:
$$ \text{dof}_{\tilde{k}}=2({\text{RNM}}-Q\tilde{k} + Q). $$
For 4-QAM (QPSK), the BER expression in (4) reduces to \(\text {BER}_{\tilde {k},k}{=}\mathcal {Q}\left (\sqrt {\frac {\chi _{\tilde {k},k}\gamma }{2M}}\right)\). If m-PSK were considered, the BER expression in (4) should be modified according to [40], and the whole forthcoming analysis would still apply.
In (4), we have assumed a fixed power spent by devices per time slot. This allows us to compare the FF-NDMA protocol with classical NDMA [18] and H-NDMA [31], in which constant power per time slot is used since devices do not know the number of repetitions to be performed until a collision occurs and the BS communicates it.
Note that while R is a value to be fixed by the network, the value of \(\tilde {k}\) is random in each CP and depends on K, σ, and υ. So, the BER in a CP depends not only on the average SNR (γ) but also on the actual number of devices that are transmitting (\(\tilde {k}\)).
As in [33], we assume that a packet is in error whenever the BER in (4) is above a certain threshold ω. Therefore, an upper bound of the packet error rate (PER) for device k given that \(\tilde {k}\) devices are transmitting can be found as \(\text {PER}_{\tilde {k},k}{\le }\text {Pr}(\text {BER}_{\tilde {k},k}{\ge } \omega)\). According to this and (4), we get
$$ \text{PER}_{\tilde{k},k}{\le} \text{Pr}\left(\sqrt{\chi_{\tilde{k},k}}{\le} \mathcal{Q}^{-1}\left(\frac{\omega\log_{2} (m)}{4(1{-}\frac{1}{\sqrt{m}})}\right) \sqrt{\frac{2M(m-1)}{3\gamma}} \right), $$
which can be computed according to the cumulative function of the chi distribution in closed-form as
$$ \text{PER}_{\tilde{k},k}\le 1-F_{\tilde{k}}\left(\mathcal{Q}^{- 1}\left(\frac{\omega\log_{2} (m)}{4(1{-}\frac{1}{\sqrt{m}})}\right) \sqrt{\frac{2M(m{-}1)}{3\gamma}}\right), $$
with
$$ F_{\tilde{k}}(z)=e^{-z^{2}/2}\sum_{l=0}^{I} \frac{(z^{2}/2)^{l}}{l!}, \ I=\frac{\text{dof}_{\tilde{k}}}{2}-1. $$
It is important to recall that, as γ is equal for all devices, distinction among specific devices is not necessary and the following condition is fulfilled (see (7)):
$$ \text{PER}_{\tilde k}=\text{PER}_{\tilde k,k} = \text{PER}_{\tilde k,j}, \forall j,k. $$
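The closed-form PER bound in (5)-(8) is straightforward to evaluate numerically. The following Python sketch is our own illustration (the default parameter values are assumptions, not values prescribed by the protocol); it implements the bound for m-QAM:

```python
import numpy as np
from math import factorial
from scipy.stats import norm   # norm.isf is the inverse Q-function

def per(k_tx, gamma, R=3, N=2, M=2, Q=2, m=4, omega=1e-3):
    """Upper bound on PER given k_tx transmitting devices, Eqs. (5)-(8)."""
    dof = 2 * (R * N * M - Q * k_tx + Q)                   # Eq. (5), with M = T
    z = norm.isf(omega * np.log2(m) / (4 * (1 - 1 / np.sqrt(m)))) \
        * np.sqrt(2 * M * (m - 1) / (3 * gamma))           # threshold in Eq. (7)
    I = dof // 2 - 1
    F = np.exp(-z ** 2 / 2) * sum((z ** 2 / 2) ** l / factorial(l)
                                  for l in range(I + 1))   # Eq. (8)
    return 1.0 - F                                         # Eq. (7)
```

For instance, per(1, 10 ** (10 / 10)) evaluates the single-device bound at 10 dB average SNR.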
Markov model for FF-NDMA
Analytic characterization of the performance and stability of the FF-NDMA protocol with MPR requires the use of a Markov model that incorporates different states of the system and the transition probabilities between them. In this regard, in this section we first set up the MPR model for the FF-NDMA protocol, which will allow us to work with conditional probabilities instead of associating collisions or erroneous receptions with deterministic failures. Then, according to the MPR model, we derive analytic expressions for the system state probabilities of the finite Markov chain that represents the FF-NDMA protocol.
MPR matrix
The MPR model is characterized by an MPR matrix that contains conditional probabilities, see [14]. Under FF-NDMA, the MPR matrix \(\mathbf {C} {\in } \mathbb {R}_{+}^{\tilde {R} \times (\tilde {R}+1)}\) with \(\tilde {R}~{=}~NR\) is given by
$$ \mathbf{C}= \left({\begin{array}{ccccc} C_{1,0}& C_{1,1} & 0 & \dots & 0\\ C_{2,0}& C_{2,1} & C_{2,2} & \dots & 0 \\ \vdots & & & \ddots & \vdots \\ C_{\tilde{R},0} & C_{\tilde{R},1}& C_{\tilde{R},2} & \dots & C_{\tilde{R},\tilde{R}} \end{array}} \right), $$
where \(C_{x,y}\), with \(1 {\le } x {\le } \tilde {R}\) and 0≤y≤x, denotes the probability that, given x transmitting devices, y out of the x transmissions are successful. The number of non-zero rows of the MPR matrix is given by the maximum number of packets that can be simultaneously decoded, i.e., \(\tilde {R}\).
As γ is assumed equal for all K devices, we do not need to distinguish among specific devices, so the element \(C_{x,y}\) of the MPR matrix C contains the product of PERs corresponding to the combinations of x devices for which y transmissions are successful and x−y are not. According to (9), the elements of the MPR matrix in (10) (i.e., \(C_{x,y}\) for \(1 {\le } x {\le } \tilde {R}\), 0≤y≤x) are given by
$$ {C_{x,y}} = \left(\begin{array}{l} x\\ y \end{array} \right)(\text{PER}_{x})^{x - y}{(1 - \text{PER}_{x})^{y}}, $$
and \(C_{x,y}=0\) for y>x. Thus, we can complete the MPR matrix that characterizes the FF-NDMA protocol, C in (10), using (7), (9), and (11).
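Since the PER depends only on the number of transmitting devices, each row of C is a binomial distribution. A minimal sketch (assuming the per() helper from the earlier sketch is in scope):

```python
import numpy as np
from math import comb

def mpr_matrix(gamma, R=3, N=2, **per_kwargs):
    """MPR matrix C of Eq. (10), with entries C[x-1, y] given by Eq. (11)."""
    Rt = N * R                                # max simultaneously decodable packets
    C = np.zeros((Rt, Rt + 1))
    for x in range(1, Rt + 1):
        p = per(x, gamma, R=R, N=N, **per_kwargs)
        for y in range(x + 1):
            C[x - 1, y] = comb(x, y) * p ** (x - y) * (1 - p) ** y
    return C
```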
Markov chain for the system states
Let random variable B(s) denote the number of backlogged devices at the beginning of CP s. B(s) is referred to as the system state, which depends on the previous system state (i.e., B(s−1)) as well as on the number of devices whose state has changed during CP s. Hence, the process can be modeled by a finite Markov chain since B(s)≤K. Figure 4 shows the Markov chain for a simplified scenario with K = 3.
Markov chain of the system state (number of backlogged devices) in FF-NDMA protocol for K=3
The steady-state probability of the system being in state i (π i ) is thus given by
$$ \pi_{i}= \underset{s \to \infty}{\lim} {\text{Pr}}\left(B(s) = i\right), $$
and the transition probability from system state i to j (\(p_{i,j}\), 0≤i,j≤K) is defined as [42]
$$ p_{i,j}= \underset{s \to \infty}{\lim} {\text{Pr}}\left(B(s) = j \vert B(s - 1) = i\right). $$
Notice that, under conventional slotted ALOHA, downward transitions are only possible from system state i to j=i−1, since a single packet can be decoded at a time, and \(p_{0,1}=0\). In contrast, under FF-NDMA, downward transitions are possible from system state i down to \(j = i{-}\tilde {R}\), as long as j≥0. In Fig. 4, all downward transitions have been represented; however, only those from system state i to \(j \ge i {-}\tilde {R}\) are possible, i.e., such that \(p_{i,j}\neq 0\).
Now we focus on obtaining the transition probabilities \(p_{i,j}\) in (13), which depend on the MPR matrix C in (10), the generation probability σ, and the retransmission probability υ. To do so, let us define the following parameters.
Define \(\phi ^{m,n}_{i}\) as the probability that m≥0 backlogged devices transmit and n≥0 new packets are generated by thinking devices given that the system state is i (i.e., there are i devices in the backlog and K−i devices in the thinking state). Since packet generation and packet retransmission are independent events, \(\phi ^{m,n}_{i}\) is obtained as
$$ \phi^{m,n}_{i}= \left({\begin{array}{c} i\\ m \end{array}} \right){\upsilon^{m}}{(1 {-} \upsilon)^{i - m}}\left({\begin{array}{c} {K {-} i}\\ n \end{array}} \right){\sigma^{n}}{(1 {-} \sigma)^{K - i- n}}. $$
Similarly, define \(\varphi ^{m,n}_{i}\) as the probability that more than m backlogged devices transmit and n≥0 new packets are generated by thinking devices given that the system state is i
$$ \varphi^{m,n}_{i}\,=\, \left({1 {-} \!\sum\limits_{l = 0}^{m} {\left({\begin{array}{c} i\\ l \end{array}} \right){\upsilon^{l}}{{(1 {-} \upsilon)}^{i - l}}}} \right)\!\!\left({\begin{array}{c} {K {-} i}\\ n \end{array}} \right)\!{\sigma^{n}}{(1 {-} \sigma)^{K - i - n}}. $$
This way, the transition probabilities \(p_{i,j}\) in (13) for \(i{-}\tilde {R}{\le } j{\le } i{+}\tilde {R}\) can be found by combining the rows of the MPR matrix with the probabilities \(\phi _{i}^{m,n}\) and \(\varphi _{i}^{m,n}\); the resulting compact expression is given in (16) below.
For illustrative purposes, let us explain how, for instance, \(p_{i,i}\) (the probability of remaining in state i) is computed. Taking each row of the MPR matrix, we consider all the possible cases where from 1 to \(\tilde {R}\) packets are transmitted. When 1 packet is transmitted, two events can happen: a backlogged packet is transmitted but it is not successfully decoded \(\left (\phi _{i}^{1,0}C_{1,0}\right)\), or a new packet is generated and it is successfully decoded \(\left (\phi _{i}^{0,1}C_{1,1}\right)\). In both situations, the state of the backlog does not change. The cases in which \(2, 3, \dots, \tilde {R}\) packets are transmitted follow by extrapolating the same reasoning. In this particular case, where the system state i remains unchanged, the probability of not transmitting any packet \(\left (\text {i.e.},~ \phi _{i}^{0,0}\right)\) as well as the case where more than \(\tilde {R}\) backlogged packets are transmitted \(\left (\text {i.e.},~ \varphi _{i}^{\tilde {R},0}\right)\) have to be considered as well.
Let us recall that \(\phi _{i}^{m,n}\) and \(\varphi _{i}^{m,n}\) are given by (14) and (15), respectively, for m≥0 and n≥0, but take value 0 otherwise.
The remaining transition probabilities \(p_{i,j}\) for \(j{<}i {-} \tilde {R}\) and \(j {>} i {+} \tilde {R}\) are also included in (16) and are obtained as follows: downward transitions from system state i towards states \(j {<} i {-} \tilde {R}\) are impossible because at most \(\tilde {R}\) packets can be successfully decoded, and therefore \(p_{i,j}=0\) for \(j {<} i {-} \tilde {R}\). Upward transitions from system state i towards states \(j {>} i {+} \tilde {R}\) happen when j−i thinking devices have generated packets and collided (the activity of the backlogged devices is immaterial in this case because they do not alter the backlog state, so the collision is generated by thinking devices alone); the corresponding probability, given by the last case in (16), considers all combinations in which, among the K−i devices that were thinking, j−i generated packets and K−j did not.
To sum up, transition probabilities are given by
$$ p_{i,j}= \left\{ \begin{array}{ll} 0, & j <i{-}{\tilde R}, \\ \sum\limits_{x=1}^{\tilde{R}} \sum\limits_{y=0}^x C_{x,y} \phi_{i}^{x-(y+j-i),y+j-i} + &\\ \varphi_{i}^{\tilde{R}+i-j,j-i}+ \phi_{i}^{i-j,j-i}, & i{-}\tilde{R}\le j\le i{+}\tilde{R}, \\ \left(\!\begin{array}{l} K - i\\ j - i \end{array}\!\right){\sigma^{{j - i}}}{(1 - \sigma)^{K - j}}, & j >i{+}{\tilde R}. \end{array} \right. $$
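A direct implementation of (14)-(16) is sketched below; this is our own illustration (function and variable names are ours), assuming SciPy is available:

```python
import numpy as np
from scipy.stats import binom

def phi(i, m, n, K, sigma, ups):
    """Eq. (14): m backlogged retransmissions and n new packets in state i."""
    if m < 0 or n < 0 or m > i or n > K - i:
        return 0.0
    return binom.pmf(m, i, ups) * binom.pmf(n, K - i, sigma)

def varphi(i, m, n, K, sigma, ups):
    """Eq. (15): more than m backlogged retransmissions and n new packets."""
    if m < 0 or n < 0 or n > K - i:
        return 0.0
    tail = 1.0 - sum(binom.pmf(l, i, ups) for l in range(min(m, i) + 1))
    return tail * binom.pmf(n, K - i, sigma)

def transition_matrix(K, sigma, ups, C, Rt):
    """Transition matrix P with entries p_{i,j} of Eq. (16)."""
    P = np.zeros((K + 1, K + 1))
    for i in range(K + 1):
        for j in range(K + 1):
            if j < i - Rt:
                continue                                  # at most Rt departures
            if j > i + Rt:
                P[i, j] = binom.pmf(j - i, K - i, sigma)  # colliding new arrivals
                continue
            s = sum(C[x - 1, y] * phi(i, x - (y + j - i), y + j - i, K, sigma, ups)
                    for x in range(1, Rt + 1) for y in range(x + 1))
            P[i, j] = (s + varphi(i, Rt + i - j, j - i, K, sigma, ups)
                       + phi(i, i - j, j - i, K, sigma, ups))
    return P
```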
Once we have all the transition probabilities \(p_{i,j}\) from (16), we can focus on obtaining the steady-state probabilities π i in (12). By arranging all the transition probabilities \(p_{i,j}\) in a matrix \(\mathbf {P}{\in } \mathbb {R}_{+}^{(K {+} 1)\times (K {+} 1)}\) (i as row index and j as column index) and all the steady-state probabilities π i in a vector \(\boldsymbol {\pi } {\in } \mathbb {R}_{+}^{(K{+}1)\times 1}\), the steady-state vector must satisfy [43]: \(\boldsymbol{\pi}=\mathbf{P}^{T}\boldsymbol{\pi}\) and \(\sum _{i = 0}^{K} \pi _{i} = 1\). Therefore, π can be obtained as the normalized eigenvector of \(\mathbf{P}^{T}\) associated with the unit eigenvalue.
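Numerically, this is a standard eigendecomposition; a minimal sketch, reusing the mpr_matrix() and transition_matrix() helpers above (example parameter values are illustrative assumptions):

```python
import numpy as np

def steady_state(P):
    """Normalized eigenvector of P^T for the unit eigenvalue: pi = P^T pi."""
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

# Example: K = 20 devices, R = 3, N = 2 (so Rt = 6), 10 dB SNR
# C = mpr_matrix(gamma=10.0)
# P = transition_matrix(K=20, sigma=0.05, ups=0.3, C=C, Rt=6)
# pi = steady_state(P)
```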
Performance analysis of FF-NDMA
In this section, we derive throughput, delay, capture probability, energy, and energy efficiency for FF-NDMA by using the steady-state probabilities obtained in Section 3.2. Then, two stability criteria are proposed.
The throughput (S) is defined as the average number of correctly decoded packets per time slot. It is given by the product of the steady-state probabilities and the associated throughput on each state (S i ), i.e.,
$$ \textit{S} = \frac{1}{R}\sum_{i = 0}^{K} S_{i} \pi_{i} \ \text{[packets/slot]}, $$
where the \(\frac {1}{R}\) penalty arises because devices repeat the same packet R times within the CP. \(S_{i}\) in (17) denotes the throughput obtained in system state i and considers the different cases where successful decoding takes place (i.e., the elements of the MPR matrix \(C_{x,y}\) such that \(1 {\le } x {\le } \tilde {R}\) and 1≤y≤x, each with its associated throughput of y successfully decoded packets):
$$ S_{i}= \sum_{x = 1}^{\tilde{R}} \sum_{y = 1}^{x} yC_{x,y} \sum_{m + n = x} \phi_{i}^{m,n}, $$
where \(\sum _{m + n = x} \phi _{i}^{m,n}\) denotes the probability that exactly x packets are transmitted (which can come from backlogged and/or thinking devices). For example, with \(\tilde {R} {=} 2\), the throughput associated with each system state (\(S_{i}\) in (18), \(i{=}0,\dots,K\)) is:
$$ \begin{aligned} S_{i} &= C_{1,1} \left(\phi_{i}^{0,1}{+}\phi_{i}^{1,0}\right)+ C_{2,1} \left(\phi_{i}^{1,1} +\phi_{i}^{2,0}+\phi_{i}^{0,2}\right)\\ &\quad + 2C_{2,2} \left(\phi_{i}^{1,1}{+}\phi_{i}^{2,0}{+}\phi_{i}^{0,2}\right). \end{aligned} $$
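As a numerical companion to (17)-(18), a minimal sketch reusing the phi() helper and the MPR matrix C defined in the earlier sketches:

```python
def throughput(pi, C, K, sigma, ups, R, Rt):
    """Average throughput S of Eq. (17), in packets/slot."""
    S = 0.0
    for i in range(K + 1):
        Si = sum(y * C[x - 1, y]
                 * sum(phi(i, mm, x - mm, K, sigma, ups) for mm in range(x + 1))
                 for x in range(1, Rt + 1) for y in range(1, x + 1))   # Eq. (18)
        S += Si * pi[i]
    return S / R
```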
The mean delay (D) is the average number of time slots required for a successful packet transmission, which includes the mean backlog delay, the duration of packet transmission, and the waiting time until a transmission opportunity (i.e., CP start).
To derive D, we first compute the mean backlog delay, i.e., the mean time a device spends in the backlog [42], as follows: let \(\bar {B}\) denote the mean number of devices in the backlog that is simply given by
$$ \bar{B}= \sum\limits_{i = 0}^{K} i \pi_{i}. $$
If devices join the backlog at a rate b, by using Little's formula [44], the mean time spent in the backlog is \(\bar {B}/b\).
A fraction (S−b)/S of the packets are never backlogged and thus have a mean delay of (3R−1)/2, which comes from the duration of a packet transmission (i.e., R time slots) plus the mean waiting time until the CP starts (i.e., (R−1)/2). In contrast, the remaining fraction b/S of the packets will experience the mean backlog delay (i.e., \(\bar {B}/b\)) plus the (3R−1)/2 delay.
Therefore, the mean delay D (measured in number of time slots) is given by the weighted sum of delays associated with packets that are never backlogged and packets that are backlogged:
$$ \begin{aligned} \textit{D}&=\left(\frac{{\textit{S} {-} b}}{S}\right)\left(\frac{{3R {-} 1}}{2}\right)+ \frac{b}{S}\left({\frac{\bar{B}}{b} + \frac{{3R {-} 1}}{2}} \right) \\ &= \frac{{(3R {-} 1)}}{2} + \frac{\bar{B}}{S} \ \text{[slots]}. \end{aligned} $$
Note that although b has been defined to derive D, the final expression of D in (21) does not depend on it.
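A minimal numerical counterpart of (20)-(21), given the steady-state vector and throughput from the sketches above:

```python
import numpy as np

def mean_delay(pi, S, R):
    """Mean delay D of Eq. (21), in time slots."""
    B_bar = float(np.dot(np.arange(len(pi)), pi))   # Eq. (20)
    return (3 * R - 1) / 2 + B_bar / S
```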
Capture probability
The capture probability (Pcap) is the probability of a successful packet transmission given that a packet has been transmitted. It measures the reliability of the transmission scheme [4]. Pcap can be computed by considering the weighted average for all system states of the probability that the transmission is successful given that a packet is transmitted and the system state is i\(\left (\text {i.e.},~ P^{\text {cap}}_{i}\right)\):
$$ \textit{P}^{{\text{cap}}}= \sum\limits_{i = 0}^{K} \pi_{i} P^{\text{cap}}_{i}. $$
\(P^{\text {cap}}_{i}\) is obtained by considering all the cases where a successful transmission takes place (i.e., \(\tilde {k}{=}1,\dots,\tilde {R}\)). In each case, it is given by the product of the probability of a successful decoding given that \(\tilde {k}\) devices transmit (i.e., \((1-{\text {PER}}_{\tilde {k}})\)) times the probability that the other \(\tilde {k} {-} 1\) devices transmit \(\left (\text {i.e.},~ \sum _{m + n = \tilde {k} - 1} \phi _{i}^{m,n}\right)\). Thus, it results in
$$ P^{\text{cap}}_{i}= \sum_{\tilde{k}=1}^{\tilde{R}}(1-{\text{PER}}_{\tilde{k}}) \sum_{m + n = \tilde{k} - 1} \phi_{i}^{m,n}, $$
where \({\text {PER}}_{\tilde {k}}\) is shown in (9).
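Equations (22)-(23) can be evaluated with the per() and phi() helpers from the earlier sketches; a minimal sketch:

```python
def capture_probability(pi, K, sigma, ups, gamma, R, N, **per_kwargs):
    """Capture probability P^cap of Eqs. (22)-(23)."""
    Rt = N * R
    Pcap = 0.0
    for i in range(K + 1):
        Pi = sum((1 - per(k, gamma, R=R, N=N, **per_kwargs))
                 * sum(phi(i, mm, k - 1 - mm, K, sigma, ups) for mm in range(k))
                 for k in range(1, Rt + 1))          # Eq. (23)
        Pcap += pi[i] * Pi                           # Eq. (22)
    return Pcap
```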
To compute the mean energy consumption (E) for a successful packet transmission, we consider that each device can be in one of four device states, each associated with a different power consumption level (P0, P1, P2, and P3, measured in Watts); see Fig. 3:
thinking (or idle) state (P0): there is no data to transmit,
transmitting state (P1): the device is transmitting,
decoding state (P2): the device is listening to the BS signaling and decoding the acknowledgement, or
waiting state (P3): there is a packet to transmit but there is no transmission opportunity.
For FF-NDMA, the mean number of time slots that a device spends on every device state (T0, T1, T2, T3) is
$$ \begin{aligned} &T_{0} = 1/\sigma, \quad T_{1} = \tau R N_{\text{tx}}, \quad T_{2} = (1-\tau)N_{\text{tx}}, \\ &T_{3} = D - T_{1} - T_{2}. \end{aligned} $$
The number of time slots in the thinking state (T0) is given by the inverse of the packet generation probability (σ). The number of time slots in the transmitting state (T1) depends on the number of transmissions required for a successful transmission (denoted by Ntx and given in Eq. (25) below), the fact that within a CP the packet is repeated R times, and the fraction of a time slot that is devoted to data transmission (τ). For the decoding state, T2 depends on Ntx and the fraction of a time slot that is reserved to receive feedback from the BS (1−τ). Recall that only one decoding per CP in which a packet was transmitted is needed in FF-NDMA. Finally, the number of time slots in the waiting state (T3) is determined by the average delay D in (21) minus the mean transmitting and decoding times; hence, it includes the waiting time in the backlog and the waiting time for the CP to start.
The number of transmissions required for a successful transmission (Ntx in (24)) is given by the inverse of the capture probability Pcap shown in (22):
$$ N_{\text{tx}} = \sum_{n = 1}^{\infty} n P^{\text{cap}} \left(1 - P^{\text{cap}}\right)^{n - 1} = \frac{1}{P^{\text{cap}}}. $$
Note that the number of transmissions in (25) does not consider the number of repetitions within a CP, it is rather given by the number of times the device accesses the channel.
Therefore, the mean consumed energy E (measured in Watts × slot) is obtained by summing, over the four device states, the product of the time spent in each state and the corresponding power level:
$$ E = T_{0}P_{0} + T_{1}P_{1} + T_{2}P_{2} + T_{3}P_{3} \ \text{[Watts}\times\text{slots]}. $$
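Putting (24)–(26) together, a minimal sketch of the energy computation follows; D and Pcap are placeholders standing in for the values obtained from (21) and (22), and the power levels follow the values used later in the evaluation.

```python
# Sketch of the FF-NDMA energy model in (24)-(26). D and Pcap are placeholders;
# power levels follow the values used in the evaluation section.
sigma, tau, R = 0.05, 0.8, 3
P0, P1, P2, P3 = 0.01e-3, 200e-3, 150e-3, 10e-3   # Watts
D, Pcap = 12.0, 0.8                                # placeholder metrics

N_tx = 1 / Pcap                 # mean channel accesses per success, Eq. (25)
T0 = 1 / sigma                  # thinking time
T1 = tau * R * N_tx             # transmitting time (R repetitions per CP)
T2 = (1 - tau) * N_tx           # decoding time (one decoding per accessed CP)
T3 = D - T1 - T2                # waiting time (backlog + wait for CP start)

E = T0 * P0 + T1 * P1 + T2 * P2 + T3 * P3   # mean energy [Watts x slots]
print(E)
```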
The energy efficiency (EE) is a benefit-cost ratio that measures the efficiency of a protocol [45]. It is defined as the amount of data (benefit) that can be reliably transmitted per Joule of consumed energy (cost). Thus, it is measured in bits/Joule or, equivalently, in packets/slot/Watt (according to the definitions in previous sections). The energy efficiency is a highly relevant metric in low-powered and finite battery lifetime MTC devices [4].
Based on the model presented in Section 4.4, the mean power consumption for a successful packet transmission (measured in Watts) is given by
$$ P = \frac{T_{0}P_{0} + T_{1}P_{1} + T_{2}P_{2} + T_{3}P_{3}}{T_{0} + T_{1} + T_{2} + T_{3}} = \frac{E}{T_{0} + D} \ \text{[Watts]}. $$
Accordingly, EE (in packets/slot/Watt) is given by the ratio between the throughput S in (17) and the mean consumed power P in (27):
$$ \text{EE} = \frac{S}{P} = \frac{S}{E}(T_{0} + D) \ \text{[packets/slot/Watt]}. $$
Note that the energy efficiency EE captures the trade-offs in throughput and energy consumption that might arise with different NDMA-based protocols.
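A short sketch of (27)–(28) follows; E, T0, D, and S are placeholders for the quantities derived in (26), (24), (21), and (17).

```python
# Sketch of mean power (27) and energy efficiency (28) from placeholder inputs.
E, T0, D, S = 2.5, 20.0, 12.0, 0.4   # [Watts x slots], [slots], [slots], [packets/slot]
P = E / (T0 + D)                      # mean consumed power [Watts]
EE = S / P                            # energy efficiency [packets/slot/Watt]
print(P, EE)
```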
Stability criteria
Stability analysis is usually performed for infinite-user random access (see [14]) or for finite-user buffered random access (see [46] and references therein), where devices are equipped with a buffer of infinite size. In the former case, the system is unstable when the number of devices in the backlog grows to infinity, while, in the latter, the system is unstable when the queue length grows to infinity.
For finite-user random access with a single-packet buffer, stability has not been defined. However, it can be addressed if a sensible definition related to undesired system states is adopted. In this sense, we set two stability criteria for finite-user random access with a single-packet buffer.
Stability based on the probability of being in the last system state. The system is said to be stable if the probability of being in system state K is below a certain threshold, i.e., if
$$ \pi_{K}\le \alpha, $$
where 0<α<1.
Stability based on the mean number of devices that are in the backlog. The system is said to be stable if the mean number of devices in the backlog is below a certain threshold, i.e., if
$$ \bar{B}\le \beta, $$
with 0<β<K.
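Both criteria reduce to simple checks on the stationary distribution, as the sketch below shows; the uniform π is a placeholder.

```python
# Sketch of the stability checks (29)-(30) for a given stationary distribution.
K, alpha, beta = 30, 0.1, 5.0
pi = [1 / (K + 1)] * (K + 1)                  # placeholder stationary distribution

B_bar = sum(i * pi[i] for i in range(K + 1))  # mean backlog size
stable_by_state = pi[K] <= alpha              # criterion (29)
stable_by_backlog = B_bar <= beta             # criterion (30)
print(stable_by_state, stable_by_backlog)
```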
Results and system design clues
In this section, we evaluate the FF-NDMA protocol in terms of throughput, delay, capture probability, energy, and energy efficiency so as to devise the most suitable CP length (R) as a function of the offered load (G=σK) and the SNR (γ). K=30 devices are considered. A symmetric scenario with an equal average SNR (i.e., γ) for all devices is used. γ is determined by the devices in the worst propagation conditions, and it is varied in the simulations to emulate different propagation conditions. The retransmission probability is set equal to the generation probability, i.e., υ=σ.
The 2×2 MIMO with Alamouti OSTBC is considered (i.e., two antennas at devices and two antennas at BS, M=N=T=Q=2). The transmitted symbols are QPSK (i.e., m=4 in (4)). The BER threshold is equal to ω=0.001. For the power consumption, P0=0.01 mW, P1=200 mW (i.e., 23 dBm as transmit power at devices), P2=150 mW, P3=10 mW, and τ=0.8 are used (according to [47] and [48]).
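For reference, these evaluation settings can be collected in a small configuration structure; the field names below are our own shorthand, not notation from the paper.

```python
# Evaluation settings used in this section, gathered for reproducibility sketches.
EVAL = dict(
    K=30,                 # number of devices
    M=2, N=2, T=2, Q=2,   # 2x2 MIMO with Alamouti OSTBC
    m=4,                  # QPSK alphabet size
    omega=1e-3,           # BER threshold
    tau=0.8,              # fraction of a slot devoted to data
    P0=0.01e-3, P1=200e-3, P2=150e-3, P3=10e-3,  # power levels [Watts]
)
print(EVAL)
```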
The performance of FF-NDMA is compared to classical slotted ALOHA (S-ALOHA), classical NDMA [18], and H-NDMA [31], all with MIMO configurations. S-ALOHA corresponds to the case of R=1. To emulate NDMA and H-NDMA under the same conditions, the proposed framework in this work can be applied with some slight but important modifications. For H-NDMA, we denote as Rh the number of additional repetitions that the BS may request on a HARQ basis. As compared to NDMA, this reduces the PER but might increase the energy consumed in devices for data decoding. For simulations, we use up to Rh=4. Therefore, the modifications required to emulate NDMA and H-NDMA are
The degrees of freedom \(\text {dof}_{\tilde {k}}\) in (5) are equal to the following:
$$ \begin{array}{ll} \text{NDMA}: & \text{dof}_{\tilde{k}} = 2\left(\left\lceil\frac{\tilde{k}}{N}\right\rceil NM - Q\tilde{k} + Q\right), \\[2pt] \text{H-NDMA}: & \text{dof}_{\tilde{k}} = 2\left(\left(\left\lceil\frac{\tilde{k}}{N}\right\rceil + R_{\mathrm{h}}\right) NM - Q\tilde{k} + Q\right), \end{array} $$
since the number of repetitions is adjusted at each collision according to the number of collided packets \(\tilde {k}\). H-NDMA might have more degrees of freedom than NDMA, and thus a lower PER (see (7)), which is beneficial at low SNR.
NDMA and H-NDMA protocols can (ideally) decode \(\tilde{R} = \tilde{k}\) packets (i.e., all collided packets) by setting \(\left\lceil\frac{\tilde{k}}{N}\right\rceil - 1\) repetitions in NDMA and up to \(\left\lceil\frac{\tilde{k}}{N}\right\rceil - 1 + R_{\mathrm{h}}\) repetitions in H-NDMA. Therefore, the MPR matrix C in (10) has a size of K×(K+1), since collisions of up to K devices can be resolved in both protocols.
The throughput S for NDMA and H-NDMA can be computed as follows:
$$ S = \frac{1}{l}\sum_{i = 0}^{K} S_{i} \pi_{i}, $$
where l is the average number of repetitions:
$$ l= \sum_{i = 0}^{K} l_{i} \pi_{i}, \quad l_{i}=\sum_{m = 0}^{i} \sum_{n = 0}^{K - i} (m + n)\phi^{m,n}_{i}. $$
The mean delays D are:
$$ \begin{array}{ll} \text{NDMA}: & D = \frac{3l-1}{2} + \frac{\bar{B}}{S}, \\[2pt] \text{H-NDMA}: & D = \frac{3(l+R_{\mathrm{h}})-1}{2} + \frac{\bar{B}}{S}. \end{array} $$
The mean energy consumption E in (26) also depends on the average number of repetitions in (33), as l affects the mean transmitting, decoding, and waiting times:
$$ T_{1} = \tau l N_{\text{tx}}, \quad T_{2} = (1-\tau) D, \quad T_{3} = D - T_{1} - T_{2}. $$
Note that, in NDMA and H-NDMA, decoding of control signaling from the BS at devices is required in every time slot, as shown in Fig. 2. This is reflected in T2, see (35). Conversely, with FF-NDMA, decoding is needed per CP (i.e., 1 decoding every R time slots) to receive ACK only at those CPs in which a packet has been transmitted (see T2 in (24)).
Finally, in order to take into account practical implementation issues, we define the parameter dretx as the delay (in number of time slots) between reception of feedback and the next repetition in NDMA and H-NDMA protocols (see explanation in Section 2.1). In the ideal case, dretx=0. Otherwise, the delay in (34) is modified as follows: Dretx=D+(l−1)dretx, i.e., each repetition has an associated delay of dretx time slots. In this case, the mean waiting time is given by T3=Dretx−T1−T2, so the mean waiting time T3 increases as dretx increases. For simulations, we consider the ideal case with dretx=0 and the case of dretx=4 (which does affect the delay and energy metrics of NDMA and H-NDMA).
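To make these emulation modifications concrete, the sketch below implements the degrees-of-freedom expressions in (31) and the delay penalty of non-ideal feedback; parameters follow the 2×2 Alamouti setup of this section, while D and l are placeholders standing in for the values obtained from (34) and (33).

```python
# Sketch of the NDMA/H-NDMA emulation modifications: degrees of freedom in (31)
# and the delay penalty of non-ideal feedback (d_retx slots per repetition).
from math import ceil

M, N, Q, Rh = 2, 2, 2, 4      # 2x2 Alamouti setup, up to Rh HARQ repetitions

def dof_ndma(k):
    """Degrees of freedom with k collided packets, NDMA."""
    return 2 * (ceil(k / N) * N * M - Q * k + Q)

def dof_hndma(k):
    """Degrees of freedom with k collided packets, H-NDMA."""
    return 2 * ((ceil(k / N) + Rh) * N * M - Q * k + Q)

def delay_with_feedback(D, l, d_retx):
    """Mean delay when each of the l-1 repetitions costs d_retx extra slots."""
    return D + (l - 1) * d_retx

print([dof_ndma(k) for k in range(1, 5)])
print([dof_hndma(k) for k in range(1, 5)])
print(delay_with_feedback(D=12.0, l=3.2, d_retx=4))   # placeholder D and l
```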
In this section, we evaluate the FF-NDMA protocol in terms of throughput (S), delay (D), capture probability (Pcap), and energy (E), by following the expressions in (17), (21), (22), and (26), respectively, as a function of the offered load (G=σK) for an average SNR (γ) of 10 and 0 dB under different R values (indicated in the legends). Figures 5 and 6 show the performance results for γ=10 dB and γ=0 dB, respectively.
Performance vs. offered load (G = σK) for γ = 10 dB. a Throughput, b delay, c capture probability, d energy
Performance vs. offered load (G = σK) for γ = 0 dB. a Throughput, b delay, c capture probability, d energy
For γ=10 dB (see Fig. 5), ideal NDMA and ideal H-NDMA with dretx=0 provide the best performance (in terms of throughput, energy, and delay) because they are able to adapt the number of repetitions dynamically to the number of collided packets. At medium/high SNR, H-NDMA is equivalent to NDMA, since no additional repetitions on a HARQ basis are required. Differently, for γ=0 dB (see Fig. 6), the performance of ideal NDMA collapses because the system is limited by erroneous detections rather than by the number of collided packets. This situation is resolved with ideal H-NDMA, which provides the largest performance gains (in terms of throughput, energy, and delay) at low SNR when dretx=0, since it can cope with erroneous packet receptions through additional repetitions, improving the reliability as well.
At the low SNR regime, FF-NDMA outperforms the ideal NDMA protocol (dretx=0) in terms of throughput, delay, and energy without invoking HARQ processes (which are needed for H-NDMA and may involve larger delays if a delay between the packet transmission, the feedback, and the repetition is considered, i.e., dretx>0).
FF-NDMA performance can get close to ideal H-NDMA over different SNR ranges when a suitable fixed CP length is chosen according to the offered load of the system. A maximum throughput level is achieved at low loads, but as the load increases the throughput diminishes (see Figs. 5a and 6a). This is because no backoff policy is considered (υ=σ), and the system gets saturated at high offered loads. Using a larger R increases the maximum throughput, and its decay with the load starts later (i.e., the stability region is enlarged). Hence, if σ takes high values, it might be wiser to use a larger R. Also, it is important to note that the load point at which the network should switch towards a larger R is reduced in low SNR regions and, hence, the use of a larger CP length starts to be relevant at lower loads (see Fig. 6a). The delay and the energy grow rapidly to infinity as the load increases (see Figs. 5b–d and 6b–d). By using a larger R, the delay and energy are reduced and maintained over a wide range of offered loads. The system reliability is larger with large R (see Figs. 5c and 6c), since a larger CP length allows improving the PER (see (5)).
In FF-NDMA, the optimal R for maximum throughput, minimum delay, or minimum energy, depends on the offered load. As the load increases, higher R can provide larger throughput gains, delay reductions, and energy consumption savings, due to the effective capability for packet collision resolution of FF-NDMA.
In FF-NDMA, the optimal R for maximum reliability is provided with a large R, since the system can operate in a wide range of offered loads while maintaining the capture probability at its maximal value.
It is important to note that, for γ=10 dB, the capture probability is improved with FF-NDMA as compared to S-ALOHA, NDMA, and H-NDMA (see Fig. 5c). This is due to the fact that the additional repetitions provided by a fixed CP length allow reducing the PER (see (7)) and, hence, enhance the system reliability, i.e., the probability of a successful transmission, as compared to NDMA and H-NDMA, in which the repetitions are set mainly to resolve collisions. Differently, for γ=0 dB, the reliability with H-NDMA is also high because the HARQ mechanism starts to play a key role for successful packet reception (see Fig. 6c).
Regarding the energy consumption, at medium/high SNR, FF-NDMA provides similar energy consumption levels as compared to ideal H-NDMA with dretx=0 (see Fig. 5d). At low SNR (see Fig. 6d), the energy consumption can even be reduced with FF-NDMA as compared to ideal H-NDMA due to the energy savings provided by a lower amount of control signaling to be decoded.
The performance of FF-NDMA is not far from that of ideal H-NDMA (dretx=0), and it gets closer as the SNR is reduced, while much less signaling overhead and implementation complexity are required. The lower amount of control signaling to be decoded is reflected in a reduced energy consumption of FF-NDMA as compared to ideal H-NDMA either at low SNRs (see Fig. 6d) or at high loads (see Fig. 5d).
When non-ideal feedback for the repetition process is considered (e.g., dretx=4), the FF-NDMA protocol obtains a significantly reduced delay and lower energy consumption as compared to the NDMA and H-NDMA schemes. The non-ideal feedback for repetitions has a detrimental impact on the delay of NDMA and H-NDMA protocols at any SNR range, as shown in Figs. 5b and 6b for dretx=4. Instead, FF-NDMA is not affected by the non-ideal feedback repetition process. Thus, delay reductions of up to 70% are obtained with FF-NDMA as compared to H-NDMA for dretx=4. This is because devices have to wait for repetitions with H-NDMA, while in FF-NDMA the repetition procedure is fixed to the CP length. The non-ideal feedback process also leads to a reduced energy consumption with FF-NDMA as compared to NDMA and H-NDMA because devices spend less time in the waiting state to successfully complete a packet transmission. The energy is reduced with FF-NDMA in two situations: (i) at low SNRs (see Fig. 6d, for which energy savings of 5–20% are obtained) and (ii) when the load increases at medium/high SNRs (see Fig. 5d, for which energy savings up to 10% are reported). In both cases, the additional control signaling to be decoded with H-NDMA becomes relevant in terms of energy consumption because devices are active for more time to successfully transmit a packet.
Non-ideal feedback repetition processes (i.e., dretx>0) have a detrimental effect on NDMA and H-NDMA. Under these conditions, FF-NDMA provides significant delay reductions for any SNR range and load condition. Also, energy savings are reported at low SNR and at medium/high SNR under high load conditions.
To summarize, Table 1 includes the SNR regions in which FF-NDMA protocol outperforms the benchmarked protocols (ideal NDMA, non-ideal NDMA, ideal H-NDMA, and non-ideal H-NDMA) in terms of throughput S, delay D, energy E, and capture probability Pcap, separately.
Table 1 SNR Regions where FF-NDMA outperforms NDMA and H-NDMA protocols
In this section, we evaluate the energy efficiency (EE) of FF-NDMA in (28) and of ideal H-NDMA (dretx=0) as a function of the average SNR (γ). Let us recall that, at high SNR, EEH-NDMA=EENDMA since both approaches are equivalent. However, at low SNR, EEH-NDMA is higher than EENDMA. For FF-NDMA, EEFF-NDMA is computed by adopting the best R for each load and SNR condition.
Let us note that in this section we use an ideal scenario for H-NDMA (i.e., dretx=0), so all the energy efficiency gains of FF-NDMA over ideal H-NDMA that are reported stem from the lower amount of control signaling to be decoded with FF-NDMA. Note also that the EE is a useful metric, as it captures the throughput/energy trade-offs observed in the previous section in a single figure of merit.
As shown in Section 4.5, the energy efficiency depends on the power consumption levels associated with the different device states (P0, P1, P2, P3). So, to illustrate the effect of the additional control signaling to be decoded with H-NDMA, we use different decoding power values P2={150,200,250} mW while keeping the transmit power fixed at P1=200 mW. Figure 7 displays the energy efficiency for two offered load conditions G={10,18} packets/slot and different decoding power values (P2, indicated in the legends). As expected, varying the P2 value has a higher impact on H-NDMA than on FF-NDMA, since FF-NDMA only needs to decode ACK feedback while H-NDMA needs to decode ACK feedback, feedback associated with repetitions, and feedback related to the state of the forthcoming time slots. A larger P2 value increases the power consumption and, hence, reduces the EE.
Energy efficiency (packets/slot/Watt) of FF-NDMA and ideal H-NDMA (dretx=0) vs. SNR γ (dB) for two different offered loads (G) and different power decoding values (P2). P1 = 200 mW. a G=σK=10 packets/slot, b G=σK=18 packets/slot
Table 2 summarizes the SNR regions in which the FF-NDMA scheme is more energy efficient than the ideal H-NDMA protocol for the different G and P2 values displayed in Fig. 7. By considering the interval [−5,5] dB as low SNR, [5,15] dB as medium SNR, and [15,+∞) dB as high SNR (recall we are using QPSK symbols), we can conclude the following from Fig. 7 and Table 2. FF-NDMA is more energy efficient than ideal H-NDMA at low SNR for any load condition. At medium SNR, the FF-NDMA scheme obtains a higher energy efficiency as compared to ideal H-NDMA when the load is high and the decoding power is similar to or larger than the transmitting power (i.e., P2≥P1, see Fig. 7b). At high SNR, ideal H-NDMA is more energy efficient for any load condition because its throughput is significantly better. Therefore, we infer that energy efficiency gains of FF-NDMA w.r.t. ideal H-NDMA are obtained in two situations:
at low SNR and
at medium SNR when the load increases and P2 is similar to or larger than P1.
Table 2 SNR regions where FF-NDMA is more energy efficient than H-NDMA for different loads (G) and power decoding values (P2)
In both situations, the additional control signaling to be decoded with H-NDMA (and NDMA) significantly increases the power consumption as compared to FF-NDMA because, as more repetitions are required for a successful packet transmission, the difference in power consumption between the two protocols becomes evident. As a result, although the throughput of FF-NDMA is always lower than that of H-NDMA, the energy efficiency can be boosted with FF-NDMA in certain situations (see (28)).
The energy efficiency is improved with FF-NDMA as compared to H-NDMA in the situations that more repetitions are needed to complete a packet transmission (i.e., low SNR or medium SNR with high load) due to the lower control signaling to be decoded with FF-NDMA.
In this section, we evaluate the stability of the FF-NDMA protocol by following the criteria proposed in Section 4.6, that is, stability based on the probability of system state K (πK) and stability based on the mean number of devices in the backlog (\(\bar{B}\)). An average SNR (γ) of 10 dB is used, with dretx=0, P1=200 mW, and P2=150 mW.
Figure 8 displays πK as a function of the offered load (G=σK). It can be observed that increasing the CP length (i.e., R) lowers πK and thus expands the stability region. For example, if stability requires πK≤0.1, then larger values of σ can still meet the condition by using several time slots per CP.
Probability of being in system state K (πK) vs. offered load (G) for γ=10 dB
Figure 9 depicts \(\bar{B}\) versus the offered load (G=σK). As in Fig. 8, a larger R yields a lower value of \(\bar{B}\) and enlarges the stability region.
Mean number of devices in the backlog (\(\bar {B}\)) vs. offered load (G) for γ=10 dB
Using larger R, the stability region is enlarged (i.e., the system can operate in a wider range of generation probabilities σ without exceeding undesired system states).
By increasing the offered load (i.e., σ) or by imposing stricter stability criteria (i.e., lower α in (29) or lower β in (30)), stability cannot be met with low R, and thus increasing R is the only option.
Impact of antenna configurations
Finally, we analyze the impact of different antenna configurations. Different cases are considered to assess the importance of transmit and receive diversity through multi-antenna terminals: 1×1 (i.e., M=N=T=Q=1), 1×2 (i.e., M=T=Q=1 and N=2), 2×2 MIMO with Alamouti OSTBC (i.e., M=N=T=Q=2), and 2×4 MIMO with Alamouti OSTBC (i.e., M=T=Q=2 and N=4). Figure 10 shows the energy efficiency of FF-NDMA in (28) and of ideal H-NDMA (dretx=0) as a function of the average SNR (γ) for G=10 packets/slot, P1=200 mW and P2=150 mW. It can be observed that equipping the BS with multiple antennas and exploiting receive diversity provides more EE gains than employing multiple antennas at devices. This is because the capability for collision resolution at the BS linearly increases with the number of receive antennas, and so does the maximum throughput, while transmit diversity is already provided by the temporal repetitions.
Energy efficiency (packets/slot/Watt) of FF-NDMA and ideal H-NDMA (dretx=0) vs. SNR γ(dB) for G=10 packets/slot. Antenna configurations (M×N): 1×1, 1×2, 2×2, and 2×4
The FF-NDMA protocol uses packet repetitions during a fixed CP to resolve collisions. The goal of this paper is to characterize it analytically from a cellular network perspective. To this goal, a finite-user slotted random access is considered, in which devices can be in one of four possible device states: transmitting, thinking, decoding, or backlogged. In this context, we characterize the system through an MPR model and a finite Markov chain, and derive accordingly analytic expressions for throughput, delay, capture probability (or reliability), energy, and energy efficiency.
Results show that by increasing the CP length, the throughput is reduced and energy/delay are increased at low loads because redundant repetitions are performed. In contrast, at medium/high loads, throughput, delay, and energy are improved with a larger CP length due to the effective capability of FF-NDMA to resolve collisions. Also, it is shown that using a larger CP length allows improving the system reliability and enlarging the stability region, thus, enabling operation in a wider range of loads without exceeding undesired system states.
Results also evidence that FF-NDMA offers significant benefits as compared to NDMA and H-NDMA protocols in MTC-based use cases. It outperforms NDMA in all metrics at low SNR. As compared to ideal H-NDMA, FF-NDMA is more energy efficient in two situations: (1) at low SNR and (2) at medium SNR when the load increases and the decoding power is similar to or larger than the transmitting power at devices, due to the lower amount of control signaling to be decoded. When non-ideal feedback processes for repetitions are considered, FF-NDMA improves markedly on the delay and energy performance of NDMA and H-NDMA owing to its feedback-free repetition procedure. All this demonstrates the suitability of the FF-NDMA protocol for scenarios characterized by a large number of devices, low complexity, limited signaling load, low energy consumption, and high reliability.
Interesting future work includes a deep analysis of the multi-cell deployment. The developed framework can be applied to devices located at the cell-edge with symmetric SNR conditions, which could be simultaneously decoded at multiple BSs to exploit further receive diversity. The general case with cell-center and cell-edge devices (some of them with asymmetric SNR conditions towards the different BSs) is also left for future work.
All previous works on NDMA have focused on single-input single-output (SISO) and single-input multiple output (SIMO) systems rather than in the general MIMO case.
Slotted random access assisted by retransmission diversity and MPR for FF-NDMA with R = 3. The frame is composed of contention periods (CPs), each containing R consecutive time slots. Devices access the shared channel whenever they have a packet to transmit at the CP start. As an example, two and four devices transmit in the first and second CPs, respectively
The single-packet buffer assumption is useful to emulate MTC scenarios where packets are generated every certain period of time (e.g., by sensors) and in which a single-packet buffer is enough to report the latest information.
Stability has been defined in literature for infinite-user as well as for infinite-buffer systems, but not for finite-user single-buffer systems.
We compare FF-NDMA not only with NDMA but also with H-NDMA, since H-NDMA significantly outperforms NDMA at low SNR regimes (i.e., the common operational range for MTC devices that use low order constellations).
H-NDMA matches NDMA operation in Fig. 2 under high SNR regime.
Slotted random access for NDMA and FF-NDMA with R=3. In NDMA, decoding control signaling at every time slot is needed. In FF-NDMA, control signaling has to be decoded only at those CPs in which packets were transmitted
We assume that antenna precoding at the transmitter side entails additional complexity and energy consumption at the base band processing.
Note that LTE uses QPSK, 16-QAM, and 64-QAM [6], while NB-IoT only supports QPSK [11].
If the channel was static and constant among time slots, then the fast-fading channel assumption could be achieved by ensuring that each terminal adds a different random phase for transmission in every time slot.
The waiting time includes the waiting time in the backlog as well as the waiting time for the CP to start.
Recall that, in FF-NDMA, the BS can decode at most \(\tilde{R} = RN\) packets at each CP.
T Taleb, A Kunz, Machine type communications in 3GPP networks: potential, challenges and solutions. IEEE Commun. Mag. 50(3), 178–184 (2012).
A Al-Fuqaha, et al., Internet of things: a survey on enabling technologies, protocols and applications. IEEE Commun. Surv. Tutor. 17(4), 2347–2376 (2015).
A Bader, et al., First mile challenges for large-scale IoT. IEEE Commun. Mag. 55(3), 138–144 (2017).
H Shariatmadari, et al., Machine-type communications: current status and future perspectives toward 5G systems. IEEE Commun. Mag. 53(9), 10–17 (2015).
A Laya, L Alonso, J Alonso-Zarate, Is the random access channel of LTE and LTE-A suitable for M2M communications? A survey of alternatives. IEEE Commun. Surv. Tutor. 16(1), 4–16 (2014).
3GPP Long term evolution (LTE). www.3gpp.org/. Accessed Feb 2018.
J So, et al., LoRaCloud: LoRa platform on OpenStack, IEEE NetSoft Conf. and Workshops, (Seoul, 2016).
Revised WI: further LTE physical layer enhancements for MTC. RP-150492, Ericsson, RAN 67, Shanghai, China.
TPC de Andrade, et al., The random access procedure in long term evolution networks for the internet of things. IEEE Commun. Mag. 55(3), 124–131 (2017).
R Ratasuk, et al., Narrowband LTE-M system for M2M communication, IEEE Vehicular Technology Conf, (Vancouver, 2014).
Y-P Wang, et al., A primer on 3GPP narrowband internet of things. IEEE Commun. Mag. 55(3), 117–123 (2017).
R Ratasuk, et al., NB-IoT system for M2M communication, IEEE Wireless Commun. and Networking Conf, 1–5 (2016).
3GPP RP-150492, Ericsson, Revised WI: Further LTE Physical Layer Enhancements for MTC. TSG RAN Meeting 67, Shanghai, China, 9-12 Mar. 2015.
S Ghez, S Verdú, SC Schwartz, Stability properties of slotted ALOHA with multipacket reception capability. IEEE Trans. Autom. Control. 33(7), 640–649 (1988).
R Nelson, L Kleinrock, The spatial capacity of a Slotted ALOHA multihop packet radio network with capture. IEEE Trans. Commun. 32(6), 684–694 (1984).
S Ghez, S Verdu, SC Schwartz, Optimal decentralized control in the random access multipacket channel. IEEE Trans. Autom. Control. 34(11), 1153–1163 (1989).
J-L Lu, W Shu, M-Y Wu, A survey on multipacket reception for wireless random access networks. J. Comput. Netw. Commun. 2012, Article ID 246359 (2012). https://doi.org/10.1155/2012/246359.
M Tsatsanis, R Zhang, S Banerjee, Network-assisted diversity for random access wireless networks. IEEE Trans. Sign. Process. 48(3), 702–711 (2000).
R Zhang, M Tsatsanis, Network-assisted diversity multiple access in dispersive channels. IEEE Trans. Commun. 50(4), 623–632 (2002).
N Souto, et al., Iterative multipacket detection for high throughput transmissions in OFDM systems. IEEE Trans. Commun. 58(2), 429–432 (2010).
R Zhang, ND Sidiropoulos, M Tsatsanis, Collision resolution in packet radio networks using rotational invariance techniques. IEEE Trans. Commun. 50(1), 146–155 (2002).
B Ozgul, H Delic, Wireless access with blind collision-multiplicity detection and retransmission diversity for quasi-static channels. IEEE Trans. Commun. 54(5), 858–867 (2006).
R Dinis, et al., Frequency-domain multipacket detection: a high throughput technique for SC-FDE systems. IEEE Trans. Wirel. Commun. 8(7), 3798–3807 (2009).
M Pereira, et al., Optimization of a p-persistent network diversity multiple access protocol for a SC-FDE system. IEEE Trans. Wirel. Commun. 12(12), 5953–5965 (2013).
R Robles, et al., A random access protocol incorporating multi-packet reception, retransmission diversity and successive interference cancellation, 8th Int. Workshop on Multiple Access Commun. (MACOM2015), (Helsinki, 2015).
R Samano-Robles, Network diversity multiple access with imperfect channel state information at the transmitter side. Adv. Wirel. Optim. Commun. (2016).
R Robles, et al., Network diversity multiple access in Rayleigh fading correlated channels with imperfect channel and collision multiplicity estimation, 24th Telecommunications Forum (TELFOR), (Belgrade, 2016).
G Dimic, ND Sidiropoulos, L Tassiulas, Wireless networks with retransmission diversity access mechanisms: stable throughput and delay properties. IEEE Trans. Sign. Process. 51(8), 2019–2030 (2003).
R Samano-Robles, M Ghogho, DC McLernon, Wireless networks with retransmission diversity and carrier-sense multiple access. IEEE Trans. Sign. Process. 57(9), 3722–3726 (2009).
R Samano-Robles, A Gameiro, Stability properties of network diversity multiple access protocols with multiple antenna reception and imperfect collision multiplicity estimation. J. Comput. Netw. Commun. 2013, Article ID 984956 (2013). https://doi.org/10.1155/2013/984956.
F Ganhao, et al., Performance analysis of a hybrid ARQ adaptation of NDMA schemes. IEEE Trans. Commun. 61(8), 3304–3317 (2013).
M Madueño, J Vidal, Joint physical-MAC layer design of the broadcast protocol in ad-hoc networks. IEEE J. Sel. Areas Commun. 23(1), 65–75 (2005).
M Madueño, J Vidal, PHY-MAC performance of a MIMO network-assisted multiple access scheme, IEEE 6th Workshop on Signal Process. Advances in Wireless Commun, (New York, 2005).
3GPP TR 38.912, Study on New Radio (NR) access technology. Release 14, V14.0.0, Mar. 2017.
3GPP TS 36.213, Evolved Universal Terrestrial Radio Access (E-UTRA); Physical Layer Procedures. v9.2.0, Jun. 2010.
3GPP TR 36.814, Further advancements for E-UTRA physical layer aspects. Release 9, v9.0.0, Mar. 2010.
EG Larsson, P Stoica, Space-time block coding for wireless communications (Cambridge University Press, Cambridge, 2003).
V Tarokh, H Jafarkhani, RA Calderbank, Space-time block codes from orthogonal designs. IEEE Trans. Inf. Theory. 45(5), 1451–1458 (1999).
D Tse, P Viswanath, Fundamentals of wireless communications (Cambridge University Press, Cambridge, 2004).
ST Chung, A Goldsmith, Degrees of freedom in adaptive modulation: a unified view. IEEE Trans. Commun. 49(9), 1561–1571 (2001).
JG Proakis, M Salehi, Digital communications, 5th ed. (McGraw-Hill, New York, 2008).
R Rom, M Sidi, Multiple access protocols: performance and analysis (Springer Verlag, New York, 1990).
L Kleinrock, Queueing systems, Volume 1: Theory (Wiley-Interscience, New York, 1975).
JDC Little, A proof for the queuing formula: L = λW. Oper. Res. 9(3), 383–387 (1961).
A Zappone, E Jorswieck, Energy efficiency in wireless networks via fractional programming theory. Found. Trends Commun. Inf. Theory 11(3-4), 185–701 (2014).
B Dai, W Yu, Sparse beamforming and user-centric clustering for downlink cloud radio access network. IEEE Access Special Section Recent Adv. C-RAN. 2:, 1326–1339 (2014).
S Andreev, et al., Efficient small data access for machine-type communications in LTE, IEEE Int. Conf. Commun., 3569–3574 (2013).
P Grover, K Woyach, A Sahai, Towards a communication-theoretic understanding of system-level power consumption. IEEE J. Sel. Areas Commun. 29(8), 1744–1755 (2011).
This work has been partially funded by the Spanish Ministerio de Economía, Industria y Competitividad and FEDER funds through project TEC2016-77148-C2-1-R (AEI/FEDER, UE): 5G &B RUNNER-UPC, and by the Catalan Government through the grant 2017 SGR 578 - AGAUR.
Signal Theory and Communications department, Universitat Politècnica de Catalunya (UPC), Barcelona, 08034, Spain
S. Lagen, A. Agustin, J. Vidal & J. Garcia
Mobile Networks department, Centre Tecnològic de Telecomunicacions de Catalunya, Castelldefels, 08860, Spain
SL, AA, and JV put forward the idea. SL, AA, and JG did the mathematical development. SL and JG carried out the experiments. SL wrote the manuscript. JV took part in the discussions and he also guided, reviewed, and checked the writing. All authors contributed to the interpretation of the results and read and approved the final manuscript.
Correspondence to S. Lagen.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Lagen, S., Agustin, A., Vidal, J. et al. Performance analysis of feedback-free collision resolution NDMA protocol. J Wireless Com Network 2018, 45 (2018). https://doi.org/10.1186/s13638-018-1049-x
Keywords: Slotted random access, Packet repetition, Multipacket reception, Feedback-free NDMA
Application of a pH-Sensitive Fluoroprobe (C-SNARF-4) for pH Microenvironment Analysis in Pseudomonas aeruginosa Biofilms
Ryan C. Hunter, Terry J. Beveridge
Ryan C. Hunter
Department of Microbiology, University of Guelph, Guelph, Ontario N1G 2W1, Canada
For correspondence: [email protected]
Terry J. Beveridge
An important feature of microbial biofilms is the development of four-dimensional physical and chemical gradients in space and time. There is a need for novel approaches to probe these so-called microenvironments to determine their effect on biofilm-specific processes. In this study, we describe the use of seminaphthorhodafluor-4F 5-(and-6) carboxylic acid (C-SNARF-4) for pH microenvironment analysis in Pseudomonas aeruginosa biofilms. C-SNARF-4 is a fluorescent ratiometric probe that allows pH quantification independent of probe concentration and/or laser intensity. By confocal scanning laser microscopy, C-SNARF-4 revealed pH heterogeneity throughout the biofilm in both the x,y and x,z planes, with values ranging from pH 5.6 (within the biofilm) to pH 7.0 (bulk fluid). pH values were typically remarkably different from those just a few micrometers away. Although this probe has been successfully used in a number of eukaryotic systems, problems have been reported which describe spectral emission changes as a result of macromolecular interactions with the fluorophore. To assess how the biofilm environment may influence fluorescent properties of the dye, fluorescence of C-SNARF-4 was quantified via spectrofluorometry while the probe was suspended in various concentrations of representative biofilm matrix components (i.e., proteins, polysaccharides, and bacterial cells) and growth medium. Surprisingly, our data demonstrate that few changes in emission spectra occur as a result of matrix interactions below pH 7. These studies suggest that C-SNARF-4 can be used as a reliable indicator of pH microenvironments, which may help elucidate their influence on the medical and geobiological roles of natural biofilms.
In natural environments, microbes preferentially live in matrix-enclosed, complex, integrated communities known as biofilms. The fully mature biofilm is typified by a three-dimensional structure made up of bacterial cells, a microbe-derived polymer matrix, and interstitial water channels that facilitate the exchange of nutrients and wastes with the surrounding environment. A most remarkable feature of these complex biofilm communities is the development of chemical gradients (i.e., pH, redox potential, and ions) due to the differential diffusion of nutrients, metabolic products, and oxygen throughout the biofilm (1, 14, 49, 50, 54). These gradients are partitioned into so-called microenvironments produced by the diverse microbial physiology and local physicochemical properties found within a biofilm, so that the conditions within a microenvironment can be profoundly different from those encountered in the bulk phase.
This feature of biofilms has been recognized in a number of natural and clinical environments. For example, the heterogeneous charge distribution of various matrix polymers is thought to contribute to biofilm antimicrobial tolerance through sorption or deactivation of chemically reactive biocides (20, 24, 25, 31, 35). pH and redox gradients within biofilms have also been implicated in driving global geochemical cycles by altering the speciation of inorganic ions (9, 27, 30, 32). Although the concept of the microenvironment is widely accepted, its dynamic spatial and temporal complexity makes it difficult to define precise associations between these gradients and specific biofilm processes. Accordingly, there is a need for novel approaches to define the physical and chemical properties of biofilm microenvironments. Such approaches will help elucidate the environmental and clinical phenomena driven by small aggregates of cells within microbial communities.
Gradients of ion concentrations have been extensively studied in the past, either at the micrometer scale by means of microelectrodes (1, 6, 15, 37, 39, 41, 51, 54, 56) or at higher resolution using fluorescent imaging in which probes exhibit spectral emission changes in response to changing ion concentrations (12, 50). These techniques, however, are not without their limitations—microelectrode resolution is restricted (i.e., based on electrode tip size), ion-sensitive probes are not always suitable for quantitative microscopy (due to compartmentalization and photobleaching of the probe) (2, 50), and highly advanced technologies (such as multiphoton microscopy) (40, 45) are not accessible to all laboratories. Ratiometric imaging has been recently introduced (47); here, protonation of specific fluorophores shifts their emission spectra so that increasing ion concentration increases fluorescent intensity at one wavelength and decreases it at another. The ratio of intensities obtained at both wavelengths serves as a quantitative measure of pH. This method has previously been applied to biofilm systems (4, 16) but not with the probes used in our report; the older probes used previously have been discontinued. 5-(and-6)-Carboxy-seminaphthorhodafluor-1 (C-SNARF-1) is a ratiometric dye that has seen considerable use in studying [H+] in eukaryotic systems (2, 3, 33, 34, 52). To our knowledge, however, its use in microbial systems has not been reported.
This study assesses the potential of a fluorinated derivative of C-SNARF-1, seminaphthorhodafluor-4F 5-(and-6)-carboxylic acid (C-SNARF-4), as a quantitative indicator of pH microenvironments in microbial biofilms. C-SNARF-4 has dual-emission properties but has a slightly lower pH sensitivity maximum (pKa ∼6.4) relative to its other C-SNARF relatives, making it more suitable for use in the slightly acidic environments expected in our biofilms (21). We have used C-SNARF-4 in combination with confocal scanning laser microscopy (CSLM) to study single-species Pseudomonas aeruginosa communities and the pH of their microenvironments. Since there have been recent reports suggesting C-SNARF emission characteristics can be influenced by the probe's interaction with various cell components (7, 23, 38, 48, 55), we wanted to determine the extent to which biofilm components (i.e., proteins, exopolymers, and bacterial cells) influence C-SNARF-4 emission properties. Accordingly, spectrofluorometric analyses were used to determine the effect of representative matrix components (alginate, bovine serum albumin, P. aeruginosa cells, and growth medium) on the spectral emission properties of the fluorophore. Novel fluorescent techniques such as this are required to fully understand the contributions of pH microenvironments to biofilm-specific processes.
Bacterial strains, culture conditions, and biofilm development. Pseudomonas aeruginosa PAO1 was used throughout this study and was obtained from J. Lam (University of Guelph). To visualize biofilm growth, a gfp-tagged PAO1 possessing plasmid pMF230 (36) was used; the plasmid carried the gene for green fluorescent protein (GFP) containing the mut2 mutation (13). GFP was expressed constitutively. Strains were maintained on Trypticase soy agar (Becton Dickinson). When required, carbenicillin was added at 300 μg per ml. A dilute broth medium (Trypticase soy broth [TSB]) was used in the flow system to grow biofilms at a concentration of 3 g per liter (dTSB), which is 1/10 the recommended concentration. The pH of dTSB was adjusted to 6.8. When bacteria containing pMF230 were analyzed, 50 μg of carbenicillin per ml was added to the dTSB.
Biofilms were cultivated at room temperature in single-channel flow cells (1 by 10 by 40 mm; Biosurface Technologies Inc., Bozeman, MT) supplied with dTSB at a flow rate of 0.1 ml/min for 7 days using a Manostat Carter multichannel peristaltic pump (Barnant, Barrington, IL). Flow cells were sterilized using 75% (vol/vol) ethanol. Once sterile, flow cells were conditioned with dTSB for 24 h, at which point the flow system was inoculated with an overnight culture of PAO1 (optical density at 600 nm = 0.6) through an upstream injection port. After inoculation, flow was arrested to facilitate bacterial adhesion for 1 h. Following attachment, flow was resumed and the substrate was pumped through at a constant rate of 0.1 ml/min for 7 days.
Spectrofluorometric assays. To calibrate C-SNARF-4 photon emissions and to assess the dye's potential use in probing biofilm microenvironments, small volumes of a 1 mM stock solution of C-SNARF-4 in dimethyl sulfoxide were added to pH-adjusted 50 mM HEPES. Final pHs were 5.6, 6.0, 6.4, 6.8, 7.2, and 7.6. The final probe concentration ranged from 1 μM to 10 μM to determine the effects of compartmentalization on emission intensity. Samples were mixed, allowed to equilibrate for 15 min, and transferred to a quartz cuvette (path length, 0.5 cm). C-SNARF-4 emission spectra were recorded on a Quantamaster C-61 steady-state spectrofluorometer (Photon Technology International, Lawrenceville, NJ) using an excitation wavelength of 488 nm, emission wavelengths of 580 nm and 640 nm, and a slit width of 2 nm. To determine the C-SNARF-4 fluorescence ratio, background fluorescence (bkgd) was subtracted at each emission wavelength (640 and 580 nm) as calculated in equation 1. $$ \mathrm{Ratio} = \frac{\mathrm{Em}_{640} - \mathrm{Em}_{640,\mathrm{bkgd}}}{\mathrm{Em}_{580} - \mathrm{Em}_{580,\mathrm{bkgd}}} \quad (1) $$
Titration curves were calculated and used to convert fluorescence emission ratios to pH. To assess the influence of matrix components on C-SNARF-4 emissions, samples were prepared by adding C-SNARF-4 (10 μM) to pH-adjusted dTSB or to characteristic matrix components (10^5 cells; 1% or 2% [wt/vol] alginic acid; or 1% or 2% [wt/vol] bovine serum albumin [BSA]) in pH-adjusted HEPES. All samples were mixed, allowed to equilibrate for 15 min, and quantified via spectrofluorometry using the calibration sample settings. Emission ratios were determined according to equation 1.
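A minimal sketch of this ratiometric conversion follows: background-corrected ratios (equation 1) are mapped to pH by interpolating a calibration titration curve. The calibration pairs below are placeholders, not the measured curve.

```python
# Sketch of the ratio-to-pH conversion: Eq. (1) plus interpolation of a
# calibration titration curve. Calibration values are placeholders.
import numpy as np

def emission_ratio(em640, em580, bkgd640=0.0, bkgd580=0.0):
    """Background-corrected C-SNARF-4 emission ratio, equation 1."""
    return (em640 - bkgd640) / (em580 - bkgd580)

# Placeholder calibration pairs (pH, ratio) from pH-adjusted HEPES standards.
cal_pH = np.array([5.6, 6.0, 6.4, 6.8, 7.2, 7.6])
cal_ratio = np.array([0.4, 0.6, 1.0, 1.5, 2.1, 2.6])

def ratio_to_pH(r):
    """Interpolate pH from the calibration curve (ratios must be increasing)."""
    return float(np.interp(r, cal_ratio, cal_pH))

r = emission_ratio(em640=1200.0, em580=1000.0, bkgd640=50.0, bkgd580=40.0)
print(ratio_to_pH(r))
```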
Confocal scanning laser microscopy. All biofilm images were collected using a Leica TCS SP2 CSLM (Leica) equipped with an Ar/Kr 488-nm laser and a 488/514 dichroic beam splitter, which provided optimal signal analysis from the GFP and C-SNARF-4. A 40×/1.00 positive low Fluotar oil immersion lens was used to collect 2,056- by 2,056-pixel resolution images in the x,y and x,z planes. Depths quoted in this paper are the distances from the substratum to the focal plane.
GFP-tagged biofilms were excited at 488 nm and emission was detected at 510 nm. Since our CSLM does not lend itself to continuous monitoring of biofilm growth, flow cells were clamped to stop dTSB flow and imaged within 15 min. GFP intensity was quantified using LCS confocal software.
To determine the spatial distribution of pH microenvironments, biofilms were treated with 1 ml of dTSB supplemented with C-SNARF-4 at a final concentration of 10 μM. Once biofilms reached maturity (7 days), flow cell inputs were clamped, and spent medium was carefully removed via a syringe through the upstream injection port. The fresh dTSB-C-SNARF-4 solution was carefully introduced through the same injection port to minimize structural damage to the biofilm. Flow cells were clamped, and biofilms were imaged within 15 min of flow stoppage to minimize the accumulation of acidic metabolites, which may artificially modify local pH microenvironments. C-SNARF-4-treated biofilms were excited at 488 nm, and emission was detected in two channels at 580 nm and 640 nm. pH microenvironments were determined from a series of two-channel optical sections by calculating the ratio of emission intensity (i.e., pixel values) between the two channels (Em640 nm/Em580 nm). All intensities were determined using LCS software. Images were enhanced using Photoshop software (Adobe, Mountain View, Calif.) for presentation purposes only.
Identical calibration standards were used for CSLM as for spectrofluorometry, though only 10 μM C-SNARF-4 was used. Samples were mixed, placed in 200-μl sample wells, and imaged using settings identical to those for the biofilm samples. Titration curves of ratios versus pH were calculated using LCS software according to equation 1 and were used to convert C-SNARF-4 emission ratios to biofilm pH values. As a control, untreated biofilms were also imaged to ensure that autofluorescence of biological material would not interfere with pH quantification.
Image analysis. C-SNARF-4 emissions within the biofilm were evaluated with LCS software by selecting 20 regions of interest from the image pairs (580 nm and 640 nm). Each region of interest was ∼50 μm2 of biofilm area, providing an average measure of pH within each region. The regions of interest were selected from the bulk fluid phase and microcolony fringes and throughout the centers of microcolonies. Emission ratios between the two image pairs were determined, and pH values were calculated from the calibration titration curves.
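The region-of-interest analysis can be sketched in the same spirit: average the two-channel pixel intensities over each ROI, form the ratio, and convert it to pH with the placeholder calibration pairs (repeated here so the sketch is self-contained).

```python
# Sketch of the ROI analysis: mean 640/580 intensity ratio over a small square
# region, mapped to pH via placeholder calibration pairs.
import numpy as np

cal_pH = np.array([5.6, 6.0, 6.4, 6.8, 7.2, 7.6])
cal_ratio = np.array([0.4, 0.6, 1.0, 1.5, 2.1, 2.6])   # placeholders

def roi_pH(img640, img580, y, x, half=8):
    """Mean pH over a square ROI centered at (y, x)."""
    sl = (slice(y - half, y + half), slice(x - half, x + half))
    ratio = img640[sl].mean() / img580[sl].mean()
    return float(np.interp(ratio, cal_ratio, cal_pH))

# Usage with placeholder two-channel images:
rng = np.random.default_rng(0)
im640 = rng.random((512, 512)) * 900
im580 = rng.random((512, 512)) * 900
print(roi_pH(im640, im580, 256, 256))
```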
Fluorescence spectra in buffer. C-SNARF-4 was calibrated in a series of pH-adjusted buffers (pH 5.6 to 7.6, the expected pH levels in the biofilm) at various concentrations. As our data were obtained spectrofluorometrically, as well as by quantitative CSLM imaging, we measured an identical set of calibration solutions by both techniques. The calibrated emission spectra (540 nm to 680 nm) for C-SNARF-4 (10 μM) obtained in pH-adjusted HEPES solutions are shown in Fig. 1a and b. When samples were excited in the fluorometer at 488 nm, the emission spectrum of the probe exhibited peaks of fluorescence emission at 580 nm and ∼650 nm (Fig. 1a). As the pH of the solution increased from 5.6 to 7.6, emission intensities recorded at 580 nm decreased, whereas the emission intensities recorded at 640 nm increased. The intensities of the two channels were equal at approximately pH 6.4. When quantified by CSLM, the emission bands also had their maxima close to 580 nm and 640 nm, and intensities at both wavelengths underwent similar changes in response to increasing pH levels (Fig. 1b). However, in this system, emission intensities were significantly greater at the shorter wavelength (580 nm) than the spectrofluorometry data, while they remained relatively consistent at the higher wavelength (640 nm). This suggests that C-SNARF-4 probe fluorescence ratios can be influenced by the spectral sensitivity of the instrument used.
Emission spectra of C-SNARF-4 in various pH-adjusted HEPES solutions, buffered to pH 5.6, 6.0, 6.4, 6.8, 7.2, and 7.6. Identical samples were excited at 488 nm and fluorescence emission levels (arbitrary units [a.u.]) were recorded by spectrofluorometry (a) or CSLM (b). (c) Corresponding emission intensity ratios (640 nm/580 nm) for spectrofluorometric data (□) and CSLM emissions (▪). Note that emission ratios remained constant, although emission spectra varied according to probe concentrations (1 μM, 2 μM, and 10 μM).
To use these data to determine pH in biofilm systems, the ratio of emission intensities at 640 to 580 nm was determined and is shown in Fig. 1c. Standard deviations of fluorometry-derived calibration ratios ranged from ∼0.3 pH units at near-neutral pH values (i.e., pH 7.6) to less than 0.1 pH unit at more acidic pH values (pH 5.6). CSLM ratios showed a standard deviation of less than 0.2 pH units at all pH levels used in this experiment. When decreasing fluorophore concentrations were used (1 μM and 5 μM), emission intensities decreased significantly in both systems; however, the emission intensity ratios remained constant for each instrument at all concentrations (data not shown). These observations confirm that emission ratios track pH independently of probe concentration and will therefore not be affected by compartmentalization of the probe throughout the matrix.
CSLM micrographs depicting SNARF fluorescence intensity in 50 mM HEPES adjusted to pH 5.6 and 7.2 are shown in Fig. 2. Image pairs at 580 nm (green) and 640 nm (red) are combined in each micrograph. These two images clearly show a difference in green and red pixel intensity at acidic and neutral pH values, suggesting that SNARF-4 can be used in combination with confocal microscopy to visually identify pH microenvironments that exist within PAO1 biofilms.
CSLM image slices of SNARF-4 suspended in pH-adjusted 50 mM HEPES buffer. Samples were imaged in 200-μl sample wells, and fluorescence intensity was independent of depth throughout the sample. The ratio of intensity of the red channel to green (640 nm/580 nm) is indicative of the ambient pH. Although concentration changes can affect pixel intensities, C-SNARF-4 concentrations in these two micrographs are identical (10 μM).
SNARF interaction with matrix components. To assess potential interactions of C-SNARF-4 with components of the biofilm matrix and the growth medium, the probe was incubated with various concentrations of exopolysaccharides (alginate), protein (BSA), bacterial cells (P. aeruginosa PAO1), and growth medium (dTSB). Spectrofluorometric analysis revealed that from pH 5.6 to 7.0, there was little difference in C-SNARF-4 emissions whether in pH-adjusted buffer or in the presence of matrix components (Fig. 3a). Above pH 7.2, however, bacterial cell/C-SNARF-4 samples revealed an increase in fluorescent intensity ratio (640 nm/580 nm), and showed even greater variation at pH 7.6. This was not surprising, though, since some of our images showed internalization of the pH probe by P. aeruginosa cells (Fig. 3b), which likely regulated their internal pH at a level more alkaline than that of the surrounding environment. Internalization would lead to protonation-deprotonation of the probe, thus influencing its emission intensity. At pH 7.6, emission ratios were slightly higher in alginic acid and BSA samples, though this variation translated into a difference of only 0.2 pH units. (It is possible that the higher concentration of ionizable carboxylate groups on the alginic acid somehow affected the emission characteristics of C-SNARF-4.) These results suggest that even if the fluorophore does become bound or interacts with biofilm components, particularly alginate and BSA, C-SNARF-4 is still highly sensitive to pH changes and alters its emission spectra accordingly.
(a) Fluorescence ratios of C-SNARF-4 from stained suspensions of representative matrix components (Δ, 1% alginic acid; ♦, 1% BSA; ▪, 10^5 CFU/ml P. aeruginosa; ○, dTSB; □, 50 mM HEPES). Above pH 7, bacterial cells, alginic acid, and BSA showed a concentration-dependent increase in emission ratios, suggesting an interaction of biofilm matrix components with C-SNARF-4. (b) C-SNARF-treated Pseudomonas aeruginosa planktonic cells. Some microbes show internalization of the probe, which can alter fluorescence emission levels due to intracellular pH differences.
When various concentrations of BSA and alginate were used (0.5, 1.0, and 2%), emission ratios below pH 7.0 remained constant, while fluorescence above 7.0 showed a concentration-dependent change in emission intensity (data not shown). This indicated that a heterogeneous distribution of matrix components throughout the biofilm could result in emission ratio variation of C-SNARF-4 in slightly alkaline microenvironments.
Biofilm growth and microbiology. Due to the dynamic nature of a microenvironment, it is important to establish a point in time when the biofilm has matured to a quasi-steady state in terms of its three-dimensional architecture before characterizing the gradients that have developed within it. GFP-labeled PAO1 provided easy monitoring of biofilms over the course of 7 days, by allowing us to visualize cells as they reconfigured their community. Though GFP did not reveal the distribution of extracellular polymers throughout the biofilm, we felt that this probe gave an accurate representation of biofilm structure in CSLM. Early biofilm development (24 to 48 h) frequently revealed individual cells colonizing the flow cell surface, while only a few small clusters of cells were observed (data not shown). Only after 60 h did the microcolonies start to develop vertically into structures typical of P. aeruginosa biofilms. After 72 h of cultivation, biofilm communities began to show distinct structural features, typified by cell clusters (microcolonies) organized into 80- to 100-μm-diameter pillar structures that were separated by large areas of uncolonized substratum (Fig. 4). As biofilms continued to grow past this point, microcolonies maintained an average thickness of ∼40 μm and an average diameter of ∼90 μm. Based on the consistency of these structural features (thickness and diameter), it was determined that after 96 h of flow-cell cultivation, biofilms had reached maturity (i.e., quasi-steady state). It was likely that pH microenvironments at this point were most representative of natural conditions in mature biofilms. As a result, it was these biofilms that were subjected to further C-SNARF-4 analysis.
CSLM micrograph of GFP-labeled PAO1 showing biofilm morphology after 7 days of growth. Subsequent biofilm development maintained an average thickness of ∼40 μm and an average diameter of ∼90 μm, indicating a pseudo-steady state. Bar, 40 μm.
pH microenvironments in PAO1 biofilms. C-SNARF-4 allowed us to visualize acidic microenvironments in P. aeruginosa PAO1 biofilms. Figures 5b, c, and d reveal a typical C-SNARF-4-treated biofilm at three different focal planes in x and y of the microcolony that is shown in the bright-field micrograph in Fig. 5a. The thickness of the microcolony was approximately 50 μm, and focal planes are shown at 40-μm (5b), 20-μm (5c), and 5-μm (5d) distances from the substratum. pH values were determined for 5 to 10 regions of interest within three different areas of the biofilm (bulk fluid, microcolony edges, and center of the microcolony). By determining the ratio of pixel intensity (red/green) in these regions, we were able to distinguish the presence of distinct horizontal pH heterogeneity throughout the biofilm. pH imaging at the 40-μm plane (Fig. 5b) revealed a nearly homogeneous pH profile, with an average pH of 6.7 ± 0.1. pH values in the bulk fluid and within the microcolony tended to be within 0.2 pH units of the growth medium (pH 6.8). Some areas appeared to have lower fluorescence intensity than the rest of the region within the micrograph, which may indicate restricted diffusion of the probe into the microcolony. However, ratiometric calculations account for a lower concentration of probe within the biofilm. pH profiles obtained closer to the substratum (i.e., deeper in the biofilm) (Fig. 5c and d) showed much more pH variation relative to the microcolony at the fluid-biofilm interface. The pH values shown at 20 μm from the substratum (Fig. 5c) ranged from 6.1 ± 0.3 at the center of the microcolony to 6.3 ± 0.2 at the edges of the biofilm. More alkaline microenvironments tended to be located in areas towards the edge of the biofilm, although this varied depending on sampling location. C-SNARF-4 emissions nearest the substratum (Fig. 5d) also showed remarkable variation in pH relative to higher regions of the biofilm. At this depth, average pH values towards the center of the microcolony (6.0 ± 0.3) were also more acidic than the surrounding areas (6.3 ± 0.2), although this also varied depending on location. In virtually all regions of the biofilm, pH values were often quite different from those in regions of the same biofilm just a few micrometers away. In all focal planes, bulk fluid pH generally remained near neutral, although a slight drop in pH was observed (0.2 units) in some areas, typically nearest the substratum (not shown). These data suggest a significant heterogeneous environmental chemistry throughout and on the exterior of the biofilm matrix.
Bright-field (a) and ratiometric CSLM image slices (b, c, and d) of a C-SNARF-4-treated biofilm at 5 μm (b), 20 μm (c), and 40 μm (d) from the biofilm-bulk fluid interface. Data shown represent average pH levels over an area of 50 μm2. pH was determined in the bulk fluid (white), the biofilm-fluid interface (yellow), and deep within the biofilm (red). Although fluorescence intensity may appear different in areas of similar pH, the ratio of red/green intensity is constant, giving a quantitative measure of pH. Frequently, pH values are remarkably different in areas located just a few micrometers away from each other, indicating discrete pH microenvironments throughout the biofilm. Bar, 40 μm.
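The red/green ratio calculation described above is straightforward to reproduce computationally. The following is a minimal Python sketch assuming two registered channel images (580-nm "green" and 640-nm "red" emission) and a boolean mask per region of interest; the function name and the calibration table values are illustrative assumptions, not the authors' code or data.

import numpy as np

# Hypothetical calibration table: red/green emission ratios measured in
# cell-free, pH-adjusted buffer solutions (values are placeholders only).
CALIB_PH = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5])
CALIB_RATIO = np.array([0.35, 0.45, 0.60, 0.85, 1.15, 1.45])

def roi_ph(red_img, green_img, roi_mask):
    """Estimate pH in one region of interest of a CSLM slice.

    red_img, green_img: 2-D intensity arrays (640-nm and 580-nm channels).
    roi_mask: boolean array selecting the region of interest.
    """
    red = red_img[roi_mask].astype(float)
    green = green_img[roi_mask].astype(float)
    ratio = red.sum() / green.sum()  # ratiometric, so independent of dye amount
    # Map the measured ratio onto the buffer calibration curve.
    return float(np.interp(ratio, CALIB_RATIO, CALIB_PH))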
A typical x,z profile also shows distinct pH variation (Fig. 6). As seen in the horizontal cross-sections, pH values in the center of the microcolony were generally more acidic than those in the bulk fluid phase (∼6.8, the pH of the growth medium). Average values near the biofilm-fluid interface were 6.4 ± 0.2, whereas average pH levels deeper in the biofilm were 6.1 ± 0.3. In some areas, however, pH was more alkaline in deeper regions of the biofilm than in surrounding areas, and no identifiable pH gradients (i.e., top to bottom) were seen throughout this study.
x,z CSLM image section of a C-SNARF-4-treated biofilm. The top represents the biofilm-bulk fluid interface, while the bottom of the image represents the substratum. Average pH values were obtained from 50-μm2 areas in the bulk fluid (white text), the biofilm-fluid interface (yellow text), and within the microcolony (red text). Bar, 20 μm.
The highest pH detected in this study was 7.0, and this was found in the bulk fluid phase, whereas the most acidic environment detected (pH 5.6) was located in the center of a microcolony (values not shown). No detectable levels of fluorescence were present in untreated biofilms, which confirmed that autofluorescence of biological material did not contribute to pH measurements (data not shown).
As microbial communities mature, the heterogeneous consumption of nutrients and production of metabolic wastes, in combination with diffusion limitations on chemical species throughout the biofilm matrix, generate so-called microenvironments (i.e., those minute regions associated with biofilms that are a few micrometers in diameter). These become remarkably different from the physical and chemical conditions that exist in the surrounding bulk fluid phase. Although this heterogeneity is well recognized, the direct implications of these microenvironments (notably pH and redox potential) on biofilm-specific phenomena have been elusive and difficult to detect at high spatial resolution in real time. For example, microelectrodes have difficulty monitoring regions <10 to 25 μm2, and their physical insertion can perturb the biofilm, affecting cell growth and matrix physicochemistry. For this reason, other approaches have been highly sought after to define the physicochemical properties of these microenvironments.
In this work, we describe the use of C-SNARF-4, a single-excitation, dual-emission fluorescent pH indicator in probing pH microenvironments that exist within the P. aeruginosa PAO1 biofilm matrix. The pH-sensitive chemical structure of the C-SNARF-4 probe is shown in Fig. 7. Above pH 5, it is assumed that the dye exists as a mixture of two forms, the monoanionic (naphthol) and dianionic (naphtholate) states, between which a pH-sensitive equilibrium is established depending on the acidity or alkalinity of the solution (21, 22). In basic aqueous solutions above pH 9, the phenol and carboxylic acid groups of the probe become completely ionized. It is believed that acidification of a solution first protonates the phenol group (pKa ∼6.4) of the dianion to yield the C-SNARF-4 monoanion. Though further acidification below pH 5 generates neutral and cationic forms of the dye, these forms are nonfluorescent and contribute little to the emission properties of the dye molecule (Molecular Probes Technical Support, personal communication). Generation of the naphthol species through acidification alters the emission spectrum of the probe, so that acidic conditions increase the intensity of the probe at 580 nm and decrease it (or keep it constant, depending on instrumentation sensitivity) at 640 nm. As shown here, the ratio between both wavelengths (640 nm/580 nm) is independent of probe concentration, which is consistent with previous studies (34). This allows the relative concentrations of the protonated and unprotonated forms to be quantified, providing an accurate and reliable measure of pH in a given environment. These pH-sensitive properties of C-SNARF probes have been used successfully in quantifying pH in perfused myocardium (34), mitochondria (46), internal pH in Saccharomyces cerevisiae (2), and several other eukaryotic systems. To our knowledge, this is the first time that the use of a C-SNARF probe has been applied to the study of biofilms.
Chemical structure and pH-dependent equilibrium of C-SNARF-4. As pH is lowered, protonation of the phenol group (pKa ∼6.4) of the dianion (naphtholate form) yields the monoanion (naphthol form). The concentration of each is determined by the acidity or alkalinity of the microenvironment. Since each form has a characteristic emission maximum at a different wavelength (580 nm and 640 nm), fluorescence emission ratios can reliably be used to quantitatively determine pH.
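The ratiometric conversion outlined above is commonly written in the following standard dual-emission form; the limiting ratios and the correction factor must be fitted to buffer calibration data, and this generic expression is shown for orientation only (it is not necessarily the exact calibration used here):

\begin{equation*} \mathrm{pH} = \mathrm{p}K_a + \log_{10}\!\left(\frac{R - R_A}{R_B - R}\right) + \log_{10}\!\left(\frac{F_A(580)}{F_B(580)}\right), \end{equation*}

where $R = I_{640}/I_{580}$ is the measured emission ratio, $R_A$ and $R_B$ are the limiting ratios of the fully acidic (naphthol) and fully basic (naphtholate) forms, and $F_A(580)/F_B(580)$ corrects for the relative brightness of the two forms at the denominator wavelength.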
The use of C-SNARF probes has several advantages over other available techniques. Most importantly, spectral characteristics of these fluorophores permit dual-emission ratiometric imaging. As a result, fluorescent intensity in a given environment is independent of probe concentration, overcoming theoretical problems due to compartmentalization or sequestration of the probe. Another potential limitation of fluorescent pH imaging is the loss of fluorescence intensity due to the inner filter effect of thick samples. It has previously been shown, however, that SNARF emissions are not affected in semisolid media up to 200 μm in thickness (much thicker than P. aeruginosa biofilms) (34). C-SNARF probes have also been found to become photosensitive over lengthy experiments (11), which could be a significant limitation. However, one study suggests that C-SNARF probes are sensitive to photobleaching only above pH 7.3, since below this pH most light is absorbed by the protonated dye, which is relatively photostable (5). As the pH is increased above the pKa of the probe, the unprotonated form absorbs the bulk of the incident light, and bleach rates increase. Since the pKa of C-SNARF-4 is lower than that of other SNARF probes, it may be sensitive to photobleaching at lower pH levels. However, it has been noted that in short-term experiments such as this one, pH equilibration of the probe occurs much faster than photobleaching; otherwise, emission ratios would be homogeneous throughout the sample.
Although fluorescent imaging with C-SNARF-4 may contribute to our understanding of biofilm physiology, inherent restrictions of the probe may preclude its use in some environments. For example, the pKa (∼6.4) of C-SNARF-4 may limit its use in some acidophilic biofilms such as dental biofilms, where pH levels have been shown to drop below pH 3 following fermentation of dietary carbohydrates (50). Photon absorption and scatter may also restrict its use in thick biofilms or deep microbial mats, which often exceed 10 cm in thickness. Microelectrodes remain a more attractive option in these applications. Another minor drawback of C-SNARF probes arises when the pH of the system is near that of the surrounding milieu (46). Here, pH-dependent emission intensity may make it difficult to distinguish boundary layers of the biofilm (i.e., the biofilm-fluid interface), requiring localization information to be obtained at the same time (i.e., by phase-contrast light microscopy). Finally, even though the spatial organization of pH microenvironments may be such that they approach smaller so-called nanoenvironments or beyond, quantification of pH using these techniques is limited by the resolution of the digital image. As can be expected, the detector resolution of the confocal microscope surpasses what can be represented in a micrograph.
Interactions of fluorescent probes with matrix components could also potentially complicate usage of any indicator. For example, Večeř et al. (48), House (23), and Seksek et al. (42) have all described non-pH related interactions of SNARF probes with eukaryotic intracellular compounds, presumably by charge-charge interaction. Our study, however, shows that the interaction of C-SNARF-4 with alginic acid, BSA, or growth medium at natural pH and below does not seem to influence fluorescence emission characteristics of the probe. Alginic acid (alginate), a heteropolymer of mannuronic and guluronic acids, is often suggested to be the dominant extracellular polymeric substance component in mucoid P. aeruginosa biofilms. It should be noted, however, that this polymer is not normally produced by common laboratory strains such as PAO1 (43, 53). Yet, highly charged polymers such as alginate are frequently encountered in natural biofilms of all sorts. For this reason, we feel that alginate provides a suitable control polymer due to its highly anionic nature, while providing a convenient model compound to mimic interactions between C-SNARF-4 and microbial polymers. Interestingly, no deleterious interactions were observed between alginate and C-SNARF-4 at pH 7.0 or below. Only at pH 7.2 and above did emission ratios slightly change. Though BSA is not a component of biofilm matrices, it provides a well-characterized model macromolecule to assess interactions of C-SNARF-4 with biofilm matrix proteins. BSA contains a high content of charged amino acids (Asp, Gln, Lys, and Arg) and cystine residues, is able to reversibly bind a wide variety of ligands, and undergoes conformational isomerization with changes in pH (26), making this protein a likely candidate to influence the fluorescence properties of C-SNARF-4. Surprisingly, at neutral pH and below, no change in ratio was observed when C-SNARF-4 was incubated with various concentrations of BSA. These results differ from those previously reported (42), which show a significant alteration in emission spectra. More recently, however, it was determined that the SNARF variant used in that study (C-SNARF-1) contained a contaminant able to bind to BSA, which led to discrepancies between intracellular and cell-free spectral properties (55). C-SNARF-4 differs from the probe used in that study as it has a lower pKa value (6.4 compared to 7.5) due to a fluorine atom, and it lacks the acetoxymethylester substituent. These may contribute to different interactions with BSA. Lipid-associated molecules (i.e., lipoproteins) and nucleic acids are also known to be common components of biofilm matrices, although they were not investigated in this study. It has been established, however, that lipids (42, 48) and double-stranded and heat-denatured DNA (42) induce no change in SNARF-1 fluorescence.
When C-SNARF-4 was incubated with alginate and BSA above pH 7, it showed a concentration dependent change in ratio, which translated into a pH difference of 0.2 to 0.3 units. It is likely that at higher concentrations of alginate and BSA there would be even greater discrepancies. However, EPS frequently exists at a concentration of 1 to 2% (wt/vol) in natural biofilms (18, 19). It is possible that some microenvironments may be less hydrated and contain a higher concentration of EPSs, but in our opinion, under most circumstances, natural EPS and protein (and presumably other exopolymer) concentrations should not be a hindrance with C-SNARF-4 ratiometric staining. Yet, alkaline systems may generate significant errors in pH measurements when this probe is used to determine microenvironment pH. This would be especially true if there is a heterogeneous distribution of EPS throughout the matrix.
The principal findings of this research demonstrate that C-SNARF-4 can be used as a reliable quantitative indicator of pH in microenvironments of P. aeruginosa biofilms in situ. At low matrix densities, there is so little interaction between probe and exopolymer that emission spectra are not altered. Once fluorescence ratios in C-SNARF-4-treated biofilms were converted to [H+] using a calibration curve obtained in cell-free, pH-adjusted buffer solutions, the probe revealed significant pH heterogeneity in microenvironments in all x, y, and z zones of the biofilm matrix, suggesting diverse microbial metabolic rates and restricted diffusion of nutrients, wastes, and other acidic metabolites throughout the biofilm.
The spatial variation in pH shown in this study may have several implications for biofilm structure and function. For example, Stoodley et al. (44) revealed that changes in bulk fluid pH alter the physical properties of a biofilm, reducing the thickness of a biofilm by 30% when grown in pH 3 buffer, relative to those grown at neutral pH. This is to be expected with matrix biopolymers that are ionizable and depend on electrostatic interaction. pH should contribute to most physicochemical properties such as polymer-polymer, ion-polymer, and macromolecule-polymer interactions, and even polymer motion, gyration, and polymer-polymer entwining. From a clinical perspective, a pH gradient that ranges from 5.6 to 7.0 could potentially alter such rheological properties so that the biofilm matrix could serve as a significant physical barrier to antibiotic penetration through size-dependent exclusion of molecules from the EPS matrix (29). A change in matrix chemistry (as a result of pH) may also lead to protonation-deprotonation of an incoming antibiotic such that matrix polymers inhibit the action of these biocides through sorption and/or inactivation.
Biofilm biogeochemistry is also highly sensitive to changes in pH. For example, Barker et al. (4) have identified a strong dependence of mineral dissolution and bioweathering rates on pH, which are likely affected by pH microenvironments. Ferris et al. (17) also showed that metal uptake by wastewater biofilms was 12-fold greater at pH 7.0 than at pH 3.1. Since biofilms have been explored in potential bioremediative applications, understanding the effect of microenvironments on biofilm-metal uptake processes is highly desirable, so as to manipulate these systems to our advantage. Other previous studies (8, 9, 27, 28, 30) have proposed that pH and redox fluctuations in microenvironments could cause a heterogeneous distribution of mineral phases within natural biofilms. On a geological time scale, it is believed that by locally mediating solution chemistry, subpopulations of microbes within a biofilm can ultimately determine the state and availability of a metal, so that different mineral phases that form only under discrete geochemical conditions (pH and Eh; e.g., goethite, hematite, and magnetite) form within micrometers of one another. The pH distribution in our study (pH 5.6 to pH 7.0) may not be sufficient to explain mineral distributions described in previous studies; however, iron Pourbaix diagrams suggest that if the differential diffusion or metabolism of oxygen generates regions of low Eh, pH gradients observed in this study may indeed create localized geochemistry that promotes the formation of discrete mineral phases. The spatial distribution of these mineral precipitates may serve as a micrometer-scale analogue to larger sedimentary deposits such as banded iron formations (10).
Our present study should help in understanding the role that pH microenvironments play in the medical and geobiological behavior of natural biofilms. We are currently exploring the geobiological aspects.
R.C.H. was funded through a Natural Science and Engineering Council of Canada (NSERC) graduate fellowship, and the experimentation was funded through NSERC—Discovery and Advanced Food and Materials Network (AFMnet)-National Centres of Excellence grants to T.J.B.
Received 24 September 2004.
Accepted 26 November 2004.
Allan, V. J. M., L. E. Macaskie, and M. E. Callow. 1999. Development of a pH gradient within a biofilm is dependent upon the limiting nutrient. Biotechnol. Lett. 21:407-413.
Aon, J. C., and S. Cortassa. 1997. Fluorescent measurement of the intracellular pH during sporulation of Saccharomyces cerevisiae. FEMS Microbiol. Lett. 153:17-23.
Ariyoshi, H., and E. W. Salzman. 1995. Spatial distribution and temporal change in cytosolic pH and [Ca2+] in resting and activated single human platelets. Cell Calcium 17:317-326.
Barker, W. W., S. A. Welch, S. Chu, and J. F. Banfield. 1998. Experimental observations of the effects of bacteria on aluminosilicate weathering. Am. Mineral. 83:1551-1563.
Bassnett, S., L. Reinsch, and D. C. Beebe. 1990. Intracellular pH measurement using single excitation-dual emission fluorescence ratios. Am. J. Physiol. 258:C171-C178.
Beyenal, H., and Z. Lewandowski. 2000. Combined effect of substrate concentration and flow velocity on effective diffusivity in biofilms. Water Res. 34:528-538.
Blank, P. S., H. S. Silverman, O. Y. Chung, B. A. Hogue, M. D. Stern, R. G. Hansford, E. G. Lakatta, and M. C. Capogrossi. 1992. Cytosolic pH measurements in single cardiac myocytes using carboxy-seminaphthorhodafluor-1. Am. J. Physiol. 263:H276-H284.
Brown, D. A., T. J. Beveridge, C. W. Keevil, and B. L. Sheriff. 1998. Evaluation of microscopic techniques to observe iron precipitation in a natural biofilm. FEMS Microbiol. Ecol. 26:297-310.
Brown, D. A., D. C. Kamineni, J. A. Sawicki, and T. J. Beveridge. 1994. Minerals associated with biofilm occurring on exposed rock in a granitic underground research laboratory. Appl. Environ. Microbiol. 60:3182-3191.
Brown, D. A., J. A. Sawicki, and B. L. Sheriff. 1998. Alteration of microbially precipitated iron oxides and hydroxides. Am. Mineral. 83:1419-1425.
Buckler, K. J., and R. D. Vaughan-Jones. 1990. Application of a new pH-sensitive fluoroprobe (carboxy-SNARF-1) for intracellular pH measurement in small isolated cells. Pflugers Arch. 417:234-239.
Caldwell, D. E., D. R. Korber, and J. R. Lawrence. 1992. Confocal laser microscopy and digital image analysis in microbial ecology. Adv. Microb. Ecol. 12:1-67.
Cormack, B. P., R. H. Valdivia, and S. F. Falkow. 1996. FACS-optimized mutants of the green fluorescent protein. Gene 173:33-38.
Costerton, J. W., Z. Lewandowski, D. DeBeer, D. Caldwell, D. Korber, and G. James. 1994. Biofilms, the customized microniche. J. Bacteriol. 176:2137-2142.
De Beer, D., R. Srinivasan, and P. S. Stewart. 1994. Direct measurement of chlorine penetration into biofilms during disinfection. Appl. Environ. Microbiol. 60:4339-4344.
de los Rios, A., J. Wierzchos, L. G. Sancho, and C. Ascaso. 2003. Acid microenvironments in microbial biofilms of Antarctic endolithic microecosystems. Environ. Microbiol. 5:231-237.
Ferris, F. G., S. Schultz, T. C. Witten, F. C. Fyfe, and T. J. Beveridge. 1989. Metal interactions with microbial biofilms in acidic and neutral pH environments. Appl. Environ. Microbiol. 55:1249-1257.
Flemming, H. C., and J. Wingender. 2001. Relevance of microbial extracellular polymeric substances (EPSs)—part I: structural and ecological aspects. Water Sci. Technol. 43:1-8.
Flemming, H. C., J. Wingender, C. Mayer, V. Korstgens, and W. Borchard. 2000. Cohesiveness in biofilm matrix polymers, p. 87-105. In D. G. Allison, P. Gilbert, H. M. Lappin-Scott, and M. Wilson (ed.), Community structure and co-operation in biofilms, SGM symposium series vol. 59. Cambridge University Press, Cambridge, United Kingdom.
Gordon, C. A., N. A. Hodges, and C. Marriott. 1988. Antibiotic interaction and diffusion through alginate and exopolysaccharide of cystic-fibrosis-derived Pseudomonas aeruginosa. J. Antimicrob. Chemother. 42:974-977.
Haugland, R. P. 1992. Handbook of fluorescent probes and research chemicals, 6th ed. Molecular Probes, Eugene, Oreg.
Haugland, R. P., and J. Whitaker. July 1990. Xanthene dyes having a fused (C) benzo ring. U.S. patent 4,945,171.
House, C. R. 1994. Confocal ratio-imaging of intracellular pH in unfertilized mouse oocytes. Zygote 2:37-45.
Hoyle, B. D., and J. W. Costerton. 1991. Bacterial resistance to antibiotics. The role of biofilms. Prog. Drug. Res. 37:91-105.
Hoyle, B. D., C. K. W. Wong, and J. W. Costerton. 1992. Disparate efficacy of tobramycin on Ca2+-, Mg2+-, and HEPES-treated Pseudomonas aeruginosa biofilms. Can. J. Microbiol. 38:1214-1218.
Peters, T., Jr. 1996. All about albumin: biochemistry, genetics, and medical applications. Academic Press, Inc., Orlando, Fla.
Karthikeyan, S., and T. J. Beveridge. 2002. Pseudomonas aeruginosa biofilms react with and precipitate toxic soluble gold. Environ. Microbiol. 4:667-675.
Langley, S., and T. J. Beveridge. 1999. Metal binding by Pseudomonas aeruginosa PAO1 is influenced by growth of the cells as a biofilm. Can. J. Microbiol. 45:616-622.
Lawrence, J. R., G. M. Wolfaardt, and D. R. Korber. 1994. Monitoring diffusion in biofilm matrices using confocal laser microscopy. Appl. Environ. Microbiol. 60:1166-1173.
Lee, J.-U., and T. J. Beveridge. 2001. Interaction between iron and Pseudomonas aeruginosa biofilms attached to Sepharose surfaces. Chem. Geol. 180:67-80.
Lewis, K. 2001. Riddle of biofilm resistance. Antimicrob. Agents Chemother. 45:999-1007.
Liehr, S. K. 1995. Effect of pH on metals precipitation in denitrifying biofilms. Water Sci. Technol. 32:179-183.
Martinez-Zaguilan, R., M. W. Gurule, and R. M. Lynch. 1996. Simultaneous measurement of intracellular pH and Ca2+ in insulin-secreting cells by spectral imaging microscopy. Am. J. Physiol. 270:C1438-C1446.
Muller-Borer, B. J., H. Yang, S. A. Marzouk, J. J. Lemasters, and W. E. Cascio. 1998. pHi and pHo at different depths in perfused myocardium measured by confocal fluorescence microscopy. Am. J. Physiol. 275:H1937-H1947.
Nichols, W. W., S. M. Dorrington, M. P. E. Slack, and H. L. Walmsley. 1988. Inhibition of tobramycin diffusion by binding to alginate. Antimicrob. Agents Chemother. 32:518-523.
Nivens, D. E., D. E. Ohman, J. Williams, and M. J. Franklin. 2001. Role of alginate and its O acetylation in formation of Pseudomonas aeruginosa microcolonies and biofilms. J. Bacteriol. 183:1047-1057.
Okabe, S., T. Kindaichi, T. Ito, and H. Satoh. 2004. Analysis of size distribution and areal cell density of ammonia-oxidizing bacterial microcolonies in relation to substrate microprofiles in biofilms. Biotechnol. Bioeng. 85:86-95.
Owen, C. S. 1992. Comparison of spectrum-shifting intracellular pH probes 5′(and 6′)-carboxy-10-dimethylamino-3-hydroxyspiro[7H-benzo-[c]xanthene-7,1′(3′H)-isobenzofuran]-3′-one and 2′,7′-biscarboxyethyl-5(and 6)-carboxyfluorescein. Anal. Biochem. 204:65-71.
Ramsing, N. B., M. Kühl, and B. B. Jørgensen. 1993. Distribution of sulfate-reducing bacteria, O2, and H2S in photosynthetic biofilms determined by oligonucleotide probes and microelectrodes. Appl. Environ. Microbiol. 59:3840-3849.
Sanders, R., A. Draaijer, H. C. Gerritsen, P. M. Houpt, and Y. K. Levine. 1995. Quantitative pH imaging in cells using confocal fluorescence lifetime imaging microscopy. Anal. Biochem. 227:302-308.
Schramm, A., L. H. Larsen, N. P. Revsbech, N. B. Ramsing, R. Amann, and K. H. Schleifer. 1996. Structure and function of a nitrifying biofilm as determined by in situ hybridization and the use of microelectrodes. Appl. Environ. Microbiol. 62:4641-4647.
Seksek, O., N. Henry-Toulme, F. Sureau, and J. Bolard. 1991. SNARF-1 as an intracellular pH indicator in laser microspectrofluorometry: a critical assessment. Anal. Biochem. 193:49-54.
Stapper, A. P., G. Narasimhan, D. E. Ohman, J. Barakat, M. Hentzer, S. Molin, A. Kharazmi, N. Hoiby, and K. Mathee. 2004. Alginate production affects Pseudomonas aeruginosa biofilm development and architecture, but is not essential for biofilm formation. J. Med. Microbiol. 53:679-690.
Stoodley, P., D. de Beer, and H. M. Lappin-Scott. 1997. Influence of electric fields and pH on biofilm structure as related to the bioelectric effect. Antimicrob. Agents Chemother. 41:1876-1879.
Szmacinski, H., and J. R. Lakowicz. 1993. Optical measurements of pH using fluorescent lifetimes and phase modulation fluorometry. Anal. Chem. 65:1668-1674.
Takahashi, A., Y. Zhang, E. Centonze, and B. Herman. 2001. Measurement of mitochondrial pH in situ. BioTechniques 30:804-808.
Tsien, R. Y., and M. Poenie. 1986. Fluorescence ratio imaging: a new window into intracellular ionic signaling. Trends Biochem. Sci. 11:450-455.
Večeř, J., A. Holoubek, and K. Sigler. 2001. Fluorescence behavior of the pH-sensitive probe carboxy SNARF-1 in suspension of liposomes. Photochem. Photobiol. 74:8-13.
Villaverde, S., R. G. Mirpuri, Z. Lewandowski, and W. L. Jones. 1997. Physiological and chemical gradients in a Pseudomonas putida 54G biofilm degrading toluene in a flat plate vapor phase bioreactor. Biotechnol. Bioeng. 56:361-371.
Vroom, J. M., K. J. de Grauw, H. C. Gerritsen, D. J. Bradshaw, P. D. Marsh, G. K. Watson, J. J. Birmingham, and C. Allison. 1999. Depth penetration and detection of pH gradients in biofilms by two-photon excitation microscopy. Appl. Environ. Microbiol. 65:3502-3511.
Walters, M. C., III, F. Roe, A. Bugnicourt, M. J. Franklin, and P. S. Stewart. 2003. Contributions of antibiotic penetration, oxygen limitation, and low metabolic activity to tolerance of Pseudomonas aeruginosa biofilms to ciprofloxacin and tobramycin. Antimicrob. Agents Chemother. 47:317-323.
Westerblad, H., J. D. Bruton, and J. Lannergren. 1997. The effect of intracellular pH on contractile function of intact, single fibres of mouse muscle declines with increasing temperature. J. Physiol. 500:193-204.
Wozniak, D. J., T. J. O. Wyckoff, M. Starkey, R. Keyer, P. Azadi, G. A. O'Toole, and M. R. Parsek. 2003. Alginate is not a significant component of the extracellular polysaccharide matrix of PA14 and PAO1 Pseudomonas aeruginosa biofilms. Proc. Natl. Acad. Sci. USA 100:7907-7912.
Xu, K. D., P. S. Stewart, F. Xia, C. T. Huang, and G. A. McFeters. 1998. Spatial physiological heterogeneity in Pseudomonas aeruginosa biofilm is determined by oxygen availability. Appl. Environ. Microbiol. 64:4035-4039.
Yassine, M., J. M. Salmon, J. Vigo, and P. Viallet. 1995. C-SNARF-1 as a pHi fluoroprobe: discrepancies between conventional and intracellular data do not result from protein interactions. J. Photochem. Photobiol. B, Biol. 37:18-25.
Yu, T., and P. L. Bishop. 2001. Stratification and oxidation-reduction potential change in an aerobic and sulfate-reducing biofilm studied using microelectrodes. Water Environ. Res. 73:368-373.
Applied and Environmental Microbiology May 2005, 71 (5) 2501-2510; DOI: 10.1128/AEM.71.5.2501-2510.2005
Application of a pH-Sensitive Fluoroprobe (C-SNARF-4) for pH Microenvironment Analysis in Pseudomonas aeruginosa Biofilms
|
CommonCrawl
|
November 2020 Capacity of the range in dimension $5$
Bruno Schapira
Ann. Probab. 48(6): 2988-3040 (November 2020). DOI: 10.1214/20-AOP1442
We prove a central limit theorem for the capacity of the range of a symmetric random walk on $\mathbb{Z}^{5}$, under only a moment condition on the step distribution. The result is analogous to the central limit theorem for the size of the range in dimension three, obtained by Jain and Pruitt in 1971. In particular, an atypical logarithmic correction appears in the scaling of the variance. The proof is based on new asymptotic estimates, which hold in any dimension $d\ge5$, for the probability that the ranges of two independent random walks intersect. The latter are then used for computing covariances of some intersection events at the leading order.
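Stated schematically, and hedging on the precise constants (which are specified in the paper), the main result asserts a Gaussian limit under a $\sqrt{n\log n}$ normalization, which is the atypical logarithmic correction mentioned in the abstract:

\begin{equation*} \frac{\operatorname{Cap}(\mathcal{R}_n) - \mathbb{E}\big[\operatorname{Cap}(\mathcal{R}_n)\big]}{\sqrt{n\log n}} \xrightarrow{d} \mathcal{N}(0,\sigma^{2}) \quad \text{as } n\to\infty, \end{equation*}

where $\mathcal{R}_n$ denotes the range of the walk up to time $n$ and $\sigma^{2}>0$.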
Bruno Schapira. "Capacity of the range in dimension $5$." Ann. Probab. 48 (6) 2988 - 3040, November 2020. https://doi.org/10.1214/20-AOP1442
Received: 1 August 2019; Revised: 1 April 2020; Published: November 2020
First available in Project Euclid: 20 October 2020
MathSciNet: MR4164459
Digital Object Identifier: 10.1214/20-AOP1442
Primary: 60F05, 60G50, 60J45
Keywords: capacity, central limit theorem, intersection of random walk ranges, Random walk, range
Rights: Copyright © 2020 Institute of Mathematical Statistics
|
CommonCrawl
|
Publication Info.
Communications for Statistical Applications and Methods
The Korean Statistical Society
2287-7843(pISSN)
2383-4757(eISSN)
Mathematics > Applied Statistics
Communications for Statistical Applications and Methods (Commun. Stat. Appl. Methods, CSAM) is an official journal of the Korean Statistical Society and Korean International Statistical Society. It is an international and Open Access journal dedicated to publishing peer-reviewed, high quality and innovative statistical research. CSAM publishes articles on applied and methodological research in the areas of statistics and probability. It features rapid publication and broad coverage of statistical applications and methods. It welcomes papers on novel applications of statistical methodology in the areas including medicine (pharmaceutical, biotechnology, medical device), business, management, economics, ecology, education, computing, engineering, operational research, biology, sociology and earth science, but papers from other areas are also considered. The main criteria for publication are originality of work which makes a significant contribution to the area where the proposed statistical methods are applied. CSAM mainly accepts the applied and methodological papers, implying exclusion of papers with primarily theoretical or mathematical contents. Papers should include real or simulated data analysis, and the datasets used in the paper along with computer codes are strongly recommended to be included as supplementary material. The journal publishes articles written in English, and any researchers throughout the world can submit a manuscript if it fits the scope of the journal. The journal's publication types include original research articles, reviews, tutorials, reports on statistical software, case studies in the practice of statistics, discussions of interest to statistics teachers, editorials, book reviews, and correspondence. Other types are also negotiable with the editorial board. All articles are subject to peer review and processed through an online editorial system. Every attempt will be made to provide the first review of a submitted manuscript within one month of submission.
Application of covariance adjustment to seemingly unrelated multivariate regressions
Wang, Lichun; Pettit, Lawrence (p. 577)
https://doi.org/10.29220/CSAM.2018.25.6.577
Employing the covariance adjustment technique, we show that in a system of two seemingly unrelated multivariate regressions the estimator of the regression coefficients can be expressed as a matrix power series, and we conclude that the matrix series has a unique, simpler closed form. In the case that the covariance matrix of the system is unknown, we define a two-stage estimator for the regression coefficients which is shown to be unique and unbiased. Numerical simulations are also presented to illustrate its superiority over the ordinary least squares estimator. As an example, we apply our results to seemingly unrelated growth curve models.
Penalized variable selection for accelerated failure time models
Park, Eunyoung; Ha, Il Do (p. 591)
The accelerated failure time (AFT) model is a linear model under the log-transformation of survival time that has been introduced as a useful alternative to the proportional hazards (PH) model. In this paper we propose variable-selection procedures of fixed effects in a parametric AFT model using penalized likelihood approaches. We use three popular penalty functions, least absolute shrinkage and selection operator (LASSO), adaptive LASSO and smoothly clipped absolute deviation (SCAD). With these procedures we can select important variables and estimate the fixed effects at the same time. The performance of the proposed method is evaluated using simulation studies, including the investigation of impact of misspecifying the assumed distribution. The proposed method is illustrated with a primary biliary cirrhosis (PBC) data set.
A rolling analysis on the prediction of value at risk with multivariate GARCH and copula
Bai, Yang; Dang, Yibo; Park, Cheolwoo; Lee, Taewook (p. 605)
Risk management has been a crucial part of the daily operations of the financial industry over the past two decades. Value at Risk (VaR), a quantitative measure introduced by JP Morgan in 1995, is the most popular and simplest quantitative measure of risk. VaR has been widely applied to the risk evaluation over all types of financial activities, including portfolio management and asset allocation. This paper uses the implementations of multivariate GARCH models and copula methods to illustrate the performance of a one-day-ahead VaR prediction modeling process for high-dimensional portfolios. Many factors, such as the interaction among included assets, are included in the modeling process. Additionally, empirical data analyses and backtesting results are demonstrated through a rolling analysis, which help capture the instability of parameter estimates. We find that our way of modeling is relatively robust and flexible.
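As a point of reference for the rolling design described above, a one-day-ahead VaR backtest can be sketched in a few lines using plain historical simulation; the paper itself uses multivariate GARCH and copula models, so the estimator, window length, and function names here are illustrative assumptions only.

import numpy as np

def rolling_var(returns, window=250, alpha=0.01):
    """One-day-ahead VaR forecasts by historical simulation on a rolling window.

    returns: 1-D array of portfolio returns.
    Returns an array of VaR forecasts aligned with returns[window:].
    """
    var = np.empty(len(returns) - window)
    for t in range(window, len(returns)):
        # Re-estimate from the trailing window only, as in a rolling analysis.
        var[t - window] = -np.quantile(returns[t - window:t], alpha)
    return var

def exceedance_rate(returns, var, window=250):
    """Share of days the realized loss exceeded the forecast (target: alpha)."""
    return float(np.mean(returns[window:] < -var))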
Investigating the underlying structure of particulate matter concentrations: a functional exploratory data analysis study using California monitoring data
Montoya, Eduardo L. (p. 619)
Functional data analysis continues to attract interest because advances in technology across many fields have increasingly permitted measurements to be made from continuous processes on a discretized scale. Particulate matter is among the most harmful air pollutants affecting public health and the environment, and levels of PM10 (particles less than 10 micrometers in diameter) for regions of California remain among the highest in the United States. The relatively high frequency of particulate matter sampling enables us to regard the data as functional data. In this work, we investigate the dominant modes of variation of PM10 using functional data analysis methodologies. Our analysis provides insight into the underlying data structure of PM10, and it captures the size and temporal variation of this underlying data structure. In addition, our study shows that certain aspects of size and temporal variation of the underlying PM10 structure are associated with changes in large-scale climate indices that quantify variations of sea surface temperature and atmospheric circulation patterns.
Linear regression under log-concave and Gaussian scale mixture errors: comparative study
Kim, Sunyul; Seo, Byungtae (p. 633)
Gaussian error distributions are a common choice in traditional regression models for the maximum likelihood (ML) method. However, this distributional assumption is often suspicious especially when the error distribution is skewed or has heavy tails. In both cases, the ML method under normality could break down or lose efficiency. In this paper, we consider the log-concave and Gaussian scale mixture distributions for error distributions. For the log-concave errors, we propose to use a smoothed maximum likelihood estimator for stable and faster computation. Based on this, we perform comparative simulation studies to see the performance of coefficient estimates under normal, Gaussian scale mixture, and log-concave errors. In addition, we also consider real data analysis using Stack loss plant data and Korean labor and income panel data.
On the maximum likelihood estimation for a normal distribution under random censoring
Kim, Namhyun (p. 647)
In this paper, we study statistical inferences on the maximum likelihood estimation of a normal distribution when data are randomly censored. Likelihood equations are derived assuming that the censoring distribution does not involve any parameters of interest. The maximum likelihood estimators (MLEs) of the censored normal distribution do not have an explicit form, and it should be solved in an iterative way. We consider a simple method to derive an explicit form of the approximate MLEs with no iterations by expanding the nonlinear parts of the likelihood equations in Taylor series around some suitable points. The points are closely related to Kaplan-Meier estimators. By using the same method, the observed Fisher information is also approximated to obtain asymptotic variances of the estimators. An illustrative example is presented, and a simulation study is conducted to compare the performances of the estimators. In addition to their explicit form, the approximate MLEs are as efficient as the MLEs in terms of variances.
Neural network heterogeneous autoregressive models for realized volatility
Kim, Jaiyool; Baek, Changryong (p. 659)
In this study, we consider the extension of the heterogeneous autoregressive (HAR) model for realized volatility by incorporating a neural network (NN) structure. Since HAR is a linear model, we expect that adding a neural network term would explain the delicate nonlinearity of the realized volatility. Three neural network-based HAR models, namely HAR-NN, $HAR({\infty})-NN$, and HAR-AR(22)-NN are considered with performance measured by evaluating out-of-sample forecasting errors. The results of the study show that HAR-NN provides a slightly wider interval than traditional HAR as well as shows more peaks and valleys on the turning points. It implies that the HAR-NN model can capture sharper changes due to higher volatility than the traditional HAR model. The HAR-NN model for prediction interval is therefore recommended to account for higher volatility in the stock market. An empirical analysis on the multinational realized volatility of stock indexes shows that the HAR-NN that adds daily, weekly, and monthly volatility averages to the neural network model exhibits the best performance.
Resistant GPA algorithms based on the M and LMS estimation
Hyun, Geehong; Lee, Bo-Hui; Choi, Yong-Seok (p. 673)
Procrustes analysis is a useful technique to measure and compare shape differences and to estimate a mean shape for objects; however, it is based on a least squares criterion and is affected by outliers. We therefore propose two generalized Procrustes analysis (GPA) methods, based on M-estimation and least median of squares estimation, that are resistant to object outliers. In addition, two algorithms are given for practical implementation. A simulation study and some examples are used to examine and compare the performance of the algorithms with the least squares method. Moreover, since these resistant GPA methods are applicable in higher dimensions, methods are needed to visualize the objects and the mean shape effectively. Also, since we have concentrated on resistant fitting methods without considering shape distributions, we intend shape analysis not to be sensitive to a particular model.
|
CommonCrawl
|
4.4 Determinate Frame Analysis
Frame structures are more complex than beams because they do not necessarily all lie along a straight line as beams do. In frames, there can be both vertical members and members that are inclined on an angle. The first part of this section will discuss the types of loads on inclined members and how to deal with them. Then, it will explain the method that is used to analyse determinate frames (using the same methods that we used previously for determinate beams).
Inclined Loads
For members that are inclined on an angle, it is often most convenient to analyse them by first transforming all the loads on the member into the local member axis direction (perpendicular and parallel to the inclined member). This process is illustrated in Figure 4.7.
Figure 4.7: Resolving Loads on Inclined Members into Local Axis Directions
Sample geometry for an inclined member is shown at the top of Figure 4.7. Four different types of inclined loadings are shown in the figure.
The first type shows the transformation of point loads on an inclined member into parallel and perpendicular components.
The second type ('wind-type') is typical of distributed loadings caused by wind or other pressure-type loadings. The distributed load is applied directly perpendicular to the inclined member and is distributed along the diagonal length of the member ($L/\cos\theta$ in this case). Since the load is already perpendicular to the member, no transformation is needed.
The third type ('dead-type') is a distributed load that is not applied perpendicular to the member. In this case, it is aligned with the global vertical axis direction to simulate the effect of a vertical gravity (or 'dead') load. This type of load is also distributed along the diagonal length of the member since the source of the load (in this case, the dead weight of the member) is also distributed along the diagonal length. In this case, a direct trigonometric transformation may be used to split the vertical distributed load into two different components, one perpendicular to the member (which will cause shear and bending) and one parallel to the member (which will cause axial load) as shown in the figure.
The fourth and final type ('snow-type') is a distributed load which is not perpendicular to the member and is also not distributed along the member length, but along the horizontal projection of the member (in this case, the distance $L$). For the case of a snow load, only a certain amount of snow can fall from a certain area of sky, so the greater the inclination of the member, the longer the length that the snow will be spread out over (making the load per unit of member length lower). The total vertical load here will be equal to $w_{snow}L$, which is less than the corresponding total vertical load from the dead load case, which was equal to $w_{dead}L/\cos\theta$. Before the snow-type distributed load can be split into perpendicular and parallel components, it must be spread evenly over the diagonal length of the member (instead of being spread evenly over the horizontal projection of the member). If we split the total vertical load $w_{snow}L$ over the total diagonal length $L/\cos\theta$, then we get a new vertical distributed load equal to $w_{snow}\cos\theta$ which is now distributed along the diagonal length of the member. From here, the load may be divided into perpendicular and parallel components as was done for the dead-type load, which results in the components shown in Figure 4.7.
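The transformations in Figure 4.7 are easy to collect into one short utility. Below is a minimal Python sketch, assuming the angle is given in radians and using illustrative sign conventions and function names (the wind-type case needs no transformation, so it is shown as a pass-through); each function returns (parallel, perpendicular) components.

import math

def point_load(P_x, P_y, theta):
    """Resolve global point-load components into (parallel, perpendicular)
    components for a member inclined at theta to the horizontal."""
    para = P_x * math.cos(theta) + P_y * math.sin(theta)
    perp = -P_x * math.sin(theta) + P_y * math.cos(theta)
    return para, perp

def wind_type(w):
    """Already perpendicular to the member and spread over its diagonal
    length: no transformation needed."""
    return 0.0, w

def dead_type(w, theta):
    """Vertical load distributed along the diagonal length of the member."""
    return w * math.sin(theta), w * math.cos(theta)

def snow_type(w, theta):
    """Vertical load distributed along the horizontal projection: spread it
    over the diagonal length (factor cos(theta)), then resolve."""
    w_diag = w * math.cos(theta)
    return w_diag * math.sin(theta), w_diag * math.cos(theta)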
Method for Analysing Determinate Frames
Analysing determinate frames is very similar to analysing determinate beams, except that you need to split up the frame into separate members so that they can each be analysed individually as beams. Frames also have the added complexity of potentially inclined members as well as the inclusion of axial forces in the analysis, which we previously neglected when we were analysing beams.
The general steps for analysing a determinate frame are:
Use equilibrium to find all reaction forces.
Split the frame into separate members.
Any point load or moment which acts directly on a joint between two or more members must be placed on only ONE of the members when they are split up. It does not matter which member gets the point load, as long as it is only on one.
Find all of the forces at the ends of each member (at either member ends, or at cuts between that member and the adjacent member) using equilibrium on free body diagrams of each member on their own.
Resolve all of the loads on the member (end loads and moments as well as loads along the length of the member) into the local member axis directions (i.e. perpendicular to and parallel to the member).
Now the axial force, shear force and bending moment diagrams may be found by solving each member as if it were a separate beam (see Section 4.3).
(As required) Use the results from each member to draw overall axial force, shear force and bending moment diagrams for the entire frame structure.
The analysis of determinate frames will be demonstrated using the example structure shown in Figure 4.8. Draw axial, shear and moment diagrams for all members of the structure.
Figure 4.8: Example Determinate Frame Structure
As a first step, we can check that the structure is stable and determinate using the methods from Chapter 2. Using equation \eqref{eq:deg-indet}:
\begin{equation} \boxed{i_e = 3m + r - (3j + e_c) } \label{eq:deg-indet} \tag{1} \end{equation} \begin{align*} i_e &= 3m + r - (3j + e_c) \\ &= 3(3) + 3 - (3(4) + 0) \\ & = 0 \end{align*}
Since $i_e = 0$, the structure is determinate. It is also stable since there are no collapse mechanisms present.
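Since the determinacy check is one line of arithmetic, a throwaway helper (a sketch using the same symbols as equation (1)) makes it easy to repeat for other frames:

def degree_of_indeterminacy(m, r, j, e_c=0):
    """i_e = 3m + r - (3j + e_c); i_e == 0 means statically determinate,
    provided no collapse mechanism is present."""
    return 3 * m + r - (3 * j + e_c)

print(degree_of_indeterminacy(m=3, r=3, j=4))  # -> 0, determinate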
The next step in the analysis is to find the reaction forces. In this structure there are three reaction forces, $A_x$ and $A_y$ at the left pin, and $D_y$ at the right roller. We will find the reactions using equilibrium on the entire structure. The free body diagram of the structure is shown in Figure 4.9.
Figure 4.9: Example Frame Free Body Diagram
Starting with the moment equilibrium about point A to find the vertical reaction at D ($D_y$):
\begin{align*} \curvearrowleft \sum M_A &= 0 \\ -75(5)-25(7)(3.5)+D_y(7) &= 0 \\ D_y &= +141.1\mathrm{\,kN} \end{align*} \begin{equation*} \boxed{D_y = 141.1\mathrm{\,kN} \uparrow} \end{equation*}
For horizontal equilibrium, the horizontal reaction at A ($A_x$) is originally assumed to be positive (pointing to the right):
\begin{align*} \rightarrow \sum F_x &= 0 \\ A_x + 75 &= 0 \\ A_x &= -75\mathrm{\,kN} \end{align*}
But the negative solution tells us that $A_x$ actually points to the left (as shown in Figure 4.9):
\begin{equation*} \boxed{A_x = 75\mathrm{\,kN} \leftarrow} \end{equation*}
Vertical equilibrium:
\begin{align*} \uparrow \sum F_y &= 0 \\ A_y - 25(7) + D_y &= 0 \\ A_y - 25(7) + 141.1 &= 0 \\ A_y &= +33.9\mathrm{\,kN} \end{align*} \begin{equation*} \boxed{A_y = 33.9\mathrm{\,kN} \uparrow} \end{equation*}
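The three reaction calculations above can be scripted directly from the equilibrium equations; a minimal Python sketch with the example's numbers (kN and m), where the 3.5 m lever arm of the distributed load is half the 7 m span:

P = 75.0      # horizontal point load (kN)
w = 25.0      # snow-type distributed load (kN/m)
span = 7.0    # horizontal projection of member BC (m)
h = 5.0       # height at which the 75 kN load acts (m)

D_y = (P * h + w * span * (span / 2)) / span  # moment equilibrium about A
A_x = -P                                      # horizontal equilibrium (negative: points left)
A_y = w * span - D_y                          # vertical equilibrium

print(round(D_y, 1), round(A_x, 1), round(A_y, 1))  # 141.1 -75.0 33.9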
Now, the structure must be divided into separate members. Our structure will be divided into three members, AB, BC, and CD. We will go through each in turn, solving for all the unknown end forces, axial force, shear force and moment before moving on to the next member. The free body diagram and solution for member AB is shown in Figure 4.10.
Figure 4.10: Example Frame Member AB
The free body diagram (FBD) on the left of Figure 4.10 shows all of the information that is currently known about member AB. It contains the known reactions at the base $A_x$ and $A_y$. In addition, the unknown forces at the cut at point B are also shown. To form the FBD for member AB, the structure had to be cut at point B. Since the structure is continuous at that point, we know that vertical and horizontal forces and a moment must be transmitted across the cut. Since we don't know anything about these forces yet, they are all drawn in the positive direction. The notation $B_x^{AB}$ means: "the force at point B in the x-direction, acting on member AB." Likewise, $M_B^{AB}$ means: "the moment at point B acting on member AB."
The FBD, has three unknowns: $B_x^{AB}$, $B_y^{AB}$, and $M_B^{AB}$. We can solve for these three unknowns using the equations of equilibrium:
\begin{align*} \curvearrowleft \sum M_B &= 0 \\ M_B^{AB} - 75(8) &= 0 \end{align*} \begin{equation*} \boxed{M_B^{AB} = 600\mathrm{\,kNm} \curvearrowleft} \end{equation*} \begin{align*} \rightarrow \sum F_x &= 0 \\ B_x^{AB} - 75 &= 0 \end{align*} \begin{equation*} \boxed{B_x^{AB} = 75.0\mathrm{\,kN} \rightarrow} \end{equation*} \begin{align*} \uparrow \sum F_y &= 0 \\ B_y^{AB} + 33.9 &= 0 \end{align*} \begin{equation*} \boxed{B_y^{AB} = 33.9\mathrm{\,kN} \downarrow} \end{equation*}
The resulting solved FBD is shown in Figure 4.10.
Now that we know all of the forces acting on member AB, we can use the methods of beam analysis to find the axial, shear and moment diagrams which are shown in Figure 4.10.
The construction of the axial force diagram is similar to that of the shear force diagram: starting at one end, forces parallel to the member that cause compression move the axial force diagram one way, and forces that cause tension move it the other way. It doesn't matter which way is which on the diagram, as long as the compression and tension sides of the diagram are indicated. The compression side of the axial force diagram is shown with a 'C' in the figure. In this case, we can start at point A, assuming that the member is fixed at the other end (point B). The vertical reaction force at A of $33.9\mathrm{\,kN}$ causes the member to go in compression, so we move the axial force diagram to the right by the same amount and indicate that side as being in compression. There is no other load parallel to the member until point B, which has a force of $33.9\mathrm{\,kN}$ that would cause tension in the member if it pushed the member away from B (assume that the force acts just below point B). This pushes the axial force diagram back to the left, meeting up with the member axis at 0.
The shear and moment diagrams for this member are simple and were constructed moving from bottom to top. The moment diagram is 'drawn on the compression side.' This means that whichever side of the member the moment diagram is drawn on, the extreme fibre on that side of the beam will be in compression. For this member AB, all of the moment is on the left side of the member. Therefore the left side of the member is in compression (and the right side is in tension).
At a cut location, moment arrows always point towards the compression side of the member.
Now that member AB has been completely solved, we can move on to the next member, member BC, which is shown in Figure 4.11.
Figure 4.11: Example Frame Member BC Equilibrium and Resolution of Forces into the Local Axis Direction
Part (a) of Figure 4.11 shows a free body diagram of member BC with all of the information that is currently known. Since members AB and BC are on either side of the cut at point B, the forces and moments must be transferred at that point. So, we can take the forces at point B from member AB and apply them to point B on member BC; however, we must be sure to reverse the direction of the forces, since forces and moments must be equal and opposite on either side of a cut (as previously discussed in Section 1.6). The horizontal force at the cut at B changes direction from right to left, the vertical force changes from down to up, and the moment changes from counter-clockwise to clockwise. Again, there are three unknown forces/moments at point C due to the cut between member BC and member CD: $C_x^{BC}$, $C_y^{BC}$, and $M_C^{BC}$. These may be found using equilibrium:
\begin{align*} \curvearrowleft \sum M_C &= 0 \\ M_C^{BC} + 25(7)(3.5) - 33.9 (7) - 75.0 (2) - 600 &= 0 \end{align*} \begin{equation*} \boxed{M_C^{BC} = 374.8\mathrm{\,kNm} \curvearrowleft} \end{equation*} \begin{align*} \rightarrow \sum F_x &= 0 \\ -75 + C_x^{BC} &= 0 \end{align*} \begin{equation*} \boxed{C_x^{BC} = 75.0\mathrm{\,kN} \rightarrow} \end{equation*} \begin{align*} \uparrow \sum F_y &= 0 \\ 33.9 - 25(7) + C_y^{BC} &= 0 \end{align*} \begin{equation*} \boxed{C_y^{BC} = 141.1\mathrm{\,kN} \uparrow} \end{equation*}
The resulting solved free body diagram is shown in Part (b) of Figure 4.11. Since member BC is an inclined member, we need to resolve all of the forces into the local member directions (i.e. perpendicular and parallel to the member) before we can find the axial, shear and moment on the member. This process is shown in Parts (c) and (d) of the figure. Part (c) shows how to convert the horizontal and vertical forces at point B into forces that are perpendicular and parallel to member BC. To do this, each force must be split into two components, one perpendicular and one parallel to member BC. The perpendicular components from each are then added together to get the total perpendicular point load at B:
\begin{align*} P_{perp} &= 75.0 \sin \theta + 33.9 \cos \theta \\ P_{perp} &= 75.0 \sin {15.9^\circ} + 33.9 \cos {15.9^\circ} \\ P_{perp} &= 53.2\mathrm{\,kN} \nwarrow \text{ (perpendicular to BC)} \end{align*}
and the parallel forces are summed to get:
\begin{align*} P_{para} &= -75.0 \cos {15.9^\circ} + 33.9 \sin {15.9^\circ} \\ P_{para} &= 62.8\mathrm{\,kN} \swarrow \text{ (parallel to BC)} \end{align*}
The same process is followed for the point loads at point C.
Part (d) of Figure 4.11 shows the resulting point loads (parallel and perpendicular) at either end of the member. It also shows the distributed load resolved into the local axis (member) directions. The snow-type distributed load on the beam is resolved into the perpendicular and parallel directions using the expressions previously shown in Figure 4.7:
\begin{align*} w_{perp} &= w_{snow} \cos^2 \theta \\ w_{perp} &= 25 \cos^2 {15.9^\circ} \\ w_{perp} &= 23.1\mathrm{\,kN/m} \searrow \\ w_{para} &= w_{snow} \cos \theta \sin \theta \\ w_{para} &= 25 \cos {15.9^\circ} \, \sin {15.9^\circ} \\ w_{para} &= 6.59\mathrm{\,kN/m} \swarrow \end{align*}
The moments are not affected when translating the forces into the perpendicular and parallel directions.
Part (e) of Figure 4.11 shows the same fully solved free-body diagram as Part (d) but simply rotated to be horizontal so that it is easier to analyse. Note also that the length of the member itself ($\sqrt{7^2 + 2^2} = 7.28\mathrm{\,m}$) is longer than the horizontal projection ($7\mathrm{\,m}$).
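The resolution of the end forces and of the snow-type load into member BC's local axes reduces to a few lines; a minimal Python sketch that reproduces the numbers above:

import math

theta = math.atan2(2.0, 7.0)   # member BC rises 2 m over a 7 m projection (~15.9 deg)

# Point loads at B (global components taken from the solved member AB)
P_perp = 75.0 * math.sin(theta) + 33.9 * math.cos(theta)    # 53.2 kN
P_para = -75.0 * math.cos(theta) + 33.9 * math.sin(theta)   # -62.8 kN (sign gives direction)

# Snow-type distributed load resolved into the member axes
w_perp = 25.0 * math.cos(theta) ** 2                 # 23.1 kN/m
w_para = 25.0 * math.cos(theta) * math.sin(theta)    # 6.59 kN/m

L_BC = math.hypot(7.0, 2.0)                          # 7.28 m diagonal length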
Now that all of the loads on member BC are known, the axial, shear and moment diagrams may be constructed using the methods for beam analysis. This process is shown in Figure 4.12.
Figure 4.12: Example Frame Member BC Axial, Shear and Moment
The axial force diagram is not constant for member BC, as shown in Figure 4.12, because the snow-type distributed load on the member has a parallel component which acts along the length of the member. This parallel distributed load may be called a traction along the length of the member. Moving from left to right, the member starts with a tension of $62.8\mathrm{\,kN}$ which is then steadily increased by a traction in the same direction of $6.59\mathrm{\,kN/m}$, which moves the axial force diagram further towards the tension side. This results in a slope on the axial force diagram also equal to $6.59\mathrm{\,kN/m}$. At the right end of the member, a final compression (in the opposite direction) of $110.8\mathrm{\,kN}$ brings the axial force diagram back to zero.
The shear force and bending moment diagrams are constructed as before, with particular attention to the slope of the moment diagram at any point being directly equal to the value of the shear force diagram at the same point. The moment diagram shown in Figure 4.12, starts with a jump up due to the clockwise moment at point B, then moves even higher due to the shear between points B and B', before dropping once again between points B' and C. It is important to identify the maximum moment and where that maximum moment occurs. The value of the maximum moment is easily found by adding the moment at point B ($600\mathrm{\,kNm}$) to the area under the shear force diagram between points B and B' (a triangle with a height of $53.2\mathrm{\,kN}$). To find the area of that triangle, we need to know the length of the base. This may be found using similar triangles as shown (the total length of the member multiplied by the height of the small triangle divided by the total height of both triangles). In this case, the length of the smaller triangle is $2.303\mathrm{\,m}$ as shown. This is the location of the point of maximum moment, which should be identified on the moment diagram. Using this length, the area under the shear force diagram between points B and B' is equal to $0.5(53.2)(2.303)=61.2\mathrm{\,kNm}$. This gives a maximum moment of $600 + 61.2 = 661.2\mathrm{\,kNm}$ at a location $2.303\mathrm{\,m}$ from point B (as shown in shown in Figure 4.12).
Since the shape of the shear force diagram is linear, then the shape of the moment diagram should be parabolic. The shape of the parabola can be easily determined by sketching in the slope of the moment diagram at both ends as shown in Figure 4.12 and by identifying locations of zero slope, which are the points where the shear force diagram equals zero. This moment diagram is again drawn on the compression side of the member and it can be seen that, as mentioned previously, the point moments at the ends of the member point towards the compression side of the beam at either end.
Now that member BC is completely solved, we can move onto the final member, member CD, which is shown in Figure 4.13.
Figure 4.13: Example Frame Member CD
The free body diagram of member CD shown in Figure 4.13 includes all of the information that is known up to this point (including the opposite direction forces from member BC on the other side of the cut at point C). As this figure shows, there are no unknown forces that need to be found for member CD. This is typically the case with the final member in a frame analysis. We have already determined the forces at point D when we found the reactions using global equilibrium; however, we can use equilibrium on member CD as a check that we have solved the rest of the frame properly. If we do this check and equilibrium is not satisfied, then we have made a mistake in one of the previous steps. So, let's check equilibrium for member CD:
\begin{align*} \curvearrowleft \sum M_C &= 0 \\ -374.8 + 75(5) &= 0 \; \checkmark \end{align*} \begin{align*} \rightarrow \sum F_x &= 0 \\ 75 - 75 &= 0 \; \checkmark \end{align*} \begin{align*} \uparrow \sum F_y &= 0 \\ 141.1 - 141.1 &= 0 \; \checkmark \end{align*}
All of the equilibrium equations are satisfied, so we can have some confidence that our solution is correct.
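For larger frames, these checks are easy to script alongside a hand solution. A minimal sketch using the numbers above (the 1 kN / 1 kNm tolerance is an assumed allowance for rounding in earlier steps):

```python
# Equilibrium residuals for member CD, using the free-body diagram values above.
residuals = {
    "Sum M_C (kNm)": -374.8 + 75 * 5,
    "Sum Fx (kN)":   75 - 75,
    "Sum Fy (kN)":   141.1 - 141.1,
}

for name, r in residuals.items():
    status = "OK" if abs(r) < 1.0 else "check previous steps"
    print(f"{name} = {r:+.2f} -> {status}")
```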
Knowing all of the forces on member CD, we can construct the axial, shear and moment diagrams using beam analysis methods as shown in Figure 4.13 (try it yourself!).
Now that all of the axial, shear and moment diagrams have been constructed for each member, the last optional step is to combine them onto a single diagram which shows the axial, shear and moment for the entire structure. These summary diagrams are shown in Figure 4.14.
Figure 4.14: Example Frame Solution Summary
Modeling, testing, and parametric analysis of a parabolic solar cooking system with heat storage for indoor cooking
Ndiaga Mbodji1 and
Ali Hajji1
Received: 10 February 2017
In a changing world where energy needs grow daily with economic growth and demographic expansion, where prices are unstable, reserves are running out, and climate change is a pressing concern, energy issues are increasingly framed by the question of sustainability. In many developing countries, wood and subsidized butane are the main sources of energy used for cooking in households. The use of solar energy in domestic cooking is therefore becoming unavoidable. Several models of solar cookers have been proposed, but most of them dealt with box and oven types of solar cookers without storage.
This paper presents a dynamic thermodynamic model of a parabolic solar cooking system (PSCS) with heat storage, along with a comparison of the model solution with experimental measurements. The model uses various thermal resistances to take into account heat transfer between the different parts of the system.
The first experimental setup consists of a parabolic concentrator (0.80-m diameter and 0.08-m depth) and a 1.57-l cylindrical receiver. The second experimental setup is composed of a parabolic concentrator (1.40-m diameter and 0.16-m depth), the same receiver, and a 6.64-l heat storage. Tests were carried out in Rabat, Morocco, between April 24 and July 10, 2014, and between May 15 and June 18, 2015. Synthetic oil is used as the transfer fluid and the sensible heat storage medium.
Comparison between predicted and measured temperatures shows a good agreement with a relative error of ± 4.4%. The effects of important system design and operating parameters were also analyzed. The results show that a 50 W m−2 increase of the daily maximum solar radiation increases the storage temperature by 4 °C and a 5% increase of the receiver reflectance or absorptance improves the maximum storage temperature by 3.6 and 3.9 °C, respectively. Optimizing the aspect ratio of the receiver to 2 gives a maximum storage temperature of 85 °C. Increasing the thermal fluid mass flow rate from 0 to 18 kg h−1, or the receiver thermal insulation from 0.01 to 0.08 m, increases the maximum storage temperature by 65 and 17 °C, respectively.
Keywords: Parabolic solar cooking system, Heat storage
With increasing population, economic growth, and environmental concerns, the use of solar energy in domestic cooking is becoming a good alternative for sustainable development, one which will greatly decrease mortality, deforestation, and soil erosion. The World Health Organization (WHO) reports that each year, 1.6 million people die from respiratory diseases caused by indoor air pollution due to solid fuel use for cooking [1]. A domestic solar cooker saves 100 trees over a 15-year lifetime, prevents the release of 1.5 ton of CO2 annually, increases household purchasing power (by reducing the budget allocated to cooking), and gives more time to women and children, who spend 15 h per week on the chore of collecting wood [1].
While most solar cookers in use today do not have heat storage, this feature will alleviate the mismatch between solar heat energy supply and energy demand for cooking. Heat storage is important for indoor solar cooking requirements and will ensure continuity of service, reduce the use of conventional energy, and give a reasonable cooking time compared with conventional cooking [2].
Modeling solar concentrating systems, including parabolic solar cooking systems (PSCS), is a key tool for increasing their effectiveness and optimizing their operating conditions. Several models of solar cookers have been proposed in recent years, but most of them dealt with box and oven types of solar cookers. Very little modeling work considered detailed dynamic temperature distribution and heat transfer in PSCS with storage. Existing models of parabolic solar systems (other than cooking applications) emphasize optimization of power production rather than maximizing fluid temperature. In cooking systems, the fluid temperature determines not only the types of food that can be cooked but also the cooking time.
There is therefore a need to develop a detailed dynamic model of PSCS with heat storage, which will determine the temperature variations in all system components. The present work is focused on developing such a model and on its experimental validation. The "Brief literature review" section presents a brief literature review on the topic of modeling PSCS. The objective is to compare the different modeling approaches, the numerical solutions, the formulas used to assess heat losses, and their validation methods. The "System description and heat transfer processes" section describes the experimental system components and the heat transfer processes involved in its operation. Particular attention was given to the receiver, the key element which absorbs incoming solar radiation, converts it to heat, and transmits it to the heat transfer fluid. The "Governing equations and numerical solution" section gives the governing equations derived from heat balance relationships and heat transfer coefficient formulas and describes their numerical solution. The "Convergence and validation of the numerical solution" section presents the model validation by comparison with other known models and with the experimental results obtained from prototype testing. Finally, the "Parametric analysis" section gives the results of a parametric analysis on the effect of most relevant design elements and operating conditions.
Brief literature review
Solar cooking can be classified into four categories depending on the required temperature range: cooking (85 to 90 °C), boiling (100 to 130 °C), frying (200 to 250 °C), and grilling (over 300 °C) [3]. However, the most frequent classification of solar cooking systems distinguishes between direct and indirect systems [3]. In direct cookers, the solar heat is transferred directly from the reflective surface to the food container (casserole, pot, dish, etc.), whereas in indirect cookers, the pot is physically separate from the collector and a heat-transferring medium is required to convey the heat to the cooking pot. Either type may or may not integrate a heat storage system, whose size depends on the desired autonomy. Modern technologies often include indoor large-scale kitchens for collective applications using fluid heat transfer, with or without heat storage [3].
Most modeling of solar parabolic systems is concerned with Stirling engine applications, and only a few studies have dealt with modeling such systems used in solar cooking. Mawire et al. [4] introduced the energy balance equations to model a solar energy capture (SEC) system and a thermal energy storage (TES) system for an indirect parabolic solar cooker. An oil–pebble bed is used as the TES material. A Simulink block was used to solve the equations and to perform energy and exergy analyses. The results indicate a greater degree of thermal stratification and energy stored when using constant-temperature charging than when using constant-flowrate charging. There are greater initial energy and exergy rates for the constant-flowrate method when the solar radiation is low. Energy efficiencies using both methods are comparable, whilst the constant-temperature method results in greater exergy efficiency at higher levels of solar radiation. The same authors [5] used the system to carry out discharging simulations for the TES system. It was observed from the results that discharging the TES system at a constant flow rate allows a higher rate of heat utilization. However, this is not beneficial to the cooking process, since the maximum cooking temperature is not maintained for the duration of the discharging period. On the other hand, the controlled load power discharging method (variable flow rate) has a slower initial rate of heat utilization, but the maximum cooking temperature is maintained during the whole discharging process, which is desirable for cooking. Prasanna [2] modeled and designed a hybrid solar cooking system consisting of a parabolic collector, a thermal storage tank, and a heat exchanger. The energy source is a combination of solar energy and liquefied petroleum gas (LPG). A bond graph modeling approach was used to build a dynamic model. He found that an optimal flow rate allowed a 6% increase in system efficiency as compared to the thermosyphon flow rate. He also found that when the pipe diameter is decreased, the efficiency curve moves up. Mussard [6] developed a low-cost small-scale parabolic trough coupled with a thermal storage unit for higher-temperature cooking. The system is built with a self-circulation loop and uses thermal oil. The thermal behavior of the system was simulated using the finite-volume method. He compared different sensible and latent heat storage materials and concluded that latent heat-based systems are the most relevant. He also showed that a glass cover with an air gap around the absorber does not improve the efficiency at low temperatures, but when reaching high temperatures (around 220 °C), thermal insulation becomes necessary. A storage based mainly on thermal oil is much more efficient than one based on aluminum crossed by thermal oil channels. Comparison between the current heat storage and the direct cooker for boiling water, even with a standard pot, shows that heat storage increases cooking time from 27 to 38 min. He also showed that the selective coating does not drastically improve the efficiency of the system, but the use of an evacuated tube around the absorber reduces the charging time of the heat storage by a factor of 2.
System description and heat transfer processes
System description and operation
Figure 1 shows the schematics of the experimental system used in this study and described in more detail in a previous paper [3]. The system is composed of the following elements: a solar concentrator, a receiver, a heat storage tank, and a circulation pump placed in the primary fluid circuit. Synthetic oil SAE-40 is used as the heat transfer fluid, and the system has a two-axis tracking mechanism. Figure 2 shows pictures of the second experimental setup.
Schematics of a solar cooking system with a heat storage [3]
Detailed pictures of the second experimental setup. a Complete system. b Concentrator and receiver. c Heat storage tank and circulation pump
The key part of the PSCS is the receiver, also called the absorber, which is composed of two black iron cylinders as shown in Fig. 3. The inner cylinder, with a volume of 1.57 l, has a thickness of 1.5 mm, a diameter of 0.1 m, and a length of 0.2 m, and is painted with black acrylic to maximize absorption of solar radiation. The outer cylinder is larger, with a thickness of 1 mm, a diameter of 0.2 m, and a length of 0.25 m. The absorber is maintained at the focal point by three square sliding iron tube arms expandable from 0.4 to 0.6 m in length. Glass wool is placed between the two cylinders as insulation to reduce heat losses. The front of the receiver can optionally be equipped with a glass cover. Table 1 gives the system size parameters and optical properties of each material in the two devices.
Longitudinal section of the receiver
Table 1: System size parameters and optical properties (total mass, mirror reflectance, length perpendicular to the aperture, square sliding tube, intercept factor, absorptance, and glass cover transmittance) for the first and second experimental devices. *Same value in both systems.
Heat transfer modes
Incident sunlight reaching the parabolic dish is concentrated on the glass cover at the front face of the receiver. A first part of the concentrated solar radiation is reflected to the ambient, the glass absorbs a second part, and a third part is transmitted through the glass cover. The latter part is absorbed by the receiver plate. A small part is reflected back to the glass cover. Heat is transmitted to the fluid via the black-painted metal absorber. The selective coating layer has a high absorptance and a low emittance in order to reduce thermal radiation losses. Figure 4 shows a cross section of the receiver and all heat transfer processes involved. The absorber is considered to be very rigid, and its properties are not affected by temperature.
Cross section of the receiver with heat transfer processes
The thermal resistances and the different modes of heat transfer between the external environment (ambient and sky), the receiver, and the fluid are depicted in Fig. 5. We take into account the energy stored in each node and consider a one-dimensional spatial temperature variation in the receiver and heat storage tank. The different heat losses, which are conduction through the receiver insulation, convection from the receiver to the ambient air, and radiation from the receiver to the sky, are also considered, in a manner similar to Guendouz [7] and Rongrong et al. [8]. For evaluating the different heat loss coefficients, we used the equations presented by Duffie and Beckman [9] and Incropera et al. [10].
Equivalent thermal resistance model
Governing equations and numerical solution
The present model of the PSCS considers all the abovementioned heat transfer processes, takes into account the presence of the glazing on the receiver, and assumes one-dimensional variations of the temperature along the receiver and the storage tank. The first law of thermodynamics is applied between times t and t + Δt to various system components to obtain the governing energy balance equations in a convenient explicit finite difference form ready for numerical solution.
- Solar concentrator:
$$ {m}_c{C}_c\Delta {T}_c=\left[\left(1-\rho \right){I}_c-{h}_1\left({T}_c-{T}_a\right)-{h}_2\left({T}_c-{T}_s\right)-{h}_3\left({T}_c-{T}_g\right)\right]{A}_c\Delta t $$
- Glass cover:
$$ {m}_g{C}_g\Delta {T}_g=\left[{\upalpha}_g\gamma \rho {A}_c{I}_c+{h}_3\left({T}_c-{T}_g\right){A}_c-\left({h}_4\left({T}_g-{T}_a\right)+{h}_5\left({T}_g-{T}_s\right)\right){A}_g+\left({h}_6+{h}_7\right)\left({T}_p-{T}_g\right){A}_g\right]\Delta t $$
The intercept factor γ accounts for various imperfections in the system operation: shadowing, tracking system inaccuracy, geometry, mirror clearness, dust on the glass cover, and miscellaneous factors [11].
$$ \gamma =\prod_{i=1}^6{\gamma}_i $$
- Receiver plate:
$$ {m}_p{C}_p\varDelta {T}_p=\left[{\alpha}_p{\tau}_g\gamma \rho {A}_c{I}_c-{h}_8\left({T}_p-{T}_{f,r}^1\right){A}_p-\left({h}_6+{h}_7\right)\left({T}_p-{T}_g\right){A}_g\right]\varDelta t $$
- Fluid portion in position 1 of the receiver:
$$ {m}_{r,k}{C}_f\varDelta {T}_{f,r}^1=\left[\dot{m}{C}_f\left({T}_{f,r}^2-{T}_{f,r}^1\right)+{h}_8\left({T}_p-{T}_{f,r}^1\right){A}_p-\frac{\lambda_f}{\varDelta {X}_r}\left({T}_{f,r}^1-{T}_{f,r}^2\right){A}_P-{h}_g\left({T}_{f,r}^1-{T}_a\right){S}_{ur}\right]\varDelta t $$
- Fluid portion in intermediate position k (1 < k < kmax) of the receiver:
$$ m_{r,k} C_f \Delta T_{f,r}^k = \left[\dot{m} C_f \left(T_{f,r}^{k+1}-T_{f,r}^k\right)+\frac{\lambda_f}{\Delta X_r}\left(T_{f,r}^{k-1}-T_{f,r}^k\right)A_p-\frac{\lambda_f}{\Delta X_r}\left(T_{f,r}^k-T_{f,r}^{k+1}\right)A_p-h_g\left(T_{f,r}^k-T_a\right)S_{ur}\right]\Delta t $$
- Fluid portion in position kmax of the receiver:
$$ {m}_{r,k}{C}_f\varDelta {T}_{f,r}^{k\max }=\left[\dot{m}{C}_f\left({T}_{f,r}^1-{T}_{f,r}^{k\max}\right)+\frac{\lambda_f}{\varDelta {X}_r}\left({T}_{f,r}^{k\max -1}-{T}_{f,r}^{k\max}\right){A}_P-{h}_g\left({T}_{f,r}^{k\max }-{T}_a\right)\left({S}_{ur}+{A}_P\right)\right]\varDelta t $$
Similarly, the storage tank is divided into imax fluid zones.
- Fluid portion in position 1 of the storage tank:
$$ m_{s,1} C_f \Delta T_{f,s}^1 = \left[\dot{m} C_f \left(T_{f,s}^2-T_{f,s}^1\right)+\frac{\lambda_f}{\Delta X_s}\left(T_{f,s}^2-T_{f,s}^1\right)A_s-h_g^{\prime}\left(T_{f,s}^1-T_a\right)\left(S_{us}+A_s\right)\right]\Delta t $$
- Fluid portion in position i (1 < i < imax) of the storage tank:
$$ m_{s,i} C_f \Delta T_{f,s}^i = \left[\dot{m} C_f \left(T_{f,s}^{i+1}-T_{f,s}^i\right)+\frac{\lambda_f}{\Delta X_s}\left(T_{f,s}^{i+1}-T_{f,s}^i\right)A_s-\frac{\lambda_f}{\Delta X_s}\left(T_{f,s}^i-T_{f,s}^{i-1}\right)A_s-h_g^{\prime}\left(T_{f,s}^i-T_a\right)S_{us}\right]\Delta t $$
- Fluid portion in position imax of the storage tank:
$$ m_{s,i} C_f \Delta T_{f,s}^{i\max} = \left[\dot{m} C_f \left(T_{f,r}^1-T_{f,s}^{i\max}\right)-\frac{\lambda_f}{\Delta X_s}\left(T_{f,s}^{i\max}-T_{f,s}^{i\max-1}\right)A_s-h_g^{\prime}\left(T_{f,s}^{i\max}-T_a\right)\left(S_{us}+A_s\right)\right]\Delta t $$
The global heat loss coefficients h g and \( {h}_g^{\prime } \) are calculated as follows:
$$ {h}_g={h}_g^{\prime }=\frac{1}{\frac{1}{h_i}+\frac{1}{h_e}+\frac{e_{\mathrm{ins}}}{\lambda_{\mathrm{ins}}}} $$
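As a quick numerical illustration of this series-resistance formula, consider the following minimal sketch; the h_i, h_e, and insulation values below are assumptions for illustration, not measurements from this study:

```python
def global_heat_loss_coeff(h_i: float, h_e: float, e_ins: float, k_ins: float) -> float:
    """Global heat loss coefficient from three series resistances:
    inside convection (1/h_i), insulation conduction (e_ins/k_ins),
    and outside convection (1/h_e)."""
    return 1.0 / (1.0 / h_i + 1.0 / h_e + e_ins / k_ins)

# Assumed values: 75 mm of glass wool (k ~ 0.04 W/m/degC), h_i = 10, h_e = 15 W/m^2/degC.
print(round(global_heat_loss_coeff(10.0, 15.0, 0.075, 0.04), 2))  # ~0.49 W/m^2/degC
```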
The above governing equations are solved numerically to determine the time variations of the temperatures of the concentrator, the glass cover, the plate on the front face of the receiver, and the fluid at different positions both in the receiver and the storage tank. The simulation parameters for the model are summarized in Table 2.
Table 2: Simulation parameters, including the initial temperature; receiver and storage length steps; receiver plate surface area; storage cross-section area; receiver and storage lateral surface areas; specific heats of the concentrator, glass, heat transfer fluid (SAE-40 oil), and receiver plate (J kg−1 °C−1); fluid and insulation thermal conductivities (W m−1 °C−1); receiver plate mass; mass flow rate (kg s−1); the heat transfer coefficients h1–h8 (CHTC concentrator to ambient, RHTC concentrator to sky, RHTC concentrator to glass, CHTC glass to ambient, RHTC glass to sky, CHTC plate to glass, RHTC plate to glass, CHTC plate to fluid); and the GHTC from the receiver and from the storage to ambient. CHTC Convection Heat Transfer Coefficient, RHTC Radiation Heat Transfer Coefficient, GHTC Global Heat Transfer Coefficient.
Convergence and validation of the numerical solution

The time step is lowered until stability and convergence of the numerical solution are obtained. Figure 6 shows the time variation of the receiver plate temperature at time steps (∆t) of 100 and 10 s; the latter coincides with the curves obtained using smaller time steps (1 and 0.1 s). Similar behavior is observed for the other temperatures, and total stability was obtained with 0.1 s, which is the value adopted throughout the present work.
Time variation of the receiver plate temperature at different time steps
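The explicit update underlying this convergence test can be sketched compactly. Below is a minimal, illustrative version for a single lumped node (the receiver plate), following the plate energy balance above; the parameter dictionary is an assumption, and the full model updates all nodes (concentrator, glass, fluid zones) at each step:

```python
def step_plate(T_p, T_f1, T_g, dt, p):
    """One explicit finite-difference update of the receiver plate temperature."""
    gain = p["alpha_p"] * p["tau_g"] * p["gamma"] * p["rho"] * p["A_c"] * p["I_c"]
    to_fluid = p["h8"] * (T_p - T_f1) * p["A_p"]             # convection to fluid zone 1
    to_glass = (p["h6"] + p["h7"]) * (T_p - T_g) * p["A_g"]  # convection + radiation to glass
    return T_p + dt * (gain - to_fluid - to_glass) / (p["m_p"] * p["C_p"])

# Convergence test: halve dt and re-run until successive temperature
# histories coincide (here, 100 s -> 10 s -> 1 s -> 0.1 s).
```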
Our solution was compared with the results of the simpler model described by Newton [12], which assumes that the receiver metal plate and the fluid have the same temperature. Zeghib [13] used the same model but allowed temperature gradients along the receiver. Figure 7 shows that the temperature difference between the receiver plate and the fluid in position 1 tends to vanish when the heat transfer coefficient is very large (in practice, larger than 3000 W m−2 °C−1). Similarly, the temperature difference between the plate and the fluid in position kmax decreases to 0 when the fluid thermal conductivity is very large (in practice, larger than 2000 W m−1 °C−1).
Time variation of the temperature difference between the plate and the fluid when we increase a the heat transfer coefficient and b the fluid thermal conductivity
Figure 8 presents the fluid temperature at position kmax in the receiver, obtained by solving our model with a high heat transfer coefficient (a coefficient of 10,500 W m−2 °C−1 is taken to guarantee that the plate temperature and the fluid temperature at position k = 1 are equal) and varying the fluid thermal conductivity. At high values (λf larger than 6000 W m−1 °C−1), the numerical solution matches the results obtained using the simple model of [12, 13], under the same operating conditions and with the same time step (0.1 s).
Comparison of the present model with the simple model of [12, 13]
The first experimental system was tested in the region of Rabat (Morocco) during the period from April 24 to July 10, 2014. Tests were conducted from 9:00 am to 5:30 pm local time, under clear sky conditions (see Fig. 9).
Time variation of solar radiation and ambient temperature in the closed circuit of the first experimental device
Figure 10 compares the measured and the theoretical fluid temperatures in the upper part of the receiver in the closed circuit of the first experimental device using SAE-40 synthetic oil, recorded at 15-min intervals; the fluid reached a maximum temperature of 153 °C after 5 h.
Measured and predicted fluid temperatures in the receiver in the closed circuit of the first experimental device
The second experimental system was tested at the same location during the period from May 15 to June 18, 2015, under clear sky conditions (see meteorological data in Fig. 11).
Time variation of solar radiation and ambient temperature in the closed circuit of the second experimental device
The measured and the theoretical fluid temperatures in the upper part of the receiver in the closed circuit of the second experimental device using SAE-40 synthetic oil, recorded at 5-min intervals, are given in Fig. 12. The maximum fluid temperature was 150 °C, reached after 1 h of heating.
Measured and predicted fluid temperatures in the receiver in the closed circuit of the second experimental device
A good agreement is noticed between the theoretical values and the experimental results. The relative error (RE) is between ± 2.4 and ± 4.3%, and the root mean square error (RMSE), the square root of the arithmetic mean of the squared differences between the predictions and the observations, is between 1.2 and 3.0 °C. These values are summarized in Table 3.
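Both metrics can be computed directly from paired series of predicted and measured temperatures. A minimal sketch follows; the mean signed relative error used here is one common definition, as the paper does not spell out its exact formula:

```python
import numpy as np

def relative_error_pct(pred, meas):
    """Mean signed relative error between predictions and measurements, in percent."""
    pred, meas = np.asarray(pred, float), np.asarray(meas, float)
    return 100.0 * float(np.mean((pred - meas) / meas))

def rmse(pred, meas):
    """Root mean square error: square root of the mean squared difference."""
    pred, meas = np.asarray(pred, float), np.asarray(meas, float)
    return float(np.sqrt(np.mean((pred - meas) ** 2)))
```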
Table 3: Relative and root mean square errors in the closed circuit, giving the RE (%) and RMSE (°C) for the receiver (RE up to ± 4.3%). RE relative error, RMSE root mean square error.
The DNI and temperature data used in the open circuit are shown in Fig. 13.
Time variation of solar radiation and ambient temperature in the open circuit of the second experimental device
Figure 14 shows the oil temperature measurements in the upper part of the receiver and in the lower part of the storage tank using SAE-40 synthetic oil at 15-min intervals. The maximum fluid temperature in the storage was 75 °C.
Measured and predicted fluid temperatures in the receiver and in the storage tank in the open circuit of the second experimental device
Table 4 summarizes the relative error and the root mean square error, which were, respectively, between ± 4.0 and ± 5.9% and between 1.3 and 1.5 °C in the receiver, and between ± 4.4 and ± 7.5% and between 1.3 and 1.9 °C in the storage tank. The increase in RE noted on May 27, 2015, is due to a temperature difference greater than 5 °C between the receiver and the storage tank, caused by blocking of the pump shaft at the end of the afternoon.
Table 4: Relative and root mean square errors in the open circuit, giving the RE (%) and RMSE (°C) for the receiver and for the storage tank.
Parametric analysis

The performance of a PSCS can be significantly affected by numerous parameters such as weather conditions (solar radiation, wind) that vary with the site, material optical properties (reflectance, absorptance, emissivity), system design parameters (aspect ratio, rim angle, intercept factor, exposure ratio), and operating parameters (mass flow rate, glazing, air gap between the glass and the plate on the front face of the receiver, tracking mechanism, fluid nature, heat losses). The following paragraphs present the results of a parametric analysis of the effect of these parameters on the system performance, measured by the maximum fluid temperature in the heat storage tank. This choice is dictated by the fact that this temperature determines the cooking performance (the types of food that can be cooked and the cooking time).
Effect of maximal solar radiation
Figure 15 shows that the maximum heat storage temperature is greatly affected by the maximal solar radiation, which changes with the season and geographic location. The daily maximal radiation varies between 500 and 950 W m−2, and a change of daily maximal solar radiation by 50 W m−2 changes the maximum storage temperature by about 4 °C. It should be noted that thermal losses also increase due to the increased receiver temperature; however, this increase is smaller than the gain in absorbed solar energy. Rongrong et al. [8] reported that when solar radiation changes by 20 W m−2, the collector outlet thermal fluid temperature decreases or increases by about 2 °C. Luo et al. [14] found that when solar irradiation increases from 900 to 1100 W m−2 or decreases from 900 to 700 W m−2, the collector outlet temperature increases or decreases by about 10 °C.
Maximum storage temperature as a function of solar radiation
Effects of the material reflectance and absorptance
Aluminum (0.7), silver mirrors (0.7 to 0.9), and Mylar (0.94) are commonly used as reflectors, whereas copper coated with black paint (0.75) and stainless steel coated with cermet (mixture of ceramics and metal) or black chrome (0.9 to 0.94) are used as absorbers. Figure 16 illustrates the effect of the reflectance and absorptance of the materials used on the system performance. For a reflectance of 0.75, an increase of the absorptance from 0.6 to 0.75 will cause an 18 °C increase of the heat storage temperature. Generally, an increase of 5% of the reflectance or the absorptance will increase the storage temperature by 3.6 and 3.9 °C, respectively.
Maximum storage temperature as a function of the receiver reflectance and absorptance
Effect of the aspect ratio
The aspect ratio, defined as the cavity length to diameter ratio of the receiver, is a design parameter which influences system performance. A large diameter increases the "dead area" created by the receiver shadow on the concentrator and the heat losses to ambient air, while a longer cylinder creates a temperature gradient with slower heat transfer. For a given volume, it is therefore crucial to optimize the aspect ratio, as indicated in Fig. 17. The maximum heat storage temperature (85 °C) is reached when the aspect ratio of the receiver is near 2. Beyond this value, the temperature decreases because the effect of heat losses, particularly radiation from the receiver, outweighs the amount of solar radiation intercepted.
Maximum storage temperature as a function of the aspect ratio L/d
This behavior is close to the results found by other authors [15–17], who also showed increased radiation loss with increasing receiver aperture diameter. Beltran et al. [16] found that the cavity reaches a point of maximum efficiency when the aperture diameter is equal to 0.13 m, which corresponds to a receiver aspect ratio of 1.5. Gil et al. [17] found that, with a heat loss coefficient of 12 W m−2 °C−1, the best receiver aspect ratio is 1.9, which is close to our case. Prakash et al. [18] found that the convective loss values increase with the opening (exposure) ratio when the aspect ratio is equal to 1: the higher the opening ratio, the greater the convective zone, leading to higher convective losses. An increase of about 30–50% in the convective loss values is observed when the opening ratio increases from 0.5 to 1.
Paitoonsurikarn et al. [19] used an aspect ratio of 2.2 for their base case and showed that the heat flux increases with decreasing cavity aspect ratio for different inclinations. Madadi et al. [20] used receivers with aspect ratios of 2, 2.8, and 3.6 and showed that the effect of receiver temperature on radiation loss is higher for larger apertures than for smaller ones, while the radiation heat losses from the receiver remain low in comparison with the convection heat losses. By decreasing the heat transfer fluid mass flow rate from 0.1 to 0.0083 kg s−1, with a receiver with an aspect ratio of 2, the average receiver temperature increases from 197 to 310 °C; the convective heat loss increases by only 22%, while the radiation heat loss increases by up to 165%.
Abbasi-Shavazia et al. [21] used a receiver with an aspect ratio of 2, and a comparison of radiation loss values based on the bottom-surface and area-weighted cavity average temperatures shows no more than a 10–20% difference.
Effect of the thermal fluid mass flow rate
In high-temperature solar thermal systems with energy storage and automatic control, the fluid mass flow rate is known to be a very important operating parameter. A low mass flow rate causes higher temperatures in the receiver and thus bigger heat losses and lower storage tank temperatures. Conversely, with a higher mass flow rate, thermal energy is carried away from the receiver much faster, but the pump size and power consumption also increase. It is therefore important to determine the "optimal" value that yields the highest storage temperature without oversizing the pump. In our case, Fig. 18 shows that increasing the fluid flow rate from 0 to 18 kg h−1 increases the storage temperature by 65 °C; beyond this value, there is no significant improvement of the system performance. These results are very similar to those found by Rongrong et al. [8], Luo et al. [14], and Madadi et al. [20]: the outlet oil temperature in the receiver is reduced when the flow rate increases under the same working conditions.
Maximum storage temperature as a function of the thermal fluid mass flow rate
They found a curve which decreases as the flow rate increases; in this case, the curve of the receiver thermal efficiency presents a maximum at an optimal mass flow rate. The difference between fluid inlet and outlet temperatures is controlled and maintained at 5 °C in our case, a value commonly used in solar thermal systems.
Effect of insulation thickness
Heat losses must be carefully analyzed, as their assessment can significantly affect the accuracy of the overall system performance prediction. Thermal insulation of the receiver deserves careful analysis, as a thicker receiver wall reduces thermal losses but increases the shadow area on the solar concentrator, as well as the receiver weight and system cost.
The receiver wall thermal resistance is the inverse of the global heat transfer coefficient resulting from conduction through the receiver and convection heat transfer on the interior and the exterior of the receiver.
An insulation thickness in the receiver between 10 and 80 mm, which corresponds to a global heat loss coefficient between 0.5 and 4.0 W m−2 °C−1 (or a receiver wall thermal resistance between 0.25 and 2 m2 °C W−1), is studied. Increasing the insulation thickness from 0.01 to 0.08 m improves the maximum storage temperature by 17 °C (see Fig. 19). To minimize conduction losses in the receiver, an insulation thickness of 75 mm has been suggested as an effective width by Fraser [15].
Maximum storage temperature as a function of receiver wall thermal resistance
Conclusions

The present model of a parabolic solar cooking system with heat storage for continuous use allowed a valuable analysis of the performance of such systems. Improvements over previous, simpler models include a non-uniform receiver temperature and temperature differences between the glass cover, the receiver plate, and the thermal fluid. The model-governing equations were solved using an explicit finite difference method, and the method was mathematically validated. The results of the simulation were compared with experimental results, which showed that the model adequately predicts the thermal behavior of the described system, with a relative error of ± 4.4% and a root mean square error of 3 °C. The model was used to carry out a detailed parametric study of the main design and operating parameters which affect the system energy performance. The results show that a change of daily maximum solar radiation of 50 W m−2 changes the heat storage temperature by about 4 °C, and an increase of 5% of the reflectance or the absorptance improves the heat storage temperature by 3.6 and 3.9 °C, respectively. It was also shown that the best aspect ratio of the receiver is 2. Increasing the fluid mass flow rate from 0 to 18 kg h−1 leads to a maximum storage temperature improvement of 65 °C. Going from an insulation thickness of 0.01 to 0.08 m increases the maximum storage temperature by 17 °C.
Nomenclature

A_c: Concentrator surface area (m2)
A_g: Glass surface area (m2)
A_p: Receiver plate surface area (m2)
A_s: Storage cross-section area (m2)
C_c, C_g, C_f, C_p: Specific heats of the concentrator, glass, fluid, and receiver plate (J kg−1 °C−1)
d: Receiver diameter (m)
h_1–h_8: Heat transfer coefficients between system components (W m−2 °C−1)
h_g, h_g′: Global heat loss coefficients of the receiver and the storage (W m−2 °C−1)
I_c: Direct normal irradiance (W m−2)
L: Receiver length perpendicular to the aperture (m)
ṁ: Thermal fluid mass flow rate (kg s−1)
m_c, m_g, m_p: Concentrator, glass, and receiver plate masses (kg)
m_r,k: Receiver fluid partial mass in position k (kg)
m_s,i: Storage fluid partial mass in position i (kg)
S_ur, S_us: Receiver and storage partial lateral surface areas (m2)
T_a, T_c, T_f, T_g, T_p, T_s: Ambient, concentrator, fluid, glass, receiver plate, and sky temperatures (°C)
α_g, α_p: Glass and receiver plate absorptances
γ: Intercept factor
ρ: Mirror reflectance
Δt: Time step (s)
ΔX_r, ΔX_s: Receiver and storage length steps (m)
λ_f, λ_ins: Fluid and insulation thermal conductivities (W m−1 °C−1)
τ_g: Glass transmittance
Subscripts r and s refer to the receiver and the storage tank, with axial positions indexed by k and i, respectively.
NM worked as a PhD student under the supervision of AH, wrote the paper, and did the field tests. AH supervised the research work, gave guidance on modeling, and corrected the English text. All authors read and approved the final manuscript.
This work received no external funding. The authors declare that there are no competing interests.
Process Engineering and Environment Research Unit, Institut Agronomique et Vétérinaire Hassan II, BP 6202-Rabat-Instituts, 10101 Rabat, Morocco
ASDER (2012) Solar cookers—technical presentation. Savoyard Association for the Development of Renewable Energies, Chambery. http://www.asder.asso.fr/phocadownload/cuiseur%20solaire.pdf.
Prasanna UR (2010) Modeling, optimization and design of a solar thermal energy transport system for hybrid cooking application. Ph.D. thesis, Indian Institute of Science. https://www.researchgate.net/publication/204520599_MODELING_OPTIMIZATION_AND_DESIGN_OF_A_SOLAR_THERMAL_ENERGY_TRANSPORT_SYSTEM_FOR_HYBRID_COOKING_APPLICATION.
Mbodji N, Hajji A (2016) Performance testing of a parabolic solar concentrator for solar cooking. ASME J Sol Energy Eng 138(4). doi:10.1115/1.4033501
Mawire A, McPherson M, Van den Heetkamp RRJ (2008) Simulated energy and exergy analyses of the charging of an oil-pebble bed thermal energy storage system for a solar cooker. Sol Energy Mater Sol Cells 92:1668–1676. doi:10.1016/j.solmat.2008.07.019
Mawire A, McPherson M, Van den Heetkamp RRJ (2010) Discharging simulations of a thermal energy storage (TES) system for an indirect solar cooker. Sol Energy Mater Sol Cells 94(6):1100–1106. doi:10.1016/j.solmat.2010.02.032
Mussard M (2013) A solar concentrator with heat storage and self-circulating liquid. Ph.D. thesis, Norwegian University of Science and Technology, Trondheim, Norway. https://brage.bibsys.no/xmlui/handle/11250/235100
Guendouz B (2012) Use of solar energy for air conditioning needs. Chapter 3: state of the art of solar collectors and modeling. Magister's thesis, Abou Bakr Belkaid University, Tlemcen, Algeria. http://dspace.univ-tlemcen.dz/bitstream/112/1200/1/GUENDOUZ-BOUHELAL.pdf
Rongrong Z, Yongping Y, Qin Y, Yong Z (2013) Modeling and characteristic analysis of a solar parabolic trough system: thermal oil as the heat transfer fluid. Journal of Renewable Energy. doi:10.1155/2013/389514
Duffie JA, Beckman WA (1980) Solar engineering of thermal processes, 2nd edn. John Wiley & Sons, Inc, New YorkGoogle Scholar
Incropera F, Dewitt DP, Bergman TL, Lavine AS (2007) Fundamentals of heat and mass transfer, 6th edn. Wiley, New York. http://academic.aua.am/Sacozey/Public/Fundamentals%20of%20Heat%20and%20Mass%20Transfer%20-%206th%20Edition%20Incropera%20.pdf.
Padilla RV (2011) Simplified methodology for designing parabolic trough solar power plants. Ph.D. thesis, South Florida University. http://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=4585&context=etd.
Newton CC (2006) A concentrated solar thermal energy system. Master's thesis, Florida State University, Tallahassee, FL. http://diginole.lib.fsu.edu/islandora/object/fsu:180899/datastream/PDF/view.
Zeghib I (2005) Design and construction of a parabolic solar concentrator. Master's thesis. Mentouri – Constantine University, Algeria. http://archives.umc.edu.dz/bitstream/handle/123456789/10146/ZEG4339.pdf?sequence=1.
Luo N, Yu G, Hou HJ, Yang YP (2015) Dynamic modeling and simulation of parabolic trough solar system. Energy Procedia 69:1344–1348. doi:10.1016/j.egypro.2015.03.137
Fraser PR (2008) Stirling dish system performance prediction model. Master of Science, Wisconsin. http://sel.me.wisc.edu/publications/theses/fraser08.zip.
Beltran R, Velazquez N, Espericueta AC, Sauceda D, Perez G (2012) Mathematical model for the study and design of a solar dish collector with cavity receiver for its application in Stirling engines. J Mech Sci Technol 26(10):3311–3321. doi:10.1007/s12206-012-0801-0
Gil R, Monné C, Bernal N, Muñoz M, Moreno F (2015) Thermal model of a dish Stirling cavity-receiver. Energies 8:1042–1057. doi:10.3390/en8021042
Prakash M, Kedare SB, Nayak JK (2009) Investigations on heat losses from a solar cavity receiver. Sol Energy 83(2):157–170. doi:10.1016/j.solener.2008.07.011
Paitoonsurikarn S, Lovegrove K, Hughes G, Pye J (2011) Numerical investigation of natural convection loss from cavity receivers in solar dish applications. ASME J Sol Energy Eng 133(2):1–10. doi:10.1115/1.4003582
Madadi V, Tavakoli T, Rahimi A (2015) Estimation of heat loss from a cylindrical cavity receiver based on simultaneous energy and exergy analyses. J Non-Equilib Thermodyn 40(1):49–61
Abbasi-Shavazia E, Hughes GO, Pye JD (2015) Investigation of heat loss from a solar cavity receiver. Energy Procedia 69:269–278. doi:10.1016/j.egypro.2015.03.031
January 2012, 17(1): 101-126. doi: 10.3934/dcdsb.2012.17.101
Periodic solutions of a non-divergent diffusion equation with nonlinear sources
Chunhua Jin 1, and Jingxue Yin 2,
School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
School of Mathematical Sciences, South China Normal University, Guangzhou 510631, China
Received July 2010 Revised March 2011 Published October 2011
This paper is concerned with the existence of nontrivial and nonnegative periodic solutions of a doubly degenerate and singular parabolic equation in non-divergent form with nonlinear sources. We determine an exact classification of the exponent values of the source for the nonexistence of nontrivial periodic solutions, for the existence of such solutions with compact support, and for the existence of positive periodic solutions.
Keywords: Periodic solution, diffusion equation, non-divergence.
Mathematics Subject Classification: Primary: 35B10, 35K6.
Citation: Chunhua Jin, Jingxue Yin. Periodic solutions of a non-divergent diffusion equation with nonlinear sources. Discrete & Continuous Dynamical Systems - B, 2012, 17 (1) : 101-126. doi: 10.3934/dcdsb.2012.17.101
Deep contextualized word representations
Matthew E. Peters and Mark Neumann and Mohit Iyyer and Matt Gardner and Christopher Clark and Kenton Lee and Luke Zettlemoyer
Keywords: cs.CL
Abstract: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
Summary by mnoukhov
This paper introduces a deep universal word embedding based on a bidirectional LM (in this case, a biLSTM). First, words are embedded with a CNN-based, character-level, context-free token embedding into $x_k^{LM}$, and then each sentence is processed by a biLSTM, maximizing the log-likelihood of a word given its forward and backward context (much like a normal language model).
The innovation is in taking the output of each layer of the LSTM ($h_{k,j}^{LM}$ being the output at layer $j$)
$$\begin{aligned} R_k &= \{x_k^{LM}, \overrightarrow{h}_{k,j}^{LM}, \overleftarrow{h}_{k,j}^{LM} \mid j = 1 \ldots L \} \\ &= \{h_{k,j}^{LM} \mid j = 0 \ldots L \} \end{aligned}$$
and allowing the user to learn their own task-specific weighted sum of these hidden states as the embedding:
$$ \mathrm{ELMo}_k^{task} = \gamma^{task} \sum_{j=0}^L s_j^{task} h_{k,j}^{LM} $$
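A minimal sketch of this layer-mixing module in PyTorch, assuming the biLM activations are given as a list of tensors (AllenNLP ships a full version of this idea as its ScalarMix module):

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Task-specific weighted sum of biLM layer activations.

    s_j are softmax-normalized weights over the L+1 layers (including the
    token layer j = 0) and gamma is a scalar that rescales the mixed vector."""
    def __init__(self, num_layers: int):  # num_layers = L + 1
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))  # logits for s_j
        self.gamma = nn.Parameter(torch.ones(()))

    def forward(self, layers):
        # layers: list of (batch, seq_len, dim) tensors, one per biLM layer
        s = torch.softmax(self.weights, dim=0)
        return self.gamma * sum(s_j * h for s_j, h in zip(s, layers))
```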
The authors show that this weighted sum is better than taking only the top LSTM output (as in their previous work or in CoVe) because it allows capturing syntactic information in the lower layers of the LSTM and semantic information in the higher layers. The table below shows that the second layer is more useful for the semantic task of word sense disambiguation, and the first layer is more useful for the syntactic task of POS tagging.
https://i.imgur.com/dKnyvAa.png
On other benchmarks, they show it is also better than taking the average of the layers (which could be done by setting $\gamma = 1$)
https://i.imgur.com/f78gmKu.png
To add the embeddings to your supervised model, ELMo is concatenated with your context-free embeddings, $[x_k; \mathrm{ELMo}_k^{task}]$. It can also be concatenated with the output of your RNN model, $[h_k; \mathrm{ELMo}_k^{task}]$, which can show improvements on the same benchmarks
https://i.imgur.com/eBqLe8G.png
Finally, they show that adding ELMo to a competitive but simple baseline achieves SOTA (at the time) on many NLP benchmarks
https://i.imgur.com/PFUlgh3.png
It's all open-source and there's a tutorial [here](https://github.com/allenai/allennlp/blob/master/tutorials/how_to/elmo.md)
This paper builds on the paper "Learned in Translation: Contextualized Word Vectors", which learned contextualized word representations by using the sequence of encodings generated by a bidirectional LSTM as the representation of the sequence of input words. This paper says "if we're learning a deep LSTM, i.e., one with more than one layer, why should we use only the last layer that it produces as the representation of the word?". It instead suggests that it could be valuable for transfer learning if each task can learn a weighting of layer encodings that is most valuable for that task. In a prime example of "your model is a special case of my model," the authors note that this framework can easily recover the approach of only using the final encoding layer by giving that layer the only non-zero weight. As intuition for why this might be a valuable thing to do: different layers tend to capture different levels of meaning, with lower layers more likely to capture part-of-speech information, and higher layers more likely to capture richer semantic context.
https://i.imgur.com/s8Qn6YY.png
One difficulty in comparing this paper directly to the "take the top layer encoding from a LSTM" paper is that they were trained on different problems: the top-layer paper learned using a machine translation objective, whereas this one learns with a much simpler language model. Here, a simple language model means a RNN that is trained to predict the next word, given the hidden state built up over all prior words. Because we want to pull in word context from both directions, this isn't just a LSTM but a bidirectional LSTM, which - surprise surprise - also tries to predict the word *before* each position by using all of the words that come after it. This has the advantage of not requiring parallel data, the way machine translation does, but it also makes it difficult to make direct comparisons to prior work that isolate the effect of multi-layer combination, as separate from the switch between machine translation and direct language modeling.
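For reference, the biLM objective from the ELMo paper jointly maximizes the forward and backward log-likelihoods, sharing the token-embedding parameters $\Theta_x$ and softmax parameters $\Theta_s$ across the two directions:

$$\sum_{k=1}^{N} \Big( \log p(t_k \mid t_1, \ldots, t_{k-1}; \Theta_x, \overrightarrow{\Theta}_{LSTM}, \Theta_s) + \log p(t_k \mid t_{k+1}, \ldots, t_N; \Theta_x, \overleftarrow{\Theta}_{LSTM}, \Theta_s) \Big)$$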
Although this is likely also a benefit you see with just top-layer contextual vectors, it is interesting to examine the attached table and look at how effectively the model learns different representations of the word "play" depending on the context in which it appears; in each case, the nearest neighbor of a given usage of "play" is a sentence in which the word is used in the same sense.
An association of Orf virus infection among sheep and goats with herd health programme in Terengganu state, eastern region of the peninsular Malaysia
Jamilu Abubakar Bala1,2,
Krishnan Nair Balakrishnan1,
Ashwaq Ahmed Abdullah3,4,
Lawan Adamu5,
Muhammad Syaafii bin Noorzahari1,
Lau Kah May1,
Hassana Kyari Mangga1,6,
Mohd Termizi Ghazali7,
Ramlan Bin Mohamed8,
Abd Wahid Haron5,
Mustapha Mohamed Noordin1 &
Mohd Azmi Mohd Lila ORCID: orcid.org/0000-0001-7153-10261
Orf virus causes scabby skin lesions which decrease productivity in small ruminants. The unknown status of this disease in the eastern region of Peninsular Malaysia warrants a study to determine the sero-prevalence of orf with regard to farmers' compliance level towards the Herd Health Programme (HHP).
Out of 504 animals, 115 were positive for Orf virus antibodies. An overall prevalence rate of 22.8% indicated a high prevalence of orf disease in this region. It was observed that 25.1% (92/367) of goats were positive and 16.8% (23/137) of sheep sero-converted for Orf virus antibody. Several factors were assessed for their possible association with the prevalence of Orf virus infection. The prevalence was higher in farm LY, the JC breed, kids, female animals, and animals with disease lesions. Chi-square analysis showed a significant association for three risk factors, namely the species, age and sex of the animals (P < 0.05), whereas all other variables showed no significant association (P > 0.05). The farms surveyed usually practised an intensive management system, keeping animals in the shade at all times, owing to the limited availability of suitable land as a free-range grazing area. Interviews with smallholder farmers revealed a lack of awareness of the main goals of the herd health programme. An overall compliance level of 42.7% was observed across all HHP parameters. Among the 14 main components of the HHP modules, animal identification recorded the highest compliance level (84.62%) while milking management recorded the least (82.69% non-compliance). This may explain the high sporadic prevalence of Orf infection in this region.
Good herd health supervision is essential to prevent outbreaks and the spread of disease, thus reducing economic losses among farmers. Therefore, a good herd health programme should be in place in order to prevent and control disease transmission as well as to improve herd immunity.
Small ruminants make important contributions to humankind and to sustainability in terms of their meat, milk, and other products, including biological products for studies of diabetes and insulin production [1,2,3,4]. However, several diseases such as contagious ecthyma and pneumonic mannheimiasis present a serious challenge and affect the productive capacity and benefits of these animal species [5]. Contagious ecthyma, otherwise called sore mouth or infectious labial dermatitis, is a skin disease caused by Orf virus. The virus belongs to the family Poxviridae and genus Parapoxvirus, and its genome consists of linear double-stranded DNA [6]. Several animal species are susceptible to the infection, including sheep, goats, dogs, cattle, camels, some wild species and humans [7, 8]. Nonetheless, contagious ecthyma is considered primarily a disease affecting goat and sheep populations worldwide [9, 10]. The disease is also a very important zoonotic viral infection associated with painful, non-systemic proliferative lesions [11, 12]. Rarely, long-standing infections can become complicated by secondary bacterial infection that may extend to internal organs [13]. Transmission occurs by direct contact with an infected animal and/or with contaminated fomites that carry orf virus. Typically, the virus enters the skin through cuts or abrasions and establishes the infection. The skin lesions develop and progress in multiple stages, ranging from skin reddening (erythema) through macule, papule, vesicle, pustule and scab to scar [14, 15]. Contagious ecthyma usually resolves spontaneously; however, in severe cases due to secondary infections or delayed nursing intervention, the economic impact is significant owing to deaths and wasting. Similarly, the majority of human infections are localized and heal spontaneously without major complications; however, immunocompromised patients can develop large, poorly healing lesions.
The best method for diagnosis of Orf virus infection is culture of the virus on susceptible cell lines [16,17,18], but this is mostly characterized as laborious and time-consuming [17]. Moreover, molecular detection assays targeting specific Orf virus sequences have been developed, but such assays are unlikely to be useful for herd screening [19]. The Enzyme-Linked Immunosorbent Assay (ELISA) method can be sensitive and inexpensive for field application in the detection of Orf viruses in large animal populations.
At present some commercial vaccines can control this disease; however, anecdotal epizootics of a persistent generalized form of the infection have been reported among goats vaccinated with the first-generation orf vaccine prepared for sheep [20]. Similarly, outbreaks of more virulent contagious ecthyma have also been reported among goats vaccinated with the goat-derived contagious ecthyma vaccine [21]. Furthermore, these vaccines have been reported to be associated with drawbacks, including the inability to produce effective and desirable protection of the vaccinated animals, as well as persistent virus shedding into the environment, which poses an increased risk to other susceptible animals [22]. Therefore, vaccination against Orf virus is only recommended in endemic areas [15]. The urgent need to develop an effective vaccine against contagious ecthyma is borne out of its economic importance, especially to rural farmers, as well as its zoonotic potential, particularly among animal handlers.
Many outbreaks of contagious ecthyma have been reported in different parts of the world, including but not limited to Africa, the Middle East, Europe, North America and most of the ASEAN countries including Malaysia [7, 22, 23]; the disease has thus become a major concern due to the huge economic losses. According to Onyango et al. [24], the estimated national cost of orf disease to the British sheep industry could reach a staggering £10 million. Moreover, livestock rearing is ranked the highest practice among other agricultural sub-sectors worldwide, particularly in Asian regions including Malaysia. A report by the Malaysian Agricultural Research and Development Institute (MARDI) revealed that small ruminants are among the most popular livestock farming practices in Malaysia, estimated to be 280 times more than the poultry industry. However, this promising industry is being challenged by the menace of infectious diseases, including contagious ecthyma, which has resulted in huge financial losses in the country [25]. Recent works conducted by Sadiq et al. [9], Jesse et al. [26] and Bala et al. [27] have elucidated the prevalence of Orf in the state of Selangor, Malaysia, which revealed an alarmingly high prevalence. However, adequate information on Orf outbreaks is lacking for the eastern region of Peninsular Malaysia; thus, the associated risk factors, including the impact on sheep and goat husbandry practices, are still obscure. The present study was conducted to determine the seroprevalence of Orf virus in Terengganu state. To our knowledge, this serves as the first documented report on the prevalence of contagious ecthyma in this region. Moreover, this study has further elucidated the farmers' level of compliance with the established herd health programme. A thorough understanding of the current epidemiological situation of contagious ecthyma infection in the state of Terengganu and of herd health management would allow the establishment of an improved disease control program that would benefit smallholder farmers.
All 13 farms surveyed consented and responded to the questionnaire. A total of 504 blood samples were collected from sheep and goats.
Serological assay
Out of the 504 animals sampled, one hundred and fifteen (115) were positive for orf virus antibody based on the ELISA assay. This indicated an overall prevalence of 22.8% among the animals sampled across the 13 selected farms. The goat population had the highest percentage of sero-conversion (25.1%) compared to sheep with 16.8% positive results. The finding is statistically significant, with a P-value of less than 0.05 (Table 1).
Table 1 Overall result of ELISA according to animal species
Cross-reactivity, sensitivity and specificity of ELISA assay
No cross-reactivity was observed upon testing with other related viruses, using bluetongue virus (BTV) and Schmallenberg virus (SBV) positive sera samples obtained from sero-converted infected animals. Both BTV and SBV positive control sera produced an optical density (O.D.) value below the cut-off reading. Noteworthy, based on the sensitivity (Se) and specificity (Sp) obtained from ROC-curve analysis performed with MedCalc software, the ELISA presented an Se and Sp of approximately 95.2 and 97.8% respectively, as shown by the area under the curve (AUC).
Rate of sero-conversion according to sample farms
Table 2 depicts the level of antibody titre for orf virus in relation to sampling location. The highest sero-conversion rate was observed in farm LY (60%), followed by farm LZ (41.7%), and the lowest was evident in farm LM (12%). The association between farm and prevalence rate of orf virus infection is not significant (X2 = 17.889; P = 0.1191).
Table 2 Sero-converted animals according to sample farms
Rate of sero-conversion according to breeds of animals
Table 3 depicts the rate of sero-conversion for orf virus in relation to breed. The highest rate was recorded among the Jamnapari cross (JC) breed (50%), followed by the Saanen breed (27.6%), and the lowest was found among the Boer (BO) and Toggenburg (TO) breeds, both at 0.0%. The association between breed and prevalence rate of orf virus infection is not significant (X2 = 17.093; P = 0.0723).
Table 3 Sero-converted animals according to breeds
Rate of sero-conversion based on risk factors
The highest prevalence of orf disease was found among kids of less than 3 months old, all of which were sero-converted. This was followed by 29.7% in animals older than 4 years. Interestingly, animals aged 4–9 months had the lowest positive rate (20%). The association between the various age groups and the rate of sero-conversion for orf virus infection is significant (X2 = 8.163; P = 0.0428).
Of a total of 174 male sheep and goats examined, 31 (17.8%) were sero-converted for Orf virus, whereas out of 330 female sheep and goats examined, 84 (25.5%) were sero-converted for the virus (Table 4). The association between the sex of the animals and the rate of sero-conversion for orf virus is significant (X2 = 3.886; P = 0.0487).
Table 4 Sero-converted animals according to other risk factors
The rate of sero-conversion among animals with clinical orf disease (66.7%) was higher than among animals devoid of obvious clinical manifestations of orf (22.6%). The association between the occurrence of clinical orf disease and sero-conversion is not significant (X2 = 2.629; P = 0.1049).
Among the 504 sheep and goats sampled, none of the farms practised Orf vaccination of their animals. Therefore, vaccination history could not be evaluated as a potential risk factor for the sero-conversion rate of Orf disease across the farms.
Results for farmers' HHP compliance level
All farmers raised either purebred or crossbred Boer, Boer cross, Katjang, Katjang cross, Jamnapari, Jamnapari cross, Saanen, Saanen cross, Barbados Blackbelly, Dorper, and Toggenburg breeds. The majority of the farms raise their sheep and goats primarily for meat; however, rearing for dairy purposes has been considered a convenient alternative. Animals were reared in simple sheds built with low-cost housing materials. Animals were fed cut grasses and feed pellets under a standard ration, using strong rubber feeding utensils. Drinking water was provided ad libitum. Trees for shade and grasses for animal feed were grown in the nearby pasture areas. Figure 1 shows the type of housing, the type of feed and the feeding system generally observed in most farms.
Housing condition, roofing, flooring, ventilation, sanitation, feeds and feeding management. The flooring type is wooden and elevated (a, b, c); some farms use a metal fence with a wooden frame and wooden flooring (d). Feeding utensils (e); and (f) pellets and grasses fed to the sheep and goats
Figure 2 depicts the general body condition of some of the animals surveyed. Both sheep and goats were continuously kept under this housing confinement with limited access to grazing and pasture areas. The overall compliance level, based on the 93 questions answered by the farmers, is 42.7%, while the overall non-compliance level observed was 57.3%. Table 5 shows the farms' levels of compliance and non-compliance with the HHP modules: farms AB, PT and LM showed the highest compliance levels with 15, 14 and 14% respectively, while farms LY, LS, LZ, and LN recorded the highest non-compliance levels of 12, 11, 11, and 11% respectively.
Animal condition inside the pens of the intensively managed farms: healthy animals (a, b); and sick animals in isolation (c, d) with clinical disease and orf skin lesions (as indicated in the photographs)
Table 5 Respective compliance level of farms' herd health program
Table 6 presents the summary of the overall farm compliance level with the HHP modules as analyzed by nominal regression analysis. Farms LS, PT, AB, LM, LZ, LY and LN showed strong compliance (p < 0.0001) with HHP. However, the effect of the HHP modules on farms BF, MF, LH, LB and LW did not reach a significant compliance level (p > 0.05), indicating a decreased compliance level by these farms with the parameters of the HHP modules. The overall effect likelihood ratio of the HHP program on the various farms' compliance levels is strong and significant (X2 = 302.61; P < 0.0001).
Table 6 Compliance level to herd health program
Table 7 presents the main components of the HHP modules and the levels of compliance and non-compliance. Animal identification (T) recorded the highest compliance level of 84.62%, followed by housing condition (e.g. roof, flooring, ventilation, sanitation) (H) with a compliance level of 55.77%; milking management (M) recorded the highest level of non-compliance at 82.69%. The association between the main components of the HHP modules and the levels of compliance and non-compliance is strong and significant (X2 = 114.77; P < 0.0001). A mosaic plot of the main components of the HHP modules and compliance levels is shown in Fig. 3.
Table 7 Main components of HHP modules and the level of farmer's compliance and non- compliance
Mosaic plot presentation of the main components of HHP modules and compliance level
Figure 4 depicts the distribution of mean antibody titres among all the farms sampled. Farms LI and PT have the highest antibody titres, whereas farm LW has the lowest. The majority of the antibody titres were below an OD reading of 0.5.
Unequal distribution of antibody titres (based on OD readings) against orf virus among animals in the respective farms. Each data point in the graph represents the antibody titre of an individual animal. The horizontal line (---) represents the cut-off OD for negative sera (negative results). HHP compliance level (in %) is shown for the respective farms
This investigation reports the current status of orf virus disease in Terengganu state, in the east coast region of Peninsular Malaysia. The study revealed evidence of the presence of orf virus infection among the sheep and goat population, as confirmed by ELISA, amounting to a prevalence of 16.8% in sheep and 25.1% in goats. Replication of Orf virus occurs in the epithelial cell surfaces of the skin layers and in the mucosa of the mouth and oesophagus, as well as hairless parts of the body, which serve as primary sites of the lesions. Accidental abrasions of the skin due to hard stubble, thistles or any analogous plant promote access of the virus and initiate the replication cycle [28,29,30,31]. Upon successful entry, and after an incubation period of approximately 10 days, the virus disseminates to other tissues and a host response is mounted, leading to the production of antibodies that can be detected in body fluids such as blood, saliva and mucosal secretions [22, 32]. Several studies have stated that clinical orf virus symptoms are first seen on the fourth and fifth days after exposure, whereas levels of anti-orf virus specific antibodies are normally detectable between the eighth and tenth days after exposure to the virus.
The ELISA test employed in this study revealed an overall seroprevalence rate of 22.8% based on antibody titre among the sampled population. This indicates a considerably high sero-conversion rate compared with a similar study in Selangor, Malaysia, which found a prevalence rate of 14.4% in goats and 12.2% in sheep [27]. However, another report by Jesse et al. [26] observed a high prevalence rate (36.7%) among the goat population based on IgM detection, indicating active infection in that state. Generally, it is suggested that orf disease is a serious issue in Malaysia, recurring frequently and at an alarming rate in different parts of the country [26]. Similarly, consistent with our findings, the prevalence of orf in other parts of the world is high [22]: reported orf infection was 19.51% among lambs in England [24], 34.89% in China [11], a staggering 98% in the Nilgiri Hills of Tamil Nadu, India [33], and 54% in Saudi Arabia [34]. Gökce et al. [28] also reported a high sero-conversion rate of 52.8% among lambs within some selected districts in Turkey. The high morbidity of this disease underscores the infectious nature of this virus and its economic impact on the goat industry [9, 22, 35]. It has been speculated that contagious ecthyma of sheep does not confer long-term protection; thus, seasonal outbreaks among herds are common [36, 37]. Virus shed from animals remains viable for decades, serving as a source for the sporadic spread of the virus within the same herd and ultimately resulting in transmission to neighbouring herds via the transport of infected animals.
Various putative risk factors for the prevalence rate were examined. Identification of relevant risk factors is crucial for proper disease management and outbreak containment. We identified species, age and sex of the animal as the most significant risk factors. Goats recorded the highest prevalence, 25.1%, compared to 16.8% in sheep, in line with the report by Jesse et al. [26]. Goats are naturally more aggressive than sheep; hence they tend to injure one another, leading to higher susceptibility to orf virus transmitted via direct contact. Additionally, most smallholders do not practice dehorning of their animals, which may subject the animals to injuries due to fighting. Animals can easily acquire wounds, cuts and abrasions, a predisposing factor for virus penetration through broken skin [38,39,40,41].
Similarly, females had the highest prevalence, and sex was recorded as a significant risk factor in this study. A similar observation was reported by Orgeur et al. [38]. The more aggressive behaviour of males would be expected to contribute a higher number of cases; however, the unequal sample size, in which the majority of the subjects studied were female, may contribute to our unexpected observation. Meanwhile, previous observations showed that orf infection has no discriminatory tendency between sexes [22, 42].
Among the 13 farms, farm LY showed the highest historical orf virus infection. This is strongly associated with the fact that farm LY recorded the highest non-compliance level with HHP. Strict adherence to HHP modules should reduce the general risk of exposure to contagious ecthyma in the animal population [43, 44]. Animals are susceptible to orf virus infection regardless of their age. A moderate morbidity rate of approximately 50% was observed in the affected farms in this study; however, the mortality rate identified in the affected farms was approximately 1%. Interestingly, a higher seroprevalence (100%) was observed among kids younger than 3 months of age in comparison to animals older than 4 months; however, this high seroprevalence among the kids has not been associated with high mortality, and therefore a more detailed confirmatory test, such as virus culture and isolation, would be needed to distinguish active infection that necessitates vaccination. Furthermore, we did not observe a significant difference between flocks showing clinical disease and those with high seroconversion. A similar phenomenon was observed by Bora et al. [36], who studied the prevalence of contagious ecthyma among goats in Assam, India. This observation is attributed to the fact that older animals develop better protective immunity against recurrent orf infection.
Other risk factors examined, such as the presence of lesions and abrasions, history of clinical orf infection in the farms, and vaccination practice, did not appear to be significant determinants of orf disease prevalence. The presence of skin lesions usually indicates a current infection, which is best diagnosed by standard virus isolation and identification [7, 45]. Antibodies are produced by animal hosts as a part of either the primary or secondary immune response and are detected during the latter part of an infection; it may take 1 to 4 weeks following an infection before antibodies can be detected in an ELISA test. Animals with previous exposure to orf virus may carry the virus in their hide or dried scabs and shed the virus into the surrounding environment. Orf virus has a high survivability in the environment, especially in a tropical climate [46]. In an endemic environment, as with other viral diseases, animals are often re-infected with orf, especially when they become immunosuppressed [22, 47]. Additionally, previous orf infection does not provide long-lasting immunity against orf, but it does provide farmers with experience in dealing with subsequent infections. Animals that are reinfected often recover faster than on first exposure and show less severe lesions [46]. Vaccination has been a main prophylactic measure against orf; another issue, however, is that vaccine-induced immunity can last only about 6 months. Booster doses of vaccine are required, especially for farms located in areas where orf is endemic [48,49,50,51].
The HHP module analysed in this study contains variables that are pertinent to the epidemiology of orf disease, as previously described elsewhere [24, 52, 53]. Unfortunately, none of the farms surveyed incorporated orf immunization as a part of their HHP. Vaccination of already infected animals has been found to reduce the course and severity of the disease [54]. Our results also indicate that orf virus infection is widespread in the areas of Terengganu state, and as such vaccination should be advocated on a regular basis, even though some authors are of the opinion that vaccination against orf disease should not be attempted in herds that do not have a previous history of the disease, since only live vaccines are available [55]. However, newer potential vaccines tried in experimental animals have revealed promising results; it is therefore advisable to vaccinate animals against orf regardless of previous outbreaks in the farms [27, 50].
The general objective of HHP is to enhance herd efficiency through general farming, nutrition management, vaccination, environmental management and parasite control. It endeavours to organize all data relevant to goat herd health into straightforward, usable, and easily recalled lists. Therefore, proper record-keeping must be in place to enable and ensure the success of HHP [43, 44]. Vaccination is important to shorten the duration of disease transmission and to prevent infections from spreading from animal to animal. As indicated by Steven and Jeremy [56], vaccination programs are intended to contain future infections in the flock and ought to be implemented together with neighboring farms. In Malaysia, present vaccination practice is targeted against FMD and pneumonia, and was not followed by all farmers in the survey. Our study thoroughly surveyed the existing HHP at the farms. The overall compliance level of 42.7% observed is lower than that previously reported by Abdullah et al. [43] on several farms, where the compliance level was 56%. Good HHP supervision is essential to prevent the expansion and spread of diseases, thus reducing economic losses among farmers [57].
Disease monitoring is concerned with understanding changes in endemicity and distribution [58]. Proper biosecurity measures, such as foot dips, vehicle sprays, and the use of proper boots and attire on the farms, were not implemented. Interviews with smallholder farmers also revealed a lack of awareness of the main goals of the herd health programme. The farms surveyed usually practised an intensive management system due to the limited availability of land on which to rear their goats. This increases the likelihood of animals contracting orf virus infection owing to increased crowding among animals [27, 32].
Among the 14 main components of the HHP modules, milking management was observed to have the highest level of non-compliance (82.69%), whereas animal identification recorded the highest compliance level, 84.62%. The high compliance level associated with animal identification indicates that an appreciable share of livestock production is in accordance with the standards recommended by the OIE. Herd-health information is pertinent to livestock producers, the public and animal welfare; fully comprehending the types and sources of animal health information that farmers will utilize is vital [59].
Many studies carried out on the relationship between antibody responses and virus replication have generated controversies [60,61,62,63]. The level of the humoral immune response against a virus is directly related to the level of virus replication achieved [64, 65], and antibody titers are normally positively correlated with total virus-binding antibody titers [66,67,68]. Therefore, a significant association would be expected between total virus production in the host and the humoral antibody response, as reported for important viral diseases such as influenza [63]. However, in many cases a high antibody titre does not necessarily translate into protection against re-infection, and animals previously exposed to orf viruses do not necessarily enjoy protection against re-infection [69]. We noted that, despite the high compliance level with HHP, many animals in farms LI and PT had high antibody titres against orf virus. Given that no vaccination against orf virus was given and no current clinical infection was observed, it is suggested that this cohort had recovered from a recent infection, reflected in increased antibody titres against the virus. Vaccination would contribute the most significant weight towards the overall success of HHP. In addition, a careful understanding of the different types of protective immunity against orf virus is important for the development of safer vaccines and the containment of virus spread.
Control and prevention of orf disease are important to ensure that it does not spread widely through the entire animal population [70,71,72,73]. A viable animal health program is a fundamental piece of effective animal production, and a herd health program (HHP) is an essential tool for monitoring disease in prevention and control programs [43, 56]. Proper feeding and rearing will not bring about maximum production if goats are not healthy, and there is a scarcity of knowledge concerning farmers' compliance with correct herd health programs among livestock farmers in Malaysia. This information is very important for increasing the productivity of the farms and for the future development of herd health programs for small ruminant farms.
The interaction between a virus and the body's immune system is a battle between the two parties (virus and host). In primary orf virus disease, the virus replicates in the epithelial cells for a period before the host can mount an effective immune response. This leads to the appearance of IgG or IgM, which serve as a signal of orf infection. The overall prevalence was found to be 22.8% (25.1% in goats and 16.8% in sheep). The significant risk factors identified were species, age, and sex. A higher sero-conversion rate was seen among kids younger than 3 months old, and females showed higher antibody titres. Poor implementation of HHP may also be associated with the higher sero-conversion rate of orf virus infection observed. Therefore, it is important to carry out epidemiological surveys [74, 75] in circumstances where there is a risk of introducing disease into a new herd through replacement sheep from unknown premises. Based on our findings, it is recommended that lambs in the region be regularly vaccinated to reduce the severity of Orf and its consequent financial implications; alongside routine vaccination, periodic surveillance could be enacted to determine both the temporal and the spatial distribution of Orf viruses.
Informed consent and ethical consideration
All procedures involving animal subjects were conducted in compliance with the recommendations of the Institutional Animal Care and Use Committee (IACUC) – UPM/IACUC/AUP-U013/2018. Goats and sheep from thirteen (13) farms were selected among the private and government-owned farms in 4 districts of Terengganu state. The sampled farms were selected on the basis of the availability of adequate study animals and the diversity of the agroecology of the areas. Terengganu state is divided into eight (8) administrative districts, called Daerah in the Malay language. The sampling was strategized to capture 4 out of the 8 administrative districts as representative of the state, and a total of 13 farms were selected using a simple random sampling technique. Consent from all participating farms was obtained through written permission of the owners, witnessed by the Terengganu state division of the Department of Veterinary Services (DVS).
Questionnaire and data collection
A well-structured questionnaire containing information on farm management practices, possible risk factors and the herd health programme implemented by farm owners was completed via an interview session. The questionnaire was designed with three (3) sections, namely Section A (farm management practice), Section B (farm's HHP compliance level) and Section C (demography and risk factors for exposure of individual animals). The questionnaire template is provided separately in Additional file 1.
Farm data collection
Section A of the questionnaire, which relates to information on the sampled farms, was administered first. The relevant data sought included details of the operator, category of farmer, manpower, annual production, type of housing and management system, as well as population. Section B (farmer's compliance level with HHP) contains questions relating to the farmer's awareness, compliance level, and knowledge of each of the 14 modules of the herd health program of the Department of Veterinary Services, Malaysia (Table 8). Lastly, Section C of the questionnaire contains information on demography, namely age, sex, and breed, together with information on putative risk factors such as cuts and abrasions on the animal, presence of orf lesions and history of vaccination against orf or any related viral disease.
Table 8 Herd health program modules
Sampling of farms
This investigation involved thirteen (13) sheep and goat farms located in the four major districts of Kuala Terengganu, Kuala Nerus, Marang and Setiu in Terengganu state, in eastern Peninsular Malaysia. The respondents were given the questionnaire; the response to each question is a dichotomous outcome, either "YES" or "NO", where "YES" denotes the farmer's compliance with that segment of the HHP module and "NO" otherwise.
Individual animal data collection
A thorough physical examination was conducted to identify infected animals based on the clinical signs of erythema, papules, vesicles, or pustules around the lips, gums, mouth and tongue and on the general body. Relevant demographic data for each animal were also recorded in the data sheet prior to sampling. A total of 504 sheep and goat samples were collected using a simple random sampling method after calculating the sample size according to the standard formula [76, 77]. The formula and the sample size calculation are shown below. After sample collection, all the animals involved were closely monitored regularly to avoid any spread of the disease.
$$ \mathrm{n}=\frac{Z^2 pq}{L^2} $$
where, n = sample size
Z = Standard normal distribution at 95% confidence interval = 1.96
p = Prevalence in similar work
q = 1 – p
L = Allowable error, taken as 5% = 0.05
In this study, P = 36.7% (Jesse et al., 2018a)
$$ \mathrm{n}=\frac{Z^2 pq}{L^2}=\frac{(1.96)^2\times 0.367\times \left(1-0.0732\right)}{(0.05)^2}\approx 500\ \mathrm{samples} $$
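A quick cross-check of this calculation in Python, using only the values defined above (with q = 1 − p as stated); the study ultimately sampled 504 animals:

```python
# Sample size for a prevalence study: n = Z^2 * p * q / L^2
Z = 1.96   # standard normal value at the 95% confidence interval
p = 0.367  # prevalence from a similar work (Jesse et al., 2018a)
q = 1 - p
L = 0.05   # allowable error (5%)

n = (Z ** 2) * p * q / (L ** 2)
print(f"minimum sample size: {n:.0f}")
```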
Preparation of serum sample
Whole blood samples were collected from all 504 animals in the sampled farms. Blood was collected via jugular venepuncture using a 21-gauge vacutainer needle into plain serum collection tubes. The tubes containing the whole blood samples were stored in a cooler box and transported to the laboratory for analysis. The blood samples were left to clot and centrifuged at 3,000 revolutions per minute (rpm) for 5 min to separate the serum from the blood. The serum was then pipetted into 1.5 ml microcentrifuge tubes and stored at −20 °C until required for assay.
Preparation of hyperimmune positive and negative sera against the pure Orf virus
Hyperimmune serum (HIS) against the UPM1/14 Orf virus isolate was prepared according to previously described methods [34, 36, 78,79,80]. For this purpose, specific anti-Orf virus antibodies were raised in two healthy goats of about 1 year of age. Blood samples were collected as the negative control prior to the start of the procedure. The purified virus suspension in DMEM medium was first heat-inactivated by incubation at 56 °C for 30 min. One mL of the pure Orf virus antigen was mixed with Complete Freund's Adjuvant (CFA) (GIBCO BRL, USA); an emulsion containing equal volumes of pure virus and CFA was formed by homogenization until a good mixture was obtained. The emulsified suspension was allowed to settle at 4 °C. One goat was then injected subcutaneously with 0.5 mL of this emulsion while the other goat was kept as a control. Two weeks later, the goat was re-injected with the same dose of antigen emulsified in Incomplete Freund's Adjuvant (IFA) (GIBCO BRL, USA). The injections were then repeated weekly for 4 weeks. One week after the last of these injections, the goat was finally injected with live virus (without adjuvant). Two weeks after this last injection, the goat was bled. The blood was allowed to clot at room temperature and spun at 1000 rpm for 5 min; the hyperimmune serum was harvested and the antibody titer determined by ELISA. The HIS showing the highest antibody titer was aliquoted into 1 mL quantities and stored at −20 °C until further required. This HIS was employed in the ELISA test as the positive reference serum, while pre-immune serum collected from the experimental animals was used as the negative control.
Serological screening and assay procedures
Orf virus antigen and determination of Orf virus total protein concentration
The serological screening was done using an in-house developed antigen-coated ELISA. The virus antigen used for coating the ELISA plate was a local orf virus isolate (UPM1/14) obtained from the Virology Laboratory, Faculty of Veterinary Medicine of the Universiti Putra Malaysia, from previous outbreak cases of contagious ecthyma [10]. The virus was propagated in lamb testicle (LT) cell monolayers as described by Bala et al. [27] and Abdullah et al. [10]. Upon propagation, the virus was concentrated with polyethylene glycol (PEG) and purified on a cushion of a 36% sucrose gradient and a 10–50% sodium diatrizoate gradient [81]. The virus pellet obtained was reconstituted in sterile phosphate-buffered saline, pH 7.4, and kept at −70 °C until required. The total protein concentration of the purified Orf virus was determined using the Bradford assay; the total Orf virus concentration in the virus solutions was determined with the aid of Prism 5 software by plotting the protein concentration against the corresponding absorbance to obtain a standard curve.
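A minimal sketch of such a standard-curve readout (the standard concentrations and absorbances below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical Bradford standards: known protein concentrations (ug/mL)
# and their measured absorbances
std_conc = np.array([0.0, 125.0, 250.0, 500.0, 1000.0])
std_abs = np.array([0.00, 0.12, 0.24, 0.46, 0.90])

# Fit a linear standard curve: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, deg=1)

# Read an unknown sample's concentration off the curve
sample_abs = 0.37
sample_conc = (sample_abs - intercept) / slope
print(f"estimated protein concentration: {sample_conc:.0f} ug/mL")
```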
Optimization of ELISA reagents
The working concentrations of the Orf antigen, conjugate and antibodies were optimized using the standard chequerboard titration protocol adopted from Babiuk et al. [82], Bhanuprakash et al. [83], Niang [84] and Azmi and Field [85], with some minor changes. The positive and negative control sera obtained from post-vaccinated and pre-vaccinated animals were tested using two-fold dilutions. The optimal dilutions of the Orf virus antigen and the reference HIS positive sera were those antigen and serum dilutions that gave the maximal difference in absorbance readings between positive and negative sera.
ELISA procedure
An enzyme-linked immunosorbent assay (ELISA) was developed in-house as described by Azmi and Field [85] with minor modifications. All reagents, including conjugate, substrate and buffers, and the plate-washing procedures were prepared according to standard procedures. The ELISA test was conducted by initial coating of a 96-well plate specially designed for use in ELISA assays (Dynatech Immunolon, USA) [86]. The working volume of each of the reagents was 50 μl. Purified orf virus antigen was diluted in sodium hydrogen carbonate (NaHCO3) buffer (pH 9.6) to give a final concentration of 10 μg/ml antigen protein, and 50 μl of this was used for coating the plate, which was then incubated at 4 °C overnight. Following overnight incubation, the plate was washed three times with Phosphate Buffered Saline Tween-20 (PBST); the washing was carried out manually by filling each well with 200 μl of the washing buffer with the aid of a multi-channel micropipette, allowing it to stand for a few seconds before discarding, and repeating 3 times. After washing, 50 μl of 2% Fraction V Bovine Serum Albumin (BSA) (Sigma, UK) was added and the plate was incubated at 45 °C for 2 h in order to block non-specific binding sites. Upon completion of the incubation, the plate was again washed three times with PBST. Two-fold dilutions of the test sera were added to the plate and incubated at 37 °C for 1 h before the plate was washed 3 times using the same wash buffer. A pre-diluted rabbit anti-goat (for goat serum) or anti-sheep (for sheep serum) peroxidase-conjugated immunoglobulin G (KPL, USA) was added and allowed to react with the antigen-bound goat/sheep antibodies by incubation at 37 °C for 1 h. The plate was then washed 3 times, the substrate 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) diluted in citrate phosphate buffer (containing 0.01 30% hydrogen peroxide (H2O2)) was added, and the plate was incubated at room temperature for 30 min. At the end of the incubation period, the optical density was read at 450 nm in an ELISA reader (TECAN Infinite M200).
Determination of cut off value
The ELISA cut-off value for tested samples at the optimized dilution of all the reagents was determined by taking the mean absorbance (O.D.) reading of the pre-immune negative sera plus three standard deviations [36, 82] (mean + 3 S.D.; mean = 0.17; S.D. = 0.097; three times S.D. = 0.291; therefore the cut-off value (CV) equals 0.461). Any sample with an O.D. reading above this CV was considered positive for anti-Orf virus antibodies. Thus, all O.D. readings obtained from animal sera were interpreted as positive when the value was greater than the cut-off value, and as negative otherwise.
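A minimal sketch of this decision rule in Python (the test OD readings below are hypothetical; only the mean, S.D. and resulting cut-off of 0.461 come from the text):

```python
import numpy as np

def elisa_cutoff(negative_od: np.ndarray) -> float:
    """Cut-off = mean OD of pre-immune negative sera + 3 standard deviations."""
    return negative_od.mean() + 3 * negative_od.std(ddof=1)

# Using the summary statistics reported in the text instead of raw readings:
mean_neg, sd_neg = 0.17, 0.097
cutoff = mean_neg + 3 * sd_neg          # 0.17 + 0.291 = 0.461

# Classify test sera: positive if OD exceeds the cut-off
test_od = np.array([0.21, 0.55, 0.43, 0.98])   # hypothetical OD readings
is_positive = test_od > cutoff
print(cutoff, is_positive)
```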
Testing for cross-reactivity of ELISA assay
As a means of quality control for this in-house ELISA, and in order to rule out any cross-reactivity of the assay with similar viruses of small ruminants, a panel of positive sera for bluetongue virus (BTV) and Schmallenberg virus (SBV) was tested using the developed ELISA. These positive BTV and SBV sera samples were available in the virology laboratory of the Universiti Putra Malaysia, having been initially collected from confirmed disease cases occurring on some farms.
Sensitivity and specificity of the ELISA assay
The sensitivity and specificity of the ELISA employed in this study were determined based on the true positive and false negative values, subjected to analysis and interpreted from the optimal cut-off point described elsewhere [36, 82, 83]. Both sensitivity and specificity values were calculated using Receiver Operating Characteristic (ROC) curves with the aid of MedCalc software (MedCalc Statistical Software version 18.11, MedCalc Software bvba, Ostend, Belgium; http://www.medcalc.org; 2018).
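The same Se/Sp computation can be sketched with scikit-learn in place of MedCalc (the labels and OD scores below are hypothetical, for illustration only):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = truly infected, 0 = truly uninfected, with ELISA OD scores
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
od_scores = np.array([0.91, 0.72, 0.55, 0.30, 0.41, 0.22, 0.80, 0.48])

fpr, tpr, thresholds = roc_curve(y_true, od_scores)
auc = roc_auc_score(y_true, od_scores)

# Optimal cut-off by Youden's J = sensitivity + specificity - 1
j = tpr - fpr
best = np.argmax(j)
print(f"AUC={auc:.3f}, cut-off={thresholds[best]:.3f}, "
      f"Se={tpr[best]:.3f}, Sp={1 - fpr[best]:.3f}")
```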
Information procured for Sections A and B of the questionnaire was incorporated into Microsoft Office Excel and analysed using JMP software. Likelihood ratio and regression models were used to evaluate the farmers' compliance level with HHP. Responses to each sub-unit question of the 14 major component modules of HHP (Table 8) were expressed as percentages to indicate the overall farmers' compliance level, and their association was determined by subjecting the results to chi-square analysis with the aid of JMP statistical software (SAS Campus Drive, USA). Similarly, all the data obtained from Section C (demography and risk factors for exposure of individual animals), as well as the ELISA results, were incorporated into JMP software version 14 and analysed for prevalence rate and the association of each risk factor using the chi-square test.
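As an illustration, the species association can be re-checked with SciPy in place of JMP, using the counts reported in the Results (92/367 positive goats, 23/137 positive sheep); correction=False requests the plain Pearson chi-square:

```python
from scipy.stats import chi2_contingency

# Rows: goats, sheep; columns: seropositive, seronegative
table = [[92, 367 - 92],
         [23, 137 - 23]]

chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(f"X2={chi2:.3f}, P={p_value:.4f}, dof={dof}")
# P < 0.05 indicates a significant association between species and sero-status
```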
The datasets generated and/or analysed in the present study are not available to the public since they belong to the Universiti Putra Malaysia; however, data can be made available upon request by contacting Prof. Dr. Mohd Azmi Mohd Lila via email [email protected].
ABTS:
2,2′-Azino-bis(3-ethylbenzothiazoline-6-sulfonic acid)
BSA:
Bovine Serum Albumin
DVS:
Department of Veterinary Services
ELISA:
Enzyme Linked Immunosorbent Assay
H2O2 :
Hydrogen Peroxide
HHP:
Herd Health Program
IACUC:
Institutional Animal Care and Use Committee
MARDI:
Malaysian Agricultural Research and Development Institute
MESTECC:
Ministry of Energy, Science, Technology, Environment & Climate Change
MOSTI:
Ministry of Science, Technology and Innovation Sciencefund
NaHCO3 :
Sodium Hydrogen Carbonate
O.D:
Optical Density
PBST:
Phosphate Buffered Saline Tween 20
PEG:
Polyethylene Glycol
rpm:
Revolution Per Minute
UVH:
University Veterinary Hospital
X2 :
Chi Square
Hani H, Ibrahim TAT, Othman AM, Lila M-AM, BtAllaudin ZN. Isolation, density purification, and in vitro culture maintenance of functional caprine islets of Langerhans as an alternative islet source for diabetes study. Xenotransplantation. 2010;17(6):469–80. https://doi.org/10.1111/j.1399-3089.2010.00616.
Hani H, Allaudin ZN, Mohd-Lila M-A, Ibrahim TAT, Othman AM. Caprine pancreatic islet xenotransplantation into diabetic immunosuppressed BALB/c mice. Xenotransplantation. 2014;21(2):174–82. https://doi.org/10.1111/xen.12087.
Razis AFA, Ismail EN, Hambali Z, Abdullah MNH, Ali AM, Lila MAM. The periplasmic expression of recombinant human epidermal growth factor (hEGF) in Escherichia coli. Asia Pac J Mol Biol Biotechnol. 2006;14(2):41–5.
Ismail R, Allaudin ZN, Lila MAM. Scaling-up recombinant plasmid DNA for clinical trial: current concern, solution and status. Vaccine. 2012;30(41):5914–20. https://doi.org/10.1016/j.vaccine.2012.02.061.
Zamri-Saad M, Effendy AWM, Israf DA, Azmi ML. Cellular and humoral responses in the respiratory tract of goats following intranasal stimulation using formalin-killed Pasteurella haemolytica A2. Vet Microbiol. 1999;65(3):233–40. https://doi.org/10.1016/S0378-1135(98)00298-3.
Venkatesan G, Balamurugan V, Bora DP, Yogisharadhya R, Prabhu M, Bhanuprakash V. Sequence and phylogenetic analyses of an Indian isolate of orf virus from sheep. Vet Ital. 2011;47(3):323–32 Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/21947970.
Bala JA, Balakrishnan KN, Abdullah AA, Mohamed RB, Haron AW, Jesse FFA, Mohd-Azmi ML. The re-emerging of orf virus infection: a call for surveillance, vaccination and effective control measures. Microb Pathog. 2018a;120:55–63. https://doi.org/10.1016/j.micpath.2018.04.057.
Tedla M, Berhan N, Molla W, Temesgen W, Alemu S. Molecular identification and investigations of contagious ecthyma (Orf virus) in small ruminants, north West Ethiopia. BMC Vet Res. 2018;14(1):13. https://doi.org/10.1186/s12917-018-1339-x.
Sadiq MA, Abba Y, Jesse FFA, Chung ELT, Bitrus AA, Abdullah AA, Balakrishnan KN, Bala JA, Mohd Lila MA. Severe persistent case of contagious Ecthyma (Orf) in goats. J Anim Hlth Product. 2017;5(1):24–8. https://doi.org/10.14737/journal.jahp/2017/5.1.24.28.
Abdullah AA, Ismail MFB, Balakrishnan KN, Bala JA, Hani H, Abba Y, Mohd-Lila MA. Isolation and phylogenetic analysis of caprine Orf virus in Malaysia. VirusDisease. 2015a;26(4):255–9. https://doi.org/10.1007/s13337-015-0278-4.
Gao Y, Zhao Y, Liu J, Zhou M, Liu H, Liu F, Yang W, Chen D. Orf in goats in China: prevalence and risk factors. J Agric Sci Technol A. 2016;6:116–23.
Haig DM, Mercer AA. Orf. Vet Res. 1998;29(3–4):311–26.
CFSPH (2015): http://www.cfsph.iastate.edu/Factsheets/pdfs/contagious_ecthyma.pdf.
Fleming SB, Wise LM, Mercer AA. Molecular genetic analysis of orf virus: a poxvirus that has adapted to skin. Viruses. 2015;7(3):1505–39. https://doi.org/10.3390/v7031505.
Kinley GE, Schmitt CW, Stephens-DeValle J. A case of contagious ecthyma (Orf virus) in a nonmanipulated laboratory Dorset sheep (Ovis aries). 2013; Article ID 210854, 5 pages. https://doi.org/10.1155/2013/210854.
Li W, Ning Z, Hao W, Song D, Gao F, Zhao K, Liao X, Li M, Rock DL, Luo S. Isolation and phylogenetic analysis of orf virus from the sheep herd outbreak in Northeast China. BMC Vet Res. 2012;8(1):229.
Chan KW, Lin JW, Lee SH, Liao CJ, Tsai MC, Hsu WL, Wong ML, Shih HC. Identification and phylogenetic analysis of orf virus from goats in Taiwan. Virus Genes. 2007;35(3):705–12. https://doi.org/10.1007/s11262-007-0144-6.
Amann R, Rohde J, Wulle U, Conlee D, Raue R, Martinon O, Rziha HJ. A new rabies vaccine based on a recombinant ORF virus (parapoxvirus) expressing the rabies virus glycoprotein. J Virol. 2013;87(3):1618–30.
Chan KW, Yang CH, Lin JW, Wang HC, Lin FY, Kuo ST, Wong ML, Hsu WL. Phylogenetic analysis of parapoxviruses and the C-terminal heterogeneity of viral ATPase proteins. Gene. 2009;432(1–2):44–53. https://doi.org/10.1016/j.gene.2008.10.029.
Jeffery et al., (2010): https://agrilifecdn.tamu.edu/sanangelo/files/2011/11/Evaluation-of-homologous-and-heterologous-protection-induced-by-a-virulent-field-strain-of-orf-virus-and-an-orf-vaccine-in-goats.pdf.
De La Concha-Bermejillo A, Guo J, Zhang Z, Waldron D. Severe persistent orf in young goats. J Vet Diagn Investig. 2003;15(5):423–31.
Kumar R, Trivedi RN, Bhatt P, Khan SUH, Khurana KS, Tiwari R, Karthik K, Malik YS, Dhama K, Chandra R. Contagious pustular dermatitis (Orf disease) – epidemiology, diagnosis, control and public health concerns. Adv Anim Vet Sci. 2015;3(12):649–76. https://doi.org/10.1056/NEJMra1112830.
Hota A, Biswal S, Sahoo N, Venkatesan G, Arya S, Kumar A, Ramakrishnan MA, Pandey AB, Rout M. Seroprevalence of Capripoxvirus infection in sheep and goats among different agro-climatic zones of Odisha, India. Vet World. 2018;11(1):66.
Onyango J, Mata F, McCormick W, Chapman S. Prevalence, risk factors and vaccination efficacy of contagious ovine ecthyma (orf) in England. Vet Rec. 2014;175(13):326. https://doi.org/10.1136/vr.102353.
Alim ABNM. Contagious ecthyma in Malaysia. Technical Report No. 9. Kuala Lumpur: Jabatan Perkhidmatan Haiwan; 1990.
Jesse FFA, Latif SN, Abba Y, Hambali IU, Bitrus AA, Peter ID, Haron AW, Bala JA, Balakrishnan KN, Abdullah AA, Lila MA. Seroprevalence of orf infection based on IgM antibody detection in sheep and goats from selected small ruminant farms in Malaysia. Comp Clin Pathol. 2018a;27(2):499–503. https://doi.org/10.1007/s00580-017-2619-8.
Bala JA, Balakrishnan KN, Abdullah AA, Yi LC, Bitrus AA, Abba Y, Aliyu IA, Peter ID, Hambali IU, Mohamed RB, Jesse FFA, Haron AW, Noordin MM, Mohd-Lila MA. Sero-epidemiology of contagious ecthyma based on detection of IgG antibody in selected sheep and goats farms in Malaysia. Adv Anim Vet Sci. 2018b;6(5):219–26. https://doi.org/10.17582/journal.aavs/2018/6.5.219.226.
Gökce HI, Genc O, Gökce G. Sero-prevalence of contagious ecthyma in lambs and humans in Kars, Turkey. Turk J Vet Anim Sci. 2005;29(1):95–101.
Bala JA, Balakrishnan KN, Abdullah AA, Kimmy T, Abba Y, Bin Mohamed R, Jesse FFA, Haron AW, Noordin MM, Bitrus AA, Hambali IU. Dermatopathology of Orf Virus (Malaysian Isolates) in mice experimentally inoculated at different sites with and without Dexamethasone Administration. J Pathog. 2018c;2018:9207576 12 pages.
Yeshwas F, Almaz H, Sisay G. Confirmatory diagnosis of contagious ecthyma (Orf) by polymerase chain reaction at Adet sheep research sub-center, Ethiopia: a case report. J Vet Med Anim Health. 2014;6(July):187–91. https://doi.org/10.5897/JVMAH2014.0289.
Mazur C, Ferreira II, Filho FB, Galler R. Molecular characterization of Brazilian isolates of orf virus. Vet Microbiol. 2000;73:253–9. https://doi.org/10.1016/S0378-1135(99)00151-0.
Jesse FFA, Hambali IU, Abba Y, Lin CC, Chung ELT, Bitrus AA, Mohd Lila MA. Effect of dexamethasone administration on the pathogenicity and lesion severity in rats experimentally inoculated with Orf virus (Malaysian isolates). Comp Clin Pathol. 2018b:1–10. https://doi.org/10.1007/s00580-018-2726-1.
Balakrishnan S, Venkataramanan R, Ramesh A, Roy P. Contagious ecthyma outbreak among goats at Nilgiri hills. Indian J Anim Res. 2017. https://doi.org/10.18805/ijar.10277.
Housawi FMT, Elzein EA, Al Afaleq AI, Amin MM. Sero-surveillance for Orf antibodies in sheep and goats in Saudi Arabia employing the ELISA technique. J Comp Pathol. 1992;106(2):153–8.
Nandi S, De UK, Chowdhury S. Current status of contagious ecthyma or orf disease in goat and sheep-a global perspective. Small Rumin Res. 2011;96(2–3):73–82. https://doi.org/10.1016/j.smallrumres.2010.11.018.
Bora M, Bora DP, Barman NN, Borah B, Das S. Seroprevalence of contagious ecthyma in goats of Assam: an analysis by indirect enzyme-linked immunosorbent assay. Vet World. 2016;9(9):1028–33. https://doi.org/10.14202/vetworld.2016.1028-1033.
Chi X, Zeng X, Hao W, Li M, Li W, Huang X, Wang S, Luo S. Heterogeneity among Orf virus isolates from goats in Fujian Province, southern China. PLoS One. 2013;8(10). https://doi.org/10.1371/journal.pone.0066958.
Orgeur P, Mimouni P, Signoret JP. The influence of rearing conditions on the social relationships of young male goats ( Capra hircus). Appl Anim Behav Sci. 1990;27:105–13. https://doi.org/10.1016/0168-1591(90)90010-B Elsevier Science Publishers B. V.
McElroy MC, Bassett HF. The development of oral lesions in lambs naturally infected with orf virus. Vet J. 2007;174(3):663–4. https://doi.org/10.1016/j.tvjl.2006.10.024.
Chavez-Alvarez S, Barbosa-Moreno L, Villarreal-Martinez A, Vazquez-Martinez OT, Ocampo-Candiani J. Dermoscopy of contagious ecthyma (orf nodule). J Am Acad Dermatol. 2016;74(5):e95–6. https://doi.org/10.1016/j.jaad.2015.10.047.
Delhon G, Tulman ER, Afonso CL, Lu Z, De A, Lehmkuhl HD, Piccone ME, Kutish GF. Genomes of the Parapoxviruses Orf Virus and Bovine Papular Stomatitis Virus Genomes of the Parapoxviruses Orf Virus and Bovine Papular Stomatitis Virus. 2004;78(1):168–77. https://doi.org/10.1128/JVI.78.1.168.
Spyrou V, Valiakos G. Orf virus infection in sheep or goats. Vet Microbiol. 2015;181(1–2):178–82. https://doi.org/10.1016/j.vetmic.2015.08.010.
Abdullah FFJ, Rofie AMB, Tijjani A, Chung ELT, Mohammed K, Sadiq MA, Saharee AA, Abba Y. Survey of goat farmers' compliance on proper herd health program practices. Int J Livest Res. 2015b;5(11):8–14. https://doi.org/10.5455/ijlr.20151103105812.
Mobini S. Herd health management practices for goat production. In: Proc. 14th Ann. Goat Field Day. Langston: Langston University; 1999. p. 13-22. http://www.luresext.edu/sites/default/files/1999%20Field%20Day.pdf.
Zamri-Saad M, Al-Ajeeli KS, Ibrahim A. A severe outbreak of Orf involving the buccal cavity of goats. Trop Anim Health Prod. 1992;24(3):177–8.
Thurman RJ, Fitch RW. Contagious Ecthyma. N Engl J Med. 2015;372(8):e12.
Azmi MLM, Field HJ, Rixon F, McLauchlan J. Protective immune Reponses induced by non-infectious L-particles of equine herpesvirus Type-1. J Microbiol. 2002;40(1):11–9.
Musser JM, Taylor CA, Guo J, Tizard IR, Walker JW. Development of a contagious ecthyma vaccine for goats. Am J Vet Res. 2008;69(10):1366–70.
Musser JM, Waldron DF, Taylor CA. Evaluation of homologous and heterologous protection induced by a virulent field strain of Orf virus and an Orf vaccine in goats. Am J Vet Res. 2012;73(1):86–90.
Zhao K, He W, Gao W, Lu H, Han T, Li J, Zhang X, Zhang B, Wang G, Su G. Orf virus DNA vaccines expressing ORFV 011 and ORFV 059 chimeric protein enhances immunogenicity. Virol J. 2011;8(1):562. https://doi.org/10.1186/1743-422X-8-562.
Zhao K, Song D, He W, Lu H, Zhang B, Li C, Chen K, Gao F. Identification and phylogenetic analysis of an Orf virus isolated from an outbreak in sheep in the Jilin province of China. Vet Microbiol. 2010;142(3–4):408–15. https://doi.org/10.1016/j.vetmic.2009.10.006.
Lewis C. Update on orf. In Practice. 1996;18:376–81. https://doi.org/10.1136/inpract.18.8.376.
Scagliarini A, Piovesana S, Turrini F, Savini F, Sithole F, Mccrindle CM. Orf in South Africa: endemic but neglected. Onderstepoort J Vet Res. 2012;79:1–8 http://hdl.handle.net/10520/EJC129087.
Hosamani M, Scagliarini A, Bhanuprakash V, McInnes CJ, Singh RK. Orf: an update on current research and future perspectives. Expert Rev Anti-Infect Ther. 2009;7(7):879–93. https://doi.org/10.1586/eri.09.64.
Inoshima Y, Yamamoto Y, Takahashi T, Shino M, Katsumi A, Shimizu S, Sentsui H. Serological survey of parapoxvirus infection in wild ruminants in Japan in 1996–9. Epidemiol Infect. 2001;126(1):153–6. https://doi.org/10.1017/S0950268801005131.
Steven J, Jeremy P. Herd health program for meat goats. In livestock health series, agriculture and natural resources, publication of the division of agriculture, research and extension, University of Ankansas system. 2015. https://www.uaex.edu/publications/PDF/FSA-3097.pdf. Accessed 26 July 2015.
Doye D. The use of electronic technology in teaching farm record keeping. Am J Agric Econ. 2004;86(3):762–6. https://doi.org/10.1111/j.0002-9092.2004.00621.x.
Animal Health Australia. Canberra: Animal Health in Australia 2009; 2010. p. 49-56. https://animalhealthaustralia.com.au/wp-content/uploads/2015/12/AHIA-2009.pdf. Accessed 26 July 2015
Jensen KL, English BC, Menard RJ. Livestock farmers' use of animal or herd health information sources. J Ext. 2009;47(1):1FEA7 Available at:http://www.joe.org/joe/2009february/a7.php.
MacInnes H, Zhou Y, Gouveia K, Cromwell J, Lowery K, Layton RC, Zubelewicz M, Sampath R, Hofstadler S, Liu Y, Cheng YS. Transmission of aerosolized seasonal H1N1 influenza a to ferrets. PloS one. 2011;6(9):e24448. https://doi.org/10.1371/journal.pone.0024448.
Koopman G, Mooij P, Dekking L, Mortier D, Nieuwenhuis I, van Heteren M, Kuipers H, Remarque EJ, Radošević K, Bogers WM. Correlation between virus replication and antibody responses in macaques following infection with pandemic influenza a virus. J Virol, JVI-02757. 2015. https://doi.org/10.1128/JVI.02757-15.
Inoshima Y, Morooka A, Sentsui H. Detection and diagnosis of parapoxvirus by the polymerase chain reaction. J Virol Methods. 2000;84(2):201–8. https://www.sciencedirect.com/science/article/pii/S0166093499001445?via%3Dihub.
Inoshima Y, Takasu M, Ishiguro N. Establishment of an on-site diagnostic procedure for detection of orf virus from oral lesions of Japanese serows (Capricornis crispus) by loop-mediated isothermal amplification. J Vet Med Sci. 2016;78(12):1841–5. https://doi.org/10.1292/jvms.16-0268.
Chaudhri G, Panchanathan V, Bluethmann H, Karupiah G. Obligatory requirement for antibody in recovery from a primary poxvirus infection. J Virol. 2006;80(13):6339–44. https://doi.org/10.1128/JVI.00116-06.
Jegaskanda S, Weinfurter JT, Friedrich TC, Kent SJ. Antibody-dependent cellular cytotoxicity (ADCC) is associated with control of pandemic H1N1 influenza virus infection of macaques. J Virol. 2013:JVI–03030. https://doi.org/10.1128/JVI.03030-12.
Panchanathan V, Chaudhri G, Karupiah G. Protective immunity against secondary poxvirus infection is dependent on antibody but not on CD4 or CD8 T-cell function. J Virol. 2006;80(13):6333–8. https://doi.org/10.1128/JVI.00115-06.
Panchanathan V, Chaudhri G, Karupiah G. Correlates of protective immunity in poxvirus infection: where does antibody stand? Immunol Cell Biol. 2008;86(1):80–6. https://doi.org/10.1038/sj.icb.7100118.
Edghill-Smith Y, Golding H, Manischewitz J, King LR, Scott D, Bray M, Nalca A, Hooper JW, Whitehouse CA, Schmitz JE, Reimann KA. Smallpox vaccine–induced antibodies are necessary and sufficient for protection against monkeypox virus. Nat Med. 2005;11(7):740. https://doi.org/10.1038/nm1261.
Nollens HH, Gulland FM, Hernandez JA, Condit RC, Klein PA, Walsh MT, Jacobson ER. Seroepidemiology of parapoxvirus infections in captive and free-ranging California Sea lions Zalophus californianus. Dis Aquat Org. 2006;69(2–3):153–61. https://doi.org/10.3354/dao069153.
Haig DM. Orf virus infection and host immunity. Curr Opin Infect Dis. 2006;19(2):127–31.
Bergqvist C, Kurban M, Abbas O. Orf virus infection. Rev Med Virol. 2017;27(4):1–9. https://doi.org/10.1002/rmv.1932.
Kitchen M, Müller H, Zobl A, Windisch A, Romani N, Huemer H. Orf virus infection in a hunter in Western Austria, presumably transmitted by game. Acta Derm Venereol. 2014;94(2):212–4. https://doi.org/10.2340/00015555-1643.
Kottaridi C, Nomikou K, Teodori L, Savini G, Lelli R, Markoulatos P, Mangana O. Phylogenetic correlation of Greek and Italian orf virus isolates based on VIR gene. Vet Microbiol. 2006;116(4):310–6. https://doi.org/10.1016/j.vetmic.2006.04.020.
Lojkic I, Cac Z, Beck A, Bedekovic T, Cvetnic Z, Sostaric B. Phylogenetic analysis of Croatian orf viruses isolated from sheep and goats. Virol J. 2010;7:1–7. https://doi.org/10.1186/1743-422X-7-314.
Maganga GD, Relmy A, Bakkali-Kassimi L, Ngoubangoye B, Tsoumbou T, Bouchier C, Berthet N. Molecular characterization of Orf virus in goats in Gabon, Central Africa. Virol J. 2016;13(1):1–5. https://doi.org/10.1186/s12985-016-0535-1.
Arya R, Antonisamy B, Kumar S. Sample size estimation in prevalence studies. Indian J Pediatr. 2012. https://doi.org/10.1007/s12098-012-0763-3.
Bala JA, Kawo AH, Mukhtar MD, Sarki A, Magaji N, Aliyu IA, Sani MN. Prevalence of hepatitis C virus infection among blood donors in some selected hospitals in Kano, Nigeria. (IRJM) (ISSN: 2141-5463). Int Res J Microbiol. 2012;3(6):217–22.
Balamurugan V, Singh RP, Saravanan P, Sen A, Sarkar J, Sahay B, Rasool TJ, Singh RK. Development of an indirect ELISA for the detection of antibodies against Peste-des-petits-ruminants virus in small ruminants. Vet Res Commun. 2007;31(3):355–64.
Loh HS, Mohd-Azmi ML. Development of a quantitative real-time RT-PCR for kinetic analysis of immediate-early transcripts of rat cytomegalovirus. Acta Virol. 2009;53(4):261–9 http://www.elis.sk/download_file.php?product_id=1776&session_id=cufekitil8lnecdkjhp3vkbr02.
Tam YJ, Allaudin ZN, Lila MAM, Bahaman AR, Tan JS, Rezaei MA. Enhanced cell disruption strategy in the release of recombinant hepatitis B surface antigen from Pichia pastoris using response surface methodology. BMC Biotechnol. 2012;12, art. no.(70). https://doi.org/10.1186/1472-6750-12-70.
Guo J, Zhang Z, Edwards JF, Ermel RW, Taylor C, De La Concha-Bermejillo A. Characterization of a north American orf virus isolated from a goat with persistent, proliferative dermatitis. Virus Res. 2003;93(2):169–79. https://doi.org/10.1016/S0168-1702(03)00095-9.
Babiuk S, Wallace DB, Smith SJ, Bowden TR, Dalman B, Parkyn G, Copps J, Boyle DB. Detection of antibodies against capripoxviruses using an inactivated sheeppox virus ELISA. Transbound Emerg Dis. 2009;56(4):132–41.
Bhanuprakash V, Hosamani M, Juneja S, Kumar N, Singh RK. Detection of goat pox antibodies: comparative efficacy of indirect ELISA and counterimmunoelectrophoresis. J Appl Anim Res. 2006;30(2):177–80.
Niang AB. Principles of validation of diagnostic assays for infectious diseases. Manual of diagnostic tests and vaccines for terrestrial animals, vol. 1; 2004. p. 21–9.
Azmi M, Field HJ. Interactions between equine herpesvirus type 1 and equine herpesvirus type 4: T cell responses in a murine infection model. J Gen Virol. 1993;74(11):2339–45. https://doi.org/10.1099/0022-1317-74-11-2339.
Vakhshiteh F, Allaudin ZN, Mohd-Lila MAB, Hani H. Size-related assessment on viability and insulin secretion of caprine islets in vitro. Xenotransplantation. 2013;20(2):82–8. https://doi.org/10.1111/xen.12023.
How can we describe the evolution of a density "injected" into an incompressible Newtonian fluid?
Let $d\in\left\{2,3\right\}$ and $\Omega\subseteq\mathbb R^d$ be a bounded domain. The evolution up to time $T>0$ of an incompressible Newtonian fluid with uniform density $\rho_0$ and viscosity $\nu$ is given by the instationary Navier-Stokes equations $$\left\{\begin{matrix}\displaystyle\left(\frac\partial{\partial t}+\boldsymbol u\cdot\nabla\right)\boldsymbol u&=&\displaystyle\nu\Delta\boldsymbol u-\frac 1\rho_0\nabla p+\boldsymbol f&&\text{in }\Omega\times (0,T)\\\nabla\cdot \boldsymbol u&=&0&&\text{in }\Omega\times (0,T)\end{matrix}\right.\;,\tag 1$$ where $\boldsymbol u:\Omega\times [0,T]\to\mathbb R^d$ and $p:\Omega\times [0,T]\to\mathbb R$ are the velocity field and pressure, respectively, and $\boldsymbol f:\Omega\times (0,T)\to\mathbb R^d$ is the sum of all external forces.
Now, I've read that the evolution of a density $\rho:\Omega\times[0,T]\to\mathbb R$ injected into the fluid is described by a PDE of the form $$\left(\frac\partial{\partial t}+\boldsymbol u\cdot\nabla\right)\rho=\kappa\Delta\rho+s\;\;\;\text{in }\Omega\times (0,T)\tag 2$$ where $\kappa\in\mathbb R$ is some kind of diffusion rate and $s:\Omega\times[0,T]\to\mathbb R$ is an (or the sum of many?) external source(s).
How do we obtain $(2)$? I'm searching for a simple, but mathematically coherent derivation.
Obviously, $(2)$ looks very similar to $(1)$. Why did the term $-1/\rho_0\nabla p$ disappear? (Is $(2)$ a special case of a more general variant?)
What do people mean when they say that density is injected into a fluid?
partial-differential-equations physics mathematical-physics fluid-dynamics
0xbadf00d
Equation 2 is just a normal transport equation (or advection-diffusion); it's not modelling a fluid, but modelling something being carried by the fluid motion.
I am not a mathematician but an engineer with a specialty in fluid mechanics, so forgive me if I skip some mathematical details :)...
I think the problem here is the use of the word 'density', which in this context means 'species density' (aka concentration) as opposed to 'mass density'. The variable $\rho$ in equation (2) describes 'species density'; as you say, the incompressible fluid has uniform 'mass density' $\rho_0$. Then the injection of a mass of a certain species into the fluid at some point makes more sense (e.g. imagine injecting a tracer compound in the fluid).
How do we obtain (2)? I'm searching for a simple, but mathematically coherent derivation
In the same way the Navier-Stokes equations are a representation of the local conservation of momentum of an infinitessimal fluid volume, (2) is a representation of the local conservation of (species) mass. A species concentration can be transported by the velocity field (aka advection, i.e. the $\boldsymbol{u}\cdot\boldsymbol{\nabla}\rho$ term) and by a diffusion process (aka Fick's law, i.e. the $k\Delta\rho$ term). Furthermore, it may be produced or consumed ($s$ term) by e.g. a chemical reaction.
The derivation is fairly straightforward given that the accumulation of the 'species mass' in an infinitesimal fluid volume $dV$ is the result of the in- and outflux $\boldsymbol{f}$ of the 'species mass' at the closed surface $S$ of the volume $V$:
$$d_{t}\int_{V}\rho dV=-\int_{S}\boldsymbol{f}\cdot\boldsymbol{n}dS+\int_{V}sdV$$
Here, $\boldsymbol{n}$ indicates the unit normal to the surface, and the convective flux $\boldsymbol{f}$ has an advective and a diffusive contribution, respectively:
$$\boldsymbol{f}=\rho\boldsymbol{u}+\boldsymbol{j} = \rho \boldsymbol{u} - k\boldsymbol{\nabla}\rho$$
Using Gauss' divergence theorem the integral equation is transformed:
$$\int_{V}\partial_{t}\rho dV=\int_{V}\left[-\boldsymbol{\nabla}\cdot\boldsymbol{f}+s\right]dV$$
and we obtain the differential form:
$$\partial_{t}\rho =-\boldsymbol{\nabla}\cdot\boldsymbol{f}+s$$
Substituting in the expression for the flux we retrieve the requested 'species density' equation:
$$\partial_{t}\rho+\boldsymbol{\nabla}\cdot\rho \boldsymbol{u}=k\Delta\rho+s$$
where we can subsequently simplify $\boldsymbol{\nabla}\cdot\rho \boldsymbol{u}=\rho\boldsymbol{\nabla}\cdot\boldsymbol{u}+\boldsymbol{u}\cdot\boldsymbol{\nabla}\rho=\boldsymbol{u}\cdot\boldsymbol{\nabla}\rho$ because of the continuity equation $\boldsymbol{\nabla}\cdot\boldsymbol{u}=0$.
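If it helps intuition, here is a minimal numerical sketch of the species-transport equation $(2)$ in one spatial dimension, under the simplest assumptions consistent with the derivation above: a constant velocity $u$ (so $\boldsymbol{\nabla}\cdot\boldsymbol{u}=0$ holds trivially), a periodic domain, and no sources. All names, grid sizes and parameter values are illustrative choices of mine, not anything from your problem statement.

```python
import numpy as np

# 1-D sketch of equation (2):  d(rho)/dt + u d(rho)/dx = k d2(rho)/dx2 + s
# Constant advecting velocity u, periodic domain, explicit Euler in time.
nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)

u, k = 1.0, 1e-3                    # advection speed and diffusivity (illustrative)
dt = 0.4 * min(dx / abs(u), dx**2 / (2 * k))   # respect advective/diffusive limits

rho = np.exp(-((x - 0.2) / 0.05) ** 2)         # 'injected' blob of species mass
s = np.zeros_like(x)                           # no production/consumption

for _ in range(500):
    adv = -u * (rho - np.roll(rho, 1)) / dx                           # upwind advection
    dif = k * (np.roll(rho, -1) - 2 * rho + np.roll(rho, 1)) / dx**2  # Fickian diffusion
    rho = rho + dt * (adv + dif + s)

# Total species mass is conserved (up to round-off), as the integral form demands:
print(rho.sum() * dx)
```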
Obviously, (2) looks very similar to (1). Why did the term $−1/ρ_0∇p$ disappear? (Is (2) a special case of a more general variant?)
All conservation equations look similar because the starting points, i.e. conservation of mass, momentum, energy, entropy, etc., are similar. For the Navier-Stokes equations, we are interested in the accumulation of momentum in an infinitesimal fluid volume:
$$d_{t}\int_{V}\rho\boldsymbol{u} dV=-\int_{S}\boldsymbol{\sigma}\cdot\boldsymbol{n}dS+\int_{V}fdV$$
I have changed notation slightly: $\boldsymbol{\sigma}$ is now the in- and outflux through the surface $S$ and $\boldsymbol{f}$ is a source or sink of momentum acting on the volume $V$ (also known as a body force, see Newton's second law), but the structure of the equations is the same. The change of notation is meant to make the point that the convective flux $\boldsymbol{\sigma}$ differs from the convective flux $\boldsymbol{f}$ in that $\boldsymbol{\sigma}$ is a rank-2 tensor; it still contains an advective and a diffusive contribution: $$\boldsymbol{\sigma}=\rho\boldsymbol{u}\otimes\boldsymbol{u}+\boldsymbol{\tau}=\rho\boldsymbol{u}\otimes\boldsymbol{u}+p\boldsymbol{I}-\mu\left[\boldsymbol{\nabla}\boldsymbol{u}+\boldsymbol{\nabla}\boldsymbol{u}^{T}\right]$$
Here $\rho\boldsymbol{u}\otimes\boldsymbol{u}$ denotes the advective flux of momentum, whereas $\boldsymbol{\tau}=p\boldsymbol{I}-\mu\left[\boldsymbol{\nabla}\boldsymbol{u}+\boldsymbol{\nabla}\boldsymbol{u}^{T}\right]$ represents the diffusive flux of momentum, most often referred to as the stress tensor. This stress tensor contains two types of stress: normal and shear stress. Normal stresses are caused mainly by the pressure, whereas shear stresses are caused by viscosity due to velocity gradients across laminae of fluid (see Newtonian fluids).
The reason why the pressure term is in the Navier-Stokes equations is because a pressure gradient directly leads to changes in momentum, whereas it has no direct effect on the transport of 'species mass'. It instead would be reflected in the convective term of the 'species mass' equation.
nluigi
What do you mean by "it's a perturbation of $\rho$ on top of $\rho_0$"? And how can we derive your first centered equation?
– 0xbadf00d
@0xbadf00d: I should not have said that as it is not true; the sum of the species densities is by definition the mass density $\rho_0$, so please forget it. The first equation has no derivation (that I know of) but is a sort of 'law' often used in engineering, stating intuitively that: accumulation of quantity = quantity in − quantity out + production − consumption. I guess you could derive it using infinitesimals but it doesn't provide new insights as far as I am concerned.
– nluigi
I think I've almost got it, but it's still unclear to me why $\Delta\rho$ appears in your formula and why your $f$ needs to have the form you've given. I've tried to derive (a part) of $(2)$ using Reynolds' transport theorem and asked another question in order to clarify how I need to proceed: math.stackexchange.com/questions/1582114/…
@0xbadf00d: the Laplacian of $\rho$ is simply the result of $\nabla\cdot\nabla\rho=\nabla^2\rho=\Delta\rho$. $f$ is just the most used form of advection and diffusion; there could, however, also be other terms that need to be included. An example is thermophoresis, where a mass flux is induced by a temperature gradient. I have a feeling you aren't satisfied with my answers because you would like 'pure' mathematical proofs; unfortunately I am not qualified to give them as I usually approach these problems purely from an engineering perspective.
@nluigi Maybe my confusion is due to an elementary misunderstanding. I've translated my thoughts into a question at PhysicsSE: physics.stackexchange.com/questions/225143/…
The classical version of Stokes' Theorem revisited
Steen Markvorsen
Using only fairly simple and elementary considerations - essentially from first year undergraduate mathematics - we show how the classical Stokes' theorem for any given surface and vector field in $\mathbb{R}^{3}$ follows from an application of Gauss' divergence theorem to a suitable modification of the vector field in a tubular shell around the given surface. The two stated classical theorems are (like the fundamental theorem of calculus) nothing but shadows of the general version of Stokes' theorem for differential forms on manifolds. The main points in the present paper, however, are, firstly, that this latter fact usually does not get within reach for students in first year calculus courses and, secondly, that calculus textbooks in general only just hint at the correspondence alluded to above. Our proof that Stokes' theorem follows from Gauss' divergence theorem goes via a well known and often used exercise, which simply relates the concepts of divergence and curl on the local differential level. The rest of the paper uses only integration in $1$, $2$, and $3$ variables together with a 'fattening' technique for surfaces and the inverse function theorem.
International Journal of Mathematical Education in Science and Technology
Gauss' divergence theorem
undergraduate mathematics
Stokes' theorem
curriculum
Markvorsen, S. (2008). The classical version of Stokes' Theorem revisited. International Journal of Mathematical Education in Science and Technology, 39(7), 879-888. https://doi.org/10.1080/00207390802091146
A Rate-Reduced Neuron Model for Complex Spiking Behavior
Koen Dijkstra ORCID: orcid.org/0000-0002-4076-26621,
Yuri A. Kuznetsov1,2,
Michel J. A. M. van Putten3,4 &
Stephan A. van Gils1
The Journal of Mathematical Neuroscience volume 7, Article number: 13 (2017)
We present a simple rate-reduced neuron model that captures a wide range of complex, biologically plausible, and physiologically relevant spiking behavior. This includes spike-frequency adaptation, postinhibitory rebound, phasic spiking and accommodation, first-spike latency, and inhibition-induced spiking. Furthermore, the model can mimic different neuronal filter properties. It can be used to extend existing neural field models, adding more biological realism and yielding a richer dynamical structure. The model is based on a slight variation of the Rulkov map.
Networks of coupled neurons quickly become analytically intractable and computationally infeasible due to their large state and parameter spaces. Therefore, starting with the work of Beurle [1], a popular modeling approach has been the development of continuum models, called neural fields, that describe the average activity of large populations of neurons (Wilson and Cowan [2, 3], Nunez [4], Amari [5, 6]). In neural field models, the network architecture is represented by connectivity functions and the corresponding transmission delays, while differential operators characterize synaptic dynamics. All intrinsic properties of the underlying neuronal populations are condensed into firing rate functions, which replace individual neuronal action potentials and map the sum of all incoming synaptic currents to an outgoing firing rate. While some neural field models incorporate spike-frequency adaptation (Pinto and Ermentrout [7, 8], Coombes and Owen [9], Amari [10, 11]), more complex spiking behavior such as postinhibitory rebound, phasic spiking and accommodation, first-spike latency, and inhibition-induced spiking is mostly absent, an exception being the recent reduction of the Izhikevich neuron (Nicola and Campbell [12], Visser and van Gils [13]).
Here, we present a rate-reduced model that is based on a slight modification of the Rulkov map (Rulkov [14], Rulkov et al. [15]), a phenomenological, map-based single neuron model. Similar to Izhikevich neurons (Izhikevich [16]), the Rulkov map can mimic a wide variety of biologically realistic spiking patterns, all of which are preserved by our rate formulation. The rate-reduced model can therefore be used to incorporate all the aforementioned types of spiking behavior into existing neural field models.
This paper is organized as follows. In Sect. 2, we present the single spiking neuron model our rate-reduced model is based upon, and illustrate different spiking patterns and filter properties. In Sect. 3 we heuristically reduce the single neuron model to a rate-based formulation, and show that the rate-reduced model preserves spiking and filter properties. We give an example of a neural field that is augmented with our rate model in Sect. 4 and end with a discussion in Sect. 5.
Single Spiking Neuron Model
In this section we present a phenomenological, map-based single neuron model, which is a slight modification of the Rulkov map (Rulkov [14], Rulkov et al. [15]). The Rulkov map was designed to mimic the spiking and spiking-bursting activity of many real biological neurons. It has computational advantages because the map is easier to iterate than continuous dynamical systems. Furthermore, as we will show in this paper, it is straightforward to obtain a rate-reduced version of a slightly modified version of the Rulkov model.
The Rulkov map consists of a fast variable v, resembling the neuronal membrane potential, and a slow adaptation variable a. In our modification of the original model, the adaptation only implicitly depends on the membrane potential through a binary spiking variable. As we will show in the next section, this modification allows for an easy decoupling of the membrane potential and adaptation variable, and therefore a straightforward rate reduction of the model. The cost of the modification is the loss of subthreshold oscillation dynamics. The modified Rulkov map is given by
$$ \textstyle\begin{cases} v_{n+1} = f(v_{n}, v_{n-1},\kappa u_{n}-a_{n}-\theta), \\ a_{n+1} = a_{n}-\varepsilon(a_{n}+(1-\kappa)u_{n}-\gamma s_{n}), \end{cases} $$
(SNM)
where the piecewise continuous function \(f\colon\mathbb{R}^{3}\to \mathbb{R}\) is given by
$$ f(x_{1},x_{2},x_{3})= \textstyle\begin{cases}\frac{2500+150x_{1}}{50-x_{1}}+50x_{3} &\text{if } x_{1}< 0, \\ 50+50x_{3} &\text{if } 0\leq x_{1}< 50+50x_{3} \quad\wedge\quad x_{2}< 0, \\ -50 &\text{otherwise}. \end{cases} $$
The form of f is chosen to mimic the shape of neuronal action potentials. The variable u in (SNM) represents external (synaptic) input to the cell, which we assume to be given, and s is a binary indicator variable, given by
$$ s_{n} = \textstyle\begin{cases} 1&\text{if the neuron spiked at iteration $n$,} \\ 0 &\text{otherwise.} \end{cases} $$
A Rulkov neuron spikes at iteration n if its membrane potential is reset to \(v_{n+1}=-50\) in the next iteration. It follows from (1) that the spiking condition in (2) is satisfied if and only if
$$ v_{n}\geq0\quad\wedge\quad \bigl(v_{n}\geq50+50(\kappa u_{n}-a_{n}-\theta ) \quad\vee\quad v_{n-1}\geq0 \bigr). $$
The dependence of \(v_{n+1}\) on \(v_{n-1}\) in (SNM) ensures that a neuron always spikes if its membrane potential is non-negative for two consecutive iterations, independent of the external input u. To mimic spiking patterns of real biological neurons, one time step should correspond to approximately 0.5 ms of time.
The parameter \(0<\varepsilon<1\) in (SNM) sets the time scale of the adaptation variable and γ determines the adaptation strength. The parameter θ can be interpreted as a spiking threshold: for constant external input \(u_{n}=\varphi\), the neuron spikes persistently if and only if \(\varphi>\theta\). After a change of variable \(a_{n}\rightarrow a_{n}+(1-\kappa)\varphi\) and parameter \(\theta\rightarrow\theta-\varphi\), constant external input vanishes. Therefore, the asymptotic response to constant input does not depend on the parameter κ. However, the parameter κ can be used to tune the transient response of the neuron to changes in external input, as it determines how input is divided between the fast and the slow subsystem of (SNM). For parameter values \(\kappa \in[0,1]\), κ can be interpreted as the fraction of the input that is applied to the fast subsystem, and therefore determines (together with ε) how quickly the membrane potential dynamics react to changes in input. Since the effective drive of the system is given by \(\kappa u_{n}-a_{n}\), changes in external input are initially magnified for \(\kappa>0\). Asymptotically, this is then counterbalanced by additional adaption. Finally, for \(\kappa<0\), the initial response of the membrane potential to a change in input is reversed, i.e. an increase in external input initially has an inhibitory effect, and a decrease in external input initially has an excitatory effect.
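As an illustration, the map (SNM) can be iterated with a few lines of code. The following Python sketch implements (1)–(3) directly; the helper names and the input amplitude are our own choices, and the parameters correspond to the tonic-spiking example of Fig. 2A.

```python
import numpy as np

def f(x1, x2, x3):
    """Piecewise map (1) shaping the action potential."""
    if x1 < 0:
        return (2500 + 150 * x1) / (50 - x1) + 50 * x3
    if x1 < 50 + 50 * x3 and x2 < 0:
        return 50 + 50 * x3
    return -50.0

def simulate(u, theta, kappa, eps, gamma):
    """Iterate (SNM) for an input sequence u; return v, a and the spikes s."""
    n = len(u)
    v = np.full(n + 1, -50.0)       # membrane potential, started at the reset value
    a = np.zeros(n + 1)             # adaptation variable
    s = np.zeros(n, dtype=int)      # binary spiking variable (2)
    v_prev = -50.0                  # v_{n-1}
    for t in range(n):
        drive = kappa * u[t] - a[t] - theta
        # spiking condition (3): the neuron is reset at the next iteration
        s[t] = int(v[t] >= 0 and (v[t] >= 50 + 50 * drive or v_prev >= 0))
        v[t + 1] = f(v[t], v_prev, drive)
        a[t + 1] = a[t] - eps * (a[t] + (1 - kappa) * u[t] - gamma * s[t])
        v_prev = v[t]
    return v, a, s

# Tonic spiking (parameters of Fig. 2A) under a constant suprathreshold input:
u = np.full(400, 0.3)               # illustrative amplitude, 0.3 > theta
v, a, s = simulate(u, theta=0.1, kappa=0.5, eps=0.5, gamma=0.5)
print("spike count:", s.sum())
```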
Fast Dynamics
The Rulkov map (SNM) with \(0<\varepsilon\ll1\) is a slow-fast system, and we can explore the fast spiking dynamics of the model by assuming the suprathreshold drive \(\kappa u_{n}-a_{n}-\theta =\varsigma\) is constant. In this case, (SNM) reduces to the fast subsystem
$$ v_{n+1}= \textstyle\begin{cases}\frac{2500+150v_{n}}{50-v_{n}}+50\varsigma& \text{if } v_{n}< 0, \\ 50+50\varsigma& \text{if } 0\leq v_{n}< 50+50\varsigma, \\ -50 & \text{otherwise}. \end{cases} $$
(FSS)
The map (FSS) undergoes a saddle-node bifurcation at \(\varsigma=0\) (Fig. 1). For \(\varsigma<0\) there exist a stable and an unstable fixed point, given by
$$ v_{\mathrm{s}}=25 \bigl(\varsigma-2-\sqrt{\varsigma^{2}-8\varsigma } \bigr), \qquad v_{\mathrm{u}}=25 \bigl(\varsigma-2+\sqrt{\varsigma ^{2}-8\varsigma} \bigr), $$
respectively (Fig. 1A), while the system will settle into a stable periodic orbit for \(\varsigma>0\) (Fig. 1B). In the former case the unstable fixed point acts as an excitation threshold: if the value of the membrane potential exceeds this point, it will spike once and then decay back to the stable equilibrium. Since the unstable fixed point \(v_{\mathrm{u}}\) always lies to the right of the 'reset potential' \(v=-50\), a stable fixed point and a periodic orbit can never coexist. This guarantees that we can define a firing rate function \(S\colon\mathbb{R}\to\mathbb{Q}\) for the fast subsystem (FSS), given by
$$ S(\varsigma)= \textstyle\begin{cases} 0 & \text{for } \varsigma\leq0,\\ \frac{1}{P(\varsigma)} & \text{for } \varsigma>0, \end{cases} $$
where \(P\colon\mathbb{R}_{>0}\to\mathbb{N}\) maps the drive to the period of the corresponding stable limit cycle of (FSS). The fast subsystem (FSS) is piecewise-defined on the 'left' interval \((-\infty,0)\), the 'middle' interval \([0,50+50\varsigma)\), and the 'right' interval \([50+50\varsigma,\infty)\). The left interval is mapped to the left and middle interval, and the middle and right interval are mapped to right and left interval, respectively. The period of a limit cycle of (FSS) therefore only depends on the number of iterations in the left interval. Note, however, that the shape of the function f given in (1) can easily be changed to support bistability in the fast subsystem, which allows for some additional dynamics such as 'chattering', a response of periodic bursts of spikes to constant input (Rulkov [14]).
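Because every limit cycle of (FSS) passes through the reset value \(v=-50\), the period \(P(\varsigma)\), and hence the firing rate (5), can be computed by direct iteration. Below is a minimal sketch, reusing the function f from the code above; the cutoff max_iter is our own safeguard for \(\varsigma\to0^{+}\), where the period diverges.

```python
def period(sigma, max_iter=100_000):
    """Period of the stable limit cycle of (FSS) for suprathreshold drive sigma > 0."""
    v, v_prev = -50.0, -50.0
    for steps in range(1, max_iter + 1):
        v, v_prev = f(v, v_prev, sigma), v
        if v == -50.0:              # back at the reset value: one full cycle
            return steps
    return max_iter                 # sigma ~ 0+: period effectively infinite

def S(sigma):
    """Firing rate (5) of the fast subsystem."""
    return 0.0 if sigma <= 0 else 1.0 / period(sigma)

print(period(0.1))                  # 8, the limit cycle of Fig. 1B
print(S(1.0))                       # 1/3: the maximal rate of one spike per three steps
```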
Illustration of the fast subsystem (FSS) of (SNM). ( A ) For \(\varsigma=-\tfrac{1}{10}\) there exist a stable (green) and unstable (orange) fixed point. ( B ) For \(\varsigma=\tfrac{1}{10}\) the system will settle into a stable periodic orbit (dashed green line) with period \(P (\tfrac{1}{10} )=8\)
Spiking Patterns
Izhikevich [17] classified different features of biological spiking neurons, most of which can be mimicked by our modified Rulkov model (SNM). In the following, we discuss the role of the model parameters with the help of a few physiologically relevant examples.
Tonic Spiking/Fast Spiking
Tonically spiking (also called 'fast spiking') neurons respond to a step input with spike trains of constant frequency. Most inhibitory neurons are fast spiking (Izhikevich [17]). In the modified Rulkov model this can be achieved by choosing a 'large' \((1>\varepsilon>\frac{1}{10} )\) value for the time scale parameter, in which case the influence of a single spike on the adaptation variable decays very fast. Therefore, the value of the adaptation variable is dominated by the timing of the last spike and the influence of older spikes is negligible (Fig. 2A). Since the time scale separation is small, the qualitative dynamics does not depend on κ.
Different types of spiking patterns generated by the single neuron model (SNM). Corresponding parameter values \((\theta,\kappa,\varepsilon,\gamma )\) are given in brackets. ( A ) Tonic spiking \((\tfrac{1}{10},\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2} )\). ( B ) Spike-frequency adaptation \((\tfrac{1}{10},1,\tfrac{1}{1000},5 )\). ( C ) Rebound spiking \((\tfrac{1}{50},2,\tfrac{1}{100},\frac{1}{5} )\). ( D ) Accommodation \((\tfrac{3}{25},3,\tfrac{1}{50},\tfrac{2}{5} )\). ( E ) Spike latency \((\tfrac{1}{10},0,\tfrac{1}{200},\tfrac{2}{5} )\). ( F ) Inhibition-induced spiking \((\tfrac{1}{50},-1,\tfrac{1}{500},\tfrac{2}{5} )\)
Spike-Frequency Adaptation/Regular Spiking
Most cortical excitatory neurons are not 'fast spiking', but respond to a step input with a spike train of slowly decreasing frequency, a phenomenon known as 'spike-frequency adaptation' (also called 'regular spiking'). This kind of spiking behavior can be modeled by applying all input to the fast subsystem (\(\kappa=1\)) and choosing \(\varepsilon\ll 1\). The adaptation variable then acts as a slow time scale, such that a single spike has a long-lasting effect on the adaptation variable (Fig. 2B). The level of adaptation can be controlled with γ.
Rebound Spiking and Accommodation
The excitability of some neurons is temporarily enhanced after they are released from hyperpolarizing current, which can result in the firing of one or more 'rebound spikes'. Rebound spiking is an important mechanism for central pattern generation for heartbeat and other motor patterns in many neuronal systems (Chik et al. [18]). In the modified Rulkov map, postinhibitory rebound spiking can be modeled by choosing \(\kappa>1\). In this case, the adaptation variable will become negative while the cell gets hyperpolarized, which can be sufficient to trigger temporary spiking once the inhibitory input is turned off (Fig. 2C). Similarly, excitatory 'subthreshold' (\(u_{n}<\theta\)) input can elicit temporary spiking if the input is ramped up sufficiently fast (Fig. 2D).
Spike Latency and Inhibition-Induced Spiking
If all input is applied to the slow subsystem (\(\kappa=0\)), there can be a large latency between the input onset and the first spike of the neuron, yielding a delayed response to a pulse input (Fig. 2E). For \(\kappa<0\), the initial response of the model to changes in input is reversed: excitation initially leads to hyperpolarization of the neuron and inhibition can induce temporary spiking (Fig. 2F). This inhibition-induced spiking is a feature of many thalamo-cortical neurons (Izhikevich [17]).
Neuronal Filtering
In the previous section, we illustrated how the parameter κ can tune transient spiking responses of the modified Rulkov map to changes in external input. In reality, neurons often receive strong periodic input, e.g. from a synchronous neuronal population nearby. Information transfer between neurons may be optimized by temporal filtering, which is especially important when the same signal transmits distinct messages (Blumhagen et al. [19]). In this section, we study the response of (SNM) to harmonic input
$$ u_{n}= \varphi\cos \biggl(\frac{\omega\pi n}{1000}+\vartheta \biggr), $$
with amplitude φ, phase shift \(\vartheta\in[0,2\pi)\), and where \(\omega\in[0,1000]\) corresponds to the input frequency in Hz assuming that one iteration of (SNM) corresponds to 0.5 ms of time. A Rulkov neuron (SNM) will never spike if
$$ \kappa u_{n}-a_{n}\leq\theta \quad\forall n. $$
In this case, the adaptation reduces to the simple linear equation
$$ a_{n+1} = (1-\varepsilon)a_{n}-\varepsilon(1- \kappa)u_{n}, $$
with explicit solution
$$ a_{n} = -\varepsilon(1-\kappa)\sum _{m=1}^{\infty }(1-\varepsilon)^{m-1}u_{n-m}. $$
Inserting (6) into (9) now yields
$$ \begin{aligned}[b] \kappa u_{n}-a_{n}&= \kappa\varphi\cos \biggl(\frac{\omega\pi n}{1000}+\vartheta \biggr)\\ &\quad {}+\varepsilon(1- \kappa)\varphi\sum_{m=1}^{\infty}(1- \varepsilon)^{m-1}\cos \biggl(\frac{\omega\pi (n-m)}{1000}+\vartheta \biggr) \\ &=F(\omega)\frac{\varphi}{2}e^{(\frac{\omega\pi n}{1000}+\vartheta )i}+\overline{F(\omega)} \frac{\varphi}{2}e^{-(\frac{\omega\pi n}{1000}+\vartheta)i} \\ &=\bigl\lvert F(\omega)\bigr\rvert \varphi\cos \biggl(\frac{\omega\pi n}{1000}+ \vartheta+\arg \bigl(F(\omega) \bigr) \biggr), \end{aligned} $$
where the overline denotes complex conjugation and the frequency response \(F\colon[0,1000]\mapsto\mathbb{C}\) is given by
$$ F(\omega)=\kappa+\frac{\varepsilon(1-\kappa)}{e^{\frac{\omega \pi i}{1000}}+\varepsilon-1}. $$
The absolute value and argument of the frequency response determine the relative magnitude and phase of the output, respectively. It follows that a Rulkov neuron (SNM) receiving periodic input (6) does not spike if
$$ \bigl\lvert F(\omega)\bigr\rvert \varphi\leq\theta. $$
The inverse statement is not true, even if ω and ϑ in (6) are chosen such that
$$ \cos \biggl(\frac{\omega\pi n}{1000}+\vartheta+\arg \bigl(F(\omega ) \bigr) \biggr)=1 \quad\text{for some }n\in\mathbb{N}. $$
Since it can take a few iterations of the map to converge to its periodic orbit, a neuron will only spike if its drive is larger than the threshold θ for a sufficiently long time. The modulus of the frequency response (11) is given by
$$ \bigl\lvert F(\omega) \bigr\rvert =\sqrt{\frac{\varepsilon^{2}+2\kappa (\kappa-\varepsilon ) (1-\cos (\frac{\omega\pi }{1000} ) )}{\varepsilon^{2}+2 (1-\varepsilon ) (1-\cos (\frac{\omega\pi}{1000} ) )}}, $$
and it follows that \(\lvert F\rvert\) is strictly decreasing if and only if \(\kappa\in(-1+\varepsilon,1)\), and increasing otherwise (Fig. 3). Clearly,
$$ F(0)=1, \qquad F(1000)=\frac{2\kappa-\varepsilon}{2-\varepsilon}. $$
The input parameter κ can therefore be used to model filter properties of the neuron. For \(-1+\varepsilon<\kappa<1\) high frequencies get attenuated and a neuron can act as a low-pass filter in the sense that periodic input within a certain amplitude range only elicits a spiking response if its frequency is low enough (Fig. 4A). Similarly, for \(\kappa>1\) (and \(\kappa<-1+\varepsilon\)), high frequencies get amplified and there exists an amplitude range for which the neuron acts as a high-pass filter (Fig. 4B).
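The closed form (11) is easy to check against a direct simulation of the subthreshold recursion (8). The sketch below compares the steady-state amplitude of the drive \(\kappa u_{n}-a_{n}\) with \(\lvert F(\omega)\rvert\varphi\); the parameter values follow the low-pass example above, while the burn-in length is our own choice.

```python
import numpy as np

def F(omega, kappa, eps):
    """Frequency response (11) of the linear subthreshold dynamics."""
    return kappa + eps * (1 - kappa) / (np.exp(1j * np.pi * omega / 1000) + eps - 1)

kappa, eps, omega, phi = 0.1, 1 / 200, 2.0, 0.2
steps = 40_000                       # long enough for the transient to die out
drive = np.empty(steps)
a = 0.0
for t in range(steps):
    u_t = phi * np.cos(omega * np.pi * t / 1000)
    drive[t] = kappa * u_t - a
    a = (1 - eps) * a - eps * (1 - kappa) * u_t   # linear adaptation recursion (8)

print(drive[steps // 2:].max())                   # steady-state amplitude ...
print(abs(F(omega, kappa, eps)) * phi)            # ... matches |F(omega)| * phi
```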
Illustration of the frequency response (11) for different values of ε. ( A ) For \(\kappa=\tfrac{1}{10}\) high frequencies get attenuated. ( B ) For \(\kappa=2\) high frequencies get amplified. Note the similarity, which is caused by the fact that \(F(\omega)-1\) is an odd function of \(1-\kappa\)
Responses of (SNM) to periodic input, illustrating neuronal filter properties. ( A ) For \(\kappa=\tfrac{1}{10}\) the neuron acts as a low-pass filter. Input with an amplitude of \(\varphi=\tfrac{1}{5}\) elicits a spiking response for \(\omega=1\), whereas the neuron is quiescent for \(\omega=2\). ( B ) For \(\kappa=2\), the neuron acts as a high-pass filter. Input with amplitude \(\varphi=\tfrac{1}{10}\) elicits a spiking response for \(\omega=2\), whereas a lower input frequency of \(\omega=1\) does not. In both examples, \((\theta,\varepsilon,\gamma )= (\tfrac{1}{7},\tfrac{1}{200},2 )\)
The Rate-Reduced Neuron Model
Neural field models are based on the assumption that neuronal populations convey all relevant information in their (average) firing rates. If one wants to incorporate certain spiking dynamics, one has to come up with a corresponding rate-reduced formulation first. In this section we present a rate-reduced version of the Rulkov model (SNM) that can be used to extend existing neural field models.
The adaptation variable a in the spiking neuron model (SNM) only implicitly depends on the membrane potential v via the binary spiking variable s. We can therefore decouple the adaptation variable from the membrane potential by replacing the binary spiking variable defined in (2) by the instantaneous firing rate (5) of the fast subsystem (FSS), yielding
$$ a_{n+1} = a_{n}-\varepsilon \bigl(a_{n}+(1-\kappa)u_{n}-\gamma S(\kappa u_{n}-a_{n}-\theta) \bigr). $$
By interpreting (16) as the forward discretization of an ordinary differential equation, we arrive at the continuous time rate-reduced model
$$ \frac{1}{\varepsilon}\frac{\mathrm{d}a}{\mathrm{d}t} = -a-(1-\kappa)u+ \gamma S(\kappa u-a-\theta). $$
(RNM)
The rate-reduced neuron model (RNM) preserves the dynamical features of the full model (SNM) and reproduces all previous example spiking patterns (Fig. 5).
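Numerically, (RNM) can be integrated with a simple forward Euler step, which for step size one recovers the discrete update (16) exactly. A minimal sketch, reusing the rate function S defined earlier; the step-input amplitude is illustrative, and the parameters are those of the spike-frequency-adaptation example (Fig. 5B).

```python
import numpy as np

def integrate_rnm(u, dt, theta, kappa, eps, gamma):
    """Forward-Euler integration of (RNM); returns the adaptation and the rate."""
    a = np.zeros(len(u) + 1)
    rate = np.zeros(len(u))
    for t in range(len(u)):
        rate[t] = S(kappa * u[t] - a[t] - theta)
        a[t + 1] = a[t] + dt * eps * (-a[t] - (1 - kappa) * u[t] + gamma * rate[t])
    return a, rate

u = np.full(2000, 0.3)                     # illustrative step input
a, rate = integrate_rnm(u, dt=1.0, theta=0.1, kappa=1.0, eps=1e-3, gamma=5.0)
print("integrated rate (~ spike count):", rate.sum())
```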
Different types of spiking behavior generated by the rate-reduced model (RNM). Top traces show the firing rate with \(r(t)=\kappa u(t)-a(t)-\theta\). Corresponding parameter values \((\theta,\kappa,\varepsilon,\gamma)\) are given in brackets. For small values of ε (i.e. a large time scale separation), there is excellent agreement with the corresponding examples of the full model (Fig. 2), which is quantified by comparing the integral of the spiking rate in the reduced model to the number of spikes in the full model. ( A ) Tonic spiking \((\frac{1}{10},\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2})\); \(27.18(23)\). ( B ) Spike-frequency adaptation \((\tfrac{1}{10},1,\tfrac{1}{1000},5)\); \(29.13(29)\). ( C ) Rebound spiking \((\tfrac{1}{50},2,\tfrac{1}{100},\tfrac{1}{5})\); \(7.84(8)\). ( D ) Accommodation \((\tfrac{3}{25},3,\tfrac{1}{50},\tfrac{2}{5})\); \(3.09 (3)\). ( E ) Spike latency \((\tfrac{1}{10},0,\tfrac{1}{200},\tfrac{2}{5})\); \(16.12(16)\). ( F ) Inhibition-induced spiking \((\tfrac{1}{50},-1,\tfrac{1}{500},\tfrac{2}{5})\); \(15.75(16)\)
Frequency Response of the Reduced Model
Analogously to Sect. 2.3, we now study the response of the rate-reduced model (RNM) to sinusoidal input
$$ u(t)= \varphi\cos \biggl(\frac{\omega\pi t}{1000}+\vartheta \biggr). $$
Under the assumption that
$$ \kappa u(t)-a(t)\leq\theta\quad\forall t, $$
the explicit solution of (RNM) is given by
$$ a(t)=-\varepsilon(1-\kappa) \int_{-\infty}^{t} e^{-\varepsilon (t-\tau)}u(\tau) \,\mathrm{d} \tau, $$
cf. (9). Inserting the input (17) into (19) yields
$$ \begin{aligned} \kappa u(t)-a(t)&= \kappa\varphi\cos \biggl(\frac{\omega\pi t}{1000}+\vartheta \biggr)+\varepsilon(1-\kappa)\varphi \int _{-\infty}^{t} e^{-\varepsilon(t-\tau)}\cos \biggl( \frac{\omega\pi \tau}{1000}+\vartheta \biggr) \,\mathrm{d}\tau \\ &=G(\omega)\frac{\varphi}{2}e^{(\frac{\omega\pi t}{1000}+\vartheta )i}+\overline{G(\omega)} \frac{\varphi}{2}e^{-(\frac{\omega\pi t}{1000}+\vartheta)i} \\ &=\bigl\lvert G(\omega)\bigr\rvert \varphi\cos \biggl(\frac{\omega\pi t}{1000}+ \vartheta+\arg \bigl(G(\omega) \bigr) \biggr), \end{aligned} $$
where the frequency response \(G\colon\mathbb{R}_{\geq0}\mapsto \mathbb{C}\) is given by
$$ G(\omega)=\kappa+\frac{\varepsilon(1-\kappa)}{\varepsilon+\frac {\omega\pi i}{1000}}. $$
It follows that for the rate-reduced model (RNM) receiving harmonic input (17) we have
$$ S \bigl(\kappa u(t)-a(t)-\theta \bigr)=0 \quad\forall t \quad \text{if and only if} \quad\bigl\lvert G(\omega)\bigr\rvert \varphi\leq\theta. $$
Because we neglected the transient corresponding to the convergence from fixed point to limit cycle in the rate-reduced model (RNM), the inequality in (22) defines a clear 'spiking condition'. The modulus of the frequency response (21) is given by
$$ \bigl\lvert G(\omega)\bigr\rvert =\sqrt{\frac{\varepsilon^{2}+\kappa^{2} (\frac{\pi\omega}{1000} )^{2}}{\varepsilon^{2}+ (\frac{\pi \omega}{1000} )^{2}}}, $$
and \(\lvert G\rvert\) therefore is strictly decreasing if and only if \(\lvert\kappa\rvert\leq1\), and increasing otherwise. Revisiting the examples from Sect. 2.3 (Fig. 4), we have
$$ \bigl\lvert G(1)\bigr\rvert \varphi= 0.1696\ldots>\theta>0.1255\ldots=\bigl\lvert G(2)\bigr\rvert \varphi, $$
for \((\kappa,\varepsilon,\theta,\varphi )= (\frac {1}{10},\frac{1}{200},\frac{1}{7},\frac{1}{5} )\), and
$$ \bigl\lvert G(1)\bigr\rvert \varphi= 0.1359\ldots< \theta< 0.1684\ldots=\bigl\lvert G(2)\bigr\rvert \varphi, $$
for \((\kappa,\varepsilon,\theta,\varphi )= (2,\frac {1}{200},\frac{1}{7},\frac{1}{10} )\). Indeed, the rate-reduced model (RNM) reproduces the examples of the full model both qualitatively and quantitatively (Fig. 6). When the rate-reduced model (RNM) is incorporated into existing neural field models, the frequency response of the reduced model can be used to tune the individual temporal filter properties of the different neuronal populations.
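The spiking condition (22) is a one-line computation. The following sketch evaluates \(\lvert G(\omega)\rvert\varphi\) for the two filter examples and reproduces the inequalities (24) and (25); the loop structure and printed labels are our own.

```python
import numpy as np

def G(omega, kappa, eps):
    """Frequency response (21) of the rate-reduced model."""
    return kappa + eps * (1 - kappa) / (eps + 1j * np.pi * omega / 1000)

theta, eps = 1 / 7, 1 / 200
for kappa, phi in [(0.1, 0.2), (2.0, 0.1)]:        # low-pass and high-pass examples
    for omega in (1.0, 2.0):
        gain = abs(G(omega, kappa, eps)) * phi
        verdict = "spikes" if gain > theta else "quiescent"
        print(f"kappa={kappa}, omega={omega}: |G|*phi = {gain:.4f} -> {verdict}")
```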
Responses of the rate-reduced model (RNM) to periodic input. Top traces show the firing rate with \(r(t)= \kappa u(t)-a(t)-\theta\). ( A ) For \(\kappa=\frac{1}{10}\) the model acts as a low-pass filter. Input with an amplitude of \(\varphi=\frac{1}{5}\) yields a response in the firing rate for \(\omega=1\), whereas the firing rate remains zero for \(\omega=2\). In the former case, the integral of the spiking rate during one period is approximately 4.55, while there are 5 spikes per period in the full model (Fig. 4A). ( B ) For \(\kappa=2\), the reduced model acts as a high-pass filter. Input with amplitude \(\varphi=\tfrac{1}{10}\) elicits a firing rate response for \(\omega=2\), whereas a lower input frequency of \(\omega=1\) does not. In the former case, the integral of the spiking rate during one period is approximately 3.14, while there are 3 spikes per period in the full model (Fig. 4B). In both examples, \((\theta,\varepsilon,\gamma )= (\tfrac{1}{7},\tfrac{1}{200},2 )\)
The Firing Rate Function
Since our neuron model (SNM) is a map, the period P of its limit cycle lies in \(\mathbb{N}\) for all positive suprathreshold drives ς. Therefore, the spiking rate function (5) is staircase-like, with points of discontinuity whenever \(P\to P+1\). Let \(\lbrace\varsigma_{1},\varsigma_{2},\ldots\rbrace\) denote the set of all points of discontinuity of the firing rate function in decreasing order. For \(\varsigma\geq\varsigma_{1}=1\) the 'reset potential' \(v=-50\) in (FSS) is immediately mapped to a non-negative number, and the neuron is therefore spiking at its maximal frequency of once in three iterations. Similarly, the voltage stays in the left interval for two iterations and the neuron is spiking once in four iterations for \(\varsigma_{1}>\varsigma\geq\varsigma_{2}=\frac {1}{2}(5-\sqrt{17})\). In general, at \(\varsigma_{k}\), there is a jump discontinuity of size
$$ \lim_{\varsigma\rightarrow\varsigma_{k}^{+}}S(\varsigma)-\lim_{\varsigma\rightarrow\varsigma_{k}^{-}}S( \varsigma)=\frac {1}{(k+2)(k+3)},\quad\text{with } S(\varsigma_{k})= \frac{1}{k+2}. $$
The firing rate of the fast subsystem (FSS) can therefore be written as
$$ S(\varsigma)=\sum_{k=1}^{\infty} \frac{H(\varsigma -\varsigma_{k})}{(k+2)(k+3)}, $$
where H is the Heaviside step function and
$$ \lim_{k\to\infty}\varsigma_{k}=0. $$
In large neuronal networks, it is often assumed that the spiking thresholds of the individual neurons are randomly distributed. This ensures heterogeneity and models intrinsic interneuronal differences or random input from outside the network. If we add Gaussian noise to the threshold parameter θ in (SNM), it is natural to define an expected firing rate \(\langle S\rangle\colon\mathbb{R}\mapsto\mathbb{R}\), given by
$$ \bigl\langle S(\varsigma)\bigr\rangle =\frac{1}{\sqrt{2\pi\sigma^{2}}} \int _{-\infty}^{\infty}e^{\frac{-w^{2}}{2\sigma^{2}}}S(\varsigma +w) \, \mathrm{d}w, $$
where \(\sigma^{2}\) is the variance of the noise. Using (27), we can rewrite (29) as
$$ \bigl\langle S(\varsigma)\bigr\rangle =\frac{1}{6}+\sum _{k=1}^{\infty }\frac{\operatorname{erf}(\frac{\varsigma-\varsigma_{k}}{\sqrt{2\sigma ^{2}}} )}{2(k+2)(k+3)}, $$
where erf denotes the error function. While \(S(\varsigma)\) can readily be computed for any \(\varsigma\in\mathbb{R}\) and we derived a concise expression for the expected firing rate, the infinite sum (30) cannot easily be evaluated. For this reason, we approximate \(\langle S(\varsigma)\rangle\) by a finite sum of the form
$$ \frac{1}{6}+\frac{1}{6N}\sum _{i=1}^{N}\operatorname{erf}\biggl(\frac {\varsigma-\nu_{i}}{\chi_{i}} \biggr), $$
for some fixed \(N\in\mathbb{N}\) and constants \(\nu_{i},\chi_{i}\in \mathbb{R}\), which are chosen by (numerically) minimizing
$$ \Biggl\lVert \frac{1}{\sqrt{2\pi\sigma^{2}}} \int_{-\infty }^{\infty}e^{\frac{-w^{2}}{2\sigma^{2}}}S(\varsigma+w) \, \mathrm{d}w-\frac{1}{6}-\frac{1}{6N}\sum _{i=1}^{N}\operatorname{erf}\biggl(\frac{\varsigma-\nu_{i}}{\chi_{i}} \biggr) \Biggr\rVert _{2}. $$
For large noise levels \(\sigma^{2}\), the average firing rate (29) has a sigmoidal shape and can be very well approximated with a small value of N (Fig. 7).
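The fit (31)–(32) is a standard least-squares problem. Below is a sketch using SciPy, reusing the rate function S from above; the quadrature grids, the evaluation interval and the initial guess x0 are our own choices.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

sigma2 = 0.25                                     # noise level of Fig. 7
w = np.linspace(-3.0, 3.0, 601)                   # quadrature grid for the noise
dw = w[1] - w[0]
gauss = np.exp(-w**2 / (2 * sigma2))
gauss /= gauss.sum() * dw                         # normalize the truncated density

grid = np.linspace(-2.5, 3.5, 121)
Sv = np.vectorize(S)
expected = np.array([(gauss * Sv(xi + w)).sum() * dw for xi in grid])   # eq. (29)

def model(p, x):                                  # finite erf sum (31) with N = 2
    nu1, nu2, chi1, chi2 = p
    return 1 / 6 + (erf((x - nu1) / chi1) + erf((x - nu2) / chi2)) / 12

res = least_squares(lambda p: model(p, grid) - expected, x0=[0.0, 0.7, 0.7, 0.8])
print(res.x)   # should land close to the values reported in Fig. 7
```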
Expected firing rate for a noise level of \({\sigma^{2}=\tfrac{1}{4}}\). Shown are a numerical integration of (29) (blue) and its approximation (31) for \(N=2\) and \((\nu_{1},\nu_{2},\chi_{1},\chi_{2} )= (0.0335,0.7099,0.6890,0.8213 )\) (orange)
Augmenting Neural Fields
When large populations of neurons are modeled by networks of individual, interconnected cells, the high dimensionality of state and parameter spaces makes mathematical analysis intractable and numerical simulations costly. Moreover, large network simulations provide little insight into global dynamical properties. A popular modeling approach to circumventing the aforementioned problems is the use of neural field equations. These models aim to describe the dynamics of large neuronal populations, where spikes of individual neurons are replaced by (averaged) spiking rates and space is continuous. Another advantage of neural fields is that they are often well suited to model experimental data. In brain slice preparations, spiking rates can be measured with an extracellular electrode, while intracellular recordings are much more involved. Furthermore, the most common clinical measurement techniques of the brain, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), both represent the average activity of large groups of neurons and may therefore be better modeled by population equations. The first neural field model can be attributed to Beurle [1], however, the theory really took off with the work of Wilson and Cowan [2, 3], Amari [5, 6], and Nunez [4].
In 'classical' neural field models the firing rate of a neuronal population is assumed to be given by its instantaneous input, which is only valid for tonically spiking neurons. With the help of our rate-reduced model (RNM), it is straightforward to augment existing neural field models with more complex spiking behavior. As an example, we will look at the following two-population model on the one-dimensional spatial domain \(\varOmega=(-1,1)\):
$$ \begin{aligned} \biggl(1+\frac{1}{\alpha_{1}} \frac{\partial}{\partial t} \biggr) u_{1}(t,x) &= \int_{-1}^{1}J_{11} \bigl(x,x' \bigr)S_{1} \bigl(r_{1}\bigl(t,x'\bigr) \bigr)\\ &\quad {}+J_{12} \bigl(x,x' \bigr)S_{2} \bigl(r_{2}\bigl(t,x'\bigr) \bigr) \,\mathrm{d}x', \\ \biggl(1+\frac{1}{\varepsilon_{1}}\frac{\partial}{\partial t} \biggr) a_{1}(t,x) &= -(1-\kappa_{1})u_{1}(t,x)+\gamma_{1} S_{1} \bigl(r_{1}(t,x) \bigr), \\ \biggl(1+\frac{1}{\alpha_{2}}\frac{\partial}{\partial t} \biggr) u_{2}(t,x) &= \int_{-1}^{1}J_{21} \bigl(x,x' \bigr)S_{1} \bigl(r_{1}\bigl(t,x'\bigr) \bigr)\\ &\quad {}+J_{22} \bigl(x,x' \bigr)S_{2} \bigl(r_{2}\bigl(t,x'\bigr) \bigr) \,\mathrm{d}x', \\ \biggl(1+\frac{1}{\varepsilon_{2}}\frac{\partial}{\partial t} \biggr) a_{2}(t,x) &= -(1-\kappa_{2})u_{2}(t,x)+\gamma_{2} S_{2} \bigl(r_{2}(t,x) \bigr), \end{aligned} $$
(ANF)
where, as before,
$$ r_{i}(t,x)=\kappa_{i} u_{i}(t,x)-a_{i}(t,x)- \theta_{i}, $$
for \(i\in\lbrace1,2\rbrace\). The differential operators in the left-hand side of the integral equations in (ANF) model exponentially decaying synaptic currents with decay rate \(\alpha_{i}\). The connectivity \(J_{ij} (x,x' )\) measures the connection strength from neurons of population j and position \(x'\) to neurons of population i and position x. The connectivity kernels \(J_{ij}\colon \overline{\varOmega}\times\overline{\varOmega}\mapsto\mathbb{R}\) are assumed to be isotropic and given by
$$ J_{ij}\bigl(x,x'\bigr)=\rho_{j} \eta_{ij}e^{-\mu_{ij}\lvert x-x'\rvert}, $$
where \(\rho_{j}\) is the density of neurons of type j, \(\eta_{ij}\) is the maximal connection strength, and \(\mu_{ij}\) is the spatial decay rate of the connectivity. Both firing rate functions \(S_{i}\colon\mathbb{R}\mapsto\mathbb{R}\) are chosen to approximate the expected firing rate of Rulkov neurons (SNM) with a noise level of \(\sigma^{2}=\frac{1}{4}\) (Fig. 7),
$$ S_{1}(\varsigma)=S_{2}(\varsigma)=\frac{1}{6}+ \frac{1}{12}\operatorname{erf}\biggl(\frac{\varsigma-0.0335}{0.6890} \biggr)+\frac{1}{12}\operatorname{erf}\biggl(\frac{\varsigma-0.7099}{0.8213} \biggr). $$
We conclude this section with a simulation of (ANF) for a particular parameter set (Table 1), which illustrates that our augmented neural field can generate interesting spatiotemporal behavior that closely resembles spiking patterns of a network of Rulkov neurons (SNM) with corresponding parameter values (Fig. 8). In the Rulkov network, synaptic input to neuron i is given by
$$ u^{(i)}_{n+1} = (1-\alpha_{i})u^{(i)}_{n}+ \alpha_{i}\sum_{j=1}^{N} c_{ij}s^{(j)}_{n}, $$
where N denotes the total number of neurons in the network, and \(c_{ij}\) is the connection strength from neuron j to neuron i. To match the parameters in Table 1, we split the total population into two subpopulations of 300 neurons each, which are both equidistantly placed on the interval \([-1,1]\). Neurons within the same subpopulation share the same intrinsic parameters, and uncorrelated (in space and time) Gaussian noise is added to the threshold parameters. Finally, the connection strengths in the Rulkov network are given by
$$ c_{ij}=\eta_{{p_{i}}{p_{j}}}e^{-\mu_{{p_{i}}{p_{j}}}\lvert x_{i}-x_{j}\rvert}, $$
where \(p_{i}\) and \(x_{i}\) are the subpopulation and position of neuron i, respectively.
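A minimal sketch of this network construction is given below. The eta, mu and alpha values are placeholders rather than the entries of Table 1 (which are not reproduced here), and the spike vector is a toy input; the sketch only shows how \(c_{ij}\) and the synaptic update are assembled.

import numpy as np

n_per_pop = 300
pop = np.repeat([0, 1], n_per_pop)                  # subpopulation index p_i
x = np.tile(np.linspace(-1.0, 1.0, n_per_pop), 2)   # positions on [-1, 1]

eta = np.array([[1.0, -2.0],                        # eta[p_i][p_j]; placeholder values
                [1.5, -0.5]])
mu = np.array([[2.0, 1.0],                          # mu[p_i][p_j]; placeholder values
               [2.0, 1.0]])
alpha = np.where(pop == 0, 0.2, 0.1)                # synaptic rates; placeholder values

# c_ij = eta_{p_i p_j} * exp(-mu_{p_i p_j} * |x_i - x_j|)
dist = np.abs(x[:, None] - x[None, :])
c = eta[pop[:, None], pop[None, :]] * np.exp(-mu[pop[:, None], pop[None, :]] * dist)

def synaptic_step(u, s):
    # one update u_{n+1} = (1 - alpha_i) u_n + alpha_i * sum_j c_ij s_j
    return (1.0 - alpha) * u + alpha * (c @ s)

u = np.zeros(2 * n_per_pop)
s = (np.random.rand(2 * n_per_pop) < 0.05).astype(float)   # toy spike indicator
u = synaptic_step(u, s)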
Spatio-temporal spiking patterns. (A) Simulation of the augmented neural field (ANF) with parameter values given in Table 1. Shown is the firing rate \(S_{1} (\kappa_{1} u_{1}(t,x)-a_{1}(t,x)-\theta_{1} )\) of the first population. (B) Simulation of a corresponding network of 300 excitatory and 300 inhibitory Rulkov neurons, all-to-all coupled via simple exponential synapses. Both populations are equidistantly placed on the interval \([-1,1]\). Uncorrelated (in space and time) Gaussian noise with variance \(\sigma^{2}=\tfrac{1}{4}\) is added to the threshold parameter of each neuron. Shown is the spiking activity of the excitatory population. Each spike is denoted by a black dot
Table 1 Parameter overview for the neural field (ANF)
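For readers who want to reproduce patterns of this kind, the following sketch discretizes (ANF) on a grid over \(\varOmega=(-1,1)\) with trapezoidal quadrature and forward Euler time stepping. All parameter values are placeholders standing in for Table 1 (whose entries are not reproduced here); S is the erf approximation given above.

import numpy as np
from scipy.special import erf

M, dt, steps = 201, 0.05, 2000
xs, dx = np.linspace(-1.0, 1.0, M, retstep=True)

def S(s):                                            # erf approximation from above
    return (1.0 / 6.0 + erf((s - 0.0335) / 0.6890) / 12.0
                      + erf((s - 0.7099) / 0.8213) / 12.0)

# placeholder parameters, standing in for Table 1
alpha = np.array([0.5, 0.5]);  eps   = np.array([0.01, 0.01])
kappa = np.array([0.5, 0.5]);  gamma = np.array([0.5, 0.5])
theta = np.array([0.0, 0.5])
rho   = np.array([150.0, 150.0])
eta   = np.array([[1.0, -2.0], [1.5, -0.5]])
mu    = np.array([[2.0, 1.0],  [2.0, 1.0]])

d = np.abs(xs[:, None] - xs[None, :])
J = [[rho[j] * eta[i, j] * np.exp(-mu[i, j] * d) for j in range(2)] for i in range(2)]
w = np.full(M, dx); w[0] = w[-1] = dx / 2.0          # trapezoidal weights

u = np.zeros((2, M)); a = np.zeros((2, M))
rates = []                                           # record S_1(r_1) to plot a space-time pattern
for _ in range(steps):
    r = [kappa[i] * u[i] - a[i] - theta[i] for i in range(2)]
    for i in range(2):
        inp = J[i][0] @ (w * S(r[0])) + J[i][1] @ (w * S(r[1]))
        u[i] += dt * alpha[i] * (inp - u[i])         # (1 + (1/alpha) d_t) u = input
        a[i] += dt * eps[i] * (-(1.0 - kappa[i]) * u[i] - a[i] + gamma[i] * S(r[i]))
    rates.append(S(r[0]))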
This paper presents a simple rate-reduced neuron model that is based on a variation of the Rulkov map (Rulkov [14], Rulkov et al. [15]), and can be used to incorporate a variety of non-trivial spiking behavior into existing neural field models.
The modified Rulkov map (SNM) is a phenomenological, two-dimensional single neuron model. The isolated dynamics of its fast time scale either generates a stable limit cycle, mimicking spiking activity, or a stable fixed point, corresponding to a neuron at rest (Fig. 1). The slow time scale of the Rulkov map acts as a dynamic spiking threshold and emulates the combined effect of slow recovery processes. The modified Rulkov map can mimic a wide variety of spiking patterns, such as spike-frequency adaptation, postinhibitory rebound, phasic spiking, spike accommodation, spike latency and inhibition-induced spiking (Fig. 2). Furthermore, the model can be used to model neuronal filter properties. Depending on how external input is applied to the model, it can act as either a high-pass or low-pass filter (Figs. 3 and 4).
The rate-reduced model (RNM) is derived heuristically and given by a simple one-dimensional differential equation. On the single cell level, the rate-reduced model closely mimics the spiking dynamics (Fig. 5) and filter properties (Fig. 6) of the full spiking neuron model. While a close approximation of the (expected) firing rate of Rulkov neurons (Fig. 7) is needed to mimic their behavior quantitatively, the types of qualitative dynamics of the rate-reduced model do not depend on the exact choice of firing rate function.
Due to its simplicity, it is straightforward to add the rate-reduced model to existing neural field models. In the resulting augmented equations, parameters can be chosen according to the spiking behavior of a single isolated cell. In our particular example (ANF), the emerging spatiotemporal pattern closely resembles the dynamics of the corresponding spiking neural network (Fig. 8). We believe that this is an elegant way to add more biological realism to existing neural field models, while simultaneously enriching their dynamical structure.
We used a variation of a simple toy model of a spiking neuron (Rulkov [14], Rulkov et al. [15]) to derive a corresponding rate-reduced model. While being purely phenomenological, the model could mimic a wide variety of biologically observed spiking behaviors, yielding a simple way to incorporate complex spiking behavior into existing neural field models. Since all parameters in the resulting augmented neural field equations have a representative in the spiking neuron network (and vice versa), this greatly simplifies the otherwise often problematic translation from results obtained by neural field models back to biophysical properties of spiking networks. An example demonstrated that the augmented neural field equations can produce spatiotemporal patterns that cannot be generated with corresponding 'classical' neural fields.
Beurle RL. Properties of a mass of cells capable of regenerating pulses. Philos Trans R Soc Lond B. 1956;240:55–94.
Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12:1–24.
Wilson HR, Cowan JD. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik. 1973;13:55–80.
Nunez PL. The brain wave equation: A model for the EEG. Math Biosci. 1974;21:279–97.
Amari S. Homogeneous nets of neuron-like elements. Biol Cybern. 1975;17:211–20.
Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27:77–87.
Pinto DJ, Ermentrout GB. Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses. SIAM J Appl Math. 2001;62:206–25.
Pinto DJ, Ermentrout GB. Spatially structured activity in synaptically coupled neuronal networks: II. Lateral inhibition and standing pulses. SIAM J Appl Math. 2001;62:226–43.
Coombes S, Owen MR. Bumps, breathers, and waves in a neural network with spike frequency adaptation. Phys Rev Lett. 2005;94:148102.
Kilpatrick ZP, Bressloff PC. Effects of synaptic depression and adaptation on spatiotemporal dynamics of an excitatory neuronal network. Physica D. 2010;239:547–60.
Kilpatrick ZP, Bressloff PC. Stability of bumps in piecewise smooth neural fields with nonlinear adaptation. Physica D. 2010;239:1048–60.
Nicola W, Campbell SA. Bifurcations of large networks of two-dimensional integrate and fire neurons. J Comput Neurosci. 2013;35:87–108.
Visser S, van Gils SA. Lumping Izhikevich neurons. EPJ Nonlinear Biomed Phys. 2014;2:226–43.
Rulkov NF. Modeling of spiking-bursting neural behavior using two-dimensional map. Phys Rev E. 2002;65:041922.
Rulkov NF, Timofeev I, Bazhenov M. Oscillations in large-scale cortical networks: Map-based model. J Comput Neurosci. 2004;17:203–23.
Izhikevich EM. Simple model of spiking neurons. IEEE Trans Neural Netw. 2003;14:1569–72.
Izhikevich EM. Which model to use for cortical spiking neurons? IEEE Trans Neural Netw. 2004;15:1063–70.
Chik DTW, Coombes S, Wang ZD. Clustering through postinhibitory rebound in synaptically coupled neurons. Phys Rev E. 2004;70:011908.
Blumhagen F, Zhu P, Shum J, Schärer Y-PZ, Yaksi E, Deisseroth K, Friedrich RW. Neuronal filtering of multiplexed odour representations. Nature. 2011;479:493–8.
The conclusions of this paper are solely based on mathematical models.
K.D. was supported by a grant from the Twente Graduate School (TGS).
Koen Dijkstra, Yuri A. Kuznetsov & Stephan A. van Gils
Department of Clinical Neurophysiology, University of Twente, Enschede, The Netherlands
Michel J. A. M. van Putten
Department of Clinical Neurophysiology, Medisch Spectrum Twente, Enschede, The Netherlands
Conceptualization, K.D., Y.K., M.v.P. and S.v.G.; methodology, K.D. and S.v.G.; investigation, K.D.; writing - original draft, K.D.; writing - review & editing, K.D., Y.K., M.v.P. and S.v.G.; visualization, K.D.; supervision, Y.K., M.v.P. and S.v.G. All authors read and approved the final manuscript.
Correspondence to Koen Dijkstra.
Our study does not involve human participants, human data or human tissue.
The authors declare no competing financial interests.
This manuscript does not contain any individual person's data.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Dijkstra, K., Kuznetsov, Y.A., van Putten, M.J.A.M. et al. A Rate-Reduced Neuron Model for Complex Spiking Behavior. J. Math. Neurosc. 7, 13 (2017). https://doi.org/10.1186/s13408-017-0055-3
Spiking Behavior
Rulkov
Neural Field Model
Postinhibitory Rebound
Spike Frequency Adaptation
The number sequence algorithm that solves them all!
Inspired by a student who was asked the typical question of
What is the missing number in the sequence
8, 15, 25, 38, ??, 73
I thought: Well, this type of question is silly; if the student knows about OEIS and the sequence exists there, then there is no challenge, and if it does not appear there, then there is a chance that the question is too hard or too broad for such a low-level student!
What a silly teacher, I thought, as I found the 54 that the teacher was probably looking for, using the generator $3/2\,{n}^{2}+5/2\,n+4$.
Now, as we all know, such a sequence is bound not to be unique. An example I like to use is to consider the function
$$ \frac{(n-1)(n-2)(n-3)(n-4)(n-5)(n-6)(n-7)(n-8)(n-9)(y-n)}{(n-1)!} +n $$ This silly function returns
\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} n&1&2&3&4& 5& 6& 7& 8& 9& 10& 11& 12 \\\hline \text{out} &1& 2& 3& 4& 5& 6& 7& 8& 9& y& y& 6+y/2 \end{array}
for $n = 1,\ldots, 12$ and any chosen $y$(!)
Now, in a similar fashion, can you create a function or algorithm that, for an input $n$, satisfies the table below for some $y$, so we can help our beloved student show off to their teacher?
\begin{array}{|c|c|c|c|c|c|c|} n& 1& 2 & 3 & 4 & 5 & 6 \\\hline \text{out} & 8& 15 & 25& 38& y & 73 \end{array}
number-sequence
Therkel
$\begingroup$ The examples you showed are called rational interpolating polynomials; although there are lots of alternatives. Do you require the answer to be a rational polynomial? or even continuous (can they only have values at integer n?) $\endgroup$ – smci Mar 23 '17 at 15:38
$\begingroup$ @smci I did not know that, actually. That is nice to know! I do not require either, however I feel like a combination of step functions or just a piecewise function is a little non-convincing for the happy high-school student I was trying to refer to. :) $\endgroup$ – Therkel Mar 23 '17 at 21:36
$\begingroup$ I did something like this a while ago: desmos.com/calculator/hrlzf1zyxi $\endgroup$ – greenturtle3141 Mar 24 '17 at 0:37
Sure. The function
$8\frac{(n-2)(n-3)(n-4)(n-5)(n-6)}{(1-2)(1-3)(1-4)(1-5)(1-6)}$
is 0 when $n=2,3,4,5,6$, because there's a zero factor in the numerator, and is 8 when $n=1$, because each factor in the numerator cancels with one in the denominator.
Now construct another five such terms -- e.g., the one involving $y$ will be $y\frac{(n-1)(n-2)(n-3)(n-4)(n-6)}{(5-1)(5-2)(5-3)(5-4)(5-6)}$ -- and add them up.
Gareth McCaughan♦
$\begingroup$ maybe I understood wrong, but for n=2 shouldn't the result be 15 not 0? Or am I missing something? $\endgroup$ – Marius Mar 23 '17 at 11:13
$\begingroup$ Ah!. Never mind. You explained how to construct one single term. Ignore me. $\endgroup$ – Marius Mar 23 '17 at 11:15
$\begingroup$ Well, that was quick and easy! Your solution is also nice because it can be constructed for any sequence. $\endgroup$ – Therkel Mar 23 '17 at 11:47
$\begingroup$ Yup. It's called the Lagrange interpolation formula. Note that if you're actually doing interpolation for any practical purpose that formula is probably the wrong way to do the calculations. $\endgroup$ – Gareth McCaughan♦ Mar 23 '17 at 12:38
$\begingroup$ @GarethMcCaughan I beg to differ. In some advanced high school maths streams you'll need to know it, and in Olympiad mathematics it's also a useful algebraic tool. $\endgroup$ – boboquack Mar 23 '17 at 20:21
Using @garethmccaughan's procedure in his answer, here is the full version that creates the bottom table in the question. Self-answer for completeness.
\begin{aligned} \frac{1}{24}\Bigl( & (54-y)n^5 - 16(54-y)n^4 + 95(54-y)n^3 \\ & -4(3501-65y)n^2 + 12(1463-27y)n - 48(160-3y) \Bigr) \end{aligned}
Raw version (with a standing in for $y$):
1/24*((54-a)*n^5-16*(54-a)*n^4+95*(54-a)*n^3
-4*(3501-65*a)*n^2+12*(1463-27*a)*n-48*(160-3*a))
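As a quick sanity check of the expression above (not part of the original answers), the following SymPy sketch builds the interpolant directly from Gareth McCaughan's recipe and confirms that it collapses to the teacher's quadratic at $y=54$:

from math import prod
from sympy import symbols, expand

n, y = symbols('n y')
nodes = [1, 2, 3, 4, 5, 6]
values = [8, 15, 25, 38, y, 73]

def basis(k):
    # Lagrange basis polynomial: equals 1 at node k and 0 at the other nodes
    return prod((n - m) / (k - m) for m in nodes if m != k)

f = expand(sum(v * basis(k) for k, v in zip(nodes, values)))
print(f)                      # quintic in n with y as a free parameter
print(expand(f.subs(y, 54)))  # collapses to 3*n**2/2 + 5*n/2 + 4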
Rubik's Cube Not a Group?
I read online that
although the 3x3x3 is a great example of a mathematical group, larger cubes aren't groups at all.
How can that be true? There is obviously an identity and it is closed, so that must mean that some moves aren't invertible. But this seems unlikely to me.
group-theory recreational-mathematics rubiks-cube
Xodarap
$\begingroup$ The moves on a "Rubik cube" of whatever size are a group action. $\endgroup$ – hardmath May 13 '12 at 0:35
$\begingroup$ I don't know what the writer of that web page thought they meant, but the claim is incorrect. Of course it is a group. $\endgroup$ – MJD May 13 '12 at 0:36
$\begingroup$ The author seems to be implying that because you can have a "solved cube" that is not in its original position (by rotation of centers), then the cube is not "a group" because there are many "identities". But this is a confusion between a group and a group action, or else a confusion as to what constitutes an "identity"; or else you can view the cube as a suitable quotient. In short, the author is confused. $\endgroup$ – Arturo Magidin May 13 '12 at 0:45
$\begingroup$ Don't worry too much about what you read online. A lot of it is wrong. Much of the rest is trivial. $\endgroup$ – André Nicolas May 13 '12 at 0:54
$\begingroup$ @AndréNicolas On the other hand, the entirety of MSE and MathOverflow are online. Not saying your advice is good or bad, just an observation :-) $\endgroup$ – treble May 13 '12 at 1:02
The 4x4x4 cube and higher aren't groups in the same sense that the 3x3x3 cube is a group.
The set of reachable positions of a 3x3x3 cube, viewed as functions from a 54-element set (representing the locations of the stickers) to a 6-element set (the colors of the stickers), forms a group. The operation here is given by the following: for positions x, y of a 3x3x3 cube, let $a_1a_2 \cdots a_n$ be a sequence of moves which, starting from the identity, puts the cube into position x, and $b_1b_2 \cdots b_m$ the same for y. The product xy is the state the cube is in after the sequence $a_1 \cdots a_n b_1 \cdots b_m$.
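A toy sketch of this operation (not an actual cube encoding; just six placeholder "sticker" slots) shows how concatenating move sequences composes permutations and why every sequence has an inverse:

def compose(p, q):
    # permutation "first p, then q", as tuples mapping index -> index
    return tuple(q[p[i]] for i in range(len(p)))

identity = (0, 1, 2, 3, 4, 5)
a = (1, 2, 0, 3, 4, 5)          # toy move: 3-cycle on slots 0, 1, 2
b = (0, 1, 2, 4, 3, 5)          # toy move: swap slots 3 and 4

x = compose(a, b)               # position reached by the sequence a b
inverse = tuple(sorted(range(6), key=lambda i: x[i]))
assert compose(x, inverse) == identity   # every move sequence is invertible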
The fact that this operation indeed forms a group isn't as trivial as it first seems. The issue isn't with associativity, identity, or invertibility. Instead, it's with well-definedness. How do you know that the choice of sequences $a_1 \cdots a_n$ and $b_1 \cdots b_m$ doesn't make a difference?
For the 3x3x3 cube, the way to resolve this is to view it as a subgroup of the symmetric group on the set of all 54 stickers. This doesn't work for larger cubes, because it's possible to come up with moves that move some of the cubies around without changing how the cube looks: the stickers can move without their apparent colors changing. To see why this is impossible for a 3x3x3 cube, note that any cubie is uniquely specified by its stickers, so any permutation of 2 stickers of the same color in the cube group must permute the cubies, and hence cannot be the identity.
But on the 4x4x4 cube, this fails. The four center stickers of a single face, for example, are indistinguishable, in the sense that you could permute them and still have a solved state. I don't think a permutation of just these 4 stickers is possible, but there are many permutations of indistinguishable cubies which can be done. According to Dustan Levenstein in the comments, any even permutation of these 4 stickers (or, more generally, of any center stickers) is possible.
The way to prove formally that the 4x4x4 cube is not a group is to find a sequence of moves that acts as the identity on one configuration, but not on another configuration. This is pretty easy to do, but I don't remember the precise solution, so I'll omit it. (If anyone really wants such an example, comment and I can dig one up.)
It is true that if we labeled all of the stickers somehow, forming a so-called "supercube", and required that the labels also match up, then we would have a group. This group would be constructed in the same way as the 3x3x3 group, as a subgroup of the symmetric group on the 96 stickers of the 4x4x4 cube.
This group acts on the set of positions of the 4x4x4 cube transitively, but not freely. This, in group theoretical language, is why the 4x4x4 cube positions do not form a group. We can still study the larger group, but we need to take into account that the action isn't as nice as in the 3x3x3 case.
Logan M
$\begingroup$ Any even permutation of the middle stickers is possible, so in particular, an even permutation of just those four stickers is possible. $\endgroup$ – Dustan Levenstein May 13 '12 at 6:25
$\begingroup$ What you are saying is that the action of the group of moves on the set of achievable positions of a cube with edges of length 4 or more has nontrivial stabilizers. Statements like "Rubik's Cube is a group" are so unclear that it is pointless to discuss whether they are true or not! $\endgroup$ – Derek Holt May 13 '12 at 15:44
$\begingroup$ What is true, I believe, is that in the $3\times3$ case, the stabilizer of an achievable position (which is the stabilizer of any achievable position) is a normal subgroup. So, if we want, we can define the Rubik's cube group for the $3\times3$ cube to be the quotient of the full group by this stabilizer. We can't do this for $4\times 4$ and higher since the stabilizers are not normal. $\endgroup$ – Will Orrick May 13 '12 at 16:37
$\begingroup$ @DerekHolt Having nontrivial stabilizers is, of course, precisely what it means for an action to be non-free. As for the statement, of course it's unclear, that's why the OP wanted someone to interpret it. That doesn't mean it has no content. And this particular abuse of language is common among people who study twisty puzzles, so it's certainly worth knowing. $\endgroup$ – Logan M May 13 '12 at 17:20
$\begingroup$ @WillOrrick You are correct, if you consider the orientation, and not just the color, of the stickers. I ignored this in my answer, but perhaps I should not have. The reason why I did is because the group of all moves which preserve the position, but not necessarily orientation, of each sticker is normal even for larger cubes, so we can mod out by it (and I did without explicitly saying so). For the 3x3x3 cube, this just so happens to be the stabilizer of every element, so the result is a normal action on the quotient. For the 4x4x4 cube, stabilizers are bigger. $\endgroup$ – Logan M May 13 '12 at 17:26
Arguments that the 3x3x3 is a group while the 4x4x4 isn't, based on the permutability of the center pieces in the latter case, are confused nonsense. By that standard, the 3x3x3 isn't a group either. Reason: although it is true that one cannot move the centers to other locations in the 3x3x3 case, one can rotate them in place. Consider this twist sequence: $(RLFR^{-1}L^{-1}F^2)^2$. If you use a cube with marked stickers, you will see that the net effect of this sequence is to rotate the $F$ center by 180 degrees. There is another sequence which rotates one center 90 degrees clockwise and another center 90 degrees counterclockwise. I see no reason to discount center rotations as 'movement'.
The real question is, when we say "the cube's group", what are we referring to? From all the articles I've read, the universal presumption is that the group is the set of all 'sticker' permutations (including permutations of the stickers' corners) achievable by any sequence of face/slice twists, regardless of visual distinguishability. In group-theoretic terms, there are three generally acknowledged constructs here:
The free group generated by $\{R, L, F, B, U, D\}$.
The cube group, which is the free group modulo the subgroup of all words which act as the identity on the sticker set -- i.e., the words which end up leaving all stickers exactly in their original position and orientation, not just in a perceptually indistinguishable state.
The perceived state of the physical cube itself, which is the result of the action of the most recently applied element of the cube group.
I agree that the third construct isn't a group, but (to my knowledge) no one serious about cube theory ever thinks of that construct when referring to the cube as representing a group.
PMar
An improved equivalent circuit model of a four rod deflecting cavity
Apsimon, Robert James and Burt, Graeme Campbell (2017) An improved equivalent circuit model of a four rod deflecting cavity. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 847. pp. 52-60. ISSN 0168-9002
Official URL: https://doi.org/10.1016/j.nima.2016.11.030
In this paper we present an improved equivalent circuit model for a four rod deflecting cavity which calculates the frequencies of the first four modes of the cavity as well as the $\frac{R_{T}}{Q}$ for the deflecting mode. Equivalent circuit models of RF cavities give intuition and understanding about how the cavity operates and what changes can be made to modify the frequency, without the need for time-consuming RF simulations. We parameterise a generic four rod deflecting cavity into a geometry consisting of simple shapes. Equations are derived for the line impedance of the rods and the capacitance between the rods, and these are used to calculate the resonant frequency of the deflecting dipole mode as well as the lower order mode (LOM). The model is benchmarked against two test cases: the CEBAF separator and the HL-LHC 4-rod crab cavity. CST and the equivalent circuit model agree within $4\%$ on the LOM frequency and within $1\%$ on the deflecting frequency for both cavities. $\frac{R_{T}}{Q}$ differs between the model and CST by $37\%$ for the CEBAF separator and $25\%$ for the HL-LHC 4-rod crab cavity; however, this is sufficient for understanding how to optimise the cavity design. The model has then been utilised to suggest a method of separating the modal frequencies in the HL-LHC crab cavity and to suggest design methodologies for optimising the cavity geometries.
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment
This is the author's version of a work that was accepted for publication in Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 847 2017 DOI: 10.1016/j.nima.2016.11.030
Faculty of Science and Technology > Engineering
Post-processing method for the removal of mixed ring artifacts in CT images
Yafei Yang,1,2 Dinghua Zhang,1,2 Fuqiang Yang,1,2 Mingxuan Teng,1,2 You Du,1,2 and Kuidong Huang1,2,*
1Key Laboratory of High Performance Manufacturing for Aero Engine, Northwestern Polytechnical University, Ministry of Industry and Information Technology, Xi'an, Shaanxi 710072, China
2Engineering Research Center of Advanced Manufacturing Technology for Aero Engine, Northwestern Polytechnical University, Ministry of Education, Xi'an, Shaanxi 710072, China
*Corresponding author: [email protected]
Yafei Yang, Dinghua Zhang, Fuqiang Yang, Mingxuan Teng, You Du, and Kuidong Huang, "Post-processing method for the removal of mixed ring artifacts in CT images," Opt. Express 28, 30362-30378 (2020)
Original Manuscript: June 24, 2020
Revised Manuscript: September 22, 2020
Manuscript Accepted: September 22, 2020
Ring artifacts seriously deteriorate the quality of CT images. Intensity-dependence of detector responses results in intensity-dependent ring artifacts, and time-dependence of CT hardware systems results in time-dependent ring artifacts. However, only the intensity-dependent ring artifacts are taken into consideration in most post-processing methods. Therefore, the purpose of this study is to propose a general post-processing method which removes both the intensity-dependent ring artifacts and the time-dependent ring artifacts. First, the proposed method transforms raw CT images into polar coordinate images, in which the ring artifacts manifest as stripe artifacts. Second, it obtains structure images by smoothing the polar coordinate images, and acquires texture images containing some details and stripe artifacts by subtracting the structure images from the polar coordinate images. Third, it extracts the stripe artifacts from the texture images using mean extraction and texture classification, and obtains the extracted ring artifacts by transforming the extracted stripe artifacts from polar coordinates into Cartesian coordinates. Finally, it obtains corrected CT images by subtracting the extracted ring artifacts from the raw CT images, and iterates the corrected CT images through the above steps until the ring artifacts extracted in the last iteration are weak enough. Simulation and real data show that the proposed method can remove the intensity-dependent ring artifacts and the time-dependent ring artifacts effectively while preserving image details and spatial resolution. In particular, real data prove that the method is suitable for new CT systems such as photon counting CT.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
X-ray computed tomography (CT) has been widely used in many fields such as disease detection [1–4], image-guided radiation therapy (IGRT) [5,6], and industrial detection. The flat-field correction method in CT can produce high-quality projection images by suppressing the non-uniformity of the incident cone beam and the inconsistency of the detector responses [7,8]. However, in practice, raw CT images often have ring artifacts in the form of concentric circles, which may interfere with the diagnosis of diseases and the detection of industrial parts [9]. The causes of these ring artifacts are more complicated than expected, and can be summarized as the intensity-dependence of the detector responses and the time-dependence of the CT hardware systems (including the time-dependence of the incident cone beam [10,11] and the time-dependence of the detector responses [12,13]). These two factors make the flat-field correction method less than ideal. For energy-integrating CT systems, the ring artifacts seriously reduce the accuracy of clinical diagnosis and the precision of industrial-part detection; for photon counting CT systems, the ring artifacts further deteriorate the accuracy of material identification, making it impossible to exploit their unique capabilities. Therefore, it is urgent to develop an effective and stable ring artifact removal method.
Because ring artifacts are directly caused by the CT hardware systems, there are roughly four methods to remove the ring artifacts by adjusting the CT hardware systems. The first method is to replace the usual two-point flat field correction by a multi-point, piecewise linear flat field correction [14]. The second method requires a series of flat-field images acquired before, during, and after the CT scan to capture the dynamic characteristics of the CT hardware systems [15]. The third method is to move the sample or detector systems during an acquisition in defined horizontal and vertical steps to overcome the shortcomings of the hardware systems [16,17]. The fourth method requires multiple flat-field images under different beam filters with varying thicknesses to identify the clustered projection image artifacts that lead to the ring artifacts [18]. However, these methods have high operating costs at the hardware level and are difficult to implement. In addition, due to the instability or time-dependence of the CT hardware systems, some ring artifacts may remain in the corrected CT images [19]. In recent years, pre-processing methods and post-processing methods have become research hotspots because they require no operations at the hardware level and can also achieve good performance [4,9,20–23].
Pre-processing methods are mainly based on the raw projection sinograms [24,25]. The main idea of these methods is that the ring artifacts manifest as stripe artifacts on the raw projection sinograms, reducing the difficulty of detection and elimination. Originally, Kowalski removed the stripe artifacts in the raw projection sinograms through simple low-pass filters, but this method loses high-frequency details and deteriorates image quality [26]. To preserve more high-frequency image details, Raven observed that the stripe artifacts in the polar angle direction are located in the center of the image after a Fourier transform and can be further removed by low-pass filters [27]. Furthermore, Munch et al. combined wavelet decomposition and a Fourier low-pass filter to distinguish the stripe artifacts and the image details more accurately [28]. Boin et al. proposed a method for removing the stripe artifacts in the sinograms using a moving average filter during the reconstruction process [29]. Subsequently, Ashrafuzzaman et al. proposed a variable window moving average (VWMA) filter and a weighted moving average (WMA) filter to remove the stripe artifacts in the sinograms [30]. Titarenko et al. proposed a method based on the assumption of smoothness of the raw projection sinograms; this method uses the notion of a regularized solution from the theory of ill-posed problems and is applied to the sinograms before image reconstruction [31]. Miqueles et al. proposed a fast algorithm for ring artifact reduction in tomography by generalizing the method of Titarenko et al.; compared with the original method, it is fast due to the use of the conjugate gradient method with an explicit solution [32]. Kim et al. proposed a method to remove stripe artifacts from sinograms by estimating the sensitivity of each detector element and equalizing them in the sinograms [33]. Nieuwenhove et al. computed eigen flat fields through principal component analysis of a set of flat fields, and used a linear combination of the most important eigen flat fields to individually normalize each X-ray projection [15]. Titarenko et al. wrote the basic ring artifact suppression algorithm as a 1D filter and combined it with the standard filtering used in filtered back-projection algorithms [34]. Vågberg et al. proposed a method to correct ring artifacts by measuring changes in scintillator thickness [35]. Vo et al. divided the stripe artifacts into four categories and corrected them separately, achieving good results among pre-processing methods [36]. Croton et al. proposed a pixel-wise detector calibration using hundreds of data points for each position on the detector, rather than the standard two-point flat-field calibration [37]. Pre-processing methods work on the raw projections, where it is difficult to select parameters intuitively, which limits the applicability of these methods. Besides, commercial cone-beam systems often ship with their own reconstruction software, which may only output reconstructed images to end-users; this greatly limits the application of pre-processing methods. Post-processing methods work on the reconstructed images with no need for the raw projections, and have better adaptability and flexibility than pre-processing methods. In addition, only the post-processing methods involve the transformation from Cartesian coordinates to polar coordinates and the possibility of reducing the spatial resolution of the CT images.
Therefore, pre-processing methods and post-processing methods are generally considered to be two different methods [36]. We focus on post-processing methods in this paper, and the proposed method is not tested against those seemingly comparable pre-processing methods.
Post-processing methods are mainly based on the idea of transforming the raw CT images from Cartesian coordinates to polar coordinates [38]. The ring artifacts in the raw CT images manifest as stripe artifacts in polar coordinates, greatly reducing the difficulty of detection and elimination. Sijbers et al. proposed a method for removing the stripe artifacts based on morphological operators [39]. In polar coordinates, this method uses a sliding window to remove artifacts in the areas with serious stripe artifacts, but the selection of parameters is very important to the results. Subsequently, Wei et al. achieved a more rigorous distinction between artifacts and image details in synchrotron radiation CT images by combining wavelet decomposition and a Fourier low-pass filter [40]. Based on the assumption that the gray-value features of the ring artifacts are local extrema, Kyriakou et al. [41] and Prell et al. [42] used a median filter to remove the stripe artifacts in polar coordinates. Since the stripe artifacts have obvious structure features, variational methods have natural advantages in removing them. Bouali et al. used a unidirectional variational (UV) model to remove the stripe artifacts in images acquired by the moderate resolution imaging spectroradiometer (MODIS) [43]. Wu et al. used a variational method to remove shading artifacts in CT images without prior texture information [44]. Xu et al. used the relative total variation (RTV) method to separate texture and structure in natural images [45]. Based on the sparse distribution of the stripe artifacts, Yan et al. proposed a variational method that can effectively remove the stripe artifacts, but the coordinate transformation process deteriorated the spatial resolution of the images [46]. Furthermore, Liang et al. applied the relative total variation (RTV) method in polar coordinates, and proposed a general iterative image domain ring artifact removal method [47]. This method can be widely used on clinical CT images without deteriorating the image details. However, this method may not be able to completely remove strong ring artifacts (it deteriorates the spatial resolution when transforming the stripe artifacts into ring artifacts, so that some ring artifacts remain in the corrected CT images). Chao et al. proposed a radial basis function neural network (RBFNN) to remove ring artifacts. However, for the most important step, the recognition of artifacts, this method relies on manual recognition after artifact enhancement; if there are stripes or gaps in the artifacts with no obvious features, the manual recognition is likely to have a certain error rate [48]. Fang et al. used deep learning methods for ring artifact removal in the image domain, the projection domain, and the polar coordinate system, respectively. However, this approach requires extensive training data, and the ring artifacts in the training data should be similar to the artifacts in the experimental data [49]. Generally, most post-processing methods assume that the stripe artifacts in polar coordinates have constant gray values in the polar angle direction. That is, only the ring artifacts caused by the intensity-dependent detector responses, which we call the intensity-dependent artifacts in this paper, are taken into consideration.
However, in practice, the ring artifacts caused by the time-dependence of the CT hardware systems, which we call the time-dependent artifacts in this paper, may cause the gray values of the stripe artifacts to be non-constant. For these two kinds of ring artifacts, we propose a general image domain iterative post-processing method based on the relative total variation (RTV). This method consists of two parts and can effectively remove the intensity-dependent ring artifacts and the time-dependent ring artifacts while preserving the spatial resolution and the image details.
Both numerical simulation and real data are used to evaluate the proposed method. In the numerical simulation, we add some mixed ring artifacts (composed of intensity-dependent ring artifacts and time-dependent ring artifacts) to a Shepp-Logan phantom. Real data from a real photon counting CT system (photon counting CT has become a research hotspot) and real synchrotron data sets are used to evaluate the practical effectiveness of the proposed method for new CT systems. Considering that the method of Liang et al. can be regarded as a representative post-processing variational method, and the method of Wei et al. (based on a Wavelet-Fourier filter) can be regarded as a representative post-processing filtering method, these two methods are used as comparison algorithms in this paper. The subjective evaluations are mainly based on image comparisons, and the objective evaluations are mainly based on signal to noise ratio (SNR), contrast to noise ratio (CNR), and full width at half maximum (FWHM). The results show that the proposed method achieves good results on the simulation and the real data, and can be widely used in clinical medicine and industrial detection.
2. Methods and materials
2.1 Workflow
As is widely known, the ring artifacts manifest as stripe artifacts in polar coordinates, greatly reducing the difficulty of detection and elimination. Therefore, in this paper, we transform the raw CT images into polar coordinate images. Most images can be expressed as "structure + texture". Xu et al. have obtained structure images with good edge preservation from natural images using the RTV method [45]. For a polar coordinate image, both the stripe artifacts and the image details have obvious texture characteristics, and together they constitute the texture image. Therefore, the main idea of the proposed method is to obtain the texture image using the RTV algorithm, and then extract the stripe artifacts from the texture image. The framework for removing the ring artifacts is shown in Fig. 1. The steps of the proposed method are as follows:
1. Transform the raw CT image with ring artifacts into a polar coordinate image using interpolation [41].
2. Obtain a structure image by smoothing the polar coordinate image using an edge-preserving smoothing method such as the RTV algorithm (this step is described in detail in Section 2.2).
3. Generate a texture image by subtracting the structure image from the polar coordinate image. The texture image contains only some of the image details and stripe artifacts, which is a disadvantage of this step.
4. Extract the stripe artifacts from the texture image (this step is described in detail in Section 2.3).
5. Extract the ring artifacts by transforming the extracted stripe artifacts into Cartesian coordinate system. The spatial resolution will be lost when the extracted stripe artifacts are transformed into the ring artifacts, which is a disadvantage of this step.
6. Obtain a corrected CT image with less ring artifacts by subtracting the extracted ring artifacts obtained in Step 5 from the raw CT image. Because the signal of the extracted ring artifacts is weak enough, the loss of spatial resolution in Step 5 has little effect on the corrected CT image. Compared with the raw CT image, the corrected CT image hardly loses spatial resolution and has less ring artifacts.
7. Update the raw CT image using the corrected CT image and repeat Steps 1-6 to remove ring artifacts gradually until the ring artifacts extracted in the last iteration are weak enough (the stopping criterion of iterations is described in detail in Section 2.4). The purpose of this step is to remove the remaining ring artifacts in the corrected CT image to overcome the disadvantages in Steps 3 and 5.
Fig. 1. The framework for removing the ring artifacts.
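A compact sketch of Steps 1-7 is given below. The helpers to_polar, to_cartesian, smooth_rtv and extract_stripes are hypothetical stand-ins for the interpolation, RTV smoothing, and stripe-extraction components described in Sections 2.2-2.4; only the iteration logic and the stopping test of Eq. (3) are spelled out.

import numpy as np

def remove_rings(ct, to_polar, to_cartesian, smooth_rtv, extract_stripes,
                 c_stop=0.008, max_iter=50):
    corrected = ct.copy()
    first_norm = None
    for _ in range(max_iter):
        polar = to_polar(corrected)                     # Step 1
        structure = smooth_rtv(polar)                   # Step 2
        texture = polar - structure                     # Step 3
        stripes = extract_stripes(texture)              # Step 4
        rings = to_cartesian(stripes, corrected.shape)  # Step 5
        corrected = corrected - rings                   # Step 6
        norm = np.abs(rings).sum()                      # L1 norm of extracted rings
        if first_norm is None:
            first_norm = norm
        if norm / first_norm < c_stop:                  # Step 7, Eq. (3)
            break
    return corrected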
The most critical points in the proposed method are as follows: texture image acquisition based on the RTV algorithm, stripe artifact extraction and iterative stopping criterion.
2.2 Texture image acquisition based on the RTV algorithm
The polar coordinate image can be expressed as "structure + texture", and we can obtain the structure image by smoothing the polar coordinate image. In order to obtain the structure image accurately, it is critical to select a good edge-preserving smoothing algorithm. In this paper, we use the RTV algorithm as the smoothing algorithm, which does not require a priori information. The relative total variation is used to capture the nature of structure and texture, and the non-uniform anisotropic texture can be effectively removed. More details about the RTV algorithm can be found in the work of Xu et al. [45]. From the perspective of practical application, the RTV algorithm can be simply described by:
(1)$$S = tsmooth(I,\lambda ,\sigma ,maxIter,\varepsilon ),$$
where S is the output image (structure image), I is the input image (polar coordinate image), and $\lambda $ is the smoothing weight, which should be selected according to the richness of ring artifacts and details in the polar coordinate image (the smaller the smoothing weight, the more details in the structure image). $\sigma $ is the maximum size of the texture element (in order to completely remove the stripe artifacts, $\sigma = 4$ is taken in this paper according to the width of most stripe artifacts in the polar coordinate image). $maxIter$ is the number of iterations (we select $maxIter = 4$, which is the default value in the work of Xu et al.). $\varepsilon $ is the parameter that controls the sharpness of the output image (we select $\varepsilon = 0.02$, which is the default value in the work of Xu et al.) [45]. In fact, most of the parameters take the default values of Xu et al.; we only modify the value of $\sigma $ according to the width of most stripe artifacts in the polar coordinate image. $\lambda $ is the most important parameter in Eq. (1), and we need to adjust it for the specific polar coordinate images.
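As a sketch, the texture-image step then reduces to one call of the form below. We assume a hypothetical Python port tsmooth with the argument order of Eq. (1) (the reference implementation of Xu et al. is MATLAB), and the default lam value here is only a placeholder to be tuned per image:

def get_texture(polar_img, tsmooth, lam=0.002):
    # Eq. (1): tsmooth(I, lambda, sigma, maxIter, eps); sigma=4, maxIter=4, eps=0.02
    structure = tsmooth(polar_img, lam, 4, 4, 0.02)
    return polar_img - structure, structure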
We can obtain the texture image by subtracting the structure image from the polar coordinate image. In order to avoid the possibility of losing image details when extracting the stripe artifacts from the texture image, the key of this step is that the texture image should not contain any structure information. Therefore, for the RTV algorithm, we should select a small smoothing weight. At this time, the texture image contains only some image details and stripe artifacts. We can only remove some ring artifacts in a single extraction. The remaining ring artifacts can be extracted in subsequent iterations. However, it is notable that if the weight is too small, it may increase the time consumption of the whole workflow. Therefore, we need to adjust $\lambda $ according to the specific image to ensure that the texture image contain no structure information and the workflow is not too time-consuming.
2.3 Stripe artifact extraction
The texture image is composed of the stripe artifacts and the image details. The key step of the proposed method is to extract the stripe artifacts from the texture image. Generally, for stable CT systems, their polar coordinate images may only contain the intensity-dependent stripe artifacts. For nonstable CT systems, their polar coordinate images may contain the mixed stripe artifacts (the mixed stripe artifacts are composed of the intensity-dependent stripe artifacts and the time-dependent stripe artifacts). Therefore, we can discuss the intensity-dependent stripe artifacts and the time-dependent stripe artifacts respectively.
2.3.1 Extraction of intensity-dependent stripe artifacts
Firstly, we discuss the intensity-dependent stripe artifacts, whose features are simple enough. When the smoothing weight of the RTV algorithm is small, for the image details, the sum of the gray values of all pixels in the polar angle direction can be regarded as zero. It is well known that the intensity-dependent stripe artifacts have the same gray values in the polar angle direction of the texture image. Since the texture image can be divided into the intensity-dependent stripe artifacts and the image details, the intensity-dependent stripe artifacts can be estimated as the average gray value of all pixels in the polar angle direction of the texture image [47], as shown in Fig. 2. If necessary, we can compute the mean over only the middle 80% of the gray values in order to exclude the influence of dead pixels. In this form, our method is similar to the method of Liang et al. and can be considered an improvement of the whole framework for removing the intensity-dependent ring artifacts [47]. The method updates the raw CT image using the corrected CT image, which avoids the loss of spatial resolution when the stripe artifacts are transformed into ring artifacts, and has a good intensity-dependent ring artifact removal effect.
Fig. 2. The process for extracting the intensity-dependent stripe artifacts.
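A minimal sketch of this extraction is shown below, assuming the polar coordinate image is stored with the polar angle along axis 0 and the radius along axis 1 (an assumption, not stated in the paper); trim=0.1 implements the optional middle-80% mean against dead pixels.

import numpy as np

def intensity_stripes(texture, trim=0.1):
    # per-radius mean of the middle (1 - 2*trim) fraction of gray values
    lo = np.quantile(texture, trim, axis=0)
    hi = np.quantile(texture, 1.0 - trim, axis=0)
    clipped = np.where((texture >= lo) & (texture <= hi), texture, np.nan)
    stripe = np.nanmean(clipped, axis=0)      # one value per detector column
    return np.broadcast_to(stripe, texture.shape)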
2.3.2 Extraction of time-dependent stripe artifacts
Secondly, we discuss the time-dependent stripe artifacts, whose features are complex. The time-dependent ring artifacts in CT images may be caused by many factors. For example, in synchrotron tomography, the thermal instability of the optics, thermal fluctuations in the cooling system, the instability of the bending magnet on the APS storage ring, and small time-dependent perturbations of the CCD sensor may cause time-dependent ring artifacts [11]. In photon counting CT, the photon counting detector usually suffers from local charge trap effects, which usually depend on the polarization time and the exposure time [19]. In addition, because the manufacturing techniques of the photon counting detector are not fully mature, some pixels often show gain instability. In conclusion, both the instability of the X-ray sources and the instability of the detector responses may lead to time-dependent ring artifacts, and the characteristics of the time-dependent artifacts may be related to the specific CT hardware systems. From the work of Brombal et al. [19] and the data obtained from our photon counting CT system, it can be found that the actual time-dependent stripe artifacts are discontinuous in the polar angle direction (the gray value is not constant). We would like to provide a more accurate and detailed description and analysis, but the time dependence of different CT hardware systems is difficult to predict. According to the discontinuity characteristics of the stripe artifacts, we can divide the pixels into different categories. The pixels in each category have similar gray values in the texture image, and should be spatially continuous. It is difficult to estimate the exact number of categories of the time-dependent stripe artifacts because of their complexity. Therefore, for simplicity, we assume that the stripe artifacts can be divided into two categories, and other categories of the stripe artifacts (if they do exist) can be extracted in subsequent iterations. This paper requires that pixels of the same category have the highest gray-value similarity and the smallest differences. The classification function can be described by:
(2)$$\arg \min_{p,q} F(p,q) = \sum_{i = p}^{q} (R_i - EC_1)^2 + \sum_{j = 1}^{p - 1} (R_j - EC_2)^2 + \sum_{j = q + 1}^{M} (R_j - EC_2)^2, \qquad R_i \in C_1,\ R_j \in C_2,$$
where $R_i$ is the $i$-th pixel in the polar angle direction. $C_1$ represents the first category of pixels, that is, the pixels from the $p$-th pixel to the $q$-th pixel in the polar angle direction. $EC_1$ represents the mean value of all pixels in the first category. $C_2$ represents the second category of pixels, that is, all pixels except the first category in the polar angle direction (from the first pixel to the $(p-1)$-th pixel and from the $(q+1)$-th pixel to the $M$-th pixel). $EC_2$ represents the mean value of all pixels in the second category, and $M$ represents the total number of pixels in the polar angle direction. All pixels within each of the two categories are continuous.
In order to preserve the image details, the number of pixels in each category should be greater than a set threshold. Here we must analyze the effect of this threshold on the results. We obtain the raw structure image by smoothing the polar coordinate image using the RTV algorithm. The structure image contains almost no image details, which makes the details in the texture image obvious, as shown in Fig. 2. If the threshold is too small, the classification function may mistakenly divide the image details into one category and the other pixels into another category, resulting in the loss of image details. In this paper, we select the threshold as 10% of the total pixels. This threshold is large enough to avoid the loss of small (within 10% of the total pixels) details. In fact, the threshold can be modified to other values according to the specific texture image, such as 5% (as long as the loss of image details can be avoided, the threshold has little effect on the results). Figure 3 shows the three-dimensional (3D) graph of the classification function $F(p,q)$ for a certain stripe artifact in the polar angle direction (if $q \le p$ or $(q - p) \le 0.1\ast M$, we set the value of the classification function $F(p,q)$ to its maximum value).
Fig. 3. The 3D graph of the classification function $F(p,q)$ as it changes with p and q for a certain stripe artifact.
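Since Eq. (2) only involves sums of squared deviations, it can be minimized by brute force with cumulative sums, as in the sketch below (O(M^2) pairs (p, q), which is cheap for typical angular sampling). The 10% size threshold enters as min_frac; indices are 0-based, which is a choice of this sketch rather than of the paper.

import numpy as np

def classify_column(r, min_frac=0.1):
    # return (p, q) minimizing F(p, q) of Eq. (2); indices 0-based, inclusive
    M = len(r)
    min_size = max(1, int(min_frac * M))
    cs = np.concatenate(([0.0], np.cumsum(r)))
    cs2 = np.concatenate(([0.0], np.cumsum(r * r)))
    total_s, total_s2 = cs[M], cs2[M]

    best_f, best_pq = np.inf, None
    for p in range(M):
        for q in range(p + min_size, M):
            n1 = q - p + 1                 # size of category C1 = r[p..q]
            n2 = M - n1                    # size of category C2 (the rest)
            if n2 <= min_size:
                continue
            s1 = cs[q + 1] - cs[p]
            ss1 = cs2[q + 1] - cs2[p]
            s2, ss2 = total_s - s1, total_s2 - ss1
            # SSE of a set equals sum(x^2) - (sum x)^2 / n
            f = (ss1 - s1 * s1 / n1) + (ss2 - s2 * s2 / n2)
            if f < best_f:
                best_f, best_pq = f, (p, q)
    return best_pq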
The time-dependent stripe artifacts can be estimated as the mean values of all pixels in their corresponding categories. The number of pixels in each category should be greater than the set threshold. The corrected texture image can be obtained by subtracting the extracted stripe artifacts from the texture image. Considering the complexity of the time-dependent stripe artifacts, it is difficult to remove them completely in a single extraction; we can increase the number of iterations according to their complexity. Finally, we can extract the final time-dependent stripe artifacts by subtracting the final corrected texture image from the texture image. The process for extracting the time-dependent stripe artifacts is shown in Fig. 4, and we can find that there are more than two categories of stripe artifacts in the final time-dependent stripe artifacts. Therefore, we can extract complex (multi-category) time-dependent stripe artifacts even though each extraction divides the pixels into only two categories. Dividing the pixels into three or more categories would increase the complexity of the method, but it does not help in extracting the time-dependent stripe artifacts.
Fig. 4. The process for extracting the time-dependent stripe artifacts.
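Building on the classify_column sketch above, a single extraction pass can then be written as follows; the number of passes is a placeholder choice, to be increased with the complexity of the time-dependent stripes as discussed above.

import numpy as np

def time_dependent_stripes(texture, passes=3):
    residual = texture.copy()
    for _ in range(passes):
        for j in range(residual.shape[1]):        # one detector column at a time
            col = residual[:, j]                  # pixels along the polar angle
            pq = classify_column(col)
            if pq is None:
                continue
            p, q = pq
            inside = np.zeros(len(col), dtype=bool)
            inside[p:q + 1] = True
            col[inside] -= col[inside].mean()     # subtract per-category means
            col[~inside] -= col[~inside].mean()
    return texture - residual                     # extracted stripe artifacts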
For the intensity-dependent stripe artifacts and the time-dependent stripe artifacts, the premise of this step is that the texture image does not contain any structural information. Therefore, when obtaining the texture image, we should select a small smoothing weight for the RTV algorithm.
2.4 Stopping criterion of iterations
The main idea of the method is to remove some ring artifacts during each iteration, gradually removing the ring artifacts in the CT images over subsequent iterations. If the ring artifacts extracted in the last iteration are weak enough compared to the ring artifacts extracted in the first iteration, the workflow is terminated. We use the L1 norm to quantify the ring artifacts, and the stopping criterion can be described by:
(3)$${C_s} = {||{RIN{G_K}} ||_1}/{||{RIN{G_1}} ||_1},$$
where $RIN{G_K}$ denotes the ring artifacts extracted in the $K$-th iteration, and $RIN{G_1}$ denotes the ring artifacts extracted in the first iteration.
In practice, the raw CT images may contain the intensity-dependent ring artifacts for the stable CT hardware systems, and may contain mixed ring artifacts for the unstable CT hardware systems. In this case, we can divide the proposed method into two parts. The first part of the proposed method only contains the process for extracting the intensity-dependent stripe artifacts. The second part only contains the process for extracting the time-dependent stripe artifacts. We can remove intensity-dependent ring artifacts through the first part of the proposed method. In addition, we can remove the remaining time-dependent ring artifacts in the corrected images through the second part of the proposed method (the corrected images are obtained by the first part of the proposed method). Since the features of the intensity-dependent ring artifacts are simple enough, ${C_s}$ can be set to 0.008 in the first part of the proposed method. However, because the features of the time-dependent ring artifacts are complex, ${C_s}$ can be determined according to the severity of the time-dependent ring artifacts.
2.5 Experiment and evaluation method
In order to verify the effectiveness of the proposed method, we used it to remove the ring artifacts in the Shepp-Logan phantom [50] and in an aluminum part. The reconstruction parameters are shown in Table 1:
Table 1. Reconstruction parameters.

Parameter | Shepp-Logan phantom | Aluminum part
Rotation angle | 360° | 360°
Number of projections | 720 | 720
Distance from source to center of rotation | 900 mm | 810.2 mm
Distance from source to detector | 1200 mm | 1045.4 mm
Detector unit size | 0.50 mm | 0.10 mm
Reconstruction pixel size | 0.375 mm | 0.0775 mm
Image matrices | 512×512×512 | 768×768×64
Considering that the first part of the proposed method is similar to the work of Liang et al., we use the method of Liang et al. as the comparison method, with the same parameters and stopping criterion as the first part of the proposed method. Image qualities were compared by subjective observation, CNR, FWHM, and SNR. The weaker the ring artifacts, the higher the CNR and SNR; the better the image maintains the spatial resolution, the closer the FWHM is to its ideal value.
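For reference, a sketch of the SNR and CNR figures of merit as commonly computed over regions of interest; these are standard definitions (one of several in use), not code from the paper:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a nominally uniform ROI."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio between a target ROI and a background ROI."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std()
```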
Due to space limitations, we only show simulation results with mixed artifacts, real data from a real photon counting CT system, and synchrotron data sets. The photon counting CT system is mainly composed of a CdTe photon counting detector (XCounter Hydra FX50) and an X-ray source (Comet MXR-451HP/11). The synchrotron data sets used in this paper are from Zenodo: https://doi.org/10.5281/zenodo.1443568.
3.1 Simulation results
First, in order to verify the effectiveness of the proposed method for removing mixed ring artifacts, we randomly changed detector responses in the 360° projections of the Shepp-Logan phantom, making some detector responses time-dependent. We reconstructed the CT images with the mixed ring artifacts using the FDK algorithm, and used the methods of Wei et al., Liang et al., and the proposed method to obtain the corrected images. For the mixed ring artifacts, we removed the intensity-dependent ring artifacts through the first part of the proposed method, and the remaining time-dependent ring artifacts through the second part. The comparisons are shown in Fig. 5. Through visual observation, we can compare the effect of the three methods for removing mixed ring artifacts. The method of Wei et al. suppressed the intensity of some ring artifacts, but introduced some additional artifacts and destroyed some details. The method of Liang et al. suppressed the intensity of some ring artifacts, but introduced some additional ring artifacts. The first part of the proposed method performed similarly to the method of Liang et al.; however, the second part effectively removed the remaining ring artifacts while preserving the spatial resolution and the image details. Therefore, the proposed method (including the first and second parts) is superior to the methods of Wei et al. and Liang et al. for the Shepp-Logan phantom with mixed ring artifacts.
Fig. 5. The comparisons of the corrected images of the Shepp-Logan phantom with the mixed artifacts. (a) The uncorrected image. (b) The corrected image obtained by the method of Wei et al. (c) The corrected image obtained by the method of Liang et al. (d) The corrected image obtained by the first part of the proposed method. (e) The corrected image obtained by the second part of the proposed method. (f) The reference image. The images in the second row are zoomed-in views of the images in the first row; all of the above display windows are [0.98, 1.05]. The images in the third row are the ring artifacts of the images in the first row, and the display windows are [-0.001, 0.005].
3.2 Real data results
In order to verify the practical effectiveness of the proposed method, images of an aluminum part were obtained by the photon counting CT system, and we used the methods of Wei et al., Liang et al., and the proposed method to obtain the corrected images. To test the anti-noise performance of the proposed method, the images contain some noise. There are serious real mixed ring artifacts in the uncorrected CT images. In this case, we removed the intensity-dependent ring artifacts through the first part of the proposed method, and the remaining time-dependent ring artifacts through the second part. The comparisons are shown in Fig. 6. Through visual observation, we can compare the effect of the three methods for removing the real mixed ring artifacts. The method of Wei et al. suppressed the intensity of some ring artifacts, but some ring artifacts still remained. The method of Liang et al. suppressed the intensity of some ring artifacts, but introduced some additional ring artifacts. The first part of the proposed method performed similarly to the method of Liang et al.; however, the second part effectively removed the remaining ring artifacts while preserving the spatial resolution and the image details. Therefore, the proposed method (including the first and second parts) is superior to the methods of Wei et al. and Liang et al. for the real data. At the same time, this shows that the proposed method can be applied to new CT systems whose manufacturing techniques are not yet fully mature.
Fig. 6. The comparisons of the corrected images of the aluminum part with the real mixed ring artifacts. (a) The uncorrected image. (b) The corrected image obtained by the method of Wei et al. (c) The corrected image obtained by the method of Liang et al. (d) The corrected image obtained by the first part of the proposed method. (e) The corrected image obtained by the second part of the proposed method. The images in the second row are zoomed-in views of the images in the first row; all of the above display windows are [-0.2, 1.2]. The images in the third row are the ring artifacts of the images in the first row (taking Fig. 6(e) as the reference image), and the display windows are [-0.05, 0.25].
Furthermore, we used the methods of Wei et al., Liang et al., and the proposed method to correct the real data from the synchrotron data sets. There are ring artifacts in the uncorrected CT images. In this case, we removed the intensity-dependent ring artifacts through the first part of the proposed method, and the remaining time-dependent ring artifacts through the second part. Since the original images are very large, we only examine some regions for detailed comparison. The comparisons are shown in Fig. 7. Through visual observation, we can compare the effect of the three methods for removing the real ring artifacts. The method of Wei et al. suppressed the intensity of some ring artifacts, but some ring artifacts remained, such as at the position marked as A. From the position marked as B in the second row of Fig. 7, it can be seen that the hole is smaller than normal, so some details may be lost in Fig. 7(b). The method of Liang et al. suppressed the intensity of some ring artifacts, but introduced some additional ring artifacts. The first part of the proposed method performed similarly to the method of Liang et al.; however, the second part effectively removed the remaining ring artifacts while preserving the spatial resolution and the image details. Therefore, the proposed method (including the first and second parts) is superior to the methods of Wei et al. and Liang et al. for the real data from the synchrotron data sets. At the same time, this shows that the proposed method can be applied to new CT systems whose manufacturing techniques are not yet fully mature.
Fig. 7. The comparisons of the corrected images of the real synchrotron data sets with ring artifacts. (a) The uncorrected image. (b) The corrected image obtained by the method of Wei et al. (c) The corrected image obtained by the method of Liang et al. (d) The corrected image obtained by the first part of the proposed method. (e) The corrected image obtained by the second part of the proposed method. The images in the second row are zoomed-in views of the images in the first row. The images in the third row are the ring artifacts of the images in the first row (taking Fig. 7(e) as the reference image), and all of the above display windows are [-0.0015, 0.0005].
3.3 Numerical evaluations
Because the data from the synchrotron data sets have no obvious structure, we only evaluate the results of the simulated data from the Shepp-Logan phantom and the real data from the photon counting CT system. We use FWHM as a spatial resolution metric, and the simulated data are used to verify the effect of the proposed method numerically. In the simulated Shepp-Logan phantom, we fitted the line profiles around the edge marked as line C in the second row of Fig. 5 with a logistic error function and computed the FWHM; the results are shown in Table 2. The SNR at position A and the CNR at position B in Figs. 5 and 6 were also calculated, with the results shown in Table 2.
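A sketch of the edge fit, assuming an error-function edge model whose underlying line spread function is Gaussian (so FWHM = 2·sqrt(2·ln2)·σ); the extraction of the profile along line C is outside this snippet:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, a, b, mu, sigma):
    """Error-function model of a blurred edge."""
    return a * erf((x - mu) / (sigma * np.sqrt(2.0))) + b

def fwhm_from_edge(x, profile):
    """Fit the edge profile and return the FWHM of the implied
    Gaussian line spread function."""
    p0 = [np.ptp(profile) / 2.0, np.mean(profile), np.mean(x), 1.0]
    (_, _, _, sigma), _ = curve_fit(edge_model, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
```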
Table 2. Numerical evaluations of the simulation and experimental data.

Sample | Method | SNR | CNR | FWHM
Shepp-Logan phantom | Uncorrected | 1.53×10³ | 7.71 | 9.81 mm
Shepp-Logan phantom | Method of Wei et al. | 2.27×10³ | 8.39 | 9.46 mm
Shepp-Logan phantom | Method of Liang et al. | 1.85×10³ | 9.65 | 8.89 mm
Shepp-Logan phantom | First part of the proposed method | 1.90×10³ | 9.66 | 9.22 mm
Shepp-Logan phantom | Second part of the proposed method | 2.65×10³ | 10.29 | 9.29 mm
Shepp-Logan phantom | Reference | 1.91×10⁴ | 11.08 | 9.14 mm
Industrial aluminum part | Uncorrected | 5.88 | 3.86 | /
Industrial aluminum part | Method of Wei et al. | 13.81 | 8.56 | /
Industrial aluminum part | Method of Liang et al. | 9.23 | 7.52 | /
Industrial aluminum part | First part of the proposed method | 9.76 | 8.10 | /
Industrial aluminum part | Second part of the proposed method | 14.01 | 11.26 | /
From the data in Table 2, we find that the corrected images obtained by the proposed method show significant improvements in image quality compared to the uncorrected images and to the corrected images obtained by the methods of Wei et al. and Liang et al. In particular, when the time-dependent ring artifacts are serious, the first part of the proposed method (for removing the intensity-dependent ring artifacts) still performs slightly better than the method of Liang et al., which further shows the superiority of the proposed method. The proposed method can remove the ring artifacts while maintaining the spatial resolution as much as possible.
4.1 Influence of the smoothing weight on the removal of intensity-dependent ring artifacts
When using the RTV algorithm to smooth the polar coordinate images, the smoothing weight has a great influence on the generation of the structure images. In order to verify the influence of the smoothing weight on intensity-dependent ring artifact removal, the Shepp-Logan phantom with the mixed ring artifacts shown in Fig. 5(a) was corrected with the smoothing weights $\lambda = 5.0\times10^{-6}$ and $\lambda = 5.0\times10^{-7}$, respectively. We consider the effect of removing intensity-dependent artifacts in the first part of the method, and the effect of removing time-dependent artifacts in the second part. We quantified the preservation of image detail using the structural similarity index (SSIM) [47]. The numbers of iterations required to reach the stopping criterion are 29 and 198, respectively. The variation of SSIM with the method, the smoothing weight and the number of iterations is shown in Fig. 8. The results show that the SSIM increases gradually with the number of iterations, and that both the method of Liang et al. and the first part of the proposed method are robust to some degree (different smoothing parameters have little effect on the final SSIM). For the Shepp-Logan phantom with the mixed ring artifacts shown in Fig. 5(a), the first part of the proposed method achieves better results than the method of Liang et al.
Fig. 8. The variation of SSIM with the methods, the smoothing weights and the number of iterations.
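The SSIM tracking can be reproduced with, e.g., scikit-image; a sketch assuming the corrected image of every iteration and an artifact-free reference image are available:

```python
from skimage.metrics import structural_similarity as ssim

def ssim_per_iteration(corrected_images, reference):
    """SSIM of each iteration's corrected image against the reference."""
    rng = float(reference.max() - reference.min())
    return [ssim(img, reference, data_range=rng) for img in corrected_images]
```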
4.2 Influence of the smoothing weight on the removal of time-dependent ring artifacts
In order to verify the influence of the smoothing weight on the removal of the time-dependent ring artifacts, the Shepp-Logan phantom with the time-dependent ring artifacts shown in Fig. 5(e) was corrected with the smoothing weights $\lambda = 3.0\times10^{-6}$, $\lambda = 5.0\times10^{-6}$ and $\lambda = 7.0\times10^{-6}$, respectively. The numbers of iterations required to reach the stopping criterion are 91, 68, and 44, respectively; when the stopping criterion was reached, the workflow stopped. The variation of SSIM with the smoothing weights and the number of iterations is shown in Fig. 9. The results show that the SSIM increases gradually with the number of iterations, and that different smoothing parameters have little effect on the final SSIM. This proves that the second part of the proposed method is robust to some degree, and can effectively remove the time-dependent ring artifacts.
4.3 Advantages and prospects of research
The proposed method shows good performance on simulated and real data. It has two main advantages. Firstly, for the intensity-dependent ring artifacts, the first part of the proposed method overcomes the shortcoming of the method of Liang et al. by iterating over the corrected CT image again. Secondly, the second part of the method can effectively remove the time-dependent ring artifacts while preserving the spatial resolution and the image details, greatly improving the applicability of the proposed method. However, for multi-material samples, when the attenuation characteristics of the materials vary greatly and the time-dependent ring artifacts are very serious, the proposed method may face some difficulties: for the RTV algorithm, selecting a large smoothing weight may lead to a loss of image details, while selecting a small smoothing weight may leave the time-dependent ring artifacts incompletely removed. Pre-processing methods work on the raw projections, and post-processing methods work on the reconstructed images. For multi-material CT images with serious time-dependent ring artifacts, a combination of pre-processing and post-processing methods may be a good solution to completely remove the ring artifacts, because it is hard to obtain a single algorithm that suppresses all ring artifacts.
4.4 Computation load of the proposed method
The computation load of the first part of the proposed method is basically the same as that of the method of Liang et al. However, we must point out that the second part of the proposed method is more time-consuming than the method of Liang et al. and than the first part. Even though the second part is more computationally intensive than the other methods considered here, there are ways to reduce the computation load. We can accelerate the computation with CPU multi-threading and GPU multi-threading. Furthermore, we can reduce the computation load of the texture classification (the only time-consuming step) by image scaling, via the following steps. First, scale the texture image to reduce its width. Second, generate low-resolution classification function 3D graphs and find the temporary minimum points. Third, search for the real minimum points of the classification function in the neighborhoods of the temporary minimum points. Finally, extract the stripe artifacts according to the real minimum points. If we reduce the width of the texture image to 1/2 of the original, the computation load is reduced to about 1/4 of the original cost. The optimized method still takes about 22x longer than the method of Liang et al., but has better performance in removing ring artifacts.
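A sketch of this coarse-to-fine acceleration for a single stripe profile, reusing the hypothetical `best_split` and `classification_cost` helpers from the earlier sketches; the scale factor and search radius are illustrative:

```python
import numpy as np

def best_split_coarse_to_fine(R, min_frac=0.10, scale=2, radius=2):
    """Find a candidate (p, q) on a downscaled profile, then refine the
    real minimum of F in a small neighborhood at full resolution."""
    M = len(R)
    usable = (M // scale) * scale
    coarse = R[:usable].reshape(-1, scale).mean(axis=1)
    p0, q0, _ = best_split(coarse, min_frac)
    min_size = int(np.ceil(min_frac * M))
    best = (None, None, np.inf)
    for p in range(max(0, scale * p0 - radius), min(M, scale * p0 + radius + 1)):
        for q in range(max(0, scale * q0 - radius), min(M, scale * q0 + radius + 1)):
            if q - p < min_size:
                continue
            cost = classification_cost(R, p, q)
            if cost < best[2]:
                best = (p, q, cost)
    return best
```

Halving the profile length quarters the number of (p, q) pairs in the coarse search, in line with the 1/4 cost estimate above.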
In this study, we propose a robust iterative post-processing method to remove the intensity-dependent ring artifacts and the time-dependent ring artifacts. The proposed method is evaluated on a Shepp-Logan phantom with mixed ring artifacts, an industrial aluminum part with real ring artifacts, and real synchrotron data sets. The simulation and real data show that the proposed method can effectively remove intensity-dependent and time-dependent ring artifacts while preserving the image details and the spatial resolution. On real data, the method increased the SNR of the regions of interest from 5.88 to 14.01 and the CNR from 3.89 to 11.26. In particular, this method is well suited for new CT systems (such as photon counting CT) with immature manufacturing technologies but unique advantages, and can be widely used in clinical, medical, and industrial detection.
Ministry of Industry and Information Technology of the People's Republic of China (MJ-2017-F-05); Fundamental Research Funds for the Central Universities (31020190504006); Technology Field Fund of Basic Strengthening Plan (2019-JCJQ-JJ-391); Shanxi Provincial Key Research and Development Project (2020GY-145).
1. C. Han and J. Baek, "Multi-pass approach to reduce cone-beam artifacts in a circular orbit cone-beam CT system," Opt. Express 27(7), 10108–10126 (2019). [CrossRef]
2. A. Cuadros, X. Ma, and G. R. Arce, "Compressive spectral X-ray tomography based on spatial and spectral coded illumination," Opt. Express 27(8), 10745–10764 (2019). [CrossRef]
3. X. Zhao, P. Chen, J. Wei, and Z. Qu, "Spectral CT imaging method based on blind separation of polychromatic projections with Poisson prior," Opt. Express 28(9), 12780–12794 (2020). [CrossRef]
4. Z. Wang, J. Li, and M. Enoh, "Removing ring artifacts in CBCT images via generative adversarial networks with unidirectional relative total variation loss," Neural. Comput. Appl. 31(9), 5147–5158 (2019). [CrossRef]
5. H. Yan, X. Wang, F. Shi, T. Bai, M. Folkerts, L. Cervino, S. B. Jiang, and X. Jia, "Towards the clinical implementation of iterative low-dose cone-beam CT reconstruction in image-guided radiation therapy: Cone/ring artifact correction and multiple GPU implementation," Med. Phys. 41(11), 111912 (2014). [CrossRef]
6. X. Liang, S. Gong, Q. Zhou, Z. Zhang, Y. Xie, and T. Niu, "SU-F-J-211: Scatter Correction for Clinical Cone-Beam CT System Using An Optimized Stationary Beam Blocker with a Single Scan," Med. Phys. 43(6), 3457 (2016). [CrossRef]
7. W. Yuan, "Extended Applications of Image Flat-Field Correction Method," Acta. Photonica. Sin. 36(9), 1587–1590 (2007).
8. X. Tang, R. Ning, R. Yu, and D. Conover, "Cone beam volume CT image artifacts caused by defective cells in x-ray flat panel imagers and the artifact removal using a wavelet-analysis-based algorithm," Med. Phys. 28(5), 812–825 (2001). [CrossRef]
9. F. Sadi, S. Y. Lee, and M. K. Hasan, "Removal of ring artifacts in computed tomographic imaging using iterative center weighted median filter," Comput. Biol. Med. 40(1), 109–118 (2010). [CrossRef]
10. R. Tucoulou, G. Martinezcriado, P. Bleuet, I. Kieffer, P. Cloetens, S. Laboure, T. Martin, C. Guilloud, and J. Susini, "High-resolution angular beam stability monitoring at a nanofocusing beamline," J. Synchrotron Radiat. 15(4), 392–398 (2008). [CrossRef]
11. V. Titarenko, S. Titarenko, P. J. Withers, D. C. Francesco, and X. Xiao, "Improved tomographic reconstructions using adaptive time dependent intensity normalization," J. Synchrotron Radiat. 17(5), 689–699 (2010). [CrossRef]
12. V. Astromskas, E. N. Gimenez, A. Lohstroh, and N. Tartoni, "Evaluation of Polarization Effects of, Collection Schottky CdTe Medipix3RX Hybrid Pixel Detector," IEEE Trans. Nucl. Sci. 63(1), 252–258 (2016). [CrossRef]
13. P. Delogu, L. Brombal, V. D. Trapani, S. Donato, U. Bottigli, D. Dreossi, B. Golosio, P. Oliva, L. Rigon, and R. Longo, "Optimization of the equalization procedure for a single-photon counting CdTe detector used for CT," J. Instrum. 12(11), C11014 (2017). [CrossRef]
14. J. Lifton and T. Liu, "Ring artefact reduction via multi-point piecewise linear flat field correction for X-ray computed tomography," Opt. Express 27(3), 3217–3228 (2019). [CrossRef]
15. N. V. Van, B. J. De, C. F. De, L. Mancini, F. Marone, and J. Sijbers, "Dynamic intensity normalization using eigen flat fields in X-ray imaging," Opt. Express 23(21), 27975–27989 (2015). [CrossRef]
16. G. R. Davis and J. C. Elliott, "X-ray microtomography scanner using time-delay integration for elimination of ring artefacts in the reconstructed image," Nucl. Instrum. Methods Phys. Res., Sect. A 394(1-2), 157–162 (1997). [CrossRef]
17. W. Gorner, M. P. Hentschel, B. R. Mullera, H. Riesemeier, M. Krumrey, G. Ulm, W. Diete, U. Klein, and R. Frahm, "BAMline: the first hard X-ray beamline at BESSY II," Nucl. Instrum. Methods Phys. Res., Sect. A 467-468, 703–706 (2001). [CrossRef]
18. C. Altunbas, C. J. Lai, Y. Zhong, and C. C Shaw, "Reduction of ring artifacts in CBCT: Detection and correction of pixel gain variations in flat panel detectors," Med. Phys. 41(9), 091913 (2014). [CrossRef]
19. L. Brombal, D. Sandro, B. Francesco, P. Delogu, V. Fanti, P. Oliva, L. Rigon, V. D. Trapani, R. Longo, and B. Golosio, "Large-area single-photon-counting CdTe detector for synchrotron radiation computed tomography: a dedicated pre-processing procedure," J. Synchrotron Radiat. 25(4), 1068–1077 (2018). [CrossRef]
20. X. Tang, R. Ning, R. Yu, and D. L. Conover, "2D wavelet-analysis-based calibration technique for flat panel imaging detectors: Application in cone beam volume CT," Proc. SPIE 3659, 806–816 (1999). [CrossRef]
21. M. Boin and A. Haibel, "Compensation of ring artefacts in synchrotron tomographic images," Opt. Express 14(25), 12071–12075 (2006). [CrossRef]
22. R. A. Ketcham, "New algorithms for ring artifact removal," Proc. SPIE 6318(38), 63180O (2006). [CrossRef]
23. P. Wu, T. Mao, S. Xie, K. Sheng, and T. Niu, "WE-G-207-09: A Practical Bowtie Ring Artifact Correction Algorithm for Cone-Beam CT," Med. Phys. 42(6Part41), 3698 (2015). [CrossRef]
24. E. M. A. Anas, S. Y. Lee, and M. K. Hasan, "Removal of ring artifacts in CT imaging through detection and correction of stripes in the sinogram," Phys. Med. Biol. 55(22), 6911–6930 (2010). [CrossRef]
25. A. N. M. Ashrafuzzaman, S. Y. Lee, and M. K. Hasan, "A self-adaptive approach for the detection and correction of stripes in the sinogram: suppression of ring artifacts in CT imaging," EURASIP J. Adv. Signal. Process. 2011(1), 183547 (2011). [CrossRef]
26. G. Kowalski, "Suppression of Ring Artefacts in CT Fan-Beam Scanners," IEEE Trans Nucl. Sci. 25(5), 1111–1116 (1978). [CrossRef]
27. C. Raven, "Numerical removal of ring artifacts in microtomography," Rev. Sci. Instrum. 69(8), 2978–2980 (1998). [CrossRef]
28. B. Munch, P. Trtik, F. Marone, and M. Stampanoni, "Stripe and ring artifact removal with combined wavelet — Fourier filtering," Opt. Express 17(10), 8567–8591 (2009). [CrossRef]
30. C. Altunbas, C. Lai, Y. Zhong, and C. C. Shaw, "Reduction of ring artifacts in CBCT: Detection and correction of pixel gain variations in flat panel detectors," Med. Phys. 41(9), 091913 (2014). [CrossRef]
31. S. Titarenko, P. J. Withers, and A. Yagola, "An analytical formula for ring artefact suppression in X-ray tomography," Appl. Math. Lett. 23(12), 1489–1495 (2010). [CrossRef]
32. E. X. Miqueles, J. Rinkel, F. O'Dowd, and J. S. V. Bermúdez, "Generalized Titarenko's algorithm for ring artefacts reduction," J. Synchrotron Radiat. 21(6), 1333–1346 (2014). [CrossRef]
33. Y. Kim, J. Baek, and D. Hwang, "Ring artifact correction using detector line-ratios in computed tomography," Opt. Express 22(11), 13380–13392 (2014). [CrossRef]
34. V. Titarenko, "1D Filter for Ring Artifact Suppression," J. Synchrotron Radiat. 23(6), 800–804 (2016). [CrossRef]
35. W. Vågberg, J. C. Larsson, and H. M. Hertz, "Removal of ring artifacts in microtomography by characterization of scintillator variations," Opt. Express 25(19), 23191–23198 (2017). [CrossRef]
36. N. T. Vo, R. C. Atwood, and M. Drakopoulos, "Superior techniques for eliminating ring artifacts in x-ray microtomography," Opt. Express 26(22), 28396–28412 (2018). [CrossRef]
37. L. Croton, G. Ruben, K. S. Morgan, D. M. Pagnin, and M. Kitchen, "Ring artifact suppression in X-ray computed tomography using a simple, pixel-wise response correction," Opt. Express 27(10), 14231–14245 (2019). [CrossRef]
38. W. Chen, D. Prell, Y. Kyriakou, and W. A. Kalender, "Accelerating Ring Artifact Correction for Flat-Detector CT using the CUDA Framework," Proc. SPIE 7622, 76223A (2010). [CrossRef]
39. J. Sijbers and A. Postnov, "Reduction of ring artefacts in high resolution micro-CT reconstructions," Phys. Med. Biol. 49(14), N247–N253 (2004). [CrossRef]
40. Z. Wei, S. Wiebe, and D. Chapman, "Ring artifacts removal from synchrotron CT image slices," J. Instrum. 8(06), C06006 (2013). [CrossRef]
41. Y. Kyriakou, D. Prell, and W. A. Kalender, "Ring artifact correction for high-resolution micro CT," Phys. Med. Biol. 54(17), N385–N391 (2009). [CrossRef]
42. D. Prell, Y. Kyriakou, and W. A. Kalender, "Comparison of ring artifact correction methods for flat-detector CT," Phys. Med. Biol. 54(12), 3881–3895 (2009). [CrossRef]
43. M. Bouali and S. Ladjal, "Toward Optimal Destriping of MODIS Data Using a Unidirectional Variational Model," IEEE Trans. Geosci. Remote. Sens. 49(8), 2924–2935 (2011). [CrossRef]
44. P. Wu, X. Sun, H. Hu, T. Mao, W. Zhao, K. Sheng, A. A. Cheung, and T. Niu, "Iterative CT shading correction with no prior information," Phys. Med. Biol. 60(21), 8437–8455 (2015). [CrossRef]
45. L. Xu, Q. Yan, Y. Xia, and J. Jia, "Structure extraction from texture via relative total variation," ACM Trans. Graph. 31(6), 1–10 (2012). [CrossRef]
46. L. Yan, T. Wu, S. Zhong, and Q. Zhang, "A variation-based ring artifact correction method with sparse constraint for flat-detector CT," Phys. Med. Biol. 61(3), 1278–1292 (2016). [CrossRef]
47. X. Liang, Z. Zhang, T. Niu, S. Yu, S. Wu, Z. Li, H. Zhang, and Y. Xie, "Iterative image-domain ring artifact removal in cone-beam CT," Phys. Med. Biol. 62(13), 5276–5292 (2017). [CrossRef]
48. Z. Chao and H. Kim, "Removal of computed tomography ring artifacts via radial basis function artificial neural networks," Phys. Med. Biol. 64(23), 235015 (2019). [CrossRef]
49. W. Fang, L. Li, and Z. Chen, "Removing Ring Artefacts for Photon-Counting Detectors Using Neural Networks in Different Domains," IEEE Access 8, 42447–42457 (2020). [CrossRef]
50. L. A. Shepp and B. F. Logan, "The Fourier reconstruction of a head section," IEEE Trans. Nucl. Sci. 21(3), 21–43 (1974). [CrossRef]
Correlation between quantitative analysis of wall shear stress and intima-media thickness in atherosclerosis development in carotid arteries
Bo Zhang ORCID: orcid.org/0000-0002-6085-111X1,
Junyi Gu1,
Ming Qian2,
Lili Niu2,
Hui Zhou2 &
Dhanjoo Ghista3
BioMedical Engineering OnLine volume 16, Article number: 137 (2017)
This paper presents a quantitative analysis of blood flow shear stress, measuring on a weekly basis the carotid arterial wall shear stress (WSS) and the intima-media thickness (IMT) of experimental rabbits fed high-fat feedstuff to induce atherosclerosis.
This study is based on establishing an atherosclerosis model in high-fat-fed rabbits, and measuring the common carotid arterial WSS of the experimental group and the control group on a weekly basis. Detailed analysis was performed using WSS quantification.
We demonstrated a significant difference in rabbit carotid artery WSS between the experimental group and the control group (P < 0.01) from the 1st week onwards, while the IMT of the experimental group showed significant differences from the control group only from the 5th week (P < 0.05). Next, we have shown that with increasing blood lipids, the rabbit carotid artery shear stress decreases and the rabbit carotid artery IMT goes up; the decrease of shear stress appears before the start of IMT growth. Furthermore, our receiver operator characteristic (ROC) curve analysis showed that at a mean shear stress value of 1.198 dyne/cm2, the sensitivity for rabbit common carotid atherosclerosis fibrous plaques is 89.8% and the specificity is 81.3%, with an area under the ROC curve of 0.9283.
All these data show that the decrease of WSS to 1.198 dyne/cm2 can be used as an indicator that the rabbit common carotid artery has entered the period of fibrous plaques. In conclusion, our study finds and confirms that the decrease of arterial WSS can predict the occurrence of atherosclerosis earlier, and offers help for positive clinical intervention.
Epidemiological investigations have shown that cardiovascular and cerebrovascular diseases are the major lethal factors affecting people around the world. In 2008, national monitoring data from the National Center for Disease Control and Prevention showed that the mortality of cardiovascular and cerebrovascular diseases in China is 229 per 100,000 [1], and atherosclerosis is a major cause of cardiovascular and cerebrovascular diseases. For cerebral infarction, the primary cause is atherosclerosis [2].
The formation of atherosclerosis is a long and complicated process. Pathology shows that atherosclerosis is generally divided into four periods: (1) fatty streaks, (2) fibrous plaques, (3) atheromatous plaques, and (4) complicated lesions or secondary changes [3]. Early atherosclerosis refers to the preclinical stage of atherosclerosis (Preclinical Atherosclerosis, PCA), in which the patient has evidence of atherosclerosis but no specific clinical symptoms of arterial stenosis caused by atherosclerosis [4].
The endothelium lining the cardiovascular system is highly sensitive to hemodynamic shear stresses that act on the vessel luminal surface in the direction of blood flow. Physiological variations of shear stress regulate changes in structural-wall remodeling, and are associated with susceptibility to atherosclerosis. Hence, identifying and diagnosing atherosclerosis in its early stage and intervening in a timely manner can reduce (i) the occurrence rate of myocardial infarction and cerebral infarction after the expansion of plaques, and (ii) the need for interventional therapies such as angioplasty, bypass grafts, and deployment of stents. Medical costs can thus also be reduced, which has significant social and economic value.
Noninvasive assessment of atherosclerosis, such as intimal wall thickening and plaque formation, is routinely available using a variety of imaging techniques. The most common clinical detection of atherosclerosis is to measure the intima-media thickness (IMT) of the common carotid artery by ultrasonic imaging. By measuring the IMT, we can judge whether there is atherosclerosis and even atherosclerotic plaques [5, 6]. However, some scholars have suggested that IMT provides only a limited indication for the early diagnosis of atherosclerosis and for predicting cardiovascular and cerebrovascular diseases [7, 8].
Other methods for the clinical diagnosis of atherosclerosis mainly include digital subtraction angiography, magnetic resonance angiography, and computed tomography (CT) angiography. These technologies are mainly based on vascular morphological changes and hemodynamics, observing the vascular intima-media thickness, plaques, degree of luminal stenosis, blood flow velocity and flow rate, etc. Because there are no apparent vascular morphology or hemodynamic changes in the early stage of atherosclerosis, these technologies cannot effectively predict early-stage atherosclerosis.
Vessel segments with low or highly oscillatory wall shear stress appear to be at the highest risk for the development of atherosclerosis. A large number of experiments have shown that the reduction of vessel wall shear stress (WSS) is closely related to the incidence of atherosclerosis [9, 10]. WSS changes can directly affect the morphology and function of the vascular endothelium, and stimulate the migration and proliferation of vascular smooth muscle cells and mononuclear cells [11]. Low or unstably changing WSS is an indicator of the occurrence and development of vascular atherosclerosis [12]. It is increasingly valued as an indicator for evaluating hemodynamic changes that are closely related to atherosclerosis [13, 14].
In this study, we used WSS quantitative analysis software to carry out the WSS quantitative analysis [15]. This method can accurately show WSS changes at different spatial locations, and is a convenient and noninvasive vascular WSS analysis tool. For this research, we established an atherosclerosis model in high-fat-fed rabbits, and measured weekly the common carotid arterial WSS of the experimental group and the control group using WSS quantitative analysis. Then, we computed ROC curves at the points where the pathological histology indicates that the common carotid arteries of the experimental rabbits are at the stage of fatty streaks and of fibrous plaques. In particular, we determined the WSS thresholds for these atherosclerosis stages, and the sensitivity and specificity of these thresholds.
Although it is now widely accepted that low or unstably changing WSS is a high-risk signal in the development of vascular atherosclerosis, there are still only a few studies on the correlation between specific WSS data and atherosclerosis. Hence, this study aims to find the correlation between WSS and atherosclerosis by observing the dynamic changes in the arterial wall pathological histology.
Experimental animals and grouping
In our experimental study, we employed a total of 60 healthy white male New Zealand rabbits (provided by Shanghai Tongji University Animal Laboratory), approximately 10 weeks old and weighing 2–2.5 kg. Using a randomized method, they were divided into two groups: 20 in the normal control group and 40 in the experimental group. The animal experiments were approved by the institutional board of Tongji University.
The Philips IE33 ultrasound system (Philips Medical Systems, Andover, MA, USA) and the L15-7 high-frequency probe (Philips Medical Systems, Andover, MA, USA) were used in our study.
Pharmaceutical application
We applied atropine sulfate injection, 0.5 mg/ml (Shanghai Wellhope Pharmaceutical Co. Ltd., approval no. H31021172), and ketamine hydrochloride injection, 2 ml at 0.1 g (Jiangsu Hengrui Pharmaceutical Co. Ltd., approval no. H32022820).
Experimental rabbits were fed high-fat feedstuff, purchased from Trophy Feedstuff Technology Co. Ltd. (feedstuff code TP2R118), to induce atherosclerosis artificially. The New Zealand rabbits in the experimental group were fed at 50 g/kg/day, once every 12 h, with water available without restraint, for a total of 10 weeks. The temperature of the breeding environment was controlled at around 15 °C, and the rooms were kept ventilated and clean in accordance with animal ethics.
Blood test and rabbit carotid artery specimen collection
We drew blood from the rabbits' ear veins once a week. Then we used the enzymatic method on an automatic biochemical analyzer to measure the serum total cholesterol (TC), triglycerides (TG), high-density lipoprotein (HDL) and low-density lipoprotein (LDL).
Rabbit carotid artery specimens collection
Every two weeks, we sacrificed 8 rabbits from the experimental group and 4 from the control group by the air embolism method. We surgically removed the rabbits' common carotid arteries and marked the proximal ends with thick thread ties. The extracted common carotid arteries were fixed in 10% formaldehyde solution; paraffin sections were then prepared and HE-stained. Next, we performed histological observation under light microscopy and recorded the histological appearance of the arterial wall during the different periods.
Rabbit carotid artery IMT measurements
We anesthetized the experimental rabbits intramuscularly with a mixture of ketamine hydrochloride (22 mg/kg) and atropine sulfate (70 μg/kg) [16]. We used the Philips IE33 ultrasound system with the L15-7 linear array probe in transverse section to scan the rabbit carotid arteries, about 1–2 cm below the mandibular angle plane, in both the experimental group and the control group, and collected 2D images of the common carotid artery. By partial enlargement and gain adjustment, the rabbit carotid artery wall intima-media was clearly shown. Then, with the vascular wall of the carotid artery oriented perpendicular to the beam, we measured the IMT value.
Quantitative analysis of shear stress
Principle of quantitative analysis of wall shear stress
In Fig. 1, part A is a Color Doppler blood flow image (Color Doppler Flow Imaging, CDFI). The magnified image on the left side is the pixel-magnified image of the yellow box area at the vascular wall. Part B illustrates the shear stress calculation principle. In the Doppler blood flow image, the luminal border pixels are shown as grayscale pixels, so the speed of the boundary pixels is zero. The blood flow near the border is shown by colored pixels, whose brightness is proportional to the blood flow velocity. The following definitions are made: $V_{slow}$ is the Doppler velocity, parallel to the wall, of the layer of blood flow pixels closest to the vessel wall; $V_{fast}$ is the Doppler velocity, parallel to the wall, of the next layer of blood flow pixels. The distance between two adjacent layers of pixels is constant and defined as d. In the Doppler blood flow image, the change in axial velocity along the radial direction, du/dr, is based on the equation:
$$\frac{du}{dr} = \frac{V_{fast} - V_{slow}}{d}, \tag{1}$$
which can be used as an approximation of the blood WSS:
$$\tau_{w} = \mu \gamma_{w} = \mu \left. \frac{du}{dr} \right|_{r=\mathrm{wall}}, \tag{2}$$
where μ is the fluid viscosity value.
Principle of quantitative analysis of shear stress based on a ultrasound imaging of an artery, and b the velocity gradient used in the Hagen–Poiseuille formula
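A minimal sketch of Eqs. (1) and (2) applied to a decoded Doppler velocity map; the array layout (rows parallel to the wall, row 0 holding the zero-velocity boundary pixels), the pixel spacing d in cm, velocities in cm/s, and the 0.03 P (3 cP) default viscosity are assumptions consistent with the text, giving the result in dyne/cm²:

```python
import numpy as np

def wall_shear_stress(velocity, d, mu=0.03):
    """tau_w ~ mu * (V_fast - V_slow) / d along the near wall."""
    v_slow = velocity[1, :]         # first flow-pixel layer next to the wall
    v_fast = velocity[2, :]         # next flow-pixel layer toward the lumen
    du_dr = (v_fast - v_slow) / d   # Eq. (1): radial gradient of axial velocity
    return mu * du_dr               # Eq. (2)
```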
Traditionally, the Hagen–Poiseuille formula (Eqs. 3 and 4 below) is applied to determine the WSS by measuring the diameter of the vascular lumen and the blood flow rate, or the maximum flow velocity at the flow observation point. Due to its simplicity, the Hagen–Poiseuille formula can be used clinically, even though it is based on fluid flow under ideal conditions. The human artery does not have a standard circular cross-section, and the WSS on the vascular wall is affected by factors such as blood flow, blood pressure, wall geometry and intima-media thickness. Nevertheless, the WSS values obtained from the Hagen–Poiseuille formula can adequately reflect WSS changes in blood vessels due to flow rate, fluid viscosity and vessel radius [17], as given by the following equations:
$$\tau_{w} = \frac{4\mu Q}{\pi R^{3}}, \tag{3}$$
where $\tau_w$ is the wall shear stress, Q is the flow rate, μ is the fluid viscosity, R is the vessel radius, and $u_M$ is the maximum velocity of the fluid (at the center of the artery).
In terms of the maximum velocity, the wall shear stress is
$$\tau_{w} = \frac{2\mu u_{M}}{R}. \tag{4}$$
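As a sketch in CGS units (viscosity in poise, lengths in cm, flow rate in cm³/s, velocity in cm/s, so the WSS comes out in dyne/cm²), with Eq. (4) in the dimensionally consistent form given above:

```python
from math import pi

def wss_from_flow(Q, R, mu=0.03):
    """Eq. (3): tau_w = 4*mu*Q / (pi * R**3)."""
    return 4.0 * mu * Q / (pi * R ** 3)

def wss_from_peak_velocity(u_max, R, mu=0.03):
    """Eq. (4): tau_w = 2*mu*u_max / R (centerline-velocity form)."""
    return 2.0 * mu * u_max / R
```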
Image acquisition for wall shear stress quantitative analysis
We used the Philips IE33 ultrasound system with the L15-7 linear array probe to scan the rabbit carotid arteries in the longitudinal section, ensuring that the ultrasonic cross-section passed through the central axis of the blood vessels. The acoustic beam was at a 60° angle to the common carotid artery. We adjusted (i) the velocity range so that the lumen was full of blood flow signal without aliasing, and (ii) the sampling frame range, in order to keep the Doppler frame rate between 20 and 30 frames per second. Then we collected the Doppler blood flow images of the common carotid artery, about 1–2 cm below the mandibular angle plane, in both the experimental group and the control group. The images were saved in DICOM format. Finally, we employed the software for quantitative analysis of shear stress, as follows:
1. Obtain the Doppler speed range.
2. Adjust the angle of the image display. Because it is difficult to keep the blood vessel horizontal in the ultrasound images, and to facilitate calculating the axial velocity gradient perpendicular to the blood flow, the flow in the blood vessels is rotated into a horizontal position in the image.
3. Select the area of interest used for determining the wall shear stress.
4. Weed out the grayscale pixels used for showing the vascular wall.
5. Transform the color pixels that represent Doppler blood flow velocity into velocity data, according to the maximum speed and the color indication range.
6. Set the blood viscosity to 3 cP.
7. Draw the shear stress spatial distribution.
8. Show the shear stress distribution, as required.
9. Analyze the shear stress data.
10. Save the result images in digital image format and the data values in a database.
The SAS 9.3 software was used for statistical analysis. Measurement data are described as \(\bar{x} \pm s\). Comparisons between groups used the t-test. Diagnostic efficiency is expressed by sensitivity and specificity. We built ROC curves of the rabbit common carotid arterial WSS and calculated the area under the curve to evaluate the diagnosis. P < 0.05 was considered statistically significant.
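For illustration, the ROC analysis can be reproduced with scikit-learn; a sketch assuming binary histology labels (1 = lesion stage reached, 0 = not) and per-vessel mean WSS values, with the WSS negated so that lower shear stress scores as more diseased:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def wss_roc(labels, wss_values):
    """ROC points, WSS thresholds and AUC for a 'WSS below threshold' test."""
    scores = -np.asarray(wss_values, dtype=float)  # low WSS -> high score
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return fpr, tpr, -thresholds, roc_auc_score(labels, scores)
```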
Histopathologic examination
Rabbit carotid artery histology of the control group
Based on observation, the structures of arterial intima, tunica media and tunica externa are complete. The internal elastic membrane is continuous. No intimal thickening and no foam cells beneath intima are seen (Fig. 2a, b).
Rabbit carotid artery histology of the control group with microscopic low power lens (a) and high power lens (b). Rabbit carotid artery histology of the experimental group at 2 weeks (c, d), 4 weeks (e, f), 6 weeks (g, h), 8 weeks (i, j), and 10 weeks (k, l)
Carotid artery pathology of the experimental rabbits
For rabbit carotid artery histology of the experimental group (after 2 weeks), the structures of arterial intima, tunica media and tunica externa are complete. The internal elastic membrane is continuous. No intimal thickening and no foam cells beneath intima are seen (Fig. 2c, d).
For the rabbit carotid artery histology of the experimental group (after 4 weeks), the structures of arterial intima, tunica media and tunica externa are complete. The internal elastic membrane is continuous. No intimal thickening and no foam cells beneath intima are seen (Fig. 2e, f).
For the rabbit common carotid artery histology of the experimental group (after 6 weeks), foam cells are gathered under the endothelial cells of the arterial intimal surface, bulging into the lumen, and some fatty streaks have formed. There is more extracellular matrix at the edges. Smooth muscle cell and collagen fiber hyperplasia are not obvious. The fatty streaks period is shown in Fig. 2g, h.
For the carotid artery pathology of the experimental rabbits (after 8 weeks), we can see a large number of foam cells gathered under the endothelial cells, and plaques are formed. The arterial wall thickness is uneven, and some parts of the wall have become thicker. Smooth muscle cells are seen to proliferate. Figure 2i, j shows the fibrous plaque formation period.
For the carotid artery pathology of the experimental rabbits (after 10 weeks), a large number of foam cells are seen to gather under the endothelial cells and, subsequently, plaques are formed. The arterial wall thickness is uneven and some parts of the wall have become thicker. We can also see smooth muscle cell and collagen fiber hyperplasia in the pathology. Figure 2k, l shows the dense fibrous plaque formation period.
Blood lipid test and analysis
Quantitative value of rabbit serum total cholesterol
There are statistically significant differences (t-test) between the two groups from the first week: the serum total cholesterol level of the experimental rabbits is higher than that of the control group from the first week onwards (as shown in Table 1).
Table 1 Serum total cholesterol test results between the experimental group and the control group (mmol/L)
Rabbit serum low density lipoprotein
There are statistically significant differences (t-test) between the two groups from the first week: the serum low-density lipoprotein level of the experimental rabbits is higher than that of the control group from the first week onwards (as shown in Table 2).
Table 2 Serum low density lipoprotein test results between the experimental group and the control group (mmol/L)
Testing of common carotid artery intima-media thickness (IMT) using ultrasound
We observe statistically significant differences (t-test) between the two groups from the 5th week onwards, which indicates that the carotid artery IMT of the experimental group is higher than that of the control group from the 5th week (as shown in Table 3).
Table 3 Rabbit carotid artery intima-media (IMT) test results between the experimental group and the control group (mm)
WSS quantitative analysis of the common carotid arteries
There are statistically significant differences (t-test) between the two groups from the 1st week onwards: the rabbit carotid arterial WSS of the experimental group is lower than that of the control group from the 1st week (as shown in Table 4).
Table 4 Shear stress test results between the experimental group and the control group (dyne/cm2)
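As a hedged sketch of the group comparisons behind Tables 1–4, the snippet below runs a two-sample (Welch) t-test on synthetic values; the group sizes and numbers are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=1.8, scale=0.3, size=10)        # e.g. control cholesterol (mmol/L)
experimental = rng.normal(loc=14.0, scale=2.5, size=10)  # e.g. high-fat-diet group

# Welch's t-test (no equal-variance assumption), as one would run per week
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")
```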
Column diagram analysis of the common carotid arterial WSS
Figure 3 shows that the carotid arterial WSS of the experimental rabbits changes with time, the change becoming more pronounced over the weeks, while the carotid arterial WSS of the control group shows no obvious change.
Dynamic variation diagram of the rabbit carotid arterial WSS value between the experimental and the control groups
Three-dimensional spatial distribution map of the carotid arterial WSS
Figure 4 depicts the 3D spatial distribution map of the rabbit common carotid arterial WSS. Therein, the z-axis represents the wall shear stress, and the highest peak is the maximum shear stress. The higher the degree of atherosclerosis, the lower the WSS value. Figure 4 shows the three-dimensional surface plots at 2, 4, 6, 8, and 10 weeks.
Rabbit common carotid arterial WSS 3D spatial distribution map. a Rabbit carotid arterial WSS of the control group with range at 250; the carotid arterial WSS of experimental rabbits at 2 weeks (b), 4 weeks (c), 6 weeks (d), 8 weeks (e), and 10 weeks (f)
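For readers who want to draw a surface in the style of Fig. 4, here is a toy matplotlib sketch; the WSS field below is a made-up function standing in for measured data.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2, 60)              # axial position along the vessel (arbitrary units)
theta = np.linspace(0, 2 * np.pi, 60)  # circumferential angle
X, T = np.meshgrid(x, theta)
WSS = 15 + 5 * np.sin(3 * X) * np.cos(T)  # synthetic WSS field (dyne/cm^2)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, T, WSS, cmap="viridis")
ax.set_xlabel("axial position")
ax.set_ylabel("angle (rad)")
ax.set_zlabel("WSS (dyne/cm^2)")
plt.show()
```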
Diagnostic efficiency of WSS
When the mean value of shear stress is taken as 1.443 dyne/cm2, the sensitivity for rabbit common carotid atherosclerotic fatty streaks is 75% and the specificity is 96.9%, with an area under the receiver operator characteristic (ROC) curve of 0.9232 (Fig. 5). When the mean value of shear stress is taken as 1.198 dyne/cm2, the sensitivity is 89.8% and the specificity is 81.3%, with an area under the ROC curve of 0.9283 (Fig. 6). In summary, the ROC analysis favors the 1.198 dyne/cm2 cut-off (sensitivity 89.8%, specificity 81.3%, AUC 0.9283; Fig. 6), so the decrease of WSS to 1.198 dyne/cm2 can be used as an indicator that rabbit common carotid artery atherosclerosis has entered the fatty-streak period.
ROC curve of rabbit common carotid WSS based on a mean shear stress value of 1.443 dyne/cm2, demonstrating that the area under the curve is approximately 0.9232
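The cut-off analysis can be mimicked as follows on simulated data; the labels, WSS values and resulting numbers are synthetic, so they will not reproduce the study's figures.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = np.concatenate([np.ones(50), np.zeros(50)])  # 1 = fatty-streak vessel (synthetic)
wss = np.concatenate([rng.normal(1.0, 0.3, 50), rng.normal(2.0, 0.4, 50)])

threshold = 1.198  # dyne/cm^2, the cut-off discussed above
pred_pos = wss <= threshold                      # low WSS predicts disease
sensitivity = (pred_pos & (y == 1)).sum() / (y == 1).sum()
specificity = (~pred_pos & (y == 0)).sum() / (y == 0).sum()
auc = roc_auc_score(y, -wss)                     # lower WSS = more suspicious
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, AUC={auc:.3f}")
```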
Atherosclerosis widely affects the main and medium-sized arteries. The interaction between hemodynamics and the endothelium is an important determinant of cardiovascular function in human health, survival and morbidity. In hemodynamics, shear stress is the tangential stress acting on the endothelial surface. The endothelium is critical to cardiovascular health, as this layer of cells maintains anticoagulant properties and enables physiological control of vasoregulation and modulation of vascular permeability. Lipid (mainly cholesterol) is deposited in the intima of the main and medium-sized arteries; smooth muscle cell and collagen fiber hyperplasia and atheromatous plaque formation then cause different degrees of luminal stenosis. Lesion formation is a slow process, and its early stage is often neglected. Clinically, ultrasound is commonly used to measure arterial IMT in order to judge whether atherosclerosis is present and then choose the appropriate clinical intervention, but by the time the results become apparent the arterial injury often cannot be reversed [18]. IMT is therefore of only limited value for the early diagnosis of atherosclerosis and for predicting cardiovascular and cerebrovascular diseases.
In the process of atherosclerosis formation, hemodynamic alterations and pathological remodeling occur in the arterial blood vessels. WSS is one of the important physical factors affecting the occurrence and development of atherosclerosis [19]. As a stress acting directly on the arterial wall, it directly affects the arterial smooth muscle cells and the media, and thus indirectly affects the other cellular components and structures of the arterial wall.
In most blood vessels, blood flow is laminar. In theory, the velocity of fluid flow differs across the vessel cross-section: the velocity of the outermost layer of blood next to the vascular wall is zero, while the velocity at the center of the cross-section is highest. This velocity gradient within the vessel arises from the friction created by the relative sliding between adjacent flowing layers. The tangential friction per unit area between two adjacent flowing layers is the well-established blood flow parameter known as shear stress (SS) [20].
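Written out, the standard Newtonian relation behind this paragraph, together with the Poiseuille wall value used by the Hagen–Poiseuille approach below, reads (textbook forms, not notation taken from this study):

$$ \tau = \mu \frac{d u}{d r}, \qquad \tau_w = \frac{4 \mu Q}{\pi R^3}, $$

where $\tau$ is the shear stress, $\mu$ the dynamic viscosity of blood, $d u / d r$ the radial velocity gradient, and $\tau_w$ the wall shear stress of fully developed laminar flow with volumetric flow rate $Q$ in a vessel of lumen radius $R$.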
Studies have confirmed that a decrease in arterial WSS is associated with an increase in blood lipids [21], and is closely related to an increase in low density lipoprotein. Low WSS causes low density lipoprotein to accumulate under the arterial endothelium and induces the formation of atherosclerosis. Figure 7 sums up how low WSS leads to the formation of atherosclerotic lesions and flow separations, resulting in pathologically disturbed flow [22]. WSS differs between different parts of the same blood vessel, and atherosclerotic plaques are most readily seen at the positions with the lowest WSS, so the decrease of WSS is a key step in causing atherosclerosis. Through quantitative analysis of blood flow shear stress, this study has also shown that the rabbit carotid arterial WSS values of the experimental group decrease as the blood lipids rise, which is statistically different from the control group (see Tables 1, 2, 3, 4).
Flow separation and reversal in plaque vessel leads to low shear stress and further promotes formation and rupture of atherosclerotic plaque
In this study, it is shown how WSS quantitative analysis can quickly assess the WSS of arterial blood flow in the course of atherosclerosis development. The Hagen–Poiseuille formula is not restricted by factors such as blood flow, blood pressure, tube wall geometry, intima-media thickness, etc., and we can employ it to quantitatively analyze the WSS at any point. Finally, the method is simple, fast and gives high-accuracy prediction.
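As a minimal sketch of such a Hagen–Poiseuille-based estimate (the formula is the standard one; the input values below are illustrative, not measurements from this study):

```python
# Hagen-Poiseuille wall shear stress: tau_w = 4*mu*Q / (pi * R^3), CGS units.
import math

def wall_shear_stress(mu_poise: float, flow_cm3_s: float, radius_cm: float) -> float:
    """Return wall shear stress in dyne/cm^2 (poise * cm^3/s / cm^3)."""
    return 4.0 * mu_poise * flow_cm3_s / (math.pi * radius_cm ** 3)

# Illustrative values: blood viscosity ~0.035 poise, flow 1.5 cm^3/s, radius 0.15 cm
print(f"WSS = {wall_shear_stress(0.035, 1.5, 0.15):.1f} dyne/cm^2")
```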
During the formation of atherosclerotic fatty streaks and early fibrous plaques, there are no clear clinical symptoms of stenosis caused by atherosclerosis, even though this is precisely the preclinical stage of the disease. At this stage, WSS is already reduced, with statistical significance compared with the control group in the rabbit model. This shows that WSS is closely related to atherosclerosis and can be used as one of the effective parameters for tracking the occurrence and development of atherosclerosis.
Zhang P, Dong G, Sun B, Zhang L, Chen X, Ma N, Yu F, Guo H, Huang H, Lee YL. Long-term exposure to ambient air pollution and mortality due to cardiovascular disease and cerebrovascular disease in Shenyang, China. PLoS ONE. 2011;6(6):e20827.
Mas JL. Prevention of cerebral infarct caused by atherosclerosis. Arch Des Maladies Du Coeur Et Des Vaisseaux. 1998;91 Spec No 5:65–73.
Sarkar RN, Bhattacharya R, Bhattacharyya K, Paul R, Mullick OS. Adult onset still's disease with persistent skin lesions complicated by secondary hemophagocytic lymphohistiocytosis. Int J Rheum Dis. 2014;17(1):118–21.
Karim R, Hodis HN, Detrano R, Liu CR, Liu CH, Mack WJ. Relation of framingham risk score to subclinical atherosclerosis evaluated across three arterial sites. Am J Cardiol. 2008;102(7):825–30.
Frerix M, Stegbauer J, Kreuter A, Weiner SM. Atherosclerotic plaques occur in absence of intima-media thickening in both systemic sclerosis and systemic lupus erythematosus: a duplexsonography study of carotid and femoral arteries and follow-up for cardiovascular events. Arthritis Res Ther. 2014;16(1):1–17.
Molinari F, Zeng G, Suri JS. A state of the art review on intima-media thickness (IMT) measurement and wall segmentation techniques for carotid ultrasound. Comput Methods Programs Biomed. 2010;100(3):201–21.
Diener HC, Sacco R, Yusuf S. Cerebrovascular Diseases. Cerebrovasc Dis. 2007;23(5–6):368–80.
Bassetti C, Aldrich MS. Sleep apnea in acute cerebrovascular diseases: final report on 128 patients. Sleep. 1999;22(2):217.
Roger VL, Weston SA, Killian JM, Pfeifer EA, Belau PG, Kottke TE, Frye RL, Bailey KR, Jacobsen SJ. Time trends in the prevalence of atherosclerosis: a population-based autopsy study. Am J Med. 2001;110(4):267–73.
Targonski P, Jacobsen SJ, Weston SA, Leibson CL, Pfeifer E, Nemetz P, Roger VL. Referral to autopsy: effect of antemortem cardiovascular disease: a population-based study in Olmsted County, Minnesota. Ann Epidemiol. 2001;11(4):264–70.
Böyum A. Isolation of mononuclear cells and granulocytes from human blood. Isolation of monuclear cells by one centrifugation, and of granulocytes by combining centrifugation and sedimentation at 1 g. Scand J Clin Lab Investig Supplement. 1968;97(10):77.
Ross R. Atherosclerosis—an inflammatory disease. N Engl J Med. 1999;340(2):115.
Hays AG, Kelle S, Hirsch GA, Soleimanifard S, Yu J, Agarwal HK, Gerstenblith G, Schär M, Stuber M, Weiss RG. Regional coronary endothelial function is closely related to local early coronary atherosclerosis in patients with mild coronary artery disease: pilot study. Circ Cardiovasc Imaging. 2012;5(3):341–8.
Senior RM, Campbell EJ, Landis JA, Cox FR, Kuhn C, Koren HS. Elastase of U-937 monocytelike cells: comparisons with elastases derived from human monocytes and neutrophils and murine macrophagelike cells. J Clin Investig. 1982;69(2):384–93.
Zhang L, Chen XX, Zhong XT, Fu Y. Quantitative blood flow shear stress analysis software in evaluation on carotid atherosclerosis. Chin J Med Imaging Technol. 2014;30(2):214–8.
Vidal CJ. Ketamine hydrochloride. Revista Española De Anestesiología Y Reanimación. 1970;17(2):215.
Uramoto H, Yamada S, Tanaka F. Angiogenesis of lung cancer utilizes existing blood vessels rather than developing new vessels using signals from carcinogenesis. Anticancer Res. 2013;33(5):1913–6.
Cheng RQ, Shen R, Chen C, Shen DZ, Lou DF, Yue-Hua LI, Wen-Juan SU, Jia CY. The correlation research on Ba PWV and IMT of patients with carotid atherosclerosis. Zhengzhou: Henan Traditional Chinese Medicine; 2015.
Tascilar N, Dursun A, Mungan G, Sumbuloglu V, Ekem S, Bozdogan S, Baris S, Aciman E, Cabuk F. Relationship of apoE polymorphism with lipoprotein(a), apoA, apoB and lipid levels in atherosclerotic infarct. J Neurol Sci. 2009;277(2):17–21.
Urbich C, Mallat Z, Tedgui A, Clauss M, Zeiher AM, Dimmeler S. Upregulation of TRAF-3 by shear stress blocks CD40-mediated endothelial activation. J Clin Investig. 2001;108(10):1451.
Zhou L, Liu H, Wen X, Peng Y, Tian Y, Zhao L. Effects of metformin on blood pressure in nondiabetic patients: a meta-analysis of randomized controlled trials. J Hypertens. 2017;35:1.
Davies PF. Hemodynamic shear stress and the endothelium in cardiovascular pathophysiology. Nat Clin Prac Cardiovasc Med. 2009;6(1):16–26.
Performed the literature review: JYG; Carried out experiments: BZ, JYG; Gave advice for setup: MQ, LLN, HZ; Checked the validity of data: LLN, HZ; Supported the experiments financially: BZ; Checked the paper: DG, LLN. All authors read and approved the final manuscript.
This research study was funded by the Academic Leaders Training Program of Pudong Health Bureau of Shanghai (Grant No. PWRd2013-02), and Pudong New Area Committee of Science and Technology (PKJ2015-Y17), and the National Natural Science Foundation of China (Grant Nos. 81401428, 81571693).
On behalf of all authors, the corresponding author states that there are no competing interests.
Department of Ultrasound in Medicine, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China
Bo Zhang & Junyi Gu
Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
Ming Qian, Lili Niu & Hui Zhou
University 2020 Foundation, Northborough, MA, 01532, USA
Dhanjoo Ghista
Correspondence to Bo Zhang.
Zhang, B., Gu, J., Qian, M. et al. Correlation between quantitative analysis of wall shear stress and intima-media thickness in atherosclerosis development in carotid arteries. BioMed Eng OnLine 16, 137 (2017) doi:10.1186/s12938-017-0425-9
Fibrous plaques
Wall shear stress
Intima-media thickness
Receiver operator characteristic
BioMedical Engineering and the Heart
January 2021, 17(1): 279-297. doi: 10.3934/jimo.2019111
Mean-field analysis of a scaling MAC radio protocol
Illés Horváth 1, Kristóf Attila Horváth 2, Péter Kovács 1 and Miklós Telek 3
MTA-BME Information Systems Research Group, H-1117 Budapest, Magyar Tudosok krt. 2
Budapest University of Technology and Economics, Department of Networked Systems and Services, H-1117 Budapest, Magyar Tudosok krt. 2
MTA-BME Information Systems Research Group, Budapest University of Technology and Economics, Department of Networked Systems and Services, H-1117 Budapest, Magyar Tudosok krt. 2
Received November 2018; Revised May 2019; Early access September 2019; Published January 2021
Fund Project: This work is supported by the OTKA 123914 project and the TUDFO/51757/2019-ITM grants
Figures (13) / Tables (1)
We examine the transient behavior of a positioning system with a large number of tags trying to connect to the infrastructure with an exponential backoff policy in case of unsuccessful connection. Using a classic mean-field approach, we derive a system of differential equations whose solution approximates the original process. Analysis of the solution shows that both the solution and the original system exhibit an unusual log-periodic behavior in the mean-field limit, along with other interesting patterns of behavior. We also perform numerical optimization for the backoff policy.
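As a concrete (and deliberately crude) illustration of the kind of system being analysed, the toy simulation below connects tags under binary exponential backoff in a single shared slot, succeeding only when exactly one tag transmits. It is a caricature for intuition, not the paper's model or its mean-field limit, and every parameter is an assumption.

```python
import random

def simulate(n_tags: int = 256, gamma: float = 2.0, max_slots: int = 100_000,
             seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    next_attempt = [rng.randrange(1, 11) for _ in range(n_tags)]  # first try
    window = [10] * n_tags                                        # backoff windows
    connect_time = [0] * n_tags
    pending = set(range(n_tags))
    for slot in range(1, max_slots):
        transmitters = [i for i in pending if next_attempt[i] == slot]
        if len(transmitters) == 1:             # success: exactly one transmitter
            connect_time[transmitters[0]] = slot
            pending.discard(transmitters[0])
        else:                                  # collision: back off exponentially
            for i in transmitters:
                window[i] = int(window[i] * gamma)
                next_attempt[i] = slot + rng.randrange(1, window[i] + 1)
        if not pending:
            break
    return connect_time  # tags left at 0 never connected within max_slots

times = simulate()
print("mean connection slot:", sum(times) / len(times))
```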
Keywords: Population model, density dependent behavior, asymptotic limit, log-periodicity, positioning, ultra wideband, MAC, random access channel, exponential backoff.
Mathematics Subject Classification: Primary: 68M20, 60J20; Secondary: 90B20.
Citation: Illés Horváth, Kristóf Attila Horváth, Péter Kovács, Miklós Telek. Mean-field analysis of a scaling MAC radio protocol. Journal of Industrial & Management Optimization, 2021, 17 (1) : 279-297. doi: 10.3934/jimo.2019111
Figure 1. State transitions of a single user; $ p_i $ are constant, $ c_i $ depend on other users
Figure 2. Convergence of $ w_0(t) $ and $ w_1(t) $ when $ \alpha $ is fixed and $ L\to\infty $
Figure 3. Simulation for $ N_{L+i}(Nt)/N $ versus numerical solution for $ z_i(t) $ for $ i = 0, 1, 2 $ (parameters are $ N = 2^{10}, \gamma = 2, L = 10, \alpha = 0 $)
Figure 4. Early rapid transition: $ z_i(t) $ for values of $ i $ considerably smaller than 0 ($ \alpha = 0 $ and $ \gamma = 2 $)
Figure 5. The functions $ z(\gamma, \alpha, t) $ for $ \gamma = 20 $ and $ \alpha = 0 $ (thick line), $ 1/10, \dots, 9/10 $
Figure 6. The functions $ z(\gamma, \alpha, t) $ for $ \gamma = 2 $ and $ \alpha = 0, 1/10, \dots, 9/10 $
Figure 7. The values $ z_i(2, \alpha, 1) $ for $ \alpha = 0, 1/20, \dots, 19/20 $
Figure 8. The values $ z_i(20, \alpha, 1) $ for $ \alpha = 0, 1/20, \dots, 19/20 $
Figure 9. Mean of the scaled connection time for $ \gamma=20 $
Figure 10. Mean of the scaled connection time for $ \gamma=2 $
Figure 11. Mean of the scaled connection time as a function of $ \gamma $
Figure 12. Simulation for $1-\bar N_0(Nt)/N $ (red line) versus $\bar z(t) $ (dashed blue line); parameters are $ N=2^{10},\gamma=2,L=10,\alpha=0,t_0=0.5$
Figure 13. $z(t)$ (no switching, black line) versus $\bar z(t) $ (switching at time $t_0=0.72 $, optimal for $m_z $, dotted red line) versus $ \bar z'(t)$ (switching at time $ t_0=0.39$, optimal for the 99.9% quantile, dashed blue line). Parameters are $\gamma=2,L=10,\alpha=0 $
Table 1. Optimization of the switching time for a prescribed quantile ($ \alpha = 0 $)

| $\gamma$ | switching time $t_0$ | mean time to connect | 0.9 quantile | 0.95 quantile | 0.99 quantile | 0.999 quantile |
| --- | --- | --- | --- | --- | --- | --- |
| 2 | $\infty$ | 2.722 | 5.306 | 7.171 | 12.91 | 25.47 |
| 2 | 0.718 | 2.198 | 3.738 | 4.522 | 6.791 | 11.57 |
| 2 | 0.534 | 2.321 | 3.732 | 4.344 | 6.089 | 9.730 |
| 1.65 | $\infty$ | 2.628 | 4.746 | 6.050 | 9.776 | 17.20 |
| 1.65 | 1.008 | 2.321 | 3.782 | 4.439 | 6.213 | 9.634 |
nLab > Latest Changes: weak homotopy equivalence
Comment time: Sep 3rd 2012
(edited Sep 3rd 2012)
I have been trying to polish weak homotopy equivalence by adding formal Definition/Proposition-environments. Also expanded the Idea-section and edited here and there.
The following remark used to be in the entry, but I can't see right now how it makes sense. If I am mixed up, please clarify and I'll re-insert it into the entry:
It is tempting to try to restate the definition as "$f$ induces an isomorphism $f_* \colon \pi_n(X,x) \to \pi_n(Y,f(x))$ for all $x \in X$ and $n \geq 0$," but this is not literally correct; such a definition would be vacuously satisfied whenever $X$ is empty, without regard to what $Y$ might be. If you really want to go this way, therefore, you still must add a clause for $\Pi_{-1}$ (the truth value that states whether a space is inhabited), so the definition is no shorter.
Then, there used to be the following discussion box, which hereby I am moving from there to here. I have added a brief remark on how weak homotopy equivalences are homotopy equivalences after resolution. But maybe it deserves to be further expanded.
[ begin forwarded discussion ]
Is there any reason for calling these 'weak' homotopy equivalences rather than merely homotopy equivalences? —Toby
Mike: By "these" I assume you mean weak homotopy equivalences of simplicial sets, categories, etc. My answer is yes. One reason is that in some cases, such as simplicial sets, symmetric sets, and probably cubical sets, there is also a notion of "homotopy equivalence" from which this notion needs to be distinguished. A simplicial homotopy equivalence, for instance, is a simplicial map $f \colon X \to Y$ with an inverse $g \colon Y \to X$ and simplicial homotopies $X \times \Delta^1 \to X$ and $Y \times \Delta^1 \to Y$ relating $f g$ and $g f$ to identities.
Toby: Interesting. I would have guessed that any weak homotopy equivalence could be strengthened to a homotopy equivalence in this sense, but maybe not.
Tim: I think the initial paragraph is somehow back to front from a philosophical point of view, as well as a historical one. Homotopy theory grew out of studying spaces up to homotopy equivalence or rather from studying paths in spaces (and integrating along them). This leads to some invariants such as homology and the fundamental group. Weak homotopy type (and it might be interesting to find out when this term was first used) is the result and then around the 1950s with the development of Whitehead's approach (CW complexes etc.) the distinction became more interesting between the two concepts.
I like to think of 'weak homotopy equivalence' as being 'observational', i.e. $f$ is a w.h.e. if when we look at it through the observations that we can make of it, it looks to be an 'equivalence'. It is 'top down'. 'Homotopy equivalence' is more 'constructive' and 'bottom up'. The idea of simple homotopy theory takes this to a more extreme case, (which is related to Toby's query and to the advent of K-theory).
With the constructive logical side of the nLab becoming important is there some point in looking at this 'constructive' homotopy theory as a counter balance to the model category approach which can tend to be very demanding on the set theory it calls on?
On a niggly point, the homotopy group of a space is only defined if the space is non-empty so one of the statements in this entry is pedantically a bit dodgy!
Toby: I would say that it has a homotopy group at every point, and this is true even if it is empty. You can only pretend that it has a homotopy group, period, if it's inhabited and path-connected.
Anyway, how do you like the introduction now? You could add a more extensive History section too, if you want.
Tim: It looks fine. I would add some more punctuation but I'm a punctuation fanatic!!!
With all these entries I suspect that in a few months time we will feel they need some tender loving care, a bit of Bonsai pruning!! For the moment lets get on to more interesting things.
Do you think some light treatment of simple homotopy theory might be useful, say at a historical level?
[ end forwarded discussion ]
Author: Todd_Trimble
The following remark used to be in the entry, but I can't see right now how it makes sense.
The rectification seems more or less obvious: a map $f \colon X \to Y$ is a weak homotopy equivalence if $\pi_0(f) \colon \pi_0(X) \to \pi_0(Y)$ is an isomorphism and if $\pi_n(f) \colon \pi_n(X, x) \to \pi_n(Y, f(x))$ is an isomorphism for all $x \in X$ and $n \geq 1$. In other words, the mistake was to consider $\pi_0(X, x)$ in case $X$ is empty.
I have given the discussion of homotopy types that used to be there its own subsection: Relation to homotopy types, and then polished and expanded that slightly.
I have added a paragraph on Examples of non-reversible weak homotopy equivalences.
(never mind)
I didn't see your original message, but here is a guess: Is maybe "reversible" too misleading, and too easily misread as "invertible"?
No, I erased my message (where I thought I had a simpler example than the pseudocircle) because it was in mathematical error! But since you ask, I think "reversible" gets the correct idea across.
Comment time: Sep 4th 2012
Author: TobyBartels
If it really doesn't make sense to you, then I could explain; but if you just want to know why it's in there at all, it's because somebody (OK, it was me) was tempted to phrase the definition that way.
There's actually a hierarchy of ways to phrase the definition:
- $\Pi_{-1}(f) \colon \Pi_{-1}(X) \to \Pi_{-1}(Y)$ is an equivalence of $(-1)$-groupoids, and $\pi_n(f,x) \colon \pi_n(X,x) \to \pi_n(Y,f(x))$ is an isomorphism of $n$-tuply groupal sets for all points $x$ of $X$ and all natural numbers $n \geq 0$;
- $\Pi_0(f) \colon \Pi_0(X) \to \Pi_0(Y)$ is an equivalence of $0$-groupoids, and $\pi_n(f,x) \colon \pi_n(X,x) \to \pi_n(Y,f(x))$ is an isomorphism of $n$-tuply groupal sets for all points $x$ of $X$ and all natural numbers $n \geq 1$;
- $\Pi_1(f) \colon \Pi_1(X) \to \Pi_1(Y)$ is an equivalence of $1$-groupoids, and $\pi_n(f,x) \colon \pi_n(X,x) \to \pi_n(Y,f(x))$ is an isomorphism of $n$-tuply groupal sets for all points $x$ of $X$ and all natural numbers $n \geq 2$;
- etc …;
- $\Pi_\infty(f) \colon \Pi_\infty(X) \to \Pi_\infty(Y)$ is an equivalence of $\infty$-groupoids, and [vacuous].
Starting too near the end begs the question when it comes to the homotopy hypothesis, while starting too near the beginning is unfamiliar. (I made the mistake of thinking that starting at the very beginning would simplify the phrasing, but it doesn't.)
If you don't remember how $\pi_n(X,x)$ is an $n$-tuply groupal set for any natural number $n$, this is covered at homotopy group (third Idea paragraph, and Examples in Low Dimensions).
(edited Sep 4th 2012)
Okay, so that first clause in your item (1) was missing!
If you don't remember how $\pi_n(X,x)$ is an $n$-tuply groupal set for any natural number $n$,
If I didn't remember that, I should go home and do something else than I am doing, such as Macrame maybe. ;-)
Well, sometimes people forget that for $n = 0$.
Saying "isomorphism of $n$-tuply groupal sets" is not really necessary, though, since $\pi_n(f,x)$ is automatically a morphism of $n$-tuply groupal sets, so its being an isomorphism of such is equivalent to its being merely a bijection.
It seems to me that the funny thing is that you can't start any lower:
$\Pi_{-2}(f) \colon \Pi_{-2}(X) \to \Pi_{-2}(Y)$ is an equivalence of $(-2)$-groupoids, and $\pi_n(f,x) \colon \pi_n(X,x) \to \pi_n(Y,f(x))$ is an isomorphism of $n$-tuply groupal sets for all points $x$ of $X$ and all integers $n \geq -1$
is not a correct definition. Maybe this hierarchy should be discussed in the entry somewhere.
Author: Karol Szumiło
Maybe this is a good opportunity to advertise my favorite definition of a weak homotopy equivalence. A map of spaces $f : X \to Y$ is a weak homotopy equivalence if for every natural number $m$ and a diagram

$$ \begin{matrix} \partial I^m & \overset{u}{\longrightarrow} & X \\ \downarrow & & \downarrow f \\ I^m & \underset{v}{\longrightarrow} & Y \end{matrix} $$

there exist maps $w : I^m \to X$ and $H : I^m \times I \to Y$ such that $w | \partial I^m = u$ and $H$ is a homotopy from $f w$ to $v$ over $\partial I^m$. We obtain a definition of $k$-equivalence simply by restricting this condition to $m \le k$.

This definition is completely basepoint-free and it doesn't refer to homotopy groups. In my experience it has the advantage that many basic properties of weak homotopy equivalences (and $k$-equivalences) can be verified using this definition without ever mentioning homotopy groups and the proofs tend to be neater than the ones that use the classical definition directly.
Author: DavidRoberts
Ah, that's nice. And it is in a vague sense dual to a trivial Dold fibration, where the homotopy is in the upper triangle, not the lower triangle.
Hi Karol,
that characterization is used quite widely at least in model category theory circles. I guess it was introduced in
Jardine, Simplicial Presheaves, Journal of Pure and Applied Algebra 47, 1987, no.1, 35-87.
I have added a note under Equivalent characterizations.
its being an isomorphism of such is equivalent to its being merely a bijection
Sure, but why complicate matters by bringing in the category of sets? If one is having trouble with the proposition that a certain group homomorphism is an isomorphism, then a handy theorem to use may be that it is so iff the underlying function is a bijection, but that fact is not the point.
It seems to me that the funny thing is that you can't start any lower
That's because the concept of $(-1)$-tuply groupal set doesn't make sense, as far as I know. (At least, it's not in k-tuply monoidal n-category.) Thus, there is such a thing as $\Pi_{-1}(X)$ (which is a $(-1)$-groupoid), but no such thing as $\pi_{-1}(X,x)$ (which would be a $(-1)$-tuply groupal set).
but why complicate matters by bringing in the category of sets?
Only at the nForum would you hear a complaint that bringing in the category of sets complicates matters vis a vis the category of "$n$-tuply groupal sets". (-:
@13 in the case of topological spaces is the "technical lemma" in section 9.6 of A concise course in algebraic topology. It, or something closely related to it, is also called the HELP lemma and apparently dates back at least to 1973, Boardman & Vogt "Homotopy invariant structures on topological spaces". I agree that it is quite nice; it's also quite similar to the solution-set condition in Smith's theorem for generating combinatorial model category structures.
Thanks for these pointers. I have added them here to the entry.
@14 They're not quite dual. The notion of lifting property is self-dual, but here it splits into two distinct notions which are dual to each other: "lifting property up to under-homotopy" and "lifting property up to over-homotopy". Morphisms characterized by such properties behave differently depending on whether they're on the side of a strictly commuting triangle or on the side of a homotopy commuting one. The former are somewhat rigid and behave like (co)fibrations (for example Dold fibrations), the latter tend to be homotopy invariant and behave more like weak equivalences. I would be interested in learning about general theory of such lifting properties and classes of maps characterized by them, but I'm not aware of any such results.
@18 You're right. The funny thing is that when I was first reading A concise course in algebraic topology a few years ago I spent a lot of time trying to understand what this lemma is about and finally gave up. Only now I realize that this is exactly the thing I learned to appreciate in the meantime. The problem is that this "technical lemma" is awfully overloaded, in my mind this is five or six separate lemmas.
The problem is that this "technical lemma" is awfully overloaded, in my mind this is five or six separate lemmas.
To me it is clearly a single lemma, but stated in a very confusing way because of the insistence on using one diagram that strictly commutes rather than talking about homotopies that live in squares and triangles.
I would definitely have an easier time understanding this lemma if it were stated using homotopy commutative diagrams, but I do think that its proof mixes up a few different lines of reasoning and would benefit from being split up into a few lemmas.
would benefit from being split up into a few lemmas.
You could have a go at it in the nLab entry. Would do the world a service, I am sure.
Comment time: Jan 8th 2019
Author: Dmitri Pavlov
Added a remark about several other definitions of weak homotopy equivalences.
Comment time: Oct 31st 2021
added this reference:
Takao Matumoto, Norihiko Minami, Masahiro Sugawara, On the set of free homotopy classes and Brown's construction, Hiroshima Math. J. 14(2): 359-369 (1984) (doi:10.32917/hmj/1206133043)
I have added (here) the statement of Theorem 2 in Matumoto, Minami & Sugawara 1984, detecting weak homotopy equivalences on free homotopy sets.
They require their last condition for wedge sums of circles indexed by any set, which seems a weirdly strong condition. Inspection of the proof shows, unless I am missing something, that the actual index set being used is that underlying the fundamental group of the codomain space. So I have used this weaker condition in the proposition.
Author: nLab edit announcer
Add counterexample that it is not enough to require each pair of homotopy groups be isomorphic.
Since the counterexample is about the distinction between $\cong$ being used for "the proposition that these two groups are isomorphic" and "the proposition that the implicitly specified map is an isomorphism", I put additional language to clarify what is usually left implicit.
Advanced cardiovascular risk prediction in the emergency department: updating a clinical prediction model – a large database study protocol
Charles Reynard ORCID: orcid.org/0000-0002-7534-26681,2,
Glen P. Martin3,
Evangelos Kontopantelis3,
David A. Jenkins3,
Anthony Heagerty1,
Brian McMillan4,
Anisa Jafar5,
Rajendar Garlapati6 &
Richard Body1,2
Diagnostic and Prognostic Research volume 5, Article number: 16 (2021) Cite this article
Patients presenting with chest pain represent a large proportion of attendances to emergency departments. In these patients clinicians often consider the diagnosis of acute myocardial infarction (AMI), the timely recognition and treatment of which is clinically important. Clinical prediction models (CPMs) have been used to enhance early diagnosis of AMI. The Troponin-only Manchester Acute Coronary Syndromes (T-MACS) decision aid is currently in clinical use across Greater Manchester. CPMs have been shown to deteriorate over time through calibration drift. We aim to assess potential calibration drift with T-MACS and compare methods for updating the model.
We will use routinely collected electronic data from patients who were treated using T-MACS at two large NHS hospitals. This is estimated to include approximately 14,000 patient episodes spanning June 2016 to October 2020. The primary outcome of acute myocardial infarction will be sourced from NHS Digital's admitted patient care dataset. We will assess the calibration drift of the existing model and the benefit of updating the CPM by model recalibration, model extension and dynamic updating. These models will be validated by bootstrapping and one-step-ahead prequential testing. We will evaluate predictive performance using calibration plots and c-statistics. We will also examine the reclassification of predicted probability with the updated T-MACS model.
CPMs are widely used in modern medicine, but are vulnerable to deteriorating calibration over time. Ongoing refinement using routinely collected electronic data will inevitably be more efficient than deriving and validating new models. In this analysis we will seek to exemplify methods for updating CPMs to protect the initial investment of time and effort. If successful, the updating methods could be used to continually refine the algorithm used within TMACS, maintaining or even improving predictive performance over time.
ISRCTN number: ISRCTN41008456
Chest pain accounts for approximately 6% of all Emergency Department (ED) attendances [1]. Despite recent advances in diagnostic technology and changes to national guidelines [2, 3], it remains the most common reason for emergency hospital admission in England and Wales [1]. These patients are frequently admitted to undergo diagnostic evaluation for suspected acute coronary syndrome (ACS). Improved diagnostic pathways could allow those without an ACS diagnosis (over 100,000 patients per year in England and Wales) to be discharged from the ED without an unnecessary hospital admission. Equally, it is essential that we capture as many ACS diagnoses as we can, since a missed ACS confers twice the mortality of a detected ACS [4].
The Troponin-only Manchester Acute Coronary Syndromes (T-MACS) decision aid can be used to rapidly rule in, rule out and risk stratify patients with suspected ACS [5]. T-MACS was derived by logistic regression, using details of a patient's symptoms with electrocardiographic (ECG) findings and cardiac troponin (cTn) concentrations, measured on arrival at the ED, to calculate the probability that a patient has ACS. T-MACS classified patients with <2% probability of ACS as being 'very low risk'; in that population this strategy identified 40% of patients as eligible for safe, immediate discharge from the ED [5].
T-MACS has been externally validated in 1,459 patients from three prospective studies in the United Kingdom [5], 1,244 patients from Australasia [6], and multi-centre prospective trials from the United Kingdom [7], Thailand [8] and Norway [9], each of which demonstrated acceptable predictive performance. A pilot randomized controlled trial of a precursor version of the algorithm showed that its use led to significantly more safe discharges from the ED within 4 hours of arrival than standard care [10]. The data from UK studies (which did not rely on the use of surrogate variables) consistently show that over 40% of patients are categorised as very low risk and can have ACS 'ruled out' with one blood test. It has been shown to safely reduce unnecessary hospital admissions, outperforming the algorithms currently advocated in NICE guidelines [2, 11].
Countering calibration drift
However, the performance (calibration and discrimination) of many clinical prediction models, such as T-MACS, is likely to decline with time [12, 13]. For example, this has been demonstrated previously with the EuroScore model that predicts short-term mortality after cardiac surgery [14]. Therefore, the same phenomenon is likely to occur with the T-MACS algorithm as patient demographics change and diagnostic technology evolves. Indeed, the very fact that T-MACS is implemented in practice can lead to it losing diagnostic performance, since the implementation of the model changes the predictor-outcome associations and the case-mix, meaning that the performance of the model degrades over time [15, 16].
In part, the above issues with "calibration drift" can be attributed to the fact that the algorithm itself is static, having been derived in one sample over a fixed time period. It is unlikely to be the optimal algorithm for early diagnosis in various locations with diverse populations, due to population and, possibly, intervention heterogeneity. Model updating has been attempted previously with the EuroScore, which was shown to demonstrate calibration drift due to changing demographics [14]. Siregar et al investigated the merits of various methods through which to update such models [17]. They found that several methods (regression coefficient updating and dynamic updating) yielded similar improvements in the clinical prediction models.
Model updating and dynamic approaches to clinical prediction models
Statistical methods have previously been proposed to overcome issues such as calibration drift, by allowing prediction models to be re-derived and validated to maintain their predictive performance through time [18]. Such cycles of learning allow the models to account for demographic shifts and changes in diagnostic technology. This has several advantages over continuously re-developing the model de novo, as model updating utilises existing evidence (current versions of the model) and can potentially be delivered in almost real time. Specifically, several different methods for updating clinical prediction models have been suggested [12, 18], including regression coefficient updating, meta-model updating and dynamic updating. Regression coefficient updating modifies only individual coefficients within the model, using a single further analysis. Bayesian dynamic updating allows for continuous updating and derivation; once the method has been evaluated, it can in principle re-derive the model continuously as new data arrive [19, 20]. Siregar et al's analysis of dynamic updating suggested that a Bayesian approach may yield greater improvements in accuracy when the sample is small [17]. Strobl et al [21] demonstrated that, in updating prostate cancer risk assessment tools, multiple methods yielded similar improvement, with the exception of Random Forest regression (a machine learning form of dynamic updating), which was substantially worse than the others.
In summary, T-MACS requires protection against calibration drift, and as such we aim to utilise prediction model updating methods to recalibrate it through time.
Here, we describe the protocol for the study that will deliver these objectives, in full accordance with the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guidelines [22].
Study arm design and study setting
This will be a multi-centre retrospective cohort study. The study will use data collected from emergency departments at Manchester Royal Infirmary (MRI), Royal Blackburn Teaching Hospital (RBTH) and Burnley General Teaching Hospital (BGTH). MRI is a major trauma centre with 1,721 beds and an emergency department attendance of 104,449 in 2020, RBTH has 700 inpatient beds with an emergency department attendance of 104,009 in 2020 and BGTH has 219 beds and its urgent care centre has an annual attendance of 44,519. Each of these hospitals has implemented T-MACS to guide the care of patients with suspected ACS.
We will include patients who presented to the emergency department with chest pain and were assessed using the T-MACS pathway since implementation at MRI, RBTH and BGTH. This is estimated to include approximately 14,000 patient episodes from June 2016 to October 2020.
We utilised the sample size calculation described by Riley et al [23], together with the rule of 10 primary outcome cases per variable used in similar logistic regression analyses. T-MACS includes seven variables. As it is also planned to incorporate time, geographical location and the outcomes of two alternative clinical prediction models (adding 8 variables), it is anticipated that the analysis will require a minimum of 170 cases in the training/optimisation set. The prevalence of the primary outcome was 6.9% in the first 1,033 patients treated with T-MACS; based on that prevalence, a minimum of 2,464 patients would be required. This sample size was larger than that calculated by Riley et al, so we opted to be conservative and use the higher figure [23].
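As a quick check of the arithmetic, a minimal sketch using the figures stated above (170 minimum outcome events, 6.9% prevalence):

```python
import math

events_needed = 170    # minimum outcome events required in the training set
prevalence = 0.069     # AMI prevalence in the first 1,033 T-MACS patients

patients_needed = math.ceil(events_needed / prevalence)
print(patients_needed)  # -> 2464
```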
This cohort will include patients who received routine care guided by T-MACS, and whose data have been saved using bespoke interfaces deployed at Manchester Royal Infirmary (MRI), Royal Blackburn Teaching Hospital (RBTH) and Burnley General Teaching Hospital (BGTH). These tools are used in clinical practice and prospectively capture the data inputted by clinicians. This will be collated with data from local hospital servers to include serial troponin assay results and local diagnostic codes. These data will be cross-referenced with NHS Digital's Hospital Episode Statistics (HES) database to include any diagnosis or intervention that occurs within 30 days of the index presentation. We will also link with the civil registry database for mortality outcomes.
Assuring the quality of the data is vital for the integrity of this study, particularly as we are collating multiple databases across multiple organisations. We will use the principles laid out by Weiskopf et al to assure the quality of the data [24] (see Table 1).
Table 1 Data validation procedures adapted from Weiskopf et al [24]
Outcome Variables
The primary outcome will be acute myocardial infarction (AMI) within 30 days. Patients will be considered to have a diagnosis of AMI if they have a coded clinical diagnosis of AMI either locally or held centrally with NHS Digital. Only primary ICD-10 codes will be used for the outcome; however, a sensitivity analysis will be conducted in which a code at any position is used. The relevant ICD-10 codes are I21, I22 and I23 [25]. A secondary composite outcome of major adverse cardiovascular events within 30 days will also be measured, including acute myocardial infarction, death (as per the civil registry) and revascularisation; the corresponding ICD-10 codes include I21, I22, I23, I46, R96, R99, K40-50, K63, and K75.
The use of coded diagnoses is essential for the process to be automated in future. However, the effect of accepting these definitions must be carefully considered due to concerns over the limitations of the coding databases, both centrally and locally. This will be explored by conducting data validation, and the effect of the differing outcome definitions will be examined in sensitivity analyses. We will blind the adjudicators to the T-MACS inputs and prediction.
In data validation we will examine the local coded diagnoses of any patients who had at least one cardiac troponin concentration above the 99th percentile upper reference limit for the assay and an absolute change of at least half the 99th percentile on serial sampling (for samples drawn 3-6 hours apart). We will not examine patients with two (adequately timed) troponin concentrations within the normal range, as they cannot fulfil the diagnosis of AMI. Outcomes will be adjudicated by a central committee. AMI will be defined in accordance with the universal definition of myocardial infarction, which requires a rise and/or fall of cardiac troponin with at least one concentration above the 99th percentile upper reference limit of the assay. In addition, patients must have at least one of: symptoms compatible with myocardial ischaemia, ECG changes compatible with ischaemia, imaging evidence of new loss of viable myocardium, or identification of intracoronary thrombus at coronary angiography. In the GM-TMACS project, initial implementation of T-MACS will require all patients to have two cardiac troponin tests drawn 3 hours apart. Thus, all patients included in the analyses presented here will have an acceptable reference standard for the diagnosis of AMI according to national and international guidelines [2, 26]. Diagnoses will be adjudicated by two independent investigators with reference to all relevant clinical investigations. Disagreements will be resolved by consulting a third independent investigator.
The methodology optimising predictive performance for updating the T-MACS algorithm will be identified from four candidate types. Predictive performance will be assessed by calibration plots and Brier scores; discrimination will be assessed with the c-statistic, compared using DeLong's method [27]. We will examine continuation of the current model (status quo), model recalibration, model revision, and Bayesian dynamic modelling [13, 18]. T-MACS currently returns a probability of ACS, which is then used to classify patients into a categorical risk group (Eq. 1). We will examine the re-classification of patients between the original T-MACS algorithm and the dynamic modelling approach [28]. We will calculate the observed risk of the reclassified cases.
$$ l = \log\frac{p}{1-p} = 1.713\,x_e + 0.847\,x_a + 0.607\,x_r + 1.417\,x_v + 2.058\,x_s + 1.208\,x_h + 0.089\,x_t - 4.766 $$
Equation 1: The T-MACS clinical prediction model, where l = log-odds of the primary outcome acute myocardial infarction, x_e = presence of ECG ischaemia, x_a = crescendo angina, x_r = pain radiating to the right arm, x_v = pain associated with vomiting, x_s = sweating observed, x_h = hypotension, and x_t = high-sensitivity troponin T result on arrival.
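For concreteness, a minimal sketch of Eq. 1 in Python, assuming the natural logarithm for the log-odds; the coefficients are those printed above, while the function name, the 0/1 encoding of the binary predictors and the troponin units are our assumptions:

```python
import math

def tmacs_probability(ecg_ischaemia, crescendo_angina, pain_right_arm,
                      vomiting, sweating, hypotension, troponin):
    """Probability of ACS from the T-MACS model (Eq. 1); binary predictors
    are encoded 0/1 and `troponin` is the admission hs-cTnT result."""
    log_odds = (1.713 * ecg_ischaemia + 0.847 * crescendo_angina
                + 0.607 * pain_right_arm + 1.417 * vomiting
                + 2.058 * sweating + 1.208 * hypotension
                + 0.089 * troponin - 4.766)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: no symptoms or ECG changes, troponin 5 (assumed ng/L)
print(tmacs_probability(0, 0, 0, 0, 0, 0, 5.0))  # ~0.013, i.e. 'very low risk'
```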
The current iteration of the T-MACS algorithm will be validated with the existing coefficients and intercept (from the derivation study). This will serve as a baseline for comparison, and we will use it to assess for evidence of change in discrimination and calibration over time (Eq. 2).
$$ Z_{sq} = \alpha_{TMACS} + \sum_{i \in 1,\dots,7} \beta_{i,TMACS}\, x_i $$
Equation 2: The current iteration of the T-MACS algorithm, where Z_sq is the linear predictor of the current model, α_TMACS the intercept and β_{i,TMACS} the previously derived regression coefficients.
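A minimal sketch of this baseline assessment, assuming scikit-learn and arrays `y` (observed 30-day AMI outcomes) and `p` (probabilities from the frozen T-MACS model); the helper name is ours:

```python
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss, roc_auc_score

def baseline_performance(y, p, n_bins=10):
    """Assess the frozen (status quo) model: Brier score, c-statistic
    and the points of a calibration plot."""
    prob_true, prob_pred = calibration_curve(y, p, n_bins=n_bins)
    return {"brier": brier_score_loss(y, p),
            "c_statistic": roc_auc_score(y, p),
            "calibration_plot": (prob_pred, prob_true)}
```

Applied to successive time windows of the cohort, drift would show up as a declining c-statistic or calibration-plot points moving away from the diagonal.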
Model recalibration
In this method we will recalibrate the T-MACS algorithm on the entire dataset, applying an overall weight (calibration slope) to the original linear predictor and deriving a new intercept, as described in Eq. 3 [29]. This method has been included as it is the simplest and has been used previously to update CPMs [30].
$$ \begin{aligned} Z_{mr} &= \hat{\alpha} + \hat{\beta}_o Z_{sq} \\ Z_{mr} &= \hat{\alpha} + \hat{\beta}_o \alpha_{TMACS} + \sum_{i \in 1,\dots,7} \hat{\beta}_o \left( \beta_{i,TMACS}\, x_i \right) \end{aligned} $$
Equation 3: Z_mr is the model updated by recalibration, \( \hat{\alpha} \) the re-estimated intercept, \( \hat{\beta}_o \) the new overall calibration slope, and Z_sq the linear predictor of the T-MACS model.
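A minimal sketch of the recalibration fit in Eq. 3, estimated by plain maximum likelihood (requires scikit-learn 1.2 or later for `penalty=None`); `z_sq` holds each patient's current linear predictor, `y` the observed outcomes, and the function name is ours:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def recalibrate(z_sq, y):
    """Fit a new intercept and an overall calibration slope on the
    existing linear predictor Z_sq (logistic recalibration, Eq. 3)."""
    model = LogisticRegression(penalty=None)  # unpenalised ML fit
    model.fit(np.asarray(z_sq).reshape(-1, 1), np.asarray(y))
    return model.intercept_[0], model.coef_[0, 0]

# The recalibrated linear predictor is then Z_mr = alpha_hat + beta_o_hat * Z_sq.
```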
Model extension
Additional variables will be considered for incorporation from other clinical prediction models that have been used for the same purpose (Eq. 4). These include predictors from the HEART score and the Thrombolysis in Myocardial Infarction (TIMI) risk score [29, 30]. We will re-derive the algorithm with these covariates to investigate any improvement in diagnostic characteristics [18, 29].
$$ Z_e = \hat{\alpha} + \sum_{i \in 1,\dots,7} \beta_i x_i + \sum_{j \in s} \hat{\beta}_j x_j $$
Equation 4: Z_e is the model updated by extension, β_i are the coefficients of the original covariates, s is the set of new covariates and \( \hat{\beta}_j \) their new coefficients.
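A minimal sketch of the extension in Eq. 4, keeping the original covariate effects fixed as a GLM offset while estimating a new intercept and coefficients for the added predictors; `z_orig` denotes the original covariate contribution (the sum of the β_i x_i terms, without the intercept), and the statsmodels-based helper is our assumption:

```python
import statsmodels.api as sm

def extend_model(z_orig, x_new, y):
    """Model extension (Eq. 4): the original effects enter as an offset;
    a new intercept and coefficients for x_new (e.g. HEART / TIMI items)
    are estimated by logistic regression."""
    design = sm.add_constant(x_new)
    glm = sm.GLM(y, design, family=sm.families.Binomial(), offset=z_orig)
    return glm.fit()  # .params holds alpha_hat and the new beta_hat_j
```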
Bayesian dynamic updating
Dynamic updating allows the original model's intercept and coefficients to be updated after each patient episode, stabilising calibration and improving performance [12, 20]. This will be deployed incorporating guidance from our patient and public representatives, who felt that such a method ought initially to require human oversight. As such, updating will not initially occur after each recorded patient episode; instead the model will be updated every three months for the first year, simulating a probationary period with quarterly meetings. After this period we will update the model after every patient episode. This is achieved through recursive estimation using the prediction equation
$$ \beta_t \mid Y^{t-1} \sim N\left( \hat{\beta}_{t-1},\, R_t \right) $$
where \(\beta_t\) is the vector of regression coefficients, \(Y^{t-1}\) is the set of past outcomes, t is a given time and \( R_t = \hat{\Sigma}_{t-1} / \lambda_t \). Here \(\lambda_t\) is known as the forgetting factor, which down-weights past observations by inflating the variance; it will be chosen so that the effective sample size continues to meet the specifications laid out by Riley et al [23].
Taking this into a Bayesian framework, the posterior at time t is proportional to the product of the likelihood of the new observation and the prior carried over from time t-1, giving
$$ p(\beta_t \mid Y^t) \propto p(y_t \mid \beta_t)\, p(\beta_t \mid Y^{t-1}) \propto \text{likelihood} \times \text{prior} $$
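A minimal sketch of one recursive step, approximating the posterior with a single Newton (Laplace) step in the spirit of dynamic logistic regression [20]; the forgetting-factor value is a placeholder:

```python
import numpy as np

def dynamic_update(beta, Sigma, x, y, forgetting=0.99):
    """One step of Bayesian dynamic updating for a logistic model.

    Prediction: beta_t | Y^{t-1} ~ N(beta_{t-1}, R_t), where
    R_t = Sigma_{t-1} / lambda inflates the variance and thereby
    down-weights past observations.
    """
    R = Sigma / forgetting
    p = 1.0 / (1.0 + np.exp(-x @ beta))       # one-step-ahead prediction
    Sigma_new = np.linalg.inv(np.linalg.inv(R)
                              + p * (1.0 - p) * np.outer(x, x))
    beta_new = beta + Sigma_new @ x * (y - p)
    return beta_new, Sigma_new
```

The probability `p` computed before each update is exactly the one-step-ahead prediction evaluated in prequential testing.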
The model recalibration and model revision methods will be internally validated using bootstrap validation with 1000 samples. The dynamic updating methodologies will be internally validated with one-step-ahead prequential testing [13].
Ethics and dissemination
This study has received ethical approval from a research ethics committee and the confidentiality advisory group (references 19/WA/0311, and 19/CAG/0209).
The study is registered with the ISRCTN registry (ISRCTN41008456).
First, we aim to publish our findings in peer reviewed journals. The primary target audience for the clinical study will be emergency medicine physicians, acute medicine physicians, cardiologists, clinical biochemists, public health professionals and industry leaders in acute diagnostics.
Further we will aim to present our findings at international and national conferences with relevant target audiences (e.g. European Society for Emergency Medicine Annual Congress, European Society of Cardiology Annual Conference, Royal College of Emergency Medicine Annual Scientific Conference). In addition, we will develop a public engagement strategy in conjunction with Public Programmes and our patient groups, in order that the local population have the opportunity to learn about our work and to engage with future work.
We aim to recalibrate T-MACS, protecting the research investment of time and money but potentially also improving its clinical efficacy. However, this method could be applied to any clinical prediction model, a plethora of which are deployed within emergency medicine, ranging from the Wells score for deep vein thrombosis to the Ottawa ankle rules for fractures [31, 32]. These were all derived and then externally validated, but their upkeep subsequently stopped.
The recent focus of research has been the development and deployment of new clinical prediction models. Here we present a method that follows a paradigm shift in the focus of modelling research. The scientific community must adapt to an overly saturated environment of clinical prediction models [33, 34]; part of the answer is assessing what already exists and seeking to protect and improve it. Not only is this efficient, but it also reorients the community of clinical modellers towards one of the central theses of science: to build on the work of others [35].
Due to data governance restrictions from the confidentiality advisory group and NHS Digital, the sharing of data is not possible. However, requests to collaborate are welcome.
NHS Digital. Hospital Admitted Patient Care Activity, 2016-17. 2017. Available from: https://digital.nhs.uk/catalogue/PUB30098
National Institute for Health and Care Excellence. Chest pain of recent onset: assessment and diagnosis (CG95). 2010. Available from: https://www.nice.org.uk/guidance/cg95
National Institute for Health and Care Excellence. Myocardial infarction (acute): early rule out using high-sensitivity troponin tests (Elecsys Troponin T high-sensitive, ARCHITECT STAT High Sensitive Troponin-I and AccuTnI+3 assays) (DG15). 2014. https://www.nice.org.uk/guidance/dg15. Accessed 13 Feb 2015.
Pope JH, Aufderheide TP, Ruthazer R, Woolard RH, Feldman JA, Beshansky JR, et al. Missed diagnoses of acute cardiac ischemia in the emergency department. N Engl J Med. 2000;342(16):1163–70. https://doi.org/10.1056/NEJM200004203421603.
Body R, Carlton E, Sperrin M, Lewis PS, Burrows G, Carley S, et al. Troponin-only Manchester Acute Coronary Syndromes (T-MACS) decision aid: single biomarker re-derivation and external validation in three cohorts. Emerg Med J. 2016. https://doi.org/10.1136/emermed-2016-205983.
Greenslade JH, Nayer R, Parsonage W, Doig S, Young J, Pickering JW, et al. Validating the Manchester Acute Coronary Syndromes (MACS) and Troponin-only Manchester Acute Coronary Syndromes (T-MACS) rules for the prediction of acute myocardial infarction in patients presenting to the emergency department with chest pain. Emerg Med J. 2017 Aug;34(8):517–23. https://doi.org/10.1136/emermed-2016-206366.
Body R, Morris N, Reynard C, Collinson PO. Comparison of four decision aids for the early diagnosis of acute coronary syndromes in the emergency department. Emerg Med J. 2020;37(1):8–13. https://doi.org/10.1136/emermed-2019-208898.
Ruangsomboon O, Thirawattanasoot N, Chakorn T, Limsuwat C, Monsomboon A, Praphruetkit N, et al. The utility of the 1-hour high-sensitivity cardiac troponin T algorithm compared with and combined with five early rule-out scores in high-acuity chest pain emergency patients. Int J Cardiol. 2020;322:23–8. https://doi.org/10.1016/j.ijcard.2020.08.099.
Steiro O-T, Tjora HL, Langørgen J, Bjørneklett R, Nygård OK, Skadberg Ø, et al. Clinical risk scores identify more patients at risk for cardiovascular events within 30 days as compared to standard ACS risk criteria: the WESTCOR study. Eur Heart J Acute Cardiovasc Care. 2020;10(3):287–301. https://doi.org/10.1093/ehjacc/zuaa016.
Body R, Boachie C, McConnachie A, Carley S, Van Den Berg P, Lecky FE. Feasibility of the Manchester Acute Coronary Syndromes (MACS) decision rule to safely reduce unnecessary hospital admissions: a pilot randomised controlled trial. Emerg Med J. 2017;34(9):586–92. https://doi.org/10.1136/emermed-2016-206148.
Carlton EW, Pickering JW, Greenslade J, Cullen L, Than M, Kendall J, et al. Assessment of the 2016 National Institute for Health and Care Excellence high-sensitivity troponin rule-out strategy. Heart. 2017;heartjnl-2017-311983.
Jenkins DA, Sperrin M, Martin GP, Peek N. Dynamic models to predict health outcomes: current status and methodological challenges. Diagnostic and Prognostic Research. 2018;2(1):1–9. https://doi.org/10.1186/s41512-018-0045-2.
Jenkins DA, Martin GP, Sperrin M, Riley RD, Debray TP, Collins GS, et al. Continual updating and monitoring of clinical prediction models: time for dynamic prediction systems? Diagnostic and Prognostic Research. 2021;5(1):1–7. https://doi.org/10.1186/s41512-020-00090-3.
Hickey GL, Grant SW, Murphy GJ, Bhabra M, Pagano D, McAllister K, et al. Dynamic trends in cardiac surgery: why the logistic EuroSCORE is no longer suitable for contemporary cardiac surgery and implications for future risk models. Eur J Cardiothorac Surg. 2013;43(6):1146–52. https://doi.org/10.1093/ejcts/ezs584.
Sperrin M, Jenkins D, Martin GP, Peek N. Explicit causal reasoning is needed to prevent prognostic models being victims of their own success. J Am Med Inform Assoc. 2019;26(12):1675–6. https://doi.org/10.1093/jamia/ocz197.
Lenert MC, Matheny ME, Walsh CG. Prognostic models will be victims of their own success, unless…. J Am Med Inform Assoc. 2019;26(12):1645–50.
Siregar S, Nieboer D, Vergouwe Y, Versteegh MIM, Noyez L, Vonk ABA, et al. Improved Prediction by Dynamic Modeling. Circ Cardiovasc Qual Outcomes. 2016;9(2):171–81. https://doi.org/10.1161/CIRCOUTCOMES.114.001645.
Su T-L, Jaki T, Hickey GL, Buchan I, Sperrin M. A review of statistical updating methods for clinical prediction models. Stat Methods Med Res. 2018;27(1):185–97. https://doi.org/10.1177/0962280215626466.
Raftery AE, Kárný M, Ettler P. Online prediction under model uncertainty via dynamic model averaging: Application to a cold rolling mill. Technometrics. 2010;52(1):52–66. https://doi.org/10.1198/TECH.2009.08104.
McCormick TH, Raftery AE, Madigan D, Burd RS. Dynamic logistic regression and dynamic model averaging for binary classification. Biometrics. 2012;68(1):23–30. https://doi.org/10.1111/j.1541-0420.2011.01645.x.
Strobl AN, Vickers AJ, Calster BV, Steyerberg E, Leach RJ, Thompson IM, et al. Improving patient prostate cancer risk assessment: Moving from static, globally-applied to dynamic, practice-specific risk calculators. J Biomed Inform. 2015;56:87–93. https://doi.org/10.1016/j.jbi.2015.05.001.
Moons KG, Altman DG, Reitsma JB, Ioannidis JP, Macaskill P, Steyerberg EW, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med. 2015;162(1):W1–73. https://doi.org/10.7326/M14-0698.
Riley RD, Snell KI, Ensor J, Burke DL, Harrell FE Jr, Moons KG, et al. Minimum sample size for developing a multivariable prediction model: PART II-binary and time-to-event outcomes. Stat Med. 2019;38(7):1276–96. https://doi.org/10.1002/sim.7992.
Weiskopf NG, Weng C. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research. J Am Med Inform Assoc. 2013;20(1):144–51. https://doi.org/10.1136/amiajnl-2011-000681.
Thygesen K, Alpert JS, Jaffe AS, Chaitman BR, Bax JJ, Morrow DA, et al. Fourth universal definition of myocardial infarction (2018). J Am Coll Cardiol. 2018;72(18):2231–64. https://doi.org/10.1016/j.jacc.2018.08.1038.
Roffi M, Patrono C, Collet J-P, Mueller C, Valgimigli M, Andreotti F, et al. 2015 ESC Guidelines for the management of acute coronary syndromes in patients presenting without persistent ST-segment elevation: Task Force for the Management of Acute Coronary Syndromes in Patients Presenting without Persistent ST-Segment Elevation of the European Society of Cardiology (ESC). Eur Heart J. 2016;37(3):267–315. https://doi.org/10.1093/eurheartj/ehv320.
DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44(3):837–45. https://doi.org/10.2307/2531595.
Hippisley-Cox J, Coupland C, Brindle P. Development and validation of QRISK3 risk prediction algorithms to estimate future risk of cardiovascular disease: prospective cohort study. BMJ. 2017;357:j2099.
Steyerberg EW, Borsboom GJ, van Houwelingen HC, Eijkemans MJ, Habbema JDF. Validation and updating of predictive logistic regression models: a study on sample size and shrinkage. Stat Med. 2004;23(16):2567–86. https://doi.org/10.1002/sim.1844.
Sim J, Teece L, Dennis MS, Roffe C, SO2S Study Team. Validation and recalibration of two multivariable prognostic models for survival and independence in acute stroke. PLoS One. 2016;11(5):e0153527.
Stiell IG, Greenberg GH, McKnight RD, Nair RC, McDowell I, Worthington JR. A study to develop clinical decision rules for the use of radiography in acute ankle injuries. Ann Emerg Med. 1992;21(4):384–90. https://doi.org/10.1016/S0196-0644(05)82656-3.
Wells PS, Anderson DR, Rodger M, Ginsberg JS, Kearon C, Gent M, et al. Derivation of a simple clinical model to categorize patients probability of pulmonary embolism: increasing the models utility with the SimpliRED D-dimer. Thromb Haemost. 2000;83(03):416–20. https://doi.org/10.1055/s-0037-1613830.
Hemingway H, Riley RD, Altman DG. Ten steps towards improving prognosis research. BMJ. 2009;339(dec30 1):b4184. https://doi.org/10.1136/bmj.b4184.
Van Calster B, Wynants L, Riley RD, van Smeden M, Collins GS. Methodology over metrics: Current scientific standards are a disservice to patients and society. J Clin Epidemiol. 2021. https://doi.org/10.1016/j.jclinepi.2021.05.018.
Newton, I., 2019. Isaac Newton letter to Robert Hooke, 1675. HSP Discover. https://discover.hsp.org/Record/dc-9792/.
Dr Charles Reynard has received funding from the National Institute for Health Research, the Royal College of Emergency medicine project grant and Manchester University NHS Foundation Trust grant.
The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care.
Professor Anthony Heagerty has received funding from the British Heart Foundation and the Ancestry and biological Informative Markers for stratification of Hypertension consortium.
Division of Cardiovascular Sciences, University of Manchester, Manchester, UK
Charles Reynard, Anthony Heagerty & Richard Body
Emergency Department, Manchester University NHS Foundation Trust, Manchester, UK
Charles Reynard & Richard Body
Division of Informatics, Imaging and Data Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
Glen P. Martin, Evangelos Kontopantelis & David A. Jenkins
Centre for Primary Care and Health Services Research, Division of Population Health, Health Services Research and Primary Care, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
Brian McMillan
Humanitarian and Conflict Response Institute, University of Manchester, Manchester, UK
Anisa Jafar
Emergency Department, Royal Blackburn Hospital, East Lancashire Hospitals NHS Trust, Burnley, UK
Rajendar Garlapati
Charles Reynard
Glen P. Martin
Evangelos Kontopantelis
David A. Jenkins
Anthony Heagerty
Richard Body
Concept: CR, RB, GM, EK. Design: CR, RB, GM, EK, DJ, AH, BM, AJ, RG. Manuscript writing: CR, RB, GM, EK, DJ, AH, BM, AJ, RG.
Correspondence to Charles Reynard.
As this is a retrospective study, consent for publication was not possible.
Professor Richard Body has consulted for Siemens, Roche, Beckman, Singulex, LumiraDx and Abbott.
Reynard, C., Martin, G.P., Kontopantelis, E. et al. Advanced cardiovascular risk prediction in the emergency department: updating a clinical prediction model – a large database study protocol. Diagn Progn Res 5, 16 (2021). https://doi.org/10.1186/s41512-021-00105-7
Clinical prediction model, Acute myocardial infarction, Calibration drift
Symmetrization associated with hyperbolic reflection principle
Yuuki Ida ORCID: orcid.org/0000-0002-6773-08711,
Tsuyoshi Kinoshita1 &
Tomohiro Matsumoto1
Pacific Journal of Mathematics for Industry volume 10, Article number: 1 (2018) Cite this article
In this paper, in view of applications to the pricing of barrier options under a stochastic volatility model, we study a reflection principle for the hyperbolic Brownian motion, and introduce a hyperbolic version of Imamura-Ishigaki-Okumura's symmetrization. Some results of numerical experiments, which imply the efficiency of the numerical scheme based on the symmetrization, are given.
Reflection principle and the static hedge of barrier options
The reflection principle of standard Brownian motion relates the probability distribution of the first hitting time to a boundary to the 1-dimensional marginal distribution of the process. The formula has a direct application in continuous-time finance, namely the static hedging of barrier options (Footnote 1). The idea is explained roughly as follows. Suppose that we sold a knock-out call option (Footnote 2) (which is a typical barrier option). Its pay-off is described as
$$(S_{T} - K)_{+} 1_{\{\tau > T\}}, $$
T is the expiry date of the option,
K is the exercise price,
S is the price process of a risky asset, with S0>K,
τ := inf{s > 0 : S_s < K′}, the first hitting time of S to K′, the knock-out boundary, with K′ < K.
The static hedge of the knock-out option consists of two plain-vanilla (=without knock-out condition) options, long position of call option with pay-off (S T −K)+, and short position of "put option" whose value
at τ equals the call, and
is zero at T on τ>T.
This simple portfolio hedges the knock-out option since it is
zero at T on τ≤T since at τ it is liquidated, and
(S_T − K)_+ at T on τ > T.
This relation can be expressed as
$$\begin{aligned} E[(S_{T}-K)_{+} 1_{\{\tau > T\}} \mid \mathcal{F}_{t \wedge \tau}] &= E[(S_{T}-K)_{+} \mid \mathcal{F}_{t \wedge \tau}] - E[\text{``put option''} \mid \mathcal{F}_{t \wedge \tau}], \end{aligned}$$
for 0 ≤ t ≤ T, where \( \{ \mathcal{F}_{t} \} \) is the filtration generated by S. The existence of such an option is ensured by the reflection principle. If S is a geometric Brownian motion, it can be the option with pay-off \((K - S_T)_+\), since
$$ (S_{t}-K)_{\tau \leq t \leq T} \overset{\text{(law)}}{=}(K-S_{t})_{\tau \leq t \leq T} $$
by the reflection principle. In general, the property is referred to as (arithmetic) put-call symmetry at K [4], which is weaker than the reflection principle that ensures put-call symmetry for any K.
The interpretation was first proposed in [3], and there is a vast literature since then (see e.g. [1] and references therein). Among these, we just mention a multi-dimensional extension proposed in [11], where the reflection principle with respect to reflection groups is applied to the pricing of multi-asset barrier options, the barrier being the boundary of a Weyl chamber. To the best of our knowledge, it is the first attempt to deal with cases where the barrier (= knock-out/in boundary) is not a one-point set.
Symmetrization and its application to numerical calculation of the price of barrier options
A new point of view in the literature, the symmetrization, was first introduced in [10], and further generalized in [2]. The symmetrization is a procedure to convert a given diffusion into one with a weaker version of the reflection principle, aiming at obtaining a precise numerical value of the price of barrier options in a reasonable computational time, rather than a static hedge in the market as described in the previous section.
Let us briefly explain their idea. We work on 1-dimensional case for simplicity. Let S be a diffusion process satisfying the following stochastic differential equation:
$$ dS_{t} = \sigma(S_{t}) \, {dW}_{t} + \mu (S_{t}) \, dt, $$
where σ and μ are piecewise continuous functions with linear growth. In general we cannot expect formula (1) to hold. The symmetrization \( \tilde{S} \) of S, in contrast, satisfies (1). It is defined as a (weak) solution to
$$ d\tilde{S}_{t} = \tilde{\sigma} (\tilde{S}_{t}) \, {dW}_{t} + \tilde{\mu} (\tilde{S}_{t}) \, dt, $$
$$\tilde{\sigma} (x) = \left\{\begin{array}{ll} \sigma (x) & x \geq K' \\ \sigma (2K-x) & x < K' \end{array}\right., $$
$$ \tilde{\mu} (x) = \left\{\begin{array}{ll} \mu (x) & x \geq K' \\ -\mu (2K-x) & x < K' \end{array}\right., $$
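In code, the symmetrized coefficients translate directly; a minimal sketch, written with a single reflection level `barrier` standing for the level about which paths are reflected (the function names are ours):

```python
def symmetrize(sigma, mu, barrier):
    """Symmetrized coefficients in the sense of Eqs. (4)-(5):
    reflect sigma evenly and mu oddly about `barrier`."""
    def sigma_tilde(x):
        return sigma(x) if x >= barrier else sigma(2 * barrier - x)

    def mu_tilde(x):
        return mu(x) if x >= barrier else -mu(2 * barrier - x)

    return sigma_tilde, mu_tilde
```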
The following is proven in [10].
Theorem 1
(Imamura-Ishigaki-Okumura [10]) The law-unique solution \( \tilde{S} \) of (4) satisfies the put-call symmetry at K, and \( (\tilde{S}_t)_{0 \leq t \leq \tau} \) has the same law as \( (S_t)_{0 \leq t \leq \tau} \).
It then implies
$$ \begin{array}{ll} &\mathbf{E}[(S_{T}-K)_{+} 1_{\{ \tau > T\}}] \\ &= \mathbf{E}[(\tilde{S}_{T}-K)_{+} ] - \mathbf{E}[(K-\tilde{S}_{T})_{+} ]. \end{array} $$
Formula (6), however, is no longer interpreted as a static hedge relation, but it has another application. Equation (6) now reads:
An expectation with stopping time is converted to the one without it.
A numerical calculation of an expectation involving a stopping time is often a tough challenge due to its path-dependent nature. On the other hand, an expectation with respect to the one-dimensional marginal of a diffusion process is in most cases numerically tractable. Thus Eq. (6) gives a new insight into the numerical analysis of barrier options/stopping times.
Euler-Maruyama approximation of the price of barrier options
The most common technique to numerically approximate an expectation with respect to a diffusion process would be so-called "Euler-Maruyama" scheme. Here we briefly recall the scheme.
An Euler-Maruyama discretization of (3) with respect to a time partition \(0 = t_0 < t_1 < \cdots < t_n = T\) is given by
$$ \begin{aligned} S^{n}_{t_{0}} &= S_{0}, \\ S^{n}_{t_{k+1}} &= S^{n}_{t_{k}} + \sigma\left(S^{n}_{t_{k}}\right) \Delta W^{n}_{t_{k}} + \mu\left(S^{n}_{t_{k}}\right) (t_{k+1}-t_{k}), \qquad k = 0, 1, \cdots, n-1, \end{aligned} $$
where \( \Delta W^{n}_{t_{k}} \sim N (0, t_{k+1} -t_{k}) \), mutually independent for k=0,1,⋯,n−1. The stopping time τ is also approximated by
$$\tau^{n} := \min\left\{t_{j+1} : S^{n}_{t_{j+1}} < K'\right\}.$$
The expectation in the left-hand-side of (6) is approximated by (Monte-Carlo simulation of)
$$ \mathbf{E} \left[\left(S^{n}_{T} -K\right)_{+} 1_{\{\tau^{n} >T\}}\right], $$
while the right-hand-side counterpart is
$$ \mathbf{E} \left[\left(\tilde{S}^{n}_{T} -K\right)_{+}\right] - \mathbf{E} \left[\left(K-\tilde{S}^{n}_{T}\right)_{+}\right], $$
where \( \tilde {S}^{n} \) is obtained by the same procedure as (7).
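A minimal self-contained sketch of the two estimators (8) and (9); the coefficients and initial value are illustrative choices of ours, and we take the knock-out level equal to the strike (K = K'), the case in which the conversion formula (6) applies directly:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = lambda s: 0.3 * s       # illustrative diffusion coefficient
mu = lambda s: 0.05 * s         # illustrative drift
K = Kp = 1.0                    # strike K and knock-out level K'
s0, T, n, n_paths = 1.2, 1.0, 100, 10**5

def em_terminal(sig, drift, track_barrier):
    """Euler-Maruyama terminal values; optionally track {tau^n > T}."""
    dt = T / n
    S = np.full(n_paths, s0)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n):
        S = S + sig(S) * rng.normal(0.0, np.sqrt(dt), n_paths) + drift(S) * dt
        if track_barrier:
            alive &= S >= Kp
    return S, alive

# Left-hand side of (6): the path-dependent estimator (8)
S_T, alive = em_terminal(sigma, mu, track_barrier=True)
lhs = np.mean(np.maximum(S_T - K, 0.0) * alive)

# Right-hand side of (6): the marginal estimator (9), symmetrized coefficients
sigma_t = lambda x: np.where(x >= Kp, sigma(x), sigma(2 * Kp - x))
mu_t = lambda x: np.where(x >= Kp, mu(x), -mu(2 * Kp - x))
S_sym, _ = em_terminal(sigma_t, mu_t, track_barrier=False)
rhs = np.mean(np.maximum(S_sym - K, 0.0)) - np.mean(np.maximum(K - S_sym, 0.0))
```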
The discretization error, by which we mean the difference between the true value of the expectation and its Euler-Maruyama approximation like (8) or (9), is known to be of \(O(n^{-1/2})\) in general when \(t_k - t_{k-1} = T/n\) for all k. It is reported in [5] that this rate cannot be improved for estimators with a stopping time like (8), while for estimators involving only a one-dimensional marginal like (9) the rate is, provided some continuity of the coefficients, known to be of \(O(n^{-1})\).
The symmetrized drift coefficient (5) may not be continuous in general even if the original one is very smooth, and as far as we know, no existing result ensures the order is of O(n−1) though recently there have been several papers ([12, 13], and [14]) to deal with discontinuous coefficients in line with the problem posed here. In [10], however, they conjecture that it is the case by performing numerical experiments.
References for more detailed and precise results of the order can be found in [10].
SABR model and hyperbolic Brownian motion
In the present paper, we study a hyperbolic version of the symmetrization, with a view to the application of the pricing of barrier options under SABR model, which is known to be transformed to hyperbolic Brownian motion with drift.
The SABR (stochastic alpha-beta-rho) model was introduced in [6]. It is given by
$$\begin{array}{*{20}l} {dS}_{t} &= v_{t} \sigma (S_{t})\, dW^{1}_{t} \\ {dv}_{t} &= \nu \, v_{t} \left(\sqrt{1-\rho^{2}} dW^{2}_{t} + \rho dW^{1}_{t}\right), \end{array} $$
where (W1,W2) is a two dimensional Brownian motion, ρ∈(−1,1) and ν is a constant. We note that
A driftless local volatility model is obtained by setting ν=0, and
\( Z_{t} := \psi(S_{t/\nu^{2}}, v_{t/\nu^{2}}) + \sqrt{-1}\, v_{t/\nu^{2}} \) with \( \psi(x,y) = (x - \rho y)/\sqrt{1-\rho^{2}} \) is a hyperbolic Brownian motion with drift, a solution to (13) in the "Hyperbolic symmetrization" section (for details see [7]).
The following is a "motto" widely accepted among researchers and practitioners in finance (see e.g. [8]): as tractability of one dimensional diffusion processes is attributed to the reduction to the standard Brownian motion with drift by the Lamperti transform, so the analysis of SABR model will be converted to that of hyperbolic Brownian motion with drift, where we can still work on symmetries from linear fractional transformations. We shall observe a realization of this idea in the present paper.
The contents of the present paper
We start by introducing a hyperbolic version of the reflection principle that parallels the one for standard Brownian motion in the "Hyperbolic reflection principle" section. In the "Hyperbolic symmetrization" section we introduce a weak version of the reflection principle, which parallels the classical put-call symmetry, and the associated symmetrization. The "Numerical experiments" section is devoted to numerical studies. As with the Imamura-Ishigaki-Okumura scheme using the classical symmetrization, the error is not proven to be \(O(n^{-1})\) mathematically, but the numerical results support the conjecture in the hyperbolic case as well.
Hyperbolic reflection principle
Invariant property of hyperbolic Brownian motion
A Hyperbolic Brownian motion is the unique solution to
$$\left\{\begin{array}{ll} {dX}_{t}&=Y_{t}dW^{1}_{t} \\ {dY}_{t}&=Y_{t}dW^{2}_{t}, \end{array}\right. $$
where W1 and W2 are independent Brownian motions. It is defined on the upper-half plane \(\mathbb{H} = \{(x,y) \in \mathbb{R}^{2} : y > 0\}\), and we may and sometimes will embed it into \( \mathbb{C} \) by \(Z_t = X_t + iY_t\), where \(i = \sqrt{-1}\).
Proposition 1. Let \(f:\mathbb{H}\to\mathbb{H}\) be such that \(f(z):=\frac{az+b}{cz+d}\), where \(\left(\begin{array}{ll} a & b \\ c & d \end{array}\right) \in \text{SL}(2,\mathbb{R})\). Then \((f(Z_t))_{t\geq0}\) and \((Z_t)_{t\geq0}\) are equivalent in law provided that \(f(Z_0)=Z_0\).
Proof. Since \(Z_t = X_t + iY_t\), using Itô's formula for \(Z_t\),
$$\begin{array}{*{20}l} {dZ}_{t}&={dX}_{t}+{idY}_{t}=Y_{t}\left(dW^{1}_{t}+idW^{2}_{t}\right)\\ &=\text{Im}(Z_{t})dW^{\mathbb{C}}_{t}, \end{array} $$
where \(dW^{\mathbb {C}}_{t}=dW^{1}_{t}+idW^{2}_{t}\), which we define to be a complex Brownian motion. On the other hand, since Z t is a conformal martingale and f is a holomorphic function, we can use Ito's formula for conformal martingales to get
$$\begin{array}{*{20}l} df(Z_{t})&=\partial_{z}f(Z_{t}){dZ}_{t}\\ &=\frac{1}{({cZ}_{t}+d)^{2}}\text{Im}(Z_{t}){dW}_{t}^{\mathbb{C}}\\ &=\frac{|{cZ}_{t}+d|^{2}}{({cZ}_{t}+d)^{2}}\text{Im}(f(Z_{t}))dW^{\mathbb{C}}_{t}\\ &=\text{Im}(f(Z_{t}))d\widetilde{W}_{t}^{\mathbb{C}}, \end{array} $$
where \(d\widetilde {W}^{\mathbb {C}}_{t}=\frac {|{cZ}_{t}+d|^{2}}{({cZ}_{t}+d)^{2}}dW^{\mathbb {C}}_{t}\), which is another complex Brownian motion. Hence Z t and f(Z t ) are equivalent in law if they start from the same point, as they are defined by the same SDE. □
Hyperbolic reflections
Let \(\mathcal{R}\) be the totality of those isometries π on the upper-half plane \(\mathbb{H}\) such that \(\pi^2 = \mathrm{Id}\) and the invariant set \(\text{Inv}_{\pi} := \left\{z \in \mathbb{H} : \pi(z) = z \right\}\) is a geodesic on \(\mathbb{H}\).
Proposition 2. We have that
$$ \mathcal{R} = \left\{ \Phi_{A} \circ \Phi_{0} : A = \left(\begin{array}{cc} a & b \\ \frac{a^{2}-1}{b} & a \end{array}\right) \text{ or } \left(\begin{array}{cc} \pm 1 & 0 \\ c & \pm 1 \end{array}\right),\ a, c \in \mathbb{R},\ b \in \mathbb{R}\setminus\{0\} \right\}, $$
where \(\Phi_{A}(z)=\frac{az+b}{cz+d}\) for \(A = \left(\begin{array}{ll} a & b \\ c & d \end{array}\right) \in \text{SL}(2,\mathbb{R})\) and \(\Phi_{0}(z):=-\overline{z}\).
Proof. It is well known that an isometry on \( \mathbb{H} \) is either \(\Phi_A\) or \(\Phi_A \circ \Phi_0\) for some \( A \in \text{SL}(2,\mathbb{R})\). By the fundamental theorem of algebra, the equation \(\Phi_A(z)=z\) has at most two complex solutions for \(A \in \text{SL}(2,\mathbb{R})\), so the invariant set of \(\Phi_A\) cannot be a geodesic, and every element of \(\mathcal{R}\) must be of the form \(\Phi_A \circ \Phi_0\).
For \(\Phi _{A}\circ \Phi _{0} \in \text {Isom}(\mathbb {H})\) and for z=x+iy,
$$(\Phi_{A}\circ \Phi_{0})^{2}(z) =\frac{(a^{2}-bc)z-b(a-d)}{c(a-d)z-(bc-d^{2})}. $$
By a simple calculation,
$$\begin{array}{*{20}l} &\frac{(a^{2}-bc)z-b(a-d)}{c(a-d)z-(bc-d^{2})}=z\\ \iff &(a^{2}-bc)z-b(a-d)\\ &=c(a-d)z^{2}-(bc-d^{2})z \end{array} $$
If a=d, for any b and c, (10) is satisfied.
Since \(a^{2} - bc = 1\), we have \(c=\frac{a^{2}-1}{b}\) if b ≠ 0, that is:
$$A= \left(\begin{array}{ll}a & b \\ \frac{a^{2}-1}{b} & a \\ \end{array}\right). $$
For b = 0, c is an arbitrary real number and a = ±1 from \(a^{2} = 1\):
$$A= \left(\begin{array}{cc}\pm 1 & 0 \\ c & \pm 1 \\ \end{array}\right). $$
If a≠d, the Eq. (10) is
$$cz^{2}-(a+d)z+b=0.$$
We get a=−d and b=c=0.
Finally, we should check that the invariant set is a geodesic. A geodesic of the upper half plane is either a line perpendicular to the real line or a half-circle orthogonal to the real line.
$$A= \left(\begin{array}{ll} a & b \\ \frac{a^{2}-1}{b} & a \\ \end{array}\right), $$
and if a≠±1,
$$\begin{aligned}&\frac{-a\bar{z}+b}{-\frac{a^{2}-1}{b}\bar{z}+a}=z,\\ \iff &(a^{2}-1)|z|^{2}-ab(z+\bar{z})+b^{2}=0. \end{aligned} $$
The last equation means that it is a half circle, with center \(\left (\frac {ab}{a^{2}-1},0\right)\) and radius of \(\left |\frac {b}{a^{2}-1}\right |\).
If a=±1,
$$-\bar{z}+b=z. $$
The equation means that the invariant set is a line perpendicular to the real line.
$$A= \left(\begin{array}{cc} \pm 1 & 0 \\ c & \pm 1 \\ \end{array}\right), $$
and if c≠0, without loss of generality, we may set a=1;
$$\begin{array}{*{20}l}&\frac{-\bar{z}}{-c\bar{z}+1}=z\\ \iff &-c|z|^{2}+(z+\bar{z})=0. \end{array} $$
The invariant set is a circle with center \(\left (\frac {1}{c},0\right)\) and the radius \(\frac {1}{|c|}\).
If c = 0, the invariant set is the imaginary axis, a line perpendicular to the real line. □
Let \(\pi \in \mathcal{R}\). Then \( \mathbb{H} = D_{+} \cup \text{Inv}_{\pi} \cup D_{-} \), where \(D_{\pm}\) are the connected components of \( \mathbb{H} \setminus \text{Inv}_{\pi} \).
Proposition 3 (Hyperbolic Reflection Principle). Let \(Z_0 \in D_+\) and \(\tau = \inf\{t \geq 0 : Z_t \notin D_+\} = \inf\{t \geq 0 : Z_t \in \text{Inv}_{\pi}\}\). If we put \(\widetilde{Z}_t = Z_t 1_{\{t<\tau\}} + \pi(Z_t) 1_{\{t\geq\tau\}}\), then we have \( (Z_t) = (\widetilde{Z}_t) \) in law.
Proof. It suffices to show that if π is a reflection of \(\mathbb{H}\), then \((\pi(Z_t))_{t\geq0} = (Z_t)_{t\geq0}\) in law if \(Z_0 \in \text{Inv}_{\pi}\), since Z is a strong Markov process and \(Z_{\tau} \in \text{Inv}_{\pi}\). As we have seen, \(\pi = \Phi_A \circ \Phi_0\) for some specific \( A \in \text{SL}(2, \mathbb{R}) \), and by Proposition 1, we only need to check that \( (-\overline{Z}_t) \) is identically distributed as \((Z_t)\) as a stochastic process; but this is obvious since \((X_t)\) is identically distributed as \((-X_t)\). □
Hyperbolic symmetrization
Hyperbolic put-call symmetry
Let \(\pi \in \mathcal{R}\). Then, by Proposition 2, we know that
$$\pi = \Phi_{A} \circ \Phi_{0} $$
$$ A = \left(\begin{array}{cc} a & b \\ \frac{a^{2}-1}{b} & a \end{array}\right) \ \text{or} \ \left(\begin{array}{cc} \pm 1 & 0 \\ c & \pm 1 \end{array}\right), \qquad a, c \in \mathbb{R},\ b \in \mathbb{R}\setminus \{0\}. $$
A Hyperbolic Brownian motion with drift is a unique solution in \(\mathbb {H}\) (if it exists) to
$$ \left\{ \begin{array}{cc} {dX}_{t}=Y_{t}dW^{1}_{t}+\mu_{1}(X_{t},Y_{t})dt \\ {dY}_{t}=Y_{t}dW^{2}_{t}+\mu_{2}(X_{t},Y_{t})dt, \end{array}\right. $$
where W1 and W2 are independent Brownian motions and μ1 and μ2 are measurable functions. If we use complex coordinate, the SDE (12) is rewritten as
$$ {dZ}_{t} = \text{Im} (Z_{t}) {dW}_{t}^{\mathbb{C}} + \mu (Z_{t}) \,dt, $$
where \( W^{\mathbb {C}} := W^{1} + i W^{2} \) and μ(Z)=μ1(Re(Z),Im(Z))+iμ2(Re(Z),Im(Z)).
Theorem 2. Let \(\pi = \Phi_{A} \circ \Phi_{0} \in \mathcal{R}\) and write
$$A =\left(\begin{array}{ll} a & b\\ c & d \end{array}\right), $$
to unify the two classes in the expression (11). Suppose that μ satisfies
$$ \mu (z) = \frac{\Phi_{0} \circ \mu \circ \pi (z) } {(c \Phi_{0}\circ\pi (z) + d)^{2}}, $$
and that (13) has a unique weak solution. Then \((\pi(Z_t))\) and \((Z_t)\) have the same law as stochastic processes, provided that \(Z_0 \in \text{Inv}_{\pi}\).
Proof. Using Itô's formula for \(\pi(Z_t)\), we have
$$\begin{aligned} d\pi(Z_{t}) &= d(\Phi_{A}\circ\Phi_{0}(Z_{t})) \\ &= \partial_{\bar{z}}(\Phi_{A}\circ\Phi_{0})(Z_{t})\, d\overline{Z}_{t} \\ &= -\frac{1}{(c\overline{Z_{t}}-d)^{2}}\,\text{Im}(Z_{t})\, d\overline{W}_{t}^{\mathbb{C}} - \frac{\overline{\mu(Z_{t})}}{(-c\overline{Z_{t}}+d)^{2}}\, dt \\ &= -\frac{|c\overline{Z_{t}}-d|^{2}}{(c\overline{Z_{t}}-d)^{2}}\,\text{Im}(\pi(Z_{t}))\, d\overline{W}_{t}^{\mathbb{C}} + \frac{\Phi_{0}\circ\mu\circ\pi^{2}(Z_{t})}{(c\,\Phi_{0}\circ\pi^{2}(Z_{t})+d)^{2}}\, dt \\ &= \text{Im}(\pi(Z_{t}))\, d\widetilde{W}^{\mathbb{C}}_{t} + \mu(\pi(Z_{t}))\, dt, \end{aligned}$$
where we have used assumption (14) in the last line and
$$d\widetilde{W}^{\mathbb{C}}_{t}=-\frac{|c\overline{Z_{t}}-d|^{2}}{(c\overline{Z_{t}}-d)^{2}}d\overline{W}_{t}^{\mathbb{C}}, $$
which is another complex Brownian motion. The theorem now follows from the law-uniqueness of the SDE (13). □
Symmetrization
Here we present a hyperbolic version of the symmetrization introduced in [2] and [10].
We keep the setting of "Hyperbolic reflection principle" section and Theorem 2 except for the drift function μ. We let
$$\tilde{\mu}(z) = \left\{\begin{array}{ll} \mu(z) & z \in D_{+} \\ \dfrac{\Phi_{0} \circ \mu \circ \pi(z)}{(c\, \Phi_{0}\circ\pi(z) + d)^{2}} & z \in \mathbb{H}\setminus D_{+}. \end{array}\right.$$
Theorem 3. (i) The law-unique solution of the SDE, if it exists,
$${dZ}_{t} = \text{Im} (Z_{t}) dW^{\mathbb{C}} + \tilde{\mu} (Z_{t}) \,dt\qquad $$
satisfies \((\pi(Z_t)) = (Z_t)\) in law, provided that \(Z_0 \in \text{Inv}_{\pi}\).
(ii) Let \(Z_0 \in D_+\) and \(\tau = \inf\{t \geq 0 : Z_t \notin D_+\} = \inf\{t \geq 0 : Z_t \in \text{Inv}_{\pi}\}\). If we put \(\widetilde{Z}_t = Z_t 1_{\{t<\tau\}} + \pi(Z_t) 1_{\{t\geq\tau\}}\), then we have \( (Z_t) = (\widetilde{Z}_t) \) in law.
(iii) [Conversion Formula] Suppose that F is a bounded measurable function on \( \mathbb {H} \) with support in D+. Then,
$$\begin{aligned} &E [F(Z_{t}) 1_{\{\tau > t\}}]\\ &= E [F(Z_{t}) ] - E [F(\pi(Z_{t})) ]. \end{aligned} $$
Proof. (i) and (ii) are direct consequences of Theorem 2 and Proposition 3. (iii) can be proven in the same manner as in [10]. □
Example 1. Let Z be the unique solution to (13), \(\pi(z) = \frac{1}{\bar{z}}\) and \(\text{Inv}_{\pi} = \{|z| = 1\}\). We let \(D_{+} := \{z \in \mathbb{H} : |z| > 1\}\) and
$$\mu(z)=c\,\text{Im}(z), $$
where c is a constant. Then the symmetrized drift \(\tilde{\mu}\) of Theorem 3 is
$$\tilde{\mu} (z) = \left\{\begin{array}{ll} c\, \text{Im}(z) & z \in D_{+} \\ -c z^{2} \text{Im}\left(\frac{1}{\bar{z}}\right) & \text{otherwise}. \end{array}\right. $$
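In code, this symmetrized drift reads as follows (a minimal sketch; `z` is a Python complex number and the function name is ours):

```python
def mu_tilde(z, c=1.0):
    """Symmetrized drift of Example 1, with pi(z) = 1/conj(z) and
    D+ = {z in H : |z| > 1}."""
    if abs(z) > 1.0:                              # z in D+
        return c * z.imag                         # mu(z) = c Im(z)
    return -c * z**2 * (1.0 / z.conjugate()).imag
```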
Numerical experiments
In the hyperbolic symmetrization proposed in the present paper the symmetrized drift may not be continuous in general, as in the case of the symmetrization in [10]. This means that no rigorous mathematical result guarantees the efficiency, that is, the (high) order of convergence, of the Euler-Maruyama approximation. In [10] it is claimed, however, that numerical experiments show the efficiency. In this section we present some simulation results for Example 1 with c=1, t=1, and F(z)=(|z|−1)_+ ∧ N with N=10^4, which suggest that the conjecture is likely to be true in the hyperbolic case as well.
We work on Euler-Maruyama discretization scheme with Monte-Carlo simulation, described below.
Let n be the number of discretization steps; we put \(t_k = k/n\), \(k = 0, 1, \cdots, n\).
Let Z be the original process and \( \widetilde{Z} \) the symmetrized one. We approximate Z and \( \widetilde{Z} \) by \(Z^n = (X^n, Y^n)\) and \( \widetilde{Z}^n = (\widetilde{X}^n, \widetilde{Y}^n) \), defined as
$$ \begin{aligned} X^{n}_{t_{k}} - X^{n}_{t_{k-1}} &= Y^{n}_{t_{k-1}} \Delta W^{1,n}_{t_{k}} + \mu\left(Y^{n}_{t_{k-1}}\right) n^{-1}, \\ Y^{n}_{t_{k}} &= Y^{n}_{t_{k-1}} \exp\left(\Delta W^{2,n}_{t_{k}} - (2n)^{-1}\right), \qquad k = 1, 2, \cdots, n, \end{aligned} $$
and
$$ \begin{aligned} \widetilde{X}^{n}_{t_{k}} - \widetilde{X}^{n}_{t_{k-1}} &= \widetilde{Y}^{n}_{t_{k-1}} \Delta W^{1,n}_{t_{k}} + \tilde{\mu}_{1}\left(\widetilde{X}^{n}_{t_{k-1}}, \widetilde{Y}^{n}_{t_{k-1}}\right) n^{-1}, \\ \widetilde{Y}^{n}_{t_{k}} - \widetilde{Y}^{n}_{t_{k-1}} &= \widetilde{Y}^{n}_{t_{k-1}} \Delta W^{2,n}_{t_{k}} + \tilde{\mu}_{2}\left(\widetilde{X}^{n}_{t_{k-1}}, \widetilde{Y}^{n}_{t_{k-1}}\right) n^{-1}, \qquad k = 1, 2, \cdots, n, \end{aligned} $$
where \(\tilde{\mu}_1\) and \(\tilde{\mu}_2\) are such that \(\tilde{\mu} = \tilde{\mu}_1 + i \tilde{\mu}_2\). Here \(\{\Delta W^{i,n}_{t_k} : i = 1, 2,\ k = 1, 2, \cdots, n\}\) simulates, by pseudo random numbers, independent copies of centered Gaussian random variables with variance \(n^{-1}\).
The Monte-Carlo simulation of Path-Wise Euler-Maruyama approximation of E[F(Z1)1{τ>1}] is obtained by
$$\begin{aligned} &\text{PW-EM} (n) \\ &:= \frac{1}{M} \sum_{m=1}^{M} F\left(Z^{n,m}_{1}\right) 1_{\{\tau^{n,m} > 1 \} }, \end{aligned} $$
where Zn,m stands for the m-th simulation of Zn, and
$$\tau^{n,m} = \min\left\{ t_{k} : |Z^{n,m}_{t_{k}}| \leq 1 \right\}.$$
The Monte-Carlo simulation of \( E[F(\widetilde {Z}_{1}) ] - E [F(\pi (\widetilde {Z}_{1}))] \) is given by
$$\begin{aligned} & \text{Symmetrization} (n) \\ &:= \frac{1}{M} \sum_{m=1}^{M} \left(F\left(\widetilde{Z}^{n,m}_{1}\right) - F \left(\pi \left(\widetilde{Z}^{n,m}_{1}\right)\right)\right), \end{aligned} $$
The "true" value Tr(n) is set to be Symmetrization(n) for some large n.
The errors are calculated accordingly as
$$\begin{aligned} &\text{PW EM Error} (n)\\ &:= \text{log} |\text{Tr}(n) - \text{PW EM} (n)| \end{aligned} $$
$$\begin{aligned} &\text{Sym Error} (n)\\ &:= \text{log} |\text{Tr}(n) - \text{Symmetrization} (n)|. \end{aligned} $$
The results are visualized as follows. Figures 1 and 2 show the results for \((X_0, Y_0) = (0.75, 0.7)\) and \((X_0, Y_0) = (1.0, 1.0)\), respectively, with the "true" value calculated for n = 1000. Tables 1 and 2 list the values of the dotted points in Figures 1 and 2, respectively. The slope of the regression line corresponds to the order of convergence, which may suggest that it is of order 1 in the case of symmetrization.
The results of the first experiment
The results of the second experiment
Table 1 (X0,Y0)=(0.75,0.7), Tr1000=0.116674
Table 2 (X0,Y0)=(1.0,1.0), Tr1000=1.253903
A barrier option is a financial derivative with an additional condition that is made active when the underlying price process goes beyond/below a certain level. For details, see e.g. [9].
An option is called of "knock-out" type if the pay-off becomes zero if the underlying price process hits a certain value.
Akahori, J, Barsotti, F, Imamura, Y: The Value of Timing Risk, working paper (2017). arXiv:1701.05695 [q-fin.PR].
Akahori, J, Imamura, Y: On symmetrization of diffusion processes. Quant. Finance. 14(7), 1211–1216 (2014).
Bowie, J, Carr, P: Static Simplicity. Risk. 7(8), 44–50 (1994).
Carr, P, Lee, R: Put-Call Symmetry: Extensions and Applications. Math. Finance. 19(4), 523–560 (2009).
Gobet, E: Weak approximation of killed diffusion using Euler schemes. Stoch. Process. Appl. 87(2), 167–197 (2000).
Hagan, P, Lesniewski, A, Woodward, D: Managing smile risk. Wilmott Mag. 1, 84–108 (2002).
Hagan, P, Lesniewski, A, Woodward, D: Probability Distribution in the SABR Model of Stochastic Volatility. In: Large Deviations and Asymptotic Methods in Finance Volume 110 of the series Springer Proceedings in Mathematics & Statistics, pp. 1–35 (2015). http://www.springer.com/us/book/9783319116044.
Henry-Labordère, P: A General Asymptotic Implied Volatility for Stochastic Volatility Models (2005). arXiv:cond-mat/0504317.
Hull, J: Options, Futures, and Other Derivatives. 9th edn. Pearson/Prentice Hall, Upper Saddle River (2014).
Imamura, Y, Ishigaki, Y, Okumura, T: A numerical scheme based on semi-static hedging strategy. Monte Carlo Methods Applic. 20(4), 223–235 (2014).
Imamura, Y, Takagi, K: Semi-Static Hedging Based on a Generalized Reflection Principle on a Multi Dimensional Brownian Motion. Asia-Pacific Finan. Markets. 20(1), 71–81 (2013).
Kohatsu-Higa, A, Lejay, A, Yasuda, K: Weak approximation errors for stochastic differential equations with non-regular drift. J. Comput. Applied Math. 326, 138–158 (2017).
Ngo, H-L, Taguchi, D: Strong convergence for the Euler–Maruyama approximation of stochastic differential equations with discontinuous coefficients. Stat. Probab. Lett. 125, 55–63 (2017).
Taguchi, D: Stability Problem for One-Dimensional Stochastic Differential Equations with Discontinuous Drift. In: Donati-Martin, C, Lejay, A, Rouault, A (eds.) Séminaire de Probabilités XLVIII. Lecture Notes in Mathematics, vol 2168. Springer, Cham (2016).
No funding was received.
The experiments are reproducible, up to the pseudo random numbers used in the Monte-Carlo simulation.
Department of Mathematics, Ritsumeikan University, 1-1-1Nojihigashi, Kusatsu, Shiga, 525-8577, Japan
Yuuki Ida, Tsuyoshi Kinoshita & Tomohiro Matsumoto
The authors introduced a hyperbolic version of Imamura-Ishigaki-Okumura's symmetrization and showed the efficiency of the scheme by numerical experiments. All authors read and approved the final manuscript.
Correspondence to Yuuki Ida.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Ida, Y., Kinoshita, T. & Matsumoto, T. Symmetrization associated with hyperbolic reflection principle. Pac. J. Math. Ind. 10, 1 (2018) doi:10.1186/s40736-017-0035-2
Revised: 10 November 2017
Hyperbolic Brownian motion
Reflection principle
Barrier option
Euler-Maruyama scheme
June 2014, 3(2): 331-348. doi: 10.3934/eect.2014.3.331
Shape optimization for non-Newtonian fluids in time-dependent domains
Jan Sokołowski 1, and Jan Stebel 2,
Institut Élie Cartan Nancy, UMR 7502 (Université de Lorraine, CNRS, INRIA), Laboratoire de Mathématiques, Université de Lorraine, B.P. 239, 54506 Vandoeuvre-lès-Nancy Cedex, France
Institute of Mathematics of the Academy of Sciences of the Czech Republic, Žitná 25, 110 00 Praha 1, Czech Republic
Received September 2013; Revised February 2014; Published May 2014
We study the model of an incompressible non-Newtonian fluid in a moving domain. The domain is defined as a tube built by the velocity field $\mathbf{V}$ and described by the family of domains $\Omega_t$ parametrized by $t\in[0,T]$. A new shape optimization problem associated with the model is defined for a family of initial domains $\Omega_0$ and admissible velocity vector fields. It is shown that such shape optimization problems are well posed under the classical conditions on compactness of the admissible shapes [18]. For the state problem, we prove the existence of weak solutions and their continuity with respect to perturbations of the time-dependent boundary, provided that the power-law index $r\ge 11/5$.
Keywords: Navier-Stokes equations, time-dependent domain, shape optimization, incompressible viscous fluid.
Mathematics Subject Classification: Primary: 35Q30, 76D55; Secondary: 35R3.
Citation: Jan Sokołowski, Jan Stebel. Shape optimization for non-Newtonian fluids in time-dependent domains. Evolution Equations & Control Theory, 2014, 3 (2) : 331-348. doi: 10.3934/eect.2014.3.331
N. Arada, Regularity of flows and optimal control of shear-thinning fluids, Nonlinear Analysis: Theory, Methods & Applications, 89 (2013), 81. doi: 10.1016/j.na.2013.04.015.
V. Barbu, I. Lasiecka and R. Triggiani, Tangential boundary stabilization of Navier-Stokes equations, Mem. Amer. Math. Soc., 181 (2006). doi: 10.1090/memo/0852.
M. C. Delfour and J.-P. Zolésio, Shapes and Geometries: Metrics, Analysis, Differential Calculus, and Optimization, Second edition, SIAM (2011). doi: 10.1137/1.9780898719826.
M. C. Delfour and J.-P. Zolésio, Oriented distance function and its evolution equation for initial sets with thin boundary, SIAM Journal on Control and Optimization, 42 (2004), 2286. doi: 10.1137/S0363012902411945.
L. Diening, M. Růžička and J. Wolf, Existence of weak solutions for unsteady motions of generalized Newtonian fluids, Annali della Scuola Normale Superiore di Pisa, Classe di Scienze, 9 (2010), 1.
R. Dziri and J.-P. Zolésio, Dynamical shape control in non-cylindrical Navier-Stokes equations, Journal of Convex Analysis, 6 (1999), 293.
E. Feireisl, O. Kreml, Š. Nečasová, J. Neustupa and J. Stebel, Weak solutions to the barotropic Navier-Stokes system with slip boundary conditions in time-dependent domains, Journal of Differential Equations, 254 (2013), 125. doi: 10.1016/j.jde.2012.08.019.
E. Feireisl, J. Neustupa and J. Stebel, Convergence of a Brinkman-type penalization for compressible fluid flows, Journal of Differential Equations, 250 (2011), 596. doi: 10.1016/j.jde.2010.09.031.
J. Frehse, J. Málek and M. Steinhauer, On existence results for fluids with shear dependent viscosity-unsteady flows, Partial Differential Equations, 406 (2000), 121.
J. Frehse, J. Málek and M. Steinhauer, On analysis of steady flows of fluids with shear-dependent viscosity based on the Lipschitz truncation method, SIAM Journal on Mathematical Analysis, 34 (2003), 1064. doi: 10.1137/S0036141002410988.
O. A. Ladyzhenskaya, New equations for the description of the motions of viscous incompressible fluids, and global solvability for their boundary value problems, Trudy Mat. Inst. Steklov., 102 (1967), 85.
O. A. Ladyzhenskaya, The Mathematical Theory of Viscous Incompressible Flow, Second English edition (1969).
O. A. Ladyzhenskaya, Initial-boundary problem for Navier-Stokes equations in domains with time-varying boundaries, Zapiski Nauchnykh Seminarov LOMI, 11 (1968), 97.
J.-L. Lions, Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires (French), Dunod, Gauthier-Villars, Paris (1969).
J. Málek and K. R. Rajagopal, Mathematical issues concerning the Navier-Stokes equations and some of its generalizations, in Evolutionary Equations, Vol. II (2005), 371.
M. Moubachir and J.-P. Zolésio, Moving Shape Analysis and Control, vol. 277, Chapman & Hall/CRC (2006). doi: 10.1201/9781420003246.
J. Neustupa, Existence of a weak solution to the Navier-Stokes equation in a general time-varying domain by the Rothe method, Mathematical Methods in the Applied Sciences, 32 (2009), 653. doi: 10.1002/mma.1059.
P. Plotnikov and J. Sokolowski, Compressible Navier-Stokes Equations: Theory and Shape Optimization, Springer-Verlag (2012). doi: 10.1007/978-3-0348-0367-0.
K. Rajagopal, Mechanics of non-Newtonian fluids, in Recent Developments in Theoretical Fluid Mechanics (Winter School) (1992), 129.
W. Schowalter, Mechanics of Non-Newtonian Fluids, Pergamon Press (1978).
T. Slawig, Distributed control for a class of non-Newtonian fluids, Journal of Differential Equations, 219 (2005), 116. doi: 10.1016/j.jde.2005.03.009.
J. Sokołowski and J. Stebel, Shape sensitivity analysis of time-dependent flows of incompressible non-Newtonian fluids, Control and Cybernetics, 40 (2011), 1077.
J. Sokołowski and J. Stebel, Shape sensitivity analysis of incompressible non-Newtonian fluids, in System Modeling and Optimization (2013), 427.
J. Sokołowski and J.-P. Zolésio, Introduction to Shape Optimization: Shape Sensitivity Analysis, Springer Series in Computational Mathematics, Springer (1992).
C. Truesdell, W. Noll and S. Antman, The Non-Linear Field Theories of Mechanics, Springer-Verlag (2004). doi: 10.1007/978-3-662-10388-3.
D. Wachsmuth and T. Roubíček, Optimal control of planar flow of incompressible non-Newtonian fluids, Z. Anal. Anwend., 29 (2010), 351. doi: 10.4171/ZAA/1412.
Vopenka's Principle for non-first-order logics
(For simplicity, the background theory for this post is NBG, a set theory directly treating proper classes which is a conservative extension of ZFC.)
Vopenka's Principle ($VP$) states that, given any proper class $\mathcal{C}$ of structures in the same (set-sized, relational) signature $\Sigma$, there are some distinct $A, B\in\mathcal{C}$ such that $A$ is isomorphic to an elementary substructure of $B$. In terms of consistency, we have the following rough upper and lower bounds: $$\text{proper class of extendibles} \le \text{Vopenka's Principle} \le \text{almost huge}.$$ (I don't know if this is state-of-the-art; more precise bounds, if known, would be welcome!) Thus, even though on the face of it $VP$ does not directly talk about cardinals, it is generally thought of as a large cardinal axiom.
Now, abstract model theory appears to give a framework for generalizing VP. Let $\mathcal{L}$ be any regular logic$^*$; then we can study "Vopenka's Principle for $\mathcal{L}$," $VP(\mathcal{L})\equiv$ "For any proper class $\mathcal{C}$ of $\Sigma$-structures ($\Sigma$ a set-sized relational signature), there are distinct $A, B\in\mathcal{C}$ with $A$ $\mathcal{L}$-elementarily embeddable into $B$." So, for example, taking $\mathcal{L}_I$ to denote first-order logic, $VP$ is just $VP(\mathcal{L}_I)$.
In principle, the resulting principles could have wildly varying large cardinal strengths. In practice, however, this seems to be extremely false.
Weaker Versions: Harvey Friedman has proved (see http://www.cs.nyu.edu/pipermail/fom/2005-August/009023.html) that $VP(\mathcal{L}_I)$ is equivalent to the statement that given any appropriate proper class $\mathcal{C}$ of structures, there are distinct $A$, $B\in\mathcal{C}$ such that $A$ is embeddable (NOT elementarily) into $B$. So $VP(\mathcal{L}_I)$ is equivalent to VP for the quantifier-free fragment of first-order logic.
Stronger Versions: Two reasonable logics to look at for stronger versions of $VP$ are $\mathcal{L}_{II}$ and $\mathcal{L}_{\omega_1\omega}$, second-order and (the smallest standard) infinitary logic respectively. However, the corresponding Vopenka principles are still just as strong as $VP(\mathcal{L}_I)$.$^{**}$ In general, $VP(\mathcal{L}_I)$ seems to be an upper bound for Vopenka's Principles for locally set-sized, definable logics. Since non-definable logics are of limited interest, it's reasonable to look at class-sized logics. The tamest class-sized logic I know of is $\mathcal{L}_{\infty\omega}$, the infinitary logic allowing arbitrary set-sized Boolean combinations but no infinite strings of quantifiers. However, $VP(\mathcal{L}_{\infty\omega})$ is inconsistent: by a famous theorem of Carol Karp, two structures are $\mathcal{L}_{\infty\omega}$-equivalent if and only if they are back-and-forth equivalent, so the class $\mathcal{O}$ of all ordinals (regarded as linear orderings) is a counterexample in any model of $ZFC$: an $\mathcal{L}_{\infty\omega}$-elementary embedding of one ordinal into another would make the two $\mathcal{L}_{\infty\omega}$-equivalent, and distinct ordinals are never back-and-forth equivalent.
This all suggests that there are probably no interesting versions of Vopenka's Principle stronger than the usual one, and that any weaker form of Vopenka has to come from a horribly weak - to the point of being probably uninteresting - logic. I find this kind of disappointing. So, my question is:
Are there any interesting logics $\mathcal{L}$ for which $VP(\mathcal{L})$ is different from the usual Vopenka's Principle?
$^*$ The definition of "regular logic" is long and tedious, but it can be found in Ebbinghaus and Flum's book "Mathematical Logic" (Definitions 12.1.2 and 12.1.3). For this post, the details don't really matter; the key points are that the structures considered are the same as for first-order logic, and that everything is classical (i.e., two truth values).
$^{**}$ The proof for $\mathcal{L}_{II}$ goes as follows. Suppose $V\models VP(\mathcal{L}_I)$, and let $\mathcal{C}\in V$ be a proper class of structures in a set-sized relational signature $\Sigma$. Let $\Sigma'$ be the signature consisting of $\Sigma$ together with a new unary relation symbol $S$ and a new binary relation symbol $E$. In $V$, we can construct the class $\mathcal{C}'$ of structures of the form $$ A':= A\sqcup (\mathcal{P}(A)\times\lbrace A\rbrace), \quad S^{A'}=\mathcal{P}(A)\times\lbrace A\rbrace, \quad E^{A'}=\lbrace (a, b): a\in A, b=(X, A), a\in X\rbrace $$ for $A\in\mathcal{C}$. Now second-order quantification over a structure in $\mathcal{C}$ can be replaced with first-order quantification over the $S$-part of the corresponding structure in $\mathcal{C}'$. So if $A'$ is first-order elementarily embeddable into $B'$, $A$ must be second-order elementarily embeddable into $B$, so since $V\models VP(\mathcal{L}_I)$ we're done. The proof for $\mathcal{L}_{\omega_1\omega}$ follows similar lines.
Tags: lo.logic, set-theory, model-theory, large-cardinals
Noah Schweber
$\begingroup$ Issues closely related to more precise consistency strength bounds are addressed by Norman Perlmutter, see his recent preprint The large cardinals between supercompact and almost-huge. In particular, he shows that a cardinal is Vopěnka iff it is Woodin-for-supercompactness (as suggested by Kanamori). $\endgroup$ – Andrés E. Caicedo Aug 8 '13 at 1:36
$\begingroup$ The statement attributed to Friedman in 2005 in the question and many similar statements too were already systematically studied in Ch 6 of Adamek and Rosicky's 1994 book Locally Presentable and Accessible Categories. For a lower bound, they show for example that Vopenka's principle is equivalent to the statement that for any proper class of graphs, one embeds (not elementarily) in another. At the upper end, Vopenka's principle implies that for any proper class of objects in an accessible category, one admits a nonidentity map to another. This includes all AECs for example. $\endgroup$ – Tim Campion Sep 6 '18 at 23:54
Since the title of your question is "Vopenka's Principle for non-first-order logics", this passage from Magidor and Vaananen's paper "On Lowenheim-Skolem-Tarski numbers for extensions of first order logic" might be of some relevance:
"Definition 3: Let $\tau$ be a fixed vocabulary. A logic $L$ consists of
A set, also denoted by $L$, of "formulas" of $L$. If $\phi \in L$, then there is a natural number $n_{\phi}$, called the length of the sequence of free variables of $\phi$.
A relation $\mathcal A \vDash \phi[a_0,\ldots,a_{n_{\phi}-1}]$ between models of vocabulary $\tau$, sequences $(a_0,\ldots,a_{n_{\phi}-1})$ of elements of $A$ and formulas $\phi \in L$. It is assumed that this relation satisfies the isomorphism axiom, that is, if $\pi\colon \mathcal A \cong \mathcal B$, then $\mathcal A \vDash \phi[a_0,\ldots,a_{n_{\phi}-1}]$ and $\mathcal B \vDash \phi[\pi a_0,\ldots,\pi a_{n_{\phi}-1}]$ are equivalent.
We call $\tau$ the vocabulary of the logic $L$.
Definition 4: The Lowenheim-Skolem number $LS(L)$ of $L$ is the smallest cardinal $\kappa$ such that if a theory $T \subset L$ has a model, it has a model of cardinality $< \max(\kappa, |T|)$. The Lowenheim-Skolem-Tarski number $LST(L)$ of $L$ is the smallest cardinal $\kappa$ such that if $\mathcal A$ is any $\tau$-structure, then there is a substructure $\mathcal A'$ of $\mathcal A$ of cardinality $< \kappa$ such that $\mathcal A' \prec_{L} \mathcal A$."
I can now state their characterization of Vopenka's Principle:
"Theorem 6: Vopenka's Principle holds if and only if every logic has a Lowenheim-Skolem-Tarski number." I leave it for you to decide whether this characterization of Vopenka's Principle is different enough from the usual Vopenka's Principle to adequately answer your question.
Thomas Benjamin
$\begingroup$ So, this is not directly related to my specific question - my question was not whether there were alternate characterizations of VP (of which there are lots), or even whether there were any in terms of abstract logics, but specifically whether the specific principles of the form $VP(L)$ were distinct for reasonable natural $L$. Still, this is interesting, so +1. $\endgroup$ – Noah Schweber Jul 18 '15 at 18:22
$\begingroup$ By the way, a note about the proof of their Theorem 6: one direction is immediate. Suppose every logic has a LST number, and fix a proper class $C$ of structures. We can now build a silly logic $L_C$ containing first-order logic with a sentence which holds in exactly the structures in $C$. The existence of an LST number of $C$ then immediately implies the existence of lots of nontrivial elementary embeddings in $C$. The nontrivial direction is showing that VP is strong enough to produce LST numbers; for this, Magidor and Vaananen use a version of supercompactness. $\endgroup$ – Noah Schweber Jul 18 '15 at 18:25
$\begingroup$ @NoahSchweber: When you ask "whether the specific principles of the form $VP(L)$" are "distinct for reasonable natural $L$", what sort of 'distinctness' would you hope to find, if such 'distinctness' did, in fact, exist (perhaps Thm. 6 suggests that there might not be any versions of $VP$ stronger than the usual one)? $\endgroup$ – Thomas Benjamin Jul 19 '15 at 1:08
$\begingroup$ As usually when discussing large-cardinal-like principles, I mean distinct in terms of provable equivalence over ZFC (or related theories), or - even better! - in terms of consistency strength over ZFC (see OP paragraph beginning "in principle"). This is not directly addressed by the result you cite - in particular, it is not clear that "$L$ has a LST number" implies $VP(L)$ (consider non-$L$-elementary classes of structures; $LST(L)$ isn't directly useful here). $\endgroup$ – Noah Schweber Jul 19 '15 at 3:04
$\begingroup$ How about first-order logic, assuming $\neg VP$? :P $\endgroup$ – Noah Schweber Jul 19 '15 at 4:11
Here is one example where changing the logic leads to inequivalent formulations of Vopenka's principle, but it is a different kind of change in the logic than you describe.
Namely, the change has to do with how one treats classes in set theory. In Gödel-Bernays GBC set theory, it is natural to formalize it as you did, as a single assertion in GBC making a claim about every class. In ZFC, however, set theorists usually consider classes as definable classes only, and so it is natural to formalize Vopenka's principle as a scheme of assertions, one statement for each definable class (as the assertion that for any parameters to be used with that definition, if it defines an Ord-length sequence of structures, then the Vopenka statement holds for it).
Since augmenting any ZFC model with only its definable classes makes it into a GB model (one should force global choice first, if necessary, to get GBC), it might seem that the difference in these formulations wouldn't matter much. But in fact, the two formulations of VP are different, as I argued in my answer to Mike Shulman's question, Can Vopenka's principle be violated definably?. What I proved there is that there can be a model of GBC satisfying the definable version of the Vopenka principle (the scheme), but not the full version in GBC. And the same issue applies to the concept of Vopenka cardinals, giving rise to the notion of almost-Vopenka cardinals.
The end result is that the first-order formulation of VP in ZFC is strictly weaker than the second-order formulation of VP in GBC.
Joel David Hamkins
$\begingroup$ This raises the question as to which of these is equivalent to the category theoretic formulation in terms of large discrete categories... $\endgroup$ – David Roberts Aug 8 '13 at 22:35
$\begingroup$ I think the answer to that is that it depends on how one treats class-sized objects in the category theory. I believe that there are analogues of definable classes and GBC style classes in category theory, and many category-theoretic issues (e.g. Is there a logical endofunctor of Set?) depend on those distinctions. $\endgroup$ – Joel David Hamkins Aug 8 '13 at 23:09
The safe carbon budget
Frederick van der Ploeg ORCID: orcid.org/0000-0003-2340-46331,2,3
Climatic Change volume 147, pages 47–59 (2018)
Cumulative emissions drive peak global warming and determine the carbon budget needed to keep temperature below 2 or 1.5 °C. This safe carbon budget is low if uncertainty about the transient climate response is high and risk tolerance (willingness to accept risk of overshooting the temperature target) is low. Together with energy costs, this budget determines the optimal carbon price and how quickly fossil fuel is abated and replaced by renewable energy. This price is the sum of the present discounted value of all future losses in aggregate production due to emitting one ton of carbon today plus the cost of peak warming that rises over time to reflect the increasing scarcity of carbon as temperature approaches its upper limit. If policy makers ignore production losses, the carbon price rises more rapidly. If they ignore the peak temperature constraint, the carbon price rises less rapidly. The alternative of adjusting damages upwards to factor in the peak warming constraint leads initially to a higher carbon price which rises less rapidly.
Many economic studies derive optimal climate policies from maximizing social welfare subject to the constraints of an integrated assessment model that combines both a model of the global economy and a model of the carbon cycle and temperature dynamics (e.g., Nordhaus 1991, 2010, 2014; Golosov et al. 2014; Dietz and Stern 2015; van den Bijgaart et al. 2016; Rezai and van der Ploeg 2016). The resulting optimal carbon price is (approximately) proportional to world GDP if global warming causes damages that are proportional to world GDP. The factor of proportionality depends on ethical considerations such as intergenerational inequality aversion (the lack of willingness to sacrifice consumption today to curb global warming many decades into the future) and the amount by which welfare of future generations is discounted (impatience). This factor also depends on the carbon cycle and heat exchange dynamics (the fraction of carbon emissions that stays up permanently, the rate at which the remaining parts of the carbon stock return to the surface of the earth, temperature inertia, etc.).
The Paris Climate Agreement within the United Nations Framework Convention on Climate Change (COP21), signed in April 2016, commits to keep global warming well below 2 °C this century and pursues efforts to limit temperature to 1.5 °C. This has the merit of focusing on a clear and easy-to-communicate target for peak global warming. Since climate change is subject to large degrees of uncertainty, one specifies a probability of say 2/3 that this target must be met, which corresponds to a risk tolerance of 1/3. Since cumulative carbon emissions drive peak global warming, the target for peak global warming determines how much carbon can be emitted in total. This is called the safe carbon budget and depends on three key parameters only: maximum permissible global warming, climate uncertainty, and risk tolerance. The path-breaking study by Fitzpatrick and Kelly (2017) also investigates the optimal climate policy under uncertainty with a probabilistic temperature target. I exploit the fact that peak global warming is approximately driven by cumulative carbon emissions. The policy problem can then be separated into two parts: first, determine the safe carbon budget for cumulative emissions and fossil fuel use, and then work out how this budget for fossil fuel use is optimally allocated over time, taking due account of production losses resulting from global warming. The resulting recommendations are straightforward to communicate to policy makers, and splitting them into two parts helps countries to agree on the required international climate policy.
My main aim is to show the drivers of the optimal time path for the carbon price which ensures that cumulative emissions from now on stay within the safe carbon budget. This carbon price and the time paths for mitigation and abatement are derived from an integrated assessment model. The price consists of two components: (1) the present discounted value of all future production losses from emitting one ton of carbon today, called the social cost of carbon SCC, which rises at the same rate as world GDP (footnote 1), and (2) the cost of staying forever within the safe carbon budget, which rises at the real interest rate to reflect the increasing scarcity of carbon as its budget gets closer to exhaustion, called the cost of peak warming CPW. Together, these two costs determine the full SCC. The optimal climate policy sets the carbon price, either via a carbon tax or an emissions market, to the full SCC. One can thus determine how fast fossil fuel is phased out and renewable energies are phased in, and how much fossil fuel is abated. Using the safe carbon budget means that ethically loaded concepts, such as how much to discount welfare of future generations and the willingness to sacrifice consumption today to curb global warming, play no role in determining the safe budget, but they do affect the timing of the energy transition and how much fossil fuel is abated. The estimated damages from global warming that have been used to calculate optimal carbon prices are low and typically lead to peak warming above 2 °C. One reason is that such estimates ignore the damages that occur from the risk of tipping points at higher temperatures.
I differ from existing studies on temperature constraints in taking cumulative emissions, peak warming, and the safe carbon budget rather than an explicit temperature constraint as the driver of climate policy. This is why the CPW rises at a rate equal to the real interest rate, not the real interest rate plus the rate of decay of atmospheric carbon as in Nordhaus (1982), Tol (2013), and Bauer et al. (2015). Lemoine and Rudik (2017) ignore the SCC and find that temperature inertia leads to an inverse U-shape of the CPW, which grows more slowly than exponentially and temporarily overshoots. However, recent results in climate science (e.g., Matthews et al. 2009; Ricke and Caldeira 2014) suggest that temperature inertia is much smaller than Lemoine and Rudik (2017) assume, in which case their rationale for an inverse U-shape of the time path for the CPW disappears and the CPW has to be much higher, as in the IPCC Fifth Assessment global mitigation cost scenarios (Clarke et al. 2014). My analysis is closest to Dietz and Venmans (2017), who also find that the optimal price of carbon consists of the SCC plus the CPW (footnote 2).
My other aim is to put forward these results in the simplest possible integrated assessment framework where cumulative emissions drive peak warming. I simplify by abstracting from non-CO2 greenhouse gases, for which the transient climate response to cumulative emissions is not valid, other climate uncertainties, detailed marginal abatement costs, endogenous technology and sectoral transformation strategies, and more convex damage functions. My aim is not to come up with the best numbers for climate policy, as this is better left to the much more detailed integrated assessment models (IAMs) (e.g., Clarke et al. 2014). The climate policy/science literature has already addressed the need to tighten climate policy in the light of the 1.5 °C target (e.g., Kriegler et al. 2014; Tavoni et al. 2015; Rogelj et al. 2015, 2016), the FEEM Limits Project, the 2016 SSP database on shared socioeconomic pathways, comparison exercises reported in IPCC studies (Clarke et al. 2014), and studies that deal with carbon prices consisting of the CPW only (e.g., Bauer et al. 2015). My analysis is complementary and more modest in that it builds a bridge between the economics literature based on production damages and the climate policy/science literature on temperature constraints. Overshooting a peak warming target bears an unacceptable risk of irreversible tipping points, and the CPW of avoiding this must be added to the usual SCC.
Paris COP21 target for peak global warming and the safe carbon budget
The key driver of peak global warming, measured as the deviation from pre-industrial temperature, PGW, is cumulative carbon emissions, E (e.g., Allen et al. 2009a, b; Matthews et al. 2009; Gillett et al. 2013; IPCC 2013; Allen 2016), which are measured here from 2015 onwards and thus do not contain historical emissions. Cumulative emissions ignore the slow removal of part of atmospheric carbon to the oceans and the surface of the earth and thus underestimate peak global warming, but only by a small amount (see Appendix A1). Denoting the transient climate response to cumulative emissions by TCRE, a linear reduced-form relationship is:
$$ PGW = \alpha + TCRE \times E \quad \text{with} \quad TCRE \equiv \overline{TCRE} \times \varepsilon \quad \text{and} \quad \ln(\varepsilon) \sim N(\mu, \sigma^2), \tag{1} $$
where $\alpha$ is a constant, $\overline{TCRE}$ is the mean of the TCRE, and $\varepsilon$ is a lognormally distributed shock to the TCRE with mean parameter set to $\mu = -0.5\sigma^2$ so that $E[\varepsilon] = 1$. The mean of the TCRE is thus $\overline{TCRE}$ and its standard deviation is $\overline{TCRE}\sqrt{\exp(\sigma^2)-1}$. This is a stochastic extension of the relationship used in Allen (2016), which allows for uncertainty in the TCRE and abstracts from additive uncertainty in PGW. The lognormal distribution has the advantage of analytical convenience and ensures that the TCRE is always positive. Uncertainty in the TCRE may follow from a more complicated stochastic process with dynamics and non-normal features such as skewness and fat tails, or may result from a number of underlying shocks to the climate system, but (1) keeps it simple. Paris COP21 has agreed to keep PGW below 2 °C (and to aim for 1.5 °C). I assume that this target has to be met with probability $0 < \beta < 1$:
$$ \mathrm{prob}\left[ PGW < 2\,{}^{\circ}\mathrm{C} \right] = \beta. \tag{2} $$
IPCC typically sets β to 2/3. The safe carbon budget compatible with (2) is deduced from (1) and denoted by \( \overline{E} \). Cumulative emissions at any time t cannot exceed the safe carbon budget:
$$ E_t \le \frac{2-\alpha}{\overline{TCRE} \times \exp\left( F^{-1}\left( \beta; -0.5\sigma^2, \sigma^2 \right) \right)} \equiv \overline{E}, \quad \forall t \ge 0, \tag{3} $$
where $F(\cdot\,; \mu, \sigma^2)$ is the cumulative distribution function of a normal random variable with mean $\mu$ and variance $\sigma^2$. Equation (3) indicates that a more ambitious target for peak global warming, say 1.5 °C instead of 2 °C, a higher expected TCRE, or a lower risk tolerance $1-\beta$ implies that less carbon can be burnt and more fossil fuel must be locked up in the earth. More uncertainty about the TCRE (higher $\sigma^2$) also cuts maximum tolerated emissions and the safe carbon budget.
Without uncertainty, a safe carbon budget of $\overline{E}=(2-\alpha)/\overline{TCRE}=362$ GtC, or 1327 GtCO2, is compatible with PGW of 2 °C given values of $\alpha = 1.276$ °C and a TCRE of 2 °C per trillion tonnes of carbon (TtC) (cf. Allen 2016; van der Ploeg and Rezai 2017). McGlade and Ekins (2015) suggest that the carbon embodied in reserves and probable reserves (resources) is 3 to 10–11 times higher than the carbon budget compatible with peak temperatures of 2 °C. They calculate that 80% of global coal reserves, half of global gas reserves, and a third of global oil reserves must be left unburnt. In practice, much more needs to be abandoned, as many oil and gas reserves are owned by states instead of private companies. Not only carbon assets will be stranded, but also energy-intensive irreversible investments in, say, coal-fired electricity generation. The more ambitious PGW target of 1.5 °C stated in the Paris COP21 agreement requires tightening the safe carbon budget to 411 GtCO2 if uncertainty is ignored. At current global yearly uses of oil, coal, and gas, this implies the end of the fossil fuel era in one decade instead of four decades.
Equation (3) indicates that climate risk implies a lower safe carbon budget and more stranded assets, especially if risk tolerance is limited. To assess the magnitude of this effect numerically, estimates of the mean and standard deviation of the TCRE are needed. Allen et al. (2009a, b) report a 5–95% probability range of the TCRE of 1.4–2.5 °C per TtC. I calibrate to a slightly wider range of 1.2–3.3 °C per TtC, and thus obtain a mean and standard deviation of the TCRE of 2 and 0.508 °C per TtC, respectively, with $\sigma = 0.25$. IPCC (2013) also reports lower figures for the 5–95% probability range of the TCRE: 1.0–2.1 °C per TtC from Matthews et al. (2009) and 0.7–2.0 °C per TtC from Gillett et al. (2013). Again taking a slightly wider range of 0.8–2.6 °C per TtC, I obtain a mean and standard deviation of the TCRE of 1.45 and 0.445 °C per TtC, respectively, with $\sigma = 0.3$.
Table 1 reports the safe carbon budget for these two calibrations, peak global warming targets of both 2 and 1.5 °C, and a range of risk tolerance values. The qualitative results are the same for the two calibrations of the TCRE, but the one based on Matthews et al. (2009) and Gillett et al. (2013) yields higher safe carbon budgets due to the lower mean value of the TCRE (despite the slightly higher standard deviation). Below, I focus on the calibration of Allen et al. (2009a, b).
Table 1 Risk tolerance and the safe carbon budget from 2015 onwards (GtCO2)
Focusing on a PGW target of 2 °C, Table 1 indicates that a risk tolerance of 1/3 (in line with the value reported by the IPCC) gives a safe carbon budget from 2015 onwards of 1228 GtCO2. Tightening risk tolerance to 10 and 1% curbs the safe carbon budget to 994 GtCO2 and 766 GtCO2, respectively. Less risk tolerance thus implies that less carbon can be burnt in total. If PGW has to be kept below 1.5 °C, the safe carbon budget drops dramatically from 1228 GtCO2 to 381 GtCO2 if risk tolerance is a third, and from 766 GtCO2 to a mere 238 GtCO2 if risk tolerance is 1%.
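To make the mapping from risk tolerance to the safe carbon budget concrete, the following minimal Python sketch evaluates eq. (3) under the Allen et al. (2009a, b) calibration quoted above (α = 1.276 °C, mean TCRE of 2 °C per TtC, σ = 0.25); the script and its function name are illustrative and not part of the original analysis.

```python
import math
from scipy.stats import norm

ALPHA = 1.276            # constant alpha in eq. (1), degrees C
TCRE_MEAN = 2.0          # mean TCRE, degrees C per TtC
SIGMA = 0.25             # std. dev. of the log-shock to the TCRE
GTC_TO_GTCO2 = 44 / 12   # mass conversion from carbon to CO2

def safe_budget_gtco2(pgw_target, beta):
    """Safe carbon budget (GtCO2) from eq. (3): peak warming stays below
    pgw_target (degrees C) with probability beta."""
    mu = -0.5 * SIGMA ** 2                   # ensures E[eps] = 1
    q = mu + SIGMA * norm.ppf(beta)          # beta-quantile of ln(eps)
    e_bar_ttc = (pgw_target - ALPHA) / (TCRE_MEAN * math.exp(q))
    return e_bar_ttc * 1000 * GTC_TO_GTCO2   # TtC -> GtC -> GtCO2

for beta in (2 / 3, 0.90, 0.99):
    print(f"risk tolerance {1 - beta:.2f}: {safe_budget_gtco2(2.0, beta):6.0f} GtCO2")
```

Up to rounding of the calibration inputs, this reproduces the 2 °C column of Table 1: roughly 1230, 994, and 766 GtCO2 for risk tolerances of 1/3, 10%, and 1%, respectively.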
Optimal energy transition given the safe carbon budget
What is the optimal timing of fossil fuel use and carbon emissions, the mitigation and abatement rates, and when is the end of the fossil fuel era? These depend crucially on the costs of fossil fuel versus those of renewable energy, the cost of abatement, and the various rates of technical progress. It is thus not surprising that the IPCC and climate scientists stress a tight target for PGW with reference to geo-physical conditions and risk. I augment a simple IAM (van der Ploeg and Rezai 2017) with the safe carbon budget constraint (3). This model has constant trend growth in world GDP, g, and constant rates of technological progress in fossil fuel extraction, mitigation of energy (which leads to a gradually rising share of renewable energy), and abatement. It models a permanent and a transitory component of the stock of atmospheric carbon (Golosov et al. 2014) and a lag between temperature and increases in atmospheric carbon concentration (Appendix A1).
Maximizing global welfare subject to the constraint that income net of damages must equal spending on consumption, energy generation, mitigation, and abatement yields the SCC, which corresponds to the unconstrained optimal carbon price. Calculation of the SCC requires additional climate parameters, i.e., the fraction of carbon emissions that stays in the atmosphere forever, β0, the rate at which the remaining emissions return to the surface of the earth and the oceans, β1, and the mean lag, Tlag, between a rise in atmospheric carbon and the ensuing temperature increase, as well as the ethical parameters, i.e., the rate at which welfare of future generations is discounted, RTI, and intergenerational inequality aversion, IIA. It can be shown that the SCC or unconstrained optimal carbon price is then proportional to world GDP (see Appendix A2; footnote 3):
$$ P_t = \tau^U \times WGDP_t \quad \text{with} \quad \tau^U \equiv \left( \frac{\beta_0}{SDR} + \frac{1-\beta_0}{SDR+\beta_1} \right) \left( \frac{1}{1 + SDR \times Tlag} \right) d, \tag{4} $$
where $WGDP_t$ denotes world GDP at time t, $SDR \equiv RTI + (IIA - 1) \times g > 0$ is the growth-corrected social discount rate, and $d > 0$ is the damage coefficient, defined as the fraction of world GDP (measured in trillion US dollars) that is lost per trillion tonnes of carbon in the atmosphere. The damage coefficient d is adjusted to allow for the delayed impact of the carbon stock on global mean temperature (see Appendix A2). The SCC is thus high and climate policy ambitious if a large part of emissions stays up forever (high β0), the absorption rate of the oceans is low (low β1), the temperature lag is small (low Tlag), welfare of future generations is discounted less heavily (low RTI), and there is more willingness to sacrifice current consumption to curb future global warming (low IIA). Higher economic growth (high g) implies that future generations are richer, so current generations are less prepared to curb global warming (especially if IIA is high), but it also implies that damages from global warming rise faster, so that a higher carbon price is warranted. The net effect of economic growth on the SCC (4) is negative if IIA > 1.
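As a sanity check on the simple rule (4), one can plug in the benchmark calibration reported in the calibration section below: RTI = 1.5% per year, IIA = 1.45, g = 2% per year, β0 = 0.2, β1 = 0.23% per year, Tlag = 10 years, d = 1.9% of world GDP per TtC, and initial world GDP of 73 trillion dollars. The sketch assumes these values and is purely illustrative.

```python
RTI, IIA, g = 0.015, 1.45, 0.02       # impatience, inequality aversion, trend growth
beta0, beta1, Tlag = 0.2, 0.0023, 10  # carbon-cycle and temperature-lag parameters
d = 0.019                             # fraction of world GDP lost per TtC
WGDP0 = 73.0                          # initial world GDP, trillion US$

SDR = RTI + (IIA - 1) * g             # growth-corrected social discount rate
tauU = (beta0 / SDR + (1 - beta0) / (SDR + beta1)) / (1 + SDR * Tlag) * d
P0_per_tC = tauU * WGDP0              # trillion $ per TtC, i.e. $ per tC
print(SDR, P0_per_tC, P0_per_tC * 12 / 44)
```

This gives SDR = 2.4% per year and an initial SCC of about $43/tC, i.e. roughly $12/tCO2, close to the $44/tC figure reported in the policy simulations below (the small gap reflects rounding of the calibration); the long-run real interest rate is SDR + g = RTI + IIA × g = 4.4% per year.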
Maximizing welfare subject to the additional constraint that cumulative carbon emissions cannot exceed the safe carbon budget yields the full social cost of carbon, SCC + CPW, which corresponds in a market economy to the constrained optimal carbon price, $P_t$. If the safe carbon budget constraint (3) bites, this price is given by (see (A17b) in Appendix A2):
$$ P_t = \left( \tau^U + \Delta e^{SDR \times t} \right) \times WGDP_t > \tau^U \times WGDP_t, \quad \forall t \le \overline{t}, \tag{5} $$
where the constant $\Delta > 0$ follows from the constraint $E_{\overline{t}} = \int_0^{\overline{t}} (1-a_t)(1-m_t)\,\gamma_0 e^{-r_{\gamma} t}\, WGDP_t \, dt = \overline{E}$. Here, $m_t$ is the mitigation rate (the share of renewables in total energy) at time t, $a_t$ is the abatement rate at time t, $\gamma_0 e^{-r_{\gamma} t}$ is energy use as a fraction of world GDP at time t, and $\overline{t}$ is the date of the end of the fossil fuel era.
The constrained optimal carbon price (5) consists of two terms: (i) the SCC, $\tau^U \times WGDP_t$, which grows at the same rate as world GDP, familiar from the literature on simple rules for the optimal unconstrained carbon price (cf. Golosov et al. 2014; van den Bijgaart et al. 2016; Rezai and van der Ploeg 2016); and (ii) the CPW, $\Delta e^{SDR \times t} \times WGDP_t$, which grows at the rate of the real interest rate, i.e., $SDR + g = RTI + IIA \times g > 0$. If policy makers ignore production damages from global warming (cf. Nordhaus 1982; Tol 2013; Bauer et al. 2015; Lemoine and Rudik 2017), the constrained optimal carbon price boils down to the CPW:
$$ P_t = \Delta^{\ast} e^{\left( RTI + IIA \times g \right) \times t} \times WGDP_0, \quad \forall t \le \overline{t}, \tag{6} $$
where $RTI + IIA \times g > 0$ is the real interest rate and $\Delta^{\ast}$ ensures that the safe carbon budget is never violated. The constrained carbon price is then simply the CPW, which rises as the carbon budget approaches exhaustion. Matters become more complicated if there is also a substantial temperature lag, since then the CPW has an inverse U-shape and might overshoot (Lemoine and Rudik 2017). This does not occur if the peak temperature constraint is formulated in terms of cumulative emissions. This is also why the CPW rises at the real interest rate and not at the real interest rate plus the rate of decay of atmospheric carbon.
In a market economy, cost minimization by firms requires that the marginal cost of fossil fuel equals the marginal cost of mitigating fossil fuel plus the price of carbon for using unabated fossil fuel, $(1-a_t)P_t$ (see Appendix A3). Mitigation thus increases with the relative cost of carbon-emitting technologies and abatement, including the price of non-abated carbon (see eq. (A20)). Cost minimization also requires that the marginal cost of abatement equals the saved cost of carbon emissions. Abatement thus rises as its cost falls or as the carbon price rises over time (see (A21)). I assume cost conditions are such that fossil fuel is fully mitigated before it is fully abated.
Calibration of carbon stock dynamics, damages, and the economy
The top panel of Table 2 gives the benchmark estimates of the variance of the lognormally distributed shock to the TCRE, the target for PGW, and the risk tolerance as discussed in section 2. Although the IPCC typically takes a risk tolerance of 1/3, I have set it to 10%, and even this might be on the high side given that the risks of tipping points, and the damages done by the ensuing climate catastrophes when temperature exceeds 2 °C, are large. The parameters in the bottom two panels, excluding (b), come from Rezai and van der Ploeg (2016) and van der Ploeg and Rezai (2017) and are based on the DICE-2013R IAM (Nordhaus 2010, 2014). The middle panel gives the parameters needed for finding the optimal energy mix and the transition to the carbon-free era from cost minimization given the carbon price, and the bottom panel gives the additional parameters needed for calculating the SCC.
Table 2 Calibration details
Global energy use, measured in GtC, is 0.14% of world GDP, which matches current energy use of 10 GtC and initial world GDP of 73 trillion dollars. I focus on mitigation and abatement, and thus set exogenous technical progress in energy needs to zero. Initial fossil fuel and renewable energy costs are calibrated to give current energy cost shares of 7% of GDP and an additional cost of 5.6% of GDP for full decarbonization. The cost of fossil fuel is set to 515 $/tC and rises at a rate of 0.1% per year to capture resource scarcity. Technical change reducing the costs of mitigation and abatement proceeds at 1.25% per year, which matches a cost of 1.6% of GDP for full decarbonization in 100 years. The cost of full abatement is calibrated to an initial value of 20% of GDP, which then falls at the rate of progress in non-carbon technologies to 5.7% of GDP in 100 years.
Turning to the bottom panel, the rate of time impatience is set to 1.5% per year and captures how heavily policy makers discount the welfare of future generations. Intergenerational inequality aversion is set to 1.45 and indicates how little policy makers are prepared to sacrifice utility of current generations for the benefit of future generations. Given a trend growth rate of world GDP of 2% per year, this implies a long-run real interest rate of 4.4% per year. Global warming damages in any year are 1.9% of world GDP per trillion tonnes of carbon in the atmosphere. These damages rise at the same rate as world GDP, so the relevant discount rate is the growth-corrected long-run real interest rate of 2.4% per year.
Effective carbon in the atmosphere takes account of the 10-year delay between a rise in the stock of carbon and the response of mean global temperature (cf. Ricke and Caldeira 2014). A fifth of carbon stays, to all intents and purposes, permanently in the atmosphere; the remainder slowly returns to the oceans and the surface of the earth at a rate of 0.23% per year (cf. Golosov et al. 2014).
Constrained optimal climate policy simulations with a safe carbon budget
Using this calibration, not pricing carbon at all leads to zero mitigation and zero abatement, cumulative emissions of 6519 GtCO2, an end of the fossil fuel era after 118 years, and PGW of 4.6 °C, which is much too high. The globally best unconstrained climate policy is portrayed by the purple solid lines in Fig. 1 and has a zero CPW. It has an initial carbon price or SCC of $12/tCO2 (or $44/tC), which grows at 2% per annum from then on. The mitigation rate is driven by technological progress and the rising price of carbon, and increases from 20 to 100% in 78 years, at which date the carbon-free era starts. The abatement rate rises from a mere 1.5 to 19% at the end of the fossil fuel era. In total, 2328 GtCO2 is burnt, which implies PGW of 2.6 °C. The unconstrained climate policy thus overshoots the 2 °C target agreed at the Paris COP21 conference by 0.6 °C. The safe carbon budget from 2015 onwards corresponding to a risk tolerance of 10% and a peak warming target of 2 °C is 994 GtCO2 (see Table 1; footnote 4).
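These peak warming figures can be checked against the reduced form (1), a back-of-the-envelope sketch assuming the calibration used throughout (α = 1.276 °C, mean TCRE of 2 °C per TtC, σ = 0.25); the helper function is illustrative only.

```python
import math

ALPHA, TCRE_MEAN, SIGMA = 1.276, 2.0, 0.25

def to_ttc(gtco2):
    """Convert cumulative emissions in GtCO2 to TtC."""
    return gtco2 * 12 / 44 / 1000

# Unconstrained optimum: 2328 GtCO2 evaluated at the mean TCRE
print(ALPHA + TCRE_MEAN * to_ttc(2328))                  # ~2.55, the reported 2.6 C

# Constrained policy: 994 GtCO2 evaluated at the 90th-percentile TCRE
q90 = -0.5 * SIGMA ** 2 + SIGMA * 1.2816                 # 1.2816 = 90% normal quantile
print(ALPHA + TCRE_MEAN * math.exp(q90) * to_ttc(994))   # ~2.0 C
```

The second line makes transparent why 994 GtCO2 corresponds to a 10% risk tolerance: at the 90th percentile of the TCRE distribution, peak warming just touches the 2 °C ceiling.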
Fig. 1 Constrained, adjusted, and unconstrained optimal climate policies
Figure 1 portrays three policies to ensure that cumulative emissions stay within this budget: (1) the constrained optimal carbon price (5), SCC + CPW, with d calibrated to estimated production damages (black dashed lines); (2) the constrained optimal carbon price (6) ignoring these damages, CPW, and thus with d = 0 (black dotted lines); and (3) the optimal carbon price with damages adjusted upwards to stay within the safe carbon budget (red dashed-dotted lines).
Constrained optimal carbon price with calibrated damages
The constrained optimal carbon price manages to keep cumulative emissions to 994 GtCO2 and has two components: the SCC and the CPW (the difference between the dashed black and the purple solid line). The SCC rises at the rate of growth of world GDP (2% per year) and the CPW rises at a rate equal to the real interest rate (4.4% per year). The initial CPW is $10/tCO2, so that the initial carbon price has to increase from $12 to $22/tCO2. The carbon era now ends in 49 instead of 78 years. During this period, the mitigation rate rises from 28 to 100% and the abatement rate rises from 2.8 to 34%. Note that a peak warming target of 1.5 °C implies that only 308 GtCO2 can be burnt. It necessitates a much higher path for the constrained optimal carbon price that starts at $58/tCO2 and rises in a mere 28 years to $179/tCO2 at the end of the carbon era (not shown).
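A quick arithmetic check of these figures, using only the growth rates and initial values quoted above, confirms that the two components add up to the reported final price; the snippet is a sketch, not part of the original model code.

```python
import math

g, r = 0.02, 0.044        # SCC grows with world GDP; CPW grows at the real interest rate
scc0, cpw0 = 12.0, 10.0   # reported initial SCC and CPW, $/tCO2
t_end = 49                # reported end of the fossil fuel era, years

scc_T = scc0 * math.exp(g * t_end)
cpw_T = cpw0 * math.exp(r * t_end)
print(scc_T, cpw_T, scc_T + cpw_T)   # ~32 + ~86 = ~118 $/tCO2, vs the reported $119
```

Because the CPW compounds at 4.4% rather than 2% per year, it overtakes the SCC within a decade and accounts for most of the carbon price by the end of the carbon era.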
Constrained cost-minimizing carbon price ignoring calibrated damages or CPW
Ignoring production damages of global warming, policy makers set the carbon price to the CPW which ensures that cumulative emissions do not exceed 994 GtCO2. This price rises more rapidly than the path that does take account of damages. It starts somewhat lower at $16 instead of $22/tCO2 and rises in 47 years to a final carbon price of $128 instead of $119/tCO2. As a result, mitigation starts somewhat more modestly (at 24%) too. Abatement is more modest and rises from 2.0 to 29% at the end of the carbon era.
Welfare-maximizing carbon prices with damages adjusted upwards
Since welfare maximization with calibrated damages leads to overshooting of the peak warming target, this suggests that calibrated damages underestimate the true risk of global warming in that they ignore the risks of tipping points and climate disasters, which are captured by the safe carbon budget constraint. Adjusting the damage coefficient upwards by a factor 2.8 (i.e., from 1.9 to 5.4% of world GDP per TtC) ensures that cumulative emissions never exceed the safe carbon budget when welfare is maximized. The end of the fossil fuel era then occurs more than two decades earlier than with the unconstrained optimal carbon price (after 56 instead of 78 years), though later than with the constrained welfare-maximizing carbon price (49 years). The initial carbon price almost triples from $12 to $34/tCO2 and then rises at 2% per annum in line with the rate of economic growth (footnote 5). As a result of this more ambitious climate policy, the path for the mitigation rate is higher: it starts at 36% and rises to 100% during the fossil fuel era. Abatement is also higher; it starts at 4.2% and rises to 21% towards the end of the fossil fuel era.
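The factor 2.8 is internally consistent with the simple rule (4): since (4) is linear in the damage coefficient d, scaling d scales the initial SCC by the same factor, as this two-line check illustrates.

```python
print(1.9 * 2.8)   # 5.32, close to the adjusted damage coefficient of 5.4% of GDP per TtC
print(12 * 2.8)    # 33.6, close to the reported adjusted initial price of $34/tCO2
```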
Climate uncertainty, a higher transient climate response to cumulative emissions, and a tighter risk tolerance imply a lower safe carbon budget, so that less fossil fuel can be burnt in total, which requires a more ambitious climate policy. The relatively modest identified damages from global warming in integrated assessment models imply that the unconstrained welfare-maximizing carbon price set to the SCC leads to overshooting of the peak warming target, and thus that the safe carbon budget constraint bites. There are three options for staying within the safe carbon budget. The first option occurs if policy makers take account of production damages from global warming and ensure that the safe carbon budget constraint is never violated. The carbon price then consists of the SCC based on calibrated damages, which rises at a rate equal to the growth rate of world GDP, and the CPW, which rises at a faster rate equal to the real interest rate. The second option occurs if policy makers ignore damages, as in the cost-minimizing temperature constraint literature. This leads to a more rapidly rising carbon price equal to the CPW. The third option is to acknowledge that damages are underestimated and adjust them upwards by factoring in the peak warming constraint. This leads to a less rapidly rising carbon price than the first option.
The safe carbon budget is easy to negotiate and communicate, and does not depend on ethical considerations regarding welfare of current and future generations. Once policy makers have agreed on what the appropriate risk tolerance is, the safe carbon budget follows directly from the climate physics. If production damages are ignored and the carbon price is set to the CPW, no further information on intergenerational fairness is needed if the carbon price results from a competitive market for emission permits. However, if the price is implemented via carbon taxes, policy makers need to specify the interest rates at which carbon taxes have to grow and these depend on ethical considerations.
More generally, carbon prices are affected by a wide range of other climate and economic uncertainties, some of which are not resolved until the distant future. The solution then requires sophisticated stochastic dynamic programming algorithms. Uncertainty about future growth of aggregate consumption depresses the social discount rate used by prudent policy makers and pushes up the SCC even more (e.g., Gollier 2012). Other types of uncertainty, about future damage flows resulting from atmospheric carbon, the climate sensitivity, and sudden releases of greenhouse gases into the atmosphere, boost the risk-adjusted SCC even more and take account of hedging risks (e.g., Dietz et al. 2017; Hambel et al. 2017; Bremer and van der Ploeg 2017). Mitigating the risks of future interacting, multiple tipping points can push up the carbon price by a further factor of 2 to 8 (Lemoine and Traeger 2016; Cai et al. 2016). As uncertainty about the climate sensitivity has the biggest effect on carbon prices (footnote 6), it may not be a bad idea to start with the risk-adjusted safe carbon budget. For future research, it is important to extend the literature on risk-adjusted carbon prices with resolution of a wide range of future uncertainties to allow for peak warming constraints.
It has been argued that an approach based on probabilistic stabilization targets is ad hoc and incurs welfare costs of 5%, as the targets are inflexible and do not respond to changes in climatic conditions, the resulting policies tend to overreact to transient shocks, and the temperature ceiling is lower than the unconstrained optimal temperature under certainty (Fitzpatrick and Kelly 2017; footnote 7). The relatively small welfare costs may be a price worth paying if an easy-to-communicate temperature target prompts policy makers into action. In fact, the IPCC approach of focusing attention on cumulative emissions and the safe carbon budget concentrates on what matters most for global warming. The role of economics is to show how these cumulative budgets translate in the most cost-efficient manner into time paths of fossil fuel use, renewable use, and abatement. This paper has extended the IPCC approach to allow for various forms of climate uncertainty, since these curb the safe carbon budget significantly. This is related to the point-of-no-return approach (van Zalinge et al. 2017), which prompts the question of what to do once the climate has moved outside the viable region and can no longer be moved back into it by traditional carbon pricing policies. Negative carbon emissions and, therefore, unconventional policies such as geo-engineering are then called for (e.g., Keith 2000; Crutzen 2006; McCracken 2006; Bala et al. 2008; Lenton and Vaughan 2009; Barrett et al. 2014; Moreno-Cruz and Smulders 2016), and some argue that they are already needed to keep global warming below 2 °C (e.g., Gasser et al. 2015). Such policies act as insurance and are needed before the climate moves outside the viable set and reaches the point of no return. More work is needed on the reversible and irreversible uncertainties driving the climate (both the stock of carbon in the atmosphere and temperature) and on what they imply for the safe carbon budget, climate mitigation and adaptation policies, and the need for negative-emissions policies.
Footnote 1. This is in line with recent studies on simple rules for the optimal carbon price in the absence of temperature constraints (e.g., Golosov et al. 2014; van den Bijgaart et al. 2016; Rezai and van der Ploeg 2016).
Footnote 2. Barbier and Burgess (2017) take a user cost approach to the 2 °C target. They show that for constant (declining at 2/6% per year) emissions, global welfare increases by 6% (19%) of global GDP and the carbon budget's lifetime increases from 18 to 21 (30) years compared with growing emissions under business as usual.
Footnote 3. Our formulation of damages extends that of Golosov et al. (2014) by adding a temperature lag. The carbon price (4) is independent of the carbon stock. With more convex damages, the carbon price (4) would increase with global warming as well as with world GDP. Convex damages capture the risk of tipping points, but this risk is already captured by the explicit additional temperature constraint, which justifies our specification with flat marginal damages.
Footnote 4. This is not too different from the 1 TtCO2 from 2011 onwards reported in the IPCC Fifth Assessment Report, given a historical carbon budget of 2900 GtCO2 and cumulative emissions during 1870–2011 of 1900 GtCO2.
Footnote 5. The average adjusted carbon price over 2015–2100 is $89/tCO2 for a safe carbon budget of 994 GtCO2. The initial and average adjusted carbon prices for a budget of 1327 GtCO2 (i.e., ignoring uncertainty; see Table 1) are $25 in 2015 and $65/tCO2, respectively. These are lower than the 2020 carbon prices in 2010 US dollars reported by Working Group III of the IPCC Fifth Assessment Report (Clarke et al. 2014) of $50–60 at a 5% discount rate.
Footnote 6. Van den Bijgaart et al. (2016) point out that if the multiplicative factors determining the optimal unconstrained price of carbon are lognormally distributed, the price of carbon is lognormally distributed too. This allows one to obtain the difference between the mean and the median of the optimal unconstrained carbon price and to see how it is driven by uncertainties in the carbon cycle, temperature adjustment, climate sensitivity, damages, and the discount rate. Table 2 of that study indicates that uncertainties about climate sensitivity and damage shocks give the largest adjustments to the risk-adjusted carbon price.
Footnote 7. This study allows for Bayesian learning and stochastic weather shocks, but the optimal policy with learning is close to that without learning, as learning about the climate sensitivity is a slow process. The study uses an infinite-horizon version of the integrated assessment model DICE with a sophisticated model for temperature dynamics and carbon exchange.
Allen M (2016) Drivers of peak warming in a consumption-maximizing world. Nat Clim Chang 6:684–686
Allen MR, Frame D, Frieler K, Hare W, Huntingford C, Jones C, Knutti R, Lowe J, Meinshausen M, Meinshausen N, Raper S (2009) The exit strategy. Nat Rep Clim Change 3(May):56–58
Allen MR, Frame DJ, Huntingford C, Jones CD, Lowe JA, Meinshausen M, Meinshausen N (2009) Warming caused by cumulative emissions towards the trillionth tonne. Nature 458:1163–1166
Bala G, Duffy PB, Taylor KE (2008) Impact of geoengineering schemes on the global hydrological cycle. Proc Natl Acad Sci 105:7664–7669
Barbier EB, Burgess JC (2017) Depletion of the global carbon budget: a user cost approach. Environ Dev Econ:1–16
Barrett S, Lenton TM, Millner A, Tavoni A, Carpenter S, Anderies JM, Chapin FS, Crepin A-S, Daily G, Ehrlich P, Folke C, Galaz V, Hughes T, Kautsky N, Lambin EF, Naylor R, Nyborg K, Polasky S, Scheffer M, Wilen J, Xepapadeas A, de Zeeuw AJ (2014) Climate engineering reconsidered. Nat Clim Chang 4(7):527–529
Bauer N, Bosetti V, Hamdi-Cheriff M, Kitous A, McCollum D, Mjean A, Rao S, Turton H, Paroussos L, Ashina S, Calvin K, Wada K, van Vuuren D (2015) CO2 emission mitigation and fossil fuel markets: dynamic and international aspects of climate policy. Technol Forecast Soc Chang 90(A):243–256
Bremer TS, van der Ploeg F (2017) Pricing economic and climatic risks into the price of carbon: leading-order results from asymptotic analysis, mimeo. Edinburgh University
Cai Y, Lenton TM, Lontzek TS (2016) Risk of multiple climate tipping points should trigger a rapid reduction in CO2 emissions. Nat Clim Chang 6:520–525
Clarke L, Jiang K, Akimoto K, Babiker M, Fisher-Vanden K, Hourcade J-C, Krey V, Kriegler E, Löschel A, McCollum D, Paltsev S, Rose S, Shukla PR, Tahvoni M, van der Zwaan BCC, van Vuuren DP (2014) Assessing transformation pathways. In: Edenhofer O et al (eds) Climate change 2014: mitigation of climate change. Contribution of Working Group III to the Fifth Assessment Report of the International Panel on Climate Change. Cambridge University Press, Cambridge
Crutzen P (2006) Albedo enhancement by stratospheric sulfur injections. Clim Chang 77:211–219
Dietz S, Gollier C, Kessler L (2017) The climate beta. J Environ Econ Manag Forthcom
Dietz S, Stern N (2015) Endogenous growth, convexity of damages and climate risk: how Nordhaus' framework supports deep cuts in emissions. Econ J 125(583):574–620
Dietz S, Venmans F (2017) Cumulative carbon emissions and economic policy: in search of general principles, Working Paper No. 283, Grantham Research Institute on Climate Change and the Environment, LSE, London, U.K
Fitzpatrick LG, Kelley DL (2017) Probabilistic stabilization targets. J Assoc Environ Resour Econ 4(2):611–657
Gasser T, Guivarch C, Tachiiri K, Jones CD, Ciais P (2015) Negative emissions physically needed to keep global warming below 2 °C. Nat Commun 6:7958–7965
Gillett NP, Arora VK, Matthews D, Allen MR (2013) Constraining the ratio of global warming to cumulative CO2 emissions using CMIP5 simulations. J Clim 26:6844–6858
Gollier C (2012) Pricing the planet's future: the economics of discounting in an uncertain world. Princeton University Press, Princeton
Golosov M, Hassler J, Krusell P, Tsyvinski A (2014) Optimal taxes on fossil fuel in general equilibrium. Econometrica 82(1):48–88
Hambel C, Kraft H, Schwartz E (2017) Optimal carbon abatement in a stochastic general equilibrium model with climate change, mimeo. Goethe University Frankfurt
IPCC (2013) Long-term climate change: projections, commitments, and irreversibilities, Chapter 12, Sections 5.4.2 and 5.4.3, Working Group 1, Contribution to the IPCC 5th Assessment Report, International Panel of Climate Change
Keith DW (2000) Geoengineering the climate: history and prospect. Annu Rev Energy Environ 25:245–284
Kriegler E, Weyant JP, Blanford GJ, Krey V, Clarke L, Edmonds J, Fawcett A, Luderer G, Riahi K, Richels R, Rose SK, Tavoni M, van Vuuren DP (2014) The role of technology for achieving climate policy objectives: overview of the EMF 27 study on global technology and climate policy strategies. Clim Chang 123(3–4):353–367
Lemoine D, Rudik I (2017) Steering the climate system: using inertia to lower the cost of policy. Am Econ Rev Forthcom
Lemoine D, Traeger CP (2016) Economics of tipping the climate dominoes. Nat Clim Chang 6:514–519
Lenton T, Vaughan N (2009) The radiative forcing potential of different climate engineering options. Atmos Chem Phys 9:5539–5561
Matthews HD, Gillett NP, Stott PA, Zickfeld K (2009) The proportionality of global warming to cumulative carbon emissions. Nature 459:829–832
McCracken MC (2006) Geoengineering: worthy of cautious evaluation? Clim Chang 77:235–243
McGlade C, Ekins P (2015) The geographical distribution of fossil fuels used when limiting global warming to 2 °C. Nature 517:187–190
Moreno-Cruz JB, Smulders JA (2016) Revisiting the economics of climate change: the role of geoengineering. Res Econ, in press
Nordhaus W (1982) How fast should we graze the global commons? Am Econ Rev 72(2):242–246
Nordhaus W (1991) To slow or not to slow: the economics of the greenhouse effect. Econ J 101(407):920–937
Nordhaus W (2010) Economic aspects of global warming in a post-Copenhagen world. Proc Natl Acad Sci 107(26):11721–11726
Nordhaus W (2014) Estimates of the social cost of carbon: concepts and results from the DICE-2013R model and alternative approaches. J Assoc Environ Resour Econ 1:273–312
Rezai A, van der Ploeg F (2016) Intergenerational inequality aversion, growth and the role of damages: Occam's rule for the global carbon tax. J Assoc Environ Resour Econ 3(2):493–522
Ricke KL, Caldeira K (2014) Maximum warming occurs about one decade after a carbon dioxide emission. Environ Res Lett 9(12):124002
Rogelj J, Luderer G, Pietzcker RC, Kriegler E, Schaeffer M, Krey V, Riahi K (2015) Energy system transformations for limiting end-of-century warming to below 1.5 °C. Nat Clim Chang 5:519–527
Rogelj J, van den Elzen M, Höhne N, Fransen T, Fekete H, Winkler H, Schaeffer R, Sha F, Riahi K, Meinshausen M (2016) Paris agreement climate proposals need a boost to keep warming well below 2 °C. Nature 534:631–639
Tavoni M, Kriegler E, Riahi K, van Vuuren DP, Aboumahboub T, Bowen A, Calvin K, Campiglio E, Kober T, Jewell J, Luderer G, Marangoni G, McCollum D, van Sluisveld M, Zimmer A, van der Zwaan B (2015) Post-2020 climate agreements in the major economies assessed in the light of global models. Nat Clim Chang 5:119–126
Tol RSJ (2013) Targets for global climate policy: an overview. J Econ Dyn Control 37(5):911–928
van den Bijgaart IM, Gerlagh R, Liski M (2016) A simple formula for the social cost of carbon. J Environ Econ Manag 77:75–94
van der Ploeg F, Rezai A (2017) Climate policy with declining discount rates in a multi-region world—back-of-the-envelope calculations, mimeo. University of Oxford
van Zalinge BC, Feng Q, Dijkstra HA (2017) On determining the point of no return in climate change. Earth Syst Dyn 8:707–717
OXCARRE, Department of Economics, University of Oxford, Manor Road Building, Oxford, OX1 3UQ, UK
Frederick van der Ploeg
VU University Amsterdam, De Boelelaan 1105, 1081 HV, Amsterdam, the Netherlands
St. Petersburg State University, 7/9 Universitetskaya nab., St. Petersburg, Russia, 199034
Correspondence to Frederick van der Ploeg.
van der Ploeg, F. The safe carbon budget. Climatic Change 147, 47–59 (2018). https://doi.org/10.1007/s10584-017-2132-8
Issue Date: March 2018
Op-Amp Applications
A circuit is said to be linear, if there exists a linear relationship between its input and the output. Similarly, a circuit is said to be non-linear, if there exists a non-linear relationship between its input and output.
Op-amps can be used in both linear and non-linear applications. The following are the basic applications of op-amp −
Inverting Amplifier
Non-inverting Amplifier
Voltage Follower
This chapter discusses these basic applications in detail.
An inverting amplifier receives the input at its inverting terminal through a resistor $R_{1}$, and produces an amplified version of it as the output. This amplifier not only amplifies the input but also inverts it (changes its sign).
The circuit diagram of an inverting amplifier is shown in the following figure −
Note that for an op-amp, the voltage at the inverting input terminal is equal to the voltage at its non-inverting input terminal. Physically, there is no short between these two terminals, but virtually they behave as if shorted.
In the circuit shown above, the non-inverting input terminal is connected to ground. That means zero volts is applied at the non-inverting input terminal of the op-amp.
According to the virtual short concept, the voltage at the inverting input terminal of an op-amp will be zero volts.
The nodal equation at this terminal's node is as shown below −
$$\frac{0-V_i}{R_1}+ \frac{0-V_0}{R_f}=0$$
$$=>\frac{-V_i}{R_1}= \frac{V_0}{R_f}$$
$$=>V_{0}=\left(\frac{-R_f}{R_1}\right)V_{i}$$
$$=>\frac{V_0}{V_i}= \frac{-R_f}{R_1}$$
The ratio of the output voltage $V_{0}$ and the input voltage $V_{i}$ is the voltage-gain or gain of the amplifier. Therefore, the gain of inverting amplifier is equal to $-\frac{R_f}{R_1}$.
Note that the gain of the inverting amplifier has a negative sign. It indicates that there exists a $180^{\circ}$ phase difference between the input and the output.
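As a quick numerical check of this relation, the short sketch below evaluates the ideal inverting gain for a few inputs; the resistor values are illustrative assumptions, not values from the text.

```python
# Minimal sketch of the ideal inverting-amplifier relation V0 = -(Rf/R1) * Vi.
# Resistor values are illustrative assumptions, not values from the text.

def inverting_output(v_in, r_f=10e3, r_1=1e3):
    """Ideal op-amp inverting amplifier: gain = -Rf/R1."""
    return -(r_f / r_1) * v_in

for v_in in [0.1, 0.5, -0.2]:
    print(f"Vi = {v_in:+.2f} V  ->  V0 = {inverting_output(v_in):+.2f} V")
# With Rf = 10 kOhm and R1 = 1 kOhm the gain is -10; the sign flip reflects
# the 180 degree phase difference noted above.
```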
A non-inverting amplifier takes the input through its non-inverting terminal, and produces its amplified version as the output. As the name suggests, this amplifier just amplifies the input, without inverting or changing the sign of the output.
The circuit diagram of a non-inverting amplifier is shown in the following figure −
In the above circuit, the input voltage $V_{i}$ is directly applied to the non-inverting input terminal of op-amp. So, the voltage at the non-inverting input terminal of the op-amp will be $V_{i}$.
By using voltage division principle, we can calculate the voltage at the inverting input terminal of the op-amp as shown below −
$$=>V_{1} = V_{0}\left(\frac{R_1}{R_1+R_f}\right)$$
According to the virtual short concept, the voltage at the inverting input terminal of an op-amp is same as that of the voltage at its non-inverting input terminal.
$$=>V_{1} = V_{i}$$
$$=>V_{0}\left(\frac{R_1}{R_1+R_f}\right)=V_{i}$$
$$=>\frac{V_0}{V_i}=\frac{R_1+R_f}{R_1}$$
$$=>\frac{V_0}{V_i}=1+\frac{R_f}{R_1}$$
Now, the ratio of output voltage $V_{0}$ and input voltage $V_{i}$ or the voltage-gain or gain of the non-inverting amplifier is equal to $1+\frac{R_f}{R_1}$.
Note that the gain of the non-inverting amplifier has a positive sign. It indicates that there is no phase difference between the input and the output.
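A similar sketch for the non-inverting relation follows; again the resistor values are illustrative assumptions. Setting $R_{f} = 0$ already hints at the voltage follower discussed next.

```python
# Minimal sketch of the ideal non-inverting relation V0 = (1 + Rf/R1) * Vi.
# Resistor values are illustrative assumptions, not values from the text.

def non_inverting_output(v_in, r_f=9e3, r_1=1e3):
    """Ideal op-amp non-inverting amplifier: gain = 1 + Rf/R1."""
    return (1.0 + r_f / r_1) * v_in

print(non_inverting_output(0.5))           # gain 10 -> +5.0 V, same sign as input
print(non_inverting_output(0.5, r_f=0.0))  # Rf = 0 gives unity gain (a voltage follower)
```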
A voltage follower is an electronic circuit, which produces an output that follows the input voltage. It is a special case of non-inverting amplifier.
If we consider the value of the feedback resistor $R_{f}$ as zero ohms and/or the value of the resistor $R_{1}$ as infinite ohms, then a non-inverting amplifier becomes a voltage follower. The circuit diagram of a voltage follower is shown in the following figure −
In the above circuit, the input voltage $V_{i}$ is directly applied to the non-inverting input terminal of the op-amp. So, the voltage at the non-inverting input terminal of the op-amp is equal to $V_{i}$. Here, the output is directly connected to the inverting input terminal of the op-amp. Hence, the voltage at the inverting input terminal of the op-amp is equal to $V_{0}$.
According to the virtual short concept, the voltage at the inverting input terminal of the op-amp is same as that of the voltage at its non-inverting input terminal.
So, the output voltage $V_{0}$ of a voltage follower is equal to its input voltage $V_{i}$.
Thus, the gain of a voltage follower is equal to one, since both the output voltage $V_{0}$ and the input voltage $V_{i}$ of the voltage follower are the same.
Comparison of effectiveness between warm acupuncture with local-distal points combination and local distribution points combination in breast cancer-related lymphedema patients: a study protocol for a multicenter, randomized, controlled clinical trial
Chien-Hung Yeh1,
Tian Yi Zhao1,
Mei Dan Zhao1,
Yue Wu1,
Yong Ming Guo1,
Zhan Yu Pan2,
Ren Wei Dong1,
Bo Chen1,
Bin Wang2,
Jing Rong Wen1,
Dan Li1,
Yi Guo3,4 &
Xing Fang Pan1
Trials volume 20, Article number: 403 (2019) Cite this article
Lymphedema is the most common complication after breast cancer treatment, but management of lymphedema remains a clinical challenge. Several studies have reported the beneficial effect of acupuncture for treating breast cancer-related lymphedema (BCRL). Our objective is to verify the effectiveness of warm acupuncture on BCRL and compare the effectiveness of a local distribution acupoint combination with a local-distal acupoint combination for BCRL.
This is a study protocol for a multicenter, three-arm parallel, assessor blinded, randomized controlled trial. A total of 108 participants diagnosed with BCRL will be randomly allocated in equal proportions to a local distribution acupoint (LA) group, a local-distal acupoint (LDA) group, or a waiting-list (WL) group. The LA and LDA groups will receive 20 acupuncture treatments over 8 weeks with the local distribution acupoint combination and the local-distal acupoint combination, respectively. The WL group will receive acupuncture treatment after the study is concluded. The primary outcome is the mean change in inter-limb circumference difference from baseline to week 8. The secondary outcomes include volume measurement, skin hardness, the Common Terminology Criteria for Adverse Events 4.03 (edema limb criteria), stages of lymphedema from the International Society of Lymphology, the Disabilities of the Arm, Shoulder and Hand questionnaire, and the Medical Outcome Study 36-item Short-Form Health Survey.
This study aims to provide data on warm acupuncture as an effective treatment for BCRL and at the same time compare the effectiveness of different acupoint combinations.
ClinicalTrials.gov: Identifier NCT03373474. Registered on 14th December 2017.
Lymphedema is the most common complication after breast cancer treatment, with an average incidence rate of 21% [1]. Some of the symptoms associated with breast cancer-related lymphedema (BCRL) are discomfort, pain, heaviness, tightness, stiffness, weakness, and a decreased range of motion in the affected arm, which can cause severe physical morbidity. The swollen appearance of the arm can also cause psychological distress, such as depression and anxiety, since it is a constant reminder of breast cancer [2].
Although several treatment options are available, including manual lymphatic drainage, compression bandaging, and complete decongestive therapy, a clinical guideline on the integrative therapies used after breast cancer treatment reported that there were no A-graded or B-graded therapies to report for lymphedema [3]. Therefore, the treatment of lymphedema is difficult and probably requires multi-disciplinary attention. In several National Comprehensive Cancer Network guidelines, acupuncture is recommended for the supportive care of cancer to reduce symptoms and side effects of conventional cancer care [4]. Recent studies have begun to investigate the therapeutic effect of acupuncture on BCRL and results showed the potential of acupuncture treatment for BCRL [5,6,7,8,9,10]. However, previous studies were mostly pilot and observational studies with small sample sizes and, thus, the effectiveness of acupuncture needs to be further confirmed with a larger trial. In addition, no consensus has been reached on the choice of acupoints to achieve optimal results. For example, Cassileth et al. [8] performed a whole-body treatment that included acupoints on the abdomen, legs, and affected and unaffected arms, while Yao et al. [5] performed acupuncture treatment on the affected arm only. Although both acupuncture prescriptions were able to reduce arm circumference, which of the two provides better results remains unclear. Therefore, we propose a multi-center, randomized controlled trial to determine the optimal acupoint combination for the treatment of BCRL.
In this study, we aim to determine the effectiveness of warm acupuncture in the treatment of BCRL with a rigorous, larger, multicenter, randomized controlled trial. In addition, we will compare the effectiveness between different acupoint combinations. Specifically, we aim to determine whether a local-distal acupoint combination or a local distribution acupoint combination is more effective in the treatment of BCRL.
This study is a multicenter, three-arm parallel, assessor blinded, randomized controlled trial conducted in China. A total of 108 patients will be randomly assigned to a local distribution acupoint (LA) group, a local-distal acupoint (LDA) group, or a waiting-list (WL) group in a 1:1:1 ratio. The schedule of enrolment, interventions, and assessments is summarized in Table 1, and the study flow chart is shown in Fig. 1. The Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) 2013 Checklist is attached as Additional file 1.
Table 1 Schedule of enrolment, interventions and assessments
Study flow chart
Participants will be recruited from the Tianjin Medical University Cancer Institute and Hospital, Baokang Hospital Affiliated to Tianjin University of Traditional Chinese Medicine, The Second Affiliated Hospital of Baotou Medical College, Henan Cancer Hospital, Gansu Provincial Cancer Hospital, and Sichuan Cancer Hospital by billboard advertisement and practitioner referrals.
Inclusion criteria
At least 6 months after breast cancer surgery and with persistent breast cancer-related upper extremity lymphedema for at least 3 months. Upper extremity lymphedema is defined as more than 2 cm circumference difference or 5% volume difference between the affected and unaffected arms.
Presence of stage II or III lymphedema according to the 2016 consensus by the International Society of Lymphology [11].
A Karnofsky Performance Score ≥70
Men or women aged 18–80 years
Out-patients
Estimated life expectancy >6 months
Exclusion criteria
Bilateral BCRL
Taking diuretics
History of primary lymphedema
A diagnosis of severe heart, liver, kidney, or hematologic disease
Edema caused by upper extremity disability or other conditions such as heart failure, kidney disease or malnutrition
Edema caused by recurrent or metastatic breast cancer
Hypoproteinemia
Inflammation, scar, or trauma at the site of operation, or other active skin infections
Unable to self-care, history of psychological disorders, or unable to communicate
Received lymphedema treatment within the past 1 month
Pregnancy or breastfeeding
The presence of electronic medical device implants
Denial to sign the informed written consent or unwillingness to conform to randomization
Participation in other clinical trials during the study period
Randomization and allocation concealment
After signing the informed consent, eligible participants will be randomly assigned to one of the three groups by center randomization. The Clinical Evaluation Center at the China Academy of Chinese Medical Science in Beijing will be responsible for the generation of a random number and group assignment, which will be provided through the website at http://118.144.35.11/crivrs/index.htm. The practitioner who will perform the acupuncture treatment will then assign the participant to that intervention.
The assessor who will collect the data and the statistician who will perform the statistical analysis will be blinded to group assignment. Acupuncturist blinding cannot be achieved due to the nature of the intervention. Participant blinding is limited to the LA group and LDA group, since the WL group cannot help but notice their allocation.
The acupuncture groups (LA and LDA groups) will receive acupuncture treatments three times per week for a total of 20 treatments. The treatments will be performed by practitioners who hold a Chinese medicine practitioner license from the Ministry of Health of the People's Republic of China. Practitioners will be instructed to achieve the de qi sensation and then the needles will be retained for 30 min. Disposable, sterilized stainless-steel acupuncture needles will be used in the acupuncture groups (Huatuo disposable acupuncture needle, Suzhou Medical Co., Jiangsu, China, 0.25 × 40 mm). During the needle retention period, moxa cones (Mac mini needle moxa, Tianjin HaingLimSouWon Medical Co., Ltd., Tianjin, China, 2 cm) will be placed on the handle of the needles at specific acupoints. A piece of hard paper will be placed on the acupoint to prevent falling ashes from burning the participants. The moxa cones will then be burned to deliver warm acupuncture. Additional hard paper will be added if the patient feels uncomfortable with the increasing temperature. All treatments that may affect the results of the study will be restricted, including surgical interventions, acupuncture, blood-letting, diuretics, exercise, complete decongestive therapy, compression therapy, and manual lymphatic drainage. Treatments that the participants had been using prior to the trial may be allowed at the discretion of the investigator after evaluation of the patient's condition.
Acupoint prescription set
Local points set: Waiguan (TE5), Quchi (LI11), Sidu (TE9), Shaohai (HT3), Naohui (TE13), and Xiajiquan on the affected arm.
Additional local points set: Chize (LU5), Quze (PC3), Zhizheng (SI7), Yangchi (TE4), Zhongzhu (TE3), Qingling (HT2), Tianjing (TE10), Jianyu (LI15), and two other points according to symptoms on the affected arm.
Distal points set: Waiguan (TE5), Quchi (LI11), Shaohai (HT3), Xiajiquan on the unaffected arm; Guanyuan (CV4), Qihai (CV6), Shuifen (CV9), Zhongwan (CV12), bilateral Sanyinjiao (SP6), and Yinlingquan (SP9).
Detailed location of the acupoints are shown in Fig. 2.
Acupuncture points used in the study
LA group
Participants will receive acupuncture treatment using the local points set plus the additional local points set (local distribution points combination). Warm acupuncture will be applied at Naohui (TE13), Quchi (LI11), and Sidu (TE9) if permitted, and one other acupoint according to the symptom on the affected arm.
LDA group
Participants will receive acupuncture treatment using the local points set plus the distal points set (local-distal combination). Warm acupuncture will be applied similarly to the LA group, with the addition of Qihai (CV6), Shuifen (CV9), bilateral Yinlingquan (SP9), and Quchi (LI11) on the unaffected arm.
WL group
Patients in the WL group will not receive any acupuncture treatment during the study. However, for ethical consideration, 20 free acupuncture treatments will be offered after the study is completed.
Primary outcome measures
Various assessment methods are available, but circumference measurement is simple, convenient, low cost, and reliable [12]. Therefore, the primary outcome measure will be the mean change in inter-limb circumference difference from baseline to the end of the 8-week intervention. The circumference will be measured using a measurement tape (Hoechstmass Balzer Gmbh, Sulzbach, Germany) at the wrist crease, 10 cm above the wrist crease, the elbow crease, 10 cm above the elbow crease, where the lymphedema is most severe, and at its corresponding location on the unaffected limb. The circumference difference will be assessed at baseline and before intervention at weeks 1–8.
Secondary outcome measures
Volume measurement is also commonly used for the evaluation of lymphedema, and the mean change in inter-limb volume difference from baseline to the end of the 8-week intervention will be included as a secondary outcome measure. The volume of the affected and unaffected limbs will be measured by a volumetric measuring device (Baseline, USA) using the water displacement method, which is considered the most reliable method for volume measurements [13]. The volume difference will be assessed at baseline and before intervention at weeks 1–8.
Skin hardness will be measured at places where the skin feels most tense to the touch by the NSCING SHORE LX-A durometer (Nanjing SuCe Measuring Instrument Co., Ltd., Nanjing, China). Skin changes such as increased tissue resistance and skin elasticity are often found in lymphedematous skin [14]. Our preliminary study found that patients felt more comfortable and less tense in the affected limb after acupuncture treatments. Therefore, we will use the muscle hardness tester to evaluate the effect of acupuncture on soft tissue tension. The change in skin hardness will be assessed at baseline and before intervention at weeks 1–8.
The Common Terminology Criteria for Adverse Events (CTCAE 4.03) [15] will be used to grade the severity of swelling using the edema limbs criteria. A grading of mild, moderate or severe swelling will be assessed based on the inter-limb circumference or volume discrepancy, anatomic architecture, appearance, or activities of daily living. The CTCAE 4.03 will allow us to evaluate the clinical significance of circumference change. The CTCAE 4.03 edema limb grading will be assessed at baseline and before intervention at weeks 1–8.
Stages of lymphedema from the International Society of Lymphology will be used to grade the severity of lymphedema [11]. Staging of 0, I, II, or III will be assessed based on severity of swelling, ability to reduce swelling by elevation, and skin changes. Staging of lymphedema will be assessed at baseline and before intervention at weeks 1–8.
The Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire is a scale that consists of two concepts – functional status (part A) and symptoms (part B). The functional status part is further divided into three dimensions, namely physical, social, and psychological. The total score of the DASH ranges from 0 to 100, with higher scores representing worse symptoms and function. The DASH has good validity and responsiveness and it is recommended to assess upper extremity function in breast cancer survivors [16]. The validated Chinese version of the DASH will be used in this study [17]. The DASH will be assessed at baseline and before intervention at weeks 1–8.
The Medical Outcome Study 36-item Short-Form Health Survey (SF-36) is a commonly used instrument to assess quality of life and has good validity [18]. The SF-36 includes the following eight concepts: physical functioning, role limitations due to physical problems, social functioning, bodily pain, general mental health, role limitations due to emotional problems, vitality, and general health perception [19]. The validated Chinese version of the SF-36 will be used in this study [20]. The SF-36 will be assessed at baseline and before intervention at weeks 4 and 8.
All adverse events will be reported immediately to the clinical research coordinator and the principal investigator. Together with the acupuncturist in charge, they will evaluate, consult on the case, and take proper action. All expected (feeling faint after acupuncture treatment, stuck needles, broken needles, minor burning and vesicles during or after warm acupuncture, hematoma and bruising after needle removal) and unexpected (exacerbation of lymphedema, inflammation, infection, local or systematic reaction) adverse events will be reported immediately and recorded in the case report form (CRF). Any significant change of health state after baseline will also be recorded as an adverse event. All adverse events will be closely monitored and followed up until stabilization or resolution. Complete blood count, liver function, kidney function, and electrocardiography will be assessed at baseline and at week 8 to detect adverse events.
According to the results of our preliminary trial (9 participants in the LDA group, 7 participants in the LA group, and 10 participants in the WL group), the biggest circumference difference after 20 treatments was 2.34 ± 1.6 cm, 3.32 ± 1.53 cm, and 3.74 ± 1.1 cm in the LDA, LA, and WL groups, respectively. According to the formula:
$$ n = \Psi^{2}\left(\sum S_i^{2}/K\right)\Big/\left[\sum \left(\overline{X}_i-\overline{X}\right)^{2}/\left(K-1\right)\right] $$
α = 0.05
β = 0.10
K = 3
Ψ: K = 3, degree of freedom V1 = K–1 = 2; degree of freedom V2 = N–1, N is unknown; assuming N as ∞, according to the T distribution critical values table when α = 0.05 and β = 0.10: \(\Psi_{\alpha,\beta,K-1,\infty} = 2.52\)
\( \overline{X}_i \) and \( S_i \) represent the mean (X1 = 2.34, X2 = 3.32, X3 = 3.74) and standard deviation (S1 = 1.6, S2 = 1.53, S3 = 1.1) of group i according to the preliminary trial.
\( \overline{\mathrm{X}} \)= (X1 + X2 + X3)/K = (2.34 + 3.32 + 3.74)/3 = 3.13
The result of calculation was 30 participants in each group. Assuming a 20% dropout rate, a total of 108 participants are required with 36 participants in each group.
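For transparency, the arithmetic behind this calculation can be reproduced directly. The sketch below is a minimal transcription of the formula above using the preliminary-trial values quoted in the text; the adopted figure of 30 per group and the 20% dropout inflation are taken from the protocol, and any small difference between the raw formula output and 30 presumably reflects rounding conventions in the source.

```python
import math

# Sketch of the protocol's sample-size formula for K groups, using the
# preliminary-trial values quoted above. psi is the tabulated value the
# protocol cites for alpha = 0.05, beta = 0.10, K = 3.
means = [2.34, 3.32, 3.74]   # biggest circumference difference per group (cm)
sds   = [1.60, 1.53, 1.10]   # corresponding standard deviations
K     = len(means)
psi   = 2.52

grand_mean = sum(means) / K
n = psi**2 * (sum(s**2 for s in sds) / K) / (
    sum((m - grand_mean)**2 for m in means) / (K - 1))
print(f"n per group from the formula: {n:.1f}")  # the protocol adopts 30 per group

# Inflating 30 per group by the 20% dropout allowance gives the planned total:
per_group = math.ceil(30 * 1.2)                  # -> 36
print(f"planned total: {K * per_group}")         # -> 108
```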
A full analysis set will include all randomized participants who received at least one treatment and one follow-up. The principle of the last observation carried forward will be used in the case of missing data. The number of end point evaluations in each group will be kept the same as the number of participants in each group at the beginning of the trial.
A per-protocol set will include the following: (1) participants who meet the criteria of the protocol, (2) participants with measurable primary outcomes, and (3) participants without major violation of the protocol.
The statistical analysis of the primary outcome will be analyzed with a full analysis set and per-protocol set separately. The safety set will include all randomized participants who received at least one treatment.
Data will be coded and entered into SPSS (v.22) for statistical analysis. The Cochran–Mantel–Haenszel test will be used for analysis of center effect. Descriptive statistics of all sociodemographic and clinical data will be included. Continuous variables will be reported using mean and SD for normally distributed data or median and range for skewed data. Categorical variables will be expressed as number and percentage. For outcome measures, the mean differences from baseline values to the end of treatment will be compared using ANCOVA. Repeated measures analysis of variance (R-ANOVA) will be used to assess the inter-limb circumference difference, inter-limb volume difference and skin hardness between the three study groups. Inter-group differences in categorical data (CTCAE 4.03, stages of lymphedema) will be assessed using the χ2 test or Fisher's exact tests (two-tailed), as appropriate. Linear mixed models (for continuous outcome variables) and generalized estimating equations (for categorical outcome variables) will be used to examine the change of intensity in inter-limb circumference difference and SF-36. P values of <0.05 will be considered statistically significant.
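As an illustration only, a linear mixed model of the kind described could be fitted along the following lines; the data file, column names, and model formula are hypothetical placeholders, not part of the protocol, and the protocol itself specifies SPSS rather than Python.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format dataset: one row per participant per visit, with
# columns 'pid' (participant id), 'group' (LA/LDA/WL), 'week' (0-8), and
# 'circ_diff' (inter-limb circumference difference in cm).
df = pd.read_csv("bcrl_outcomes.csv")

# Linear mixed model for a continuous outcome with a random intercept per
# participant, in the spirit of the planned analysis of change over time.
model = smf.mixedlm("circ_diff ~ C(group) * week", df, groups=df["pid"])
result = model.fit()
print(result.summary())
```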
Patient and public involvement
Patients and public were not involved in the development of this protocol.
To ensure the quality of the data, a well-trained assessor will be responsible for data collection and recording on the CRF. Double entry of the data into the online trial database will be implemented by clinical research coordinators. Regular monitoring of the recruitment, intervention and assessment processes will be performed to ensure the predetermined protocol and standard operating procedures are followed. All CRFs will be stored in a locked cabinet in an area with limited access. All participant information will be stored in a separate locked cabinet.
Ethics and dissemination
This protocol has been approved by the Medical Ethics Committees at Tianjin University of Traditional Chinese Medicine (TJUTCM-EC20170004). Written and informed consent will be fully explained by the acupuncturist and signed by the participants before entering the trial. The protocol has been registered at ClinicalTrials.gov. Any modification of the protocol will be documented at ClinicalTrials.gov. The results of this study will be published in a peer-reviewed journal.
BCRL is a common complication after breast cancer treatment despite the application of less invasive surgical techniques and BCRL patients often suffer from severe physical and psychological morbidity. Patients are in constant need of an effective treatment to manage lymphedema [21]. Conservative intervention such as complete decongestive therapy (CDT) is often considered as the first-line intervention and different studies have shown that CDT may be able to reduce lymphedema volume of the affected limb [22]. However, CDT is an intensive treatment program that requires daily one-on-one treatment for 4–6 weeks with a specialized therapist [23]. Therefore, it is often considered costly and time-consuming [24, 25]. In addition, long-term use of compression garments is required to maintain the initial limb-volume reduction [26], which results in significant inconvenience and discomfort for the patients.
According to several pilot studies, acupuncture emerges as a potential treatment option given its convenient treatment modality with very few side effects. Our preliminary study also found satisfactory results with a sustained effect up to 1-month follow-up. Acupuncture is also more convenient and less expensive when compared to CDT. However, since lymphedema is a rather modern disease, the most effective acupuncture prescription has not been established. Indeed, a thorough search of the PubMed database showed a variety of acupoint prescriptions for the treatment of BCRL – one study [9] with flexible acupuncture points, one study [5] with acupuncture points on the affected limb only (local distribution acupoint combination), two studies [6, 7] with distal acupuncture points only (distal distribution acupoint combination), and two studies [8, 10] with whole body acupuncture points (local-distal acupoint combination). Therefore, additional research is required to compare the effectiveness of different acupuncture point combinations.
In this study, we will compare the two classic methods for combining acupuncture points – local distribution combination and local-distal combination. A local distribution combination is defined as local acupoints on the affected arm where the symptoms manifest and a local-distal combination is defined as acupoints on the affected arm combined with acupoints distant from the affected arm such as on the abdomen, unaffected arm, or legs. In clinical practice, a local-distal combination is the most commonly used treatment prescription, but a local distribution combination can be very effective when targeting a local symptom. By comparing the two methods, we will be able to gain knowledge of general principles in treating BCRL, which will then serve as a fundamental basis for future refinement of points prescription. In both acupuncture groups, we will integrate needle-top moxibustion to the treatment plan since moxibustion can resolve fluids according to traditional Chinese medicine theories. An observational study by Li et al. [27] also showed that thermotherapy was able to reduce swelling and improve quality of life. Therefore, acupuncture combined with needle-top moxibustion (warm acupuncture) may be more effective in managing BCRL.
There are some limitations in this study. First, the follow-up period is only 1 month. A longer follow-up period would allow us to assess the long-term effect of acupuncture treatments. Second, the amount of moxibustion applied is different in the LA and LDA groups. The probable confounding effect of greater moxibustion application and the local-distal acupoint combination in the LDA group may result in an increased therapeutic dosage in that group and thus interfere with the comparison between the local distribution acupoint combination and the local-distal acupoint combination.
In conclusion, this is a rigorous, larger, multi-center, randomized controlled trial, which will enable us to further assess the effectiveness of acupuncture in the treatment of BCRL and also compare the effectiveness between local distribution and local-distal acupoint combinations.
Protocol version number: 1.2 (2017/12/5)
Date recruitment began: 2018/1/19
Date when recruitment will be completed: 2019/5/1
Data supporting the findings of this study are available from the corresponding author.
BCRL:
Breast cancer-related lymphedema
CDT:
Complete decongestive therapy
CRF:
Case report form
CTCAE:
Common Terminology Criteria for Adverse Events
DASH:
Disabilities of the Arm, Shoulder, and Hand
SF-36:
Medical Outcome Study 36-item Short-Form Health Survey
DiSipio T, Rye S, Newman B, Hayes S. Incidence of unilateral arm lymphoedema after breast cancer: a systematic review and meta-analysis. Lancet Oncol. 2013;14(6):500–15.
Maunsell E, Brisson J, Deschenes L. Arm problems and psychological distress after surgery for breast cancer. Can J Surg. 1993;36(4):315–20.
Greenlee H, DuPont-Reyes MJ, Balneaves LG, Carlson LE, Cohen MR, Deng G, et al. Clinical practice guidelines on the evidence-based use of integrative therapies during and after breast cancer treatment. CA Cancer J Clin. 2017;67(3):194–232.
Zia FZ, Olaku O, Bao T, Berger A, Deng G, Fan AY, et al. The National Cancer Institute's Conference on Acupuncture for Symptom Management in Oncology: state of the science, evidence, and research gaps. J Natl Cancer Inst Monogr. 2017;2017(52). https://doi.org/10.1093/jncimonographs/lgx005.
Yao C, Xu Y, Chen L, Jiang H, Ki CS, Byun JS, et al. Effects of warm acupuncture on breast cancer-related chronic lymphedema: a randomized controlled trial. Curr Oncol. 2016;23(1):E27–34.
Jeong YJ, Kwon HJ, Park YS, Kwon OC, Shin IH, Park SH. Treatment of lymphedema with saam acupuncture in patients with breast cancer: A pilot study. Med Acupunct. 2015;27(3):206–15.
Smith CA, Pirotta M, Kilbreath S. A feasibility study to examine the role of acupuncture to reduce symptoms of lymphoedema after breast cancer: a randomised controlled trial. Acupunct Med. 2014;32(5):387–93.
Cassileth BR, Van Zee KJ, Yeung KS, Coleton MI, Cohen S, Chan YH, et al. Acupuncture in the treatment of upper-limb lymphedema: results of a pilot study. Cancer. 2013;119(13):2455–61.
de Valois BA, Young TE, Melsome E. Assessing the feasibility of using acupuncture and moxibustion to improve quality of life for cancer survivors with upper body lymphoedema. Eur J Oncol Nurs. 2012;16(3):301–9.
Cassileth BR, Van Zee KJ, Chan Y, Coleton MI, Hudis CA, Cohen S, et al. A safety and efficacy pilot study of acupuncture for the treatment of chronic lymphoedema. Acupunct Med. 2011;29(3):170–2.
Benda K. The diagnosis and treatment of peripheral lymphedema: 2016 consensus document of the International Society of Lymphology. Lymphology. 2016;49:170–84.
Chen YW, Tsai HJ, Hung HC, Tsauo JY, et al. Reliability study of measurements for lymphedema in breast cancer patients. Am J Phys Med Rehabil. 2008;87(1):33–8.
Megens AM, Harris SR, Kim-Sing C, McKenzie DC. Measurement of upper extremity volume in women after axillary dissection for breast cancer. Arch Phys Med Rehabil. 2001;82(12):1639–44.
Killaars RC, Penha TR, Heuts EM, van der Hulst RR, Piatkowski AA. Biomechanical properties of the skin in patients with breast cancer-related lymphedema compared to healthy individuals. Lymphat Res Biol. 2015;13(3):215–21.
US Department of Health and Human Services, National Cancer Institute, National Institutes of Health. Common Terminology Criteria for Adverse Events (CTCAE) Version 4.0. 2010. https://evs.nci.nih.gov/ftp1/CTCAE/CTCAE_4.03/Archive/CTCAE_4.0_2009-05-29_QuickReference_8.5x11.pdf. Accessed 16 Jul 2018.
Harrington S, Michener LA, Kendig T, Miale S, George SZ. Patient-reported upper extremity outcome measures used in breast cancer survivors: a systematic review. Arch Phys Med Rehabil. 2014;95(1):153–62.
Liao CL, Wang C, Zhou X, Wang XJ. Checkout reliability and validity of Chinese version of DASH short form scale applied in upper limb dysfunction evaluation research of breast cancer patients [in Chinese]. Chin Nurs Res. 2014;28(10):3581–3.
Pusic AL, Cemal Y, Albornoz C, Klassen A, Cano S, Sulimanoff I. Quality of life among breast cancer patients with lymphedema: a systematic review of patient-reported outcome instruments and outcomes. J Cancer Surviv. 2013;7(1):83–92.
Ware JE Jr, Sherbourne CD. The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection. Med Care. 1992;30(6):473–83.
Li L, Wang HM, Shen Y. Development and psychometric tests of a Chinese version of the SF-36 Health Survey Scales [in Chinese]. Chin J Prev Med. 2002;36(2):109–13.
Greenslade MV, House CJ. Living with lymphedema: a qualitative study of women's perspectives on prevention and management following breast cancer-related treatment. Can Oncol Nurs J. 2006;16(3):165–79.
Ezzo J, Manheimer E, McNeely ML, Howell DM, Weiss R, Johansson KI, et al. Manual lymphatic drainage for lymphedema following breast cancer treatment. Cochrane Database Syst Rev. 2015;5:CD003475.
Boris M, Weindorf S, Lasinski B, Boris G. Lymphedema reduction by noninvasive complex lymphedema therapy. Oncology (Williston Park). 1994;8(9):95–106 discussion 109-10.
Kärki A, Anttila H, Tasmuth T, Rautakorpi UM. Lymphoedema therapy in breast cancer patients: a systematic review on effectiveness and a survey of current practices and costs in Finland. Acta Oncol. 2009;48(6):850–9.
Basta MN, Fox JP, Kanchwala SK, et al. Complicated breast cancer-related lymphedema: evaluating health care resource utilization and associated costs of management. Am J Surg. 2016;211(1):133–41.
Rockson S. Current concepts and future directions in the diagnosis and management of lymphatic vascular disease. Vasc Med. 2010;15(3):223–31.
Li K, Zhang Z, Liu NF, Feng SQ, Tong Y, Zhang JF, et al. Efficacy and safety of far infrared radiation in lymphedema treatment: clinical evaluation and laboratory analysis. Lasers Med Sci. 2017;32(3):485–94.
This work was supported by the National Basic Research Program of China under Grant No. 2014CB543201. The funding source had no role in the design of this study and will not have any role during its execution or analyses, interpretation of the data, or decision to submit results.
Chien-Hung Yeh and Tian Yi Zhao contributed equally to this work.
College of Acupuncture and Massage, Tianjin University of Traditional Chinese Medicine, No. 312, Anshan West Road, Nankai District, Tianjin, 300193, China
Chien-Hung Yeh, Tian Yi Zhao, Mei Dan Zhao, Yue Wu, Yong Ming Guo, Ren Wei Dong, Bo Chen, Jing Rong Wen, Dan Li & Xing Fang Pan
Department of Combined Chinese & Western Medicine, Tianjin Medical University Cancer Institute & Hospital, National Clinical Research Center of Cancer, Tianjin, China
Zhan Yu Pan & Bin Wang
Department of Traditional Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin, China
Yi Guo
Acupuncture Research Center of Tianjin University of Traditional Chinese Medicine, Tianjin, China
Chien-Hung Yeh
Tian Yi Zhao
Mei Dan Zhao
Yong Ming Guo
Zhan Yu Pan
Ren Wei Dong
Bo Chen
Bin Wang
Jing Rong Wen
Xing Fang Pan
CHY, TYZ, and XFP planned the overall study protocol. CHY and TYZ drafted the manuscript. MDZ and YW participated in the design of the study and made critical revisions to the manuscript. YMG, ZYP, RWD, BC, BW, JRW, and DL provided clinical advice and contributed to the refinement of the protocol. XFP and YG have the final responsibility for the decision to submit for publication. All authors have read and approved the final manuscript.
Correspondence to Xing Fang Pan.
The Medical Ethics Committee at Tianjin University of Traditional Chinese Medicine has approved the study protocol (TJUTCM-EC20170004). The study is registered at ClinicalTrials.gov (NCT03373474), date of registration 14th December 2017. Written informed consent will be obtained from all patients prior to enrolment. We are currently recruiting patients in the Tianjin Medical University Cancer Institute and Hospital, Baokang Hospital Affiliated to Tianjin University of Traditional Chinese Medicine, The Second Affiliated Hospital of Baotou Medical College, Henan Cancer Hospital, Gansu Provincial Cancer Hospital, and Sichuan Cancer Hospital. Ethical approval was obtained from all recruiting hospitals.
Additional file
Additional file 1:
SPIRIT 2013 Checklist: recommended items to address in a clinical trial protocol and related documents*. (DOC 124 kb)
Yeh, CH., Zhao, T.Y., Zhao, M.D. et al. Comparison of effectiveness between warm acupuncture with local-distal points combination and local distribution points combination in breast cancer-related lymphedema patients: a study protocol for a multicenter, randomized, controlled clinical trial. Trials 20, 403 (2019). https://doi.org/10.1186/s13063-019-3491-4
Published: 29 September 2016
Numerical solution of DGLAP equations using Laguerre polynomials expansion and Monte Carlo method
A. Ghasempour Nesheli ORCID: orcid.org/0000-0001-7237-87481,
A. Mirjalili2 &
M. M. Yazdanpanah3
SpringerPlus volume 5, Article number: 1672 (2016)
We investigate the numerical solutions of the DGLAP evolution equations at the LO and NLO approximations, using the Laguerre polynomials expansion. The theoretical framework is based on the articles of Furmanski and Petronzio. What distinguishes this paper from similar works is that all calculations, at every stage of extracting the evolved parton distributions, are done numerically. The employed numerical techniques, based on the Monte Carlo method, are designed so that all results are obtained within a reasonable wall-clock time. The algorithms are implemented in FORTRAN, and the coding ideas employed here can be used in other numerical computations as well. Our results for the evolved parton densities are in good agreement with some phenomenological models. They also show better behavior with respect to the results of similar numerical calculations.
In the theory of strong interactions, lepton–nucleon deep-inelastic scattering (DIS) provides the information required to determine the nucleon structure function. The DIS processes form the backbone of our knowledge of the parton densities, which are indispensable for analyses of hard scattering processes at proton–(anti-)proton colliders. Moreover, many experimental groups (Bloom et al. 1969; Breidenbach et al. 1969; Abbott et al. 1979) have observed the scaling behavior of the proton structure function in DIS (Bjorken 1969). This observation established the quark-parton model as a valid framework for interpreting DIS data; the DIS processes can be expressed in terms of universal parton densities. In Quantum Chromodynamics (QCD), structure functions are defined as convolutions of the universal parton momentum distributions inside the proton with the coefficient functions, which contain information about the boson–parton interaction. At large momentum transfers, \(Q^{2} \gg 0\), the perturbative calculations of the coefficient functions predict a logarithmic dependence of the proton structure functions on \(Q^2\) to higher orders in \(\alpha_s\). Thus, measurements of the structure functions allow precision tests of perturbative QCD. The standard and basic tools for the theoretical investigation of DIS structure functions are the DGLAP evolution equations (Gribov and Lipatov 1972; Dokshitzer 1977; Altarelli and Parisi 1977).
There exist several analytical and numerical methods to solve the DGLAP evolution equations. What we present in this article is a solution based entirely on numerical analyses of these equations, forming a series of Laguerre polynomials in the variable \(y = \ln (1/x)\), where x is the fraction of the proton momentum carried by the parton. The Laguerre series converges very quickly and can easily be truncated with reasonable precision.
In this article, we assume that the reader is familiar with the relations and theoretical framework on which the DGLAP evolution equations are based. So, in the different sections of this article, we mostly focus on presenting the numerical investigations that finally yield the evolved parton densities at the energy scale \(Q^2\). In each section of the paper, we not only introduce the required theoretical expressions but also explain how to use them in practice in our numerical calculations. The Monte Carlo algorithms which we construct are designed so that the computation finishes within a reasonable wall-clock time. The numerical patterns which we develop in this paper can also be used for other numerical investigations.
The organization of this paper is as follows. In section "A short overview of the theoretical framework" we give a short overview of the evolution of parton densities using the DGLAP equations. The theoretical framework is based on Laguerre polynomial expansions. In section "Basic tools, Monte Carlo solutions" the general structure of the Monte Carlo algorithm is introduced, together with the required functions and subroutines. They can be requested via E-mail, [email protected], from the authors. We then use them to build the Monte Carlo algorithm for the numerical solution of the DGLAP evolution equations. The results for the evolved parton densities, obtained with this Monte Carlo algorithm, are presented at the end of section "Basic tools, Monte Carlo solutions" and also in sections "The programs and the results" and "Conclusion". They are in good agreement with the results of the CTEQ and GRV parameterization groups. Finally, we give our conclusion in the last section.
A short overview of the theoretical framework
In high-energy physics, the parton densities at the \(Q^2\) scale can be obtained by employing the DGLAP evolution equations. These equations can be used to describe the violation of Bjorken scaling in deep inelastic scattering (DIS). There are many different ways to solve the DGLAP equations numerically. One of them is to use the Laguerre polynomial expansion, which we employ to obtain the solutions of the non-singlet and singlet sectors of the parton densities. To extract the evolved parton densities numerically, we need to reach high levels of precision. The Laguerre polynomial expansions converge rapidly for medium values of x at all energy scales. At very low x, say x < 0.001, these polynomials are numerically unstable due to the rapid rise of the splitting-function moments.
In the following, we provide the required definitions and conventions for the numerical calculations mentioned above:
Running coupling constant
The \(Q^2\) dependence of the strong coupling constant \(\alpha(Q^2)\), considering the renormalization group equation, is given by:
$$Q^{2} \frac{d}{{dQ^{2} }}\alpha (Q^{2} ) = - \alpha (Q^{2} )\bar{\beta }\left[ {\alpha (Q^{2} )} \right]$$
$$\bar{\beta }(\alpha ) = \beta_{0} \frac{\alpha }{4\pi } + \beta_{1} \left( {\frac{\alpha }{4\pi }} \right)^{2} + \cdots ,$$
in which β 0 and β 1 are universal scheme independent coefficients and are given by:
$$\begin{aligned} \beta_{0} & = \frac{11}{3}C_{G} - \frac{4}{3}T_{R} n_{f} , \\ \beta_{1} & = \frac{34}{3}C_{G}^{2} - \frac{10}{3}C_{G} n_{f} - 2C_{F} n_{f} , \\ \end{aligned}$$
and \(C_{G} = N,\;C_{F} = \frac{N^{2} - 1}{2N},\;T_{R} = \frac{1}{2}\), where \(N\) and \(n_{f}\) refer, respectively, to the number of quark colors and flavors. The solution of the QCD β-function evolution equation, Eq. (1), at the next-to-leading order (NLO) approximation is given by:
$$\frac{{\alpha (Q^{2} )}}{2\pi } = \frac{2}{{\beta_{0} }}{\mkern 1mu} \frac{1}{{\ln {{Q^{2} } \mathord{\left/ {\vphantom {{Q^{2} } {\Lambda^{2} }}} \right. \kern-0pt} {\Lambda^{2} }}}}\left[ {1 - \frac{{\beta_{1} }}{{\beta_{0}^{2} }}\frac{{\ln \ln {{Q^{2} } \mathord{\left/ {\vphantom {{Q^{2} } {\Lambda^{2} }}} \right. \kern-0pt} {\Lambda^{2} }}}}{{\ln {{Q^{2} } \mathord{\left/ {\vphantom {{Q^{2} } {\Lambda^{2} }}} \right. \kern-0pt} {\Lambda^{2} }}}} + O\left( {\frac{1}{{\ln^{2} {{Q^{2} } \mathord{\left/ {\vphantom {{Q^{2} } {\Lambda^{2} }}} \right. \kern-0pt} {\Lambda^{2} }}}}} \right)} \right].$$
The cutoff parameter Λ is determined by fitting the experimental data; at the NLO approximation it is lower than 250 MeV.
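A minimal numerical transcription of Eq. (4) may be useful, since the running coupling enters all later evolution steps. In the sketch below, the values of Λ and \(n_f\) are illustrative assumptions (the text only states that Λ at NLO is below 250 MeV).

```python
import math

def alpha_s(Q2, Lam=0.2, n_f=4):
    """NLO running coupling of Eq. (4); Q2 in GeV^2, Lam (Lambda) in GeV."""
    b0 = 11.0 - 2.0 * n_f / 3.0        # beta_0 for C_G = 3, T_R = 1/2
    b1 = 102.0 - 38.0 * n_f / 3.0      # beta_1 for C_G = 3, C_F = 4/3
    L = math.log(Q2 / Lam**2)
    return (4.0 * math.pi / (b0 * L)) * (1.0 - (b1 / b0**2) * math.log(L) / L)

for Q2 in [4.0, 10.0, 100.0, 1.0e4]:
    print(f"Q^2 = {Q2:8.1f} GeV^2 : alpha_s = {alpha_s(Q2):.4f}")
```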
DGLAP evolution equations
Considering the contribution of quark–antiquark pairs to the evolution of the quark densities, for each quark flavor i with i = 1…2n_f, summing over all quark and antiquark flavors and using the notation of Furmanski and Petronzio (1982a, b), we have
$$q_{i}^{( + )} \equiv q_{i} + \bar{q}_{i} ,\quad q_{i}^{( - )} \equiv q_{i}^{V} \equiv q_{i} - \bar{q}_{i} ,\quad q^{( + )} \equiv \Sigma {\mkern 1mu} \equiv \sum\limits_{i = 1}^{{n_{f} }} {q_{i}^{( + )} } ,$$
Then, defining
$$\chi_{i} (x,Q^{2} ) = q_{i}^{( + )} (x,Q^{2} ) - \frac{1}{{n_{f} }}q^{( + )} (x,Q^{2} ),$$
and the new combination of splitting functions by Ellis et al. (1996):
$$P_{ \pm } (x,\alpha ) = P_{V}^{(0)} (x) + \frac{\alpha }{2\pi }P_{ \pm }^{(1)} (x) + \left( {\frac{\alpha }{2\pi }} \right)^{2} P_{ \pm }^{(2)} (x) + \cdots ,$$
the DGLAP evolution equations take the following forms:
$$Q^{2} \frac{d}{{dQ^{2} }}q_{i}^{( - )} (x,Q^{2} ) = \frac{{\alpha (Q^{2} )}}{2\pi }P_{ - } (x,\alpha (Q^{2} )) \otimes q_{i}^{( - )} (x,Q^{2} ),$$
$$Q^{2} \frac{d}{{dQ^{2} }}\chi_{i} (x,Q^{2} ) = \frac{{\alpha (Q^{2} )}}{2\pi }P_{ + } (x,\alpha (Q^{2} )) \otimes \chi_{i} (x,Q^{2} ),$$
$$Q^{2} \frac{d}{{dQ^{2} }}\left( {\begin{array}{*{20}l} {q^{( + )} (x,Q^{2} )} \hfill \\ {G(x,Q^{2} )} \hfill \\ \end{array} } \right) = \frac{{\alpha (Q^{2} )}}{2\pi }\left( {\begin{array}{*{20}c} {P_{qq} (x,Q^{2} )} & \quad {P_{qg} {\mkern 1mu} (x,Q^{2} )} \\ {P_{gq} (x,Q^{2} )} & \quad {P_{gg} (x,Q^{2} )} \\ \end{array} } \right) \otimes \left( {\begin{array}{*{20}l} {q^{( + )} (x,Q^{2} )} \hfill \\ {G(x,Q^{2} )} \hfill \\ \end{array} } \right),$$
where the symbol ⊗ denotes the following convolution integral:
$$p(x) \otimes q(x) \equiv \int_{x}^{1} {\frac{dy}{y}p\left( {\frac{x}{y}} \right)q\left( y \right)} = \int_{x}^{1} {\frac{dy}{y}p(y)q\left( {\frac{x}{y}} \right)} .$$
Expansions similar to Eq. (7) exist for the elements of the splitting-function matrix. The advantage of using Eqs. (8–10) is that we are able to extract the sea quark densities at any energy scale \(Q^2\) separately for each quark flavor, rather than obtaining an average quantity for the sea quark densities.
Equations (8–10) can be written in terms of the new variable \(t = - \frac{2}{{\beta_{0} }}\ln \frac{{\alpha (Q^{2} )}}{{\alpha (Q_{0}^{2} )}}\) as:
$$\frac{d}{dt}q_{i}^{( - )} (x,t) = \left( {P_{V}^{(0)} (x) + \frac{\alpha }{2\pi }R_{ - } (x) + \cdots } \right) \otimes q_{i}^{( - )} (x,t),$$
$$\frac{d}{dt}\chi_{i} (x,t) = \left( {P_{V}^{(0)} (x) + \frac{\alpha }{2\pi }R_{ + } (x) + \cdots } \right) \otimes \chi_{i} (x,t),$$
$$\frac{d}{dt}\left( {\begin{array}{*{20}l} {q^{( + )} (x,t)} \hfill \\ {G(x,t)} \hfill \\ \end{array} } \right) = \left( {P^{(0)} (x) + \frac{\alpha }{2\pi }R(x) + \cdots } \right) \otimes \left( {\begin{array}{*{20}l} {q^{( + )} (x,t)} \hfill \\ {G(x,t)} \hfill \\ \end{array} } \right),$$
$$R_{ \pm } (x) = P_{ \pm }^{(1)} (x) - \frac{{\beta_{1} }}{{2\beta_{0} }}P_{V}^{(0)} (x),$$
$$R(x) = P^{(1)} (x) - \frac{{\beta_{1} }}{{2\beta_{0} }}P^{(0)} (x).$$
Solutions of Eqs. (12–14) will lead us to the evolved valence, sea and gluon densities at different energy scales.
Evolution operators
Defining \(\tilde{q}(x) \equiv q(t = 0,x)\) as the parton density at the initial energy scale \(Q_0\), the evolved valence density is obtained from
$$q_{i}^{( - )} (t,x) = E_{ - } (t,x{\mkern 1mu} ) \otimes {\mkern 1mu} \tilde{q}_{i}^{( - )} (x).$$
For the χ i function we will have
$$\chi_{i} (t,x) = E_{ + } (t,x{\mkern 1mu} ) \otimes {\mkern 1mu} \tilde{\chi }_{i} (x),$$
and for the gluon and singlet distributions, we will have:
$$\left( {\begin{array}{*{20}c} {q^{( + )} (t,x)} \\ {G(t,x)} \\ \end{array} } \right) = E(t,x) \otimes \left( {\begin{array}{*{20}c} {\tilde{q}^{( + )} (x)} \\ {\tilde{G}(x)} \\ \end{array} } \right).$$
In Eq. (19), the first term on the right hand side is the evolution operator which for the singlet and gluon densities has the following matrix form:
$$E(t,x) = \left( {\begin{array}{*{20}c} {E_{qq} (t,x)} &\quad {E_{qg} (t,x)} \\ {E_{gq} (t,x)} &\quad {E_{gg} (t,x)} \\ \end{array} } \right).$$
Substituting Eqs. (17–19) in Eqs. (12–14) will lead us to:
$$\frac{d}{dt}E_{ \pm } (t,x) = \left( {P_{V}^{(0)} (x) + \frac{\alpha }{2\pi }R_{ \pm } (x) + \cdots } \right) \otimes E_{ \pm } (t,x),$$
$$\frac{d}{dt}E(t,x) = \left( {P^{(0)} (x) + \frac{\alpha }{2\pi }R(x) + \cdots } \right) \otimes E(t,x).$$
We should note that the evolution operators satisfy the following initial conditions
$$E_{ \pm } (0,x) = \delta (1 - x),\,\,\,\,\,\,E(0,x) = \delta (1 - x) \cdot I,$$
where I in Eq. (23) denotes the 2 × 2 unit matrix.
Laguerre expansion
The Laguerre polynomials can be represented in the following alternative form (Arfken and Weber 2005):
$$L_{n} (x) = \sum\limits_{k = 0}^{n} {\left( {\begin{array}{*{20}c} n \\ k \\ \end{array} } \right)( - 1)^{k} \frac{{x^{k} }}{k!} = 1 - nx + \frac{n(n - 1)}{2!}\frac{{x^{2} }}{2!}} - \frac{n(n - 1)(n - 2)}{3!}\frac{{x^{3} }}{3!} + \cdots$$
These polynomials have the following properties:
The generating function of these polynomials is given by:
$$g(x,z) = \frac{e^{ - xz/(1 - z)} }{1 - z} = \sum\limits_{n = 0}^{\infty } {L_{n} (x)\,z^{n} } ,\quad \left| z \right| < 1 .$$
They satisfy the following recursive relation:
$$L_{n + 1} (x) = 2L_{n} (x) - L_{n - 1} (x) - \frac{{(1 + x)L_{n} (x) - L_{n - 1} (x)}}{n + 1}.$$
These polynomials possess a closure property under the convolution integral:
$$L_{n} (z) \otimes L_{m} (z) = L_{n + m} (z) - L_{n + m + 1} (z).$$
They also satisfy the following orthonormality condition with respect to the weight function \(e^{-y}\):
$$\int_{0}^{\infty } {dye^{ - y} L_{m} (y)L_{n} (y) = \delta_{m,n} } .$$
Since the polynomials form a complete set, any function can be expanded in terms of them:
$$F(y) = \sum\limits_{m = 0}^{\infty } {F_{m} L_{m} (y)} ,$$
where the expansion coefficients are obtained from:
$$F_{m} = \int_{0}^{\infty } {dy\,e^{ - y} L_{m} (y)F(y)} .$$
Based on the completeness property, any two arbitrary functions A(y) and B(y) can be expanded in terms of Laguerre polynomials:
$$A(y) = \sum\limits_{n = 0}^{\infty } {A_{n} L_{n} (y)} ,\,\,\,\,\,B(y) = \sum\limits_{n = 0}^{\infty } {B_{n} L_{n} (y)} .$$
Assuming \(C(y) = A(y) \otimes B(y) = \sum\nolimits_{n = 0}^{\infty } {C_{n} L_{n} (y)}\) and using the closure property, given by Eq. (27), the following relations can be obtained between the expansion coefficients
$$C_{n} = \sum\limits_{i = 0}^{n} {A_{i} b_{n - i} } = \sum\limits_{i = 0}^{n} {B_{i} a_{n - i} } ,$$
$$\begin{aligned} a_{i} = A_{i} - A_{i - 1} ,\,\,\,\,\,\,A_{ - 1} \equiv 0, \hfill \\ b_{i} = B_{i} - B_{i - 1} ,\,\,\,\,\,\,\,B_{ - 1} \equiv 0. \hfill \\ \end{aligned}$$
If we wish to use the Laguerre polynomials to get the evolved parton densities, we should map the variable y from the interval (0, ∞) onto a new variable x in (0, 1). Therefore, we change variables as follows:
$$x = e^{ - y} , \qquad dx = - x\,dy, \qquad y \in (0,\;\infty )\;\longrightarrow\;x \in (0,\;1),$$
where the minus sign in dx reverses the orientation of the x interval from (1, 0) to (0, 1).
Following that, the orthonormality condition, given by Eq. (28), can be written as
$$\int_{0}^{1} {dxL_{m} \left( {\ln \frac{1}{x}} \right)L_{n} \left( {\ln \frac{1}{x}} \right) = \delta_{m,n} } .$$
Now the expansion of an arbitrary function F(x) would take the form
$$F(x) = \sum\limits_{n = 0}^{\infty } {F_{n} } L_{n} \left( {\ln \frac{1}{x}} \right),$$
so that for the expansion coefficient \(F_n\) we can write
$$F_{n} = \int_{0}^{1} d x{\mkern 1mu} L_{n} \left( {\ln \frac{1}{x}} \right)F(x).$$
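As a simple illustration of Eqs. (36, 37) (our own consistency check, not taken from the original references), consider F(x) = x. With the substitution \(x = e^{-y}\) and the standard Laplace transform \(\int_0^\infty e^{-sy} L_n(y)\,dy = (s-1)^n /s^{n+1}\), Eq. (37) gives
$$F_{n} = \int_{0}^{1} {dx\,L_{n} \left( {\ln \frac{1}{x}} \right)x} = \int_{0}^{\infty } {dy\,e^{ - 2y} L_{n} (y)} = \frac{1}{{2^{n + 1} }},$$
so the Laguerre coefficients of this test function decay geometrically and the truncated expansion of Eq. (36) converges rapidly.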
Now we are equipped with the relations required to extract the evolution operators for the parton densities in terms of the Laguerre polynomials, which will be done in the next subsections.
Evolution operator for the non-singlet density: LO approximation
At the leading order (LO) approximation, Eq. (21) for the non-singlet evolution operator can be written as
$$\frac{d}{dt}E_{ - }^{(0)} (t,x) = P_{V}^{(0)} (x) \otimes E_{ - }^{(0)} (t,x),$$
Substituting Eqs. (32, 33) in Eq. (38) and defining
$$p_{i}^{(0)} = P_{i}^{(0)} - P_{i - 1}^{(0)} ,\qquad P_{ - 1}^{(0)} = 0,$$
we arrive at
$$\frac{d}{dt}\left( {E_{n}^{(0)} (t)} \right) = \sum\limits_{m = 0}^{n} {p_{n - m}^{(0)} E_{m}^{(0)} (t)} .$$
By using the initial condition, given by Eq. (23), for the non-singlet sector, the general solution is
$$E_{n}^{(0)} (t) = e^{{P_{0}^{(0)} t}} \sum\limits_{k = 0}^{n} {\frac{{A_{n}^{(k)} t^{k} }}{{k{\mkern 1mu} !}}} ,$$
$$A_{n}^{(0)} = 1,\,\,\,\,\,\,\,A_{n}^{(k + 1)} = \sum\limits_{i = k}^{n - 1} {p_{n - i}^{(0)} } A_{i}^{(k)} .$$
By substituting Eq. (42) in Eq. (41), the evolution operator for the non-singlet sector at the LO approximation is determined, and we can obtain the parton densities at the energy scale \(Q^2\) from Eq. (17).
Evolution operator for the non-singlet density: NLO approximation
We intend now to obtain the solution of the following differential equation
$$\frac{d}{dt}E_{ - } (t,x) = \left( {P_{V}^{(0)} (x) + \frac{\alpha }{2\pi }R_{ - } (x)} \right) \otimes E_{ - } (t,x),$$
where \(R_{-}\) is determined by Eq. (15). We can write the following Laguerre expansion for \(R_{ - }\):
$$R_{ - } (x) = \sum\limits_{n = 0}^{\infty } {R_{n} } L_{n} \left( {\ln \frac{1}{x}} \right).$$
A similar Laguerre expansion exists for \(E_{ - } (t,x)\); substituting it in Eq. (43) and using Eqs. (32, 33), we arrive at
$$\frac{d}{dt}\left( {E_{n} (t)} \right) = \sum\limits_{m = 0}^{n} {\left( {p_{n - m}^{(0)} + \frac{\alpha (t)}{2\pi }r_{n - m} } \right)} E_{m} (t),$$
$$r_{i} = R_{i} - R_{i - 1} ,\,\,\,\,\,R_{ - 1} = 0.$$
Now, at the NLO approximation, we can write for \(E_n(t)\)
$$E_{n} (t) = E_{n}^{(0)} (t) + AE_{n}^{(1)} (t).$$
To simplify the calculation we denote \(AE_{n}^{(1)} (t)\) by \(S_n(t)\). Therefore Eq. (45) can be written as
$$\frac{d}{dt}\left( {E_{n}^{(0)} (t)} \right) + \frac{d}{dt}\left( {S_{n} (t)} \right) = \sum\limits_{m = 0}^{n} {\left( {p_{n - m}^{(0)} + \frac{\alpha (t)}{2\pi }r_{n - m} } \right)} \left( {E_{m}^{(0)} (t) + S_{m} (t)} \right).$$
Using Eq. (40) and the initial condition, given by Eq. (23), we will get
$$S_{n} (t) = - \frac{{\beta_{0} }}{2}\frac{\alpha (t) - \alpha (0)}{2\pi }\sum\limits_{i = 0}^{n} {r_{n - i} } E_{i}^{(0)} (t).$$
In summary, the \(E_n(t)\) term up to the NLO approximation has the following form:
$$E_{n} (t) = E_{n}^{(0)} (t) - \frac{{\beta_{0} }}{2}\frac{\alpha (t) - \alpha (0)}{2\pi }E_{n}^{(1)} (t),$$
$$E_{n}^{(0)} (t) = e^{{P_{0}^{(0)} t}} \sum\limits_{k = 0}^{n} {\frac{{A_{n}^{(k)} t^{k} }}{{k{\mkern 1mu} !}}} ,\,\,\,\,\,\,\,\,\,\,E_{n}^{(1)} (t) = \sum\limits_{i = 0}^{n} {r_{n - i} E_{i}^{(0)} (t)} .$$
As in the LO approximation, by substituting Eq. (50) in the related expansion for \(E_{ - } (t,x)\), the evolution operator for the non-singlet sector at the NLO approximation is determined; finally, the parton densities at any energy scale can be obtained using Eq. (17).
Evolution operator for the singlet and gluon densities
This subsection contains two parts. First, the solutions for the singlet sector, \(q^{(+)}\), and the gluon density are introduced. Next, using the solution for the singlet sector and the \(\chi_i\) distribution, it is possible to obtain \(q_{i}^{( + )}\), which is defined by Eq. (5). Then, with the valence distribution from the previous subsection and the \(q_{i}^{( + )}\) distribution, the sea quark distributions for the individual flavors are obtained. The details of these calculations are as follows.
In order to extract the sea quark densities, Eq. (9) should first be solved; in terms of the evolution operator we have [see Eq. (21)]:
$$\frac{d}{dt}E_{ + } (t,x) = \left( {P_{V}^{(0)} (x) + \frac{\alpha }{2\pi }R_{ + } (x) + \cdots } \right) \otimes E_{ + } (t,x).$$
The solution of this equation at the LO approximation is like the non-singlet case. At the NLO approximation, we should just do the following replacements with respect to the non-singlet case:
$$P_{ - } \to P_{ + } ,\qquad E_{ - } \to E_{ + } ,\qquad R_{ - } \to R_{ + } .$$
In the next step, Eq. (14) for the singlet and gluon densities should be solved; at the LO and NLO approximations this is done as follows:
LO approximation
At the LO approximation, for the related evolution operator we have [see Eq. (22)]
$$\frac{d}{dt}\left( {\begin{array}{*{20}c} {E_{qq}^{(0)} (t,x)} &\quad {E_{qg}^{(0)} (t,x)} \\ {E_{gq}^{(0)} (t,x)} &\quad {E_{gg}^{(0)} (t,x)} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {P_{qq}^{(0)} (x)} &\quad {P_{qg}^{(0)} (x)} \\ {P_{gq}^{(0)} (x)} &\quad {P_{gg}^{(0)} (x)} \\ \end{array} } \right) \otimes \left( {\begin{array}{*{20}c} {E_{qq}^{(0)} (t,x)} &\quad {E_{qg}^{(0)} (t,x)} \\ {E_{gq}^{(0)} (t,x)} &\quad {E_{gg}^{(0)} (t,x)} \\ \end{array} } \right).$$
The matrix evolution operator in Eq. (54) can be expanded in terms of the Laguerre polynomials as follows:
$$\left( {\begin{array}{*{20}c} {E_{qq}^{(0)} (t,x)} & \quad {E_{qg}^{(0)} (t,x)} \\ \quad {E_{gq}^{(0)} (t,x)} & \quad{E_{gg}^{(0)} (t,x)} \\ \end{array} } \right) = \sum\limits_{n = 0}^{\infty } {\left( {\begin{array}{*{20}c} {E_{n,qq}^{(0)} (t)} & \quad{E_{n,qg}^{(0)} (t)} \\ \quad {E_{n,gq}^{(0)} (t)} & \quad {E_{n,gg}^{(0)} (t)} \\ \end{array} } \right)L_{n} \left( {\ln \frac{1}{x}} \right)} .$$
The general solution for the elements of the matrix evolution operator, obtained by analytical consideration of the moments, is as follows:
$$E^{(0)} (t,s) = e_{1} (s)\,e^{{\lambda_{1} (s)t}} + e_{2} (s)\,e^{{\lambda_{2} (s)t}} ,$$
where \(e_{1} ,e_{2}\) are projection matrix operators with the following properties
$$e_{1} (s) + e_{2} (s) = I,\,\,\,\,\,\lambda_{1} (s)\,e_{1} (s)\, + \lambda_{2} (s)\,e_{2} (s)\, = \hat{P}^{(0)} (s),\,\,\,\,\,e_{i} (s)\,e_{j} (s) = \delta_{ij} \,e_{i} (s).$$
\(\lambda_1(s)\) and \(\lambda_2(s)\) are the eigenvalues of the \(\hat{P}^{(0)} (s)\) matrix; at s = 1 the first one is given by
$$\lambda_{1} (1) \equiv \lambda = - \left( {\frac{4}{3}C_{F} + \frac{2}{3}n_{f} T_{R} } \right)$$
and due to momentum conservation at each vertex of parton splitting, we have \(\lambda_2(1) = 0\) (Furmanski and Petronzio 1982a, b).
Considering the Laguerre expansions of the evolution operator and of the splitting function in matrix form, together with Eq. (22), which connects these two expansions, we arrive at:
$$E_{n}^{(0)} (t) = \sum\limits_{k = 0}^{n} {\frac{{t^{k} }}{{k{\mkern 1mu} !}}} \left( {A_{n}^{(k)} + B_{n}^{(k)} e^{\lambda t} } \right).$$
In Eq. (59), the \(A_{n}^{(k)}\) and \(B_{n}^{(k)}\) coefficients are two-dimensional matrices which are obtained from the following recurrence relations (Furmanski and Petronzio 1982a, b)
$$A_{0}^{(0)} = e_{2} ,\,\,\,\,\,\,\,\,\,\,B_{0}^{(0)} = e_{1} ,$$
$$\left\{ {\begin{array}{*{20}l} {A_{n}^{(k + 1)} = \lambda \,e_{1} \,A_{n}^{(k)} + \sum\nolimits_{i = k}^{n - 1} {p_{n - i}^{(0)} \,A_{i}^{(k)} ,} } \hfill & {n > 0,} \hfill \\ {B_{n}^{(k + 1)} = - \lambda \,e_{2} \,B_{n}^{(k)} + \sum\nolimits_{i = k}^{n - 1} {p_{n - i}^{(0)} \,B_{i}^{(k)} } ,} \hfill & {k = 0,1,2, \ldots ,n - 1.} \hfill \\ \end{array} } \right.$$
In deriving these recurrence relations, Eq. (32) is used. Based on Eq. (57), the required matrix quantities in Eq. (61) are given by
$$\begin{aligned} & {e_{\,1} \equiv e_{\,1} (1),} \quad {e_{2} \equiv e_{\,2} (1),} \\ & {e_{1} = \frac{1}{\lambda }P_{0}^{(0)} ,} \quad {e_{2} = \frac{1}{\lambda }\left( { - P_{0}^{(0)} + \lambda I} \right) = - e_{1} + I.} \end{aligned}$$
At the beginning we need the initial values for \(A_{n}^{(k)}\) and \(B_{n}^{(k)}\) which are given by
$$\left\{ \begin{aligned} A_{n}^{(0)} = e_{2} - \frac{1}{{\lambda^{n} }}\left( {e_{1} \,a_{n}^{(n)} - ( - 1)^{n} e_{2} \,b_{n}^{(n)} } \right), \hfill \\ B_{n}^{(0)} = e_{1} + \frac{1}{{\lambda^{n} }}\left( {e_{1} \,a_{n}^{(n)} - ( - 1)^{n} e_{2} \,b_{n}^{(n)} } \right), \hfill \\ \end{aligned} \right.$$
$$\left\{ \begin{aligned} a_{n}^{(k + 1)} = \lambda \,e_{1} \,a_{n}^{(k)} + \sum\nolimits_{i = k}^{n - 1} {p_{n - i}^{(0)} \,A_{i}^{(k)} ,\,\,\,\,\,\,\,\,} a_{n}^{(0)} = 0, \hfill \\ b_{n}^{(k + 1)} = - \lambda \,e_{2} \,b_{n}^{(k)} + \sum\nolimits_{i = k}^{n - 1} {p_{n - i}^{(0)} \,B_{i}^{(k)} ,\,\,\,\,\,\,\,\,} b_{n}^{(0)} = 0. \hfill \\ \end{aligned} \right.$$
Substituting the coefficients \(A_{n}^{(k)}\) and \(B_{n}^{(k)}\) in Eq. (59), and summing the resulting Laguerre expansion for the matrix evolution operator, the operator is obtained and we are able to use Eq. (19) to evolve the singlet and gluon densities to higher energy scales. Equations (19, 20) at the LO approximation can be represented by:
$$\left\{ {\begin{array}{*{20}c} {q^{( + )} (t,x) = E_{qq}^{(0)} (t,x) \otimes \tilde{q}^{( + )} (x) + E_{qg}^{(0)} (t,x) \otimes \tilde{G}(x)} \\ {G(t,x) = E_{gq}^{(0)} (t,x) \otimes \tilde{q}^{( + )} (x) + E_{gg}^{(0)} (t,x) \otimes \tilde{G}(x)} \\ \end{array} } \right..$$
Using Eq. (65), we can obtain the gluon and singlet distributions, \(q^{(+)}\); then, using the evolved valence quark (non-singlet) and \(\chi_i\) distributions, Eqs. (17, 18), the sea distributions at the LO approximation are obtained [see Eqs. (5, 6)]
$$\bar{q}_{i} = \frac{1}{2}\left( {\chi_{i} + \frac{1}{{n_{f} }}q^{( + )} - q_{i}^{V} } \right).$$
NLO approximation
The evolution operator in Eq. (22) at the NLO approximation is written as:
$$E(t,x) = E^{(0)} (t,x) + \frac{\alpha (t)}{2\pi }E^{(1)} (t,x).$$
The LO contribution, \(E^{(0)} (t,x)\), is given by Eq. (59); for the NLO contribution we should use the following relations (Furmanski and Petronzio 1982a, b)
$$E_{n}^{(1)} (t) = \tilde{E}_{n}^{(1)} (t) - 2\tilde{E}_{n - 1}^{(1)} (t) + \tilde{E}_{n - 2}^{(1)} (t),\quad \quad \tilde{E}_{ - 1}^{(1)} (t) = \tilde{E}_{ - 2}^{(1)} (t) = 0,$$
where \(\tilde{E}_{n}^{(1)}\) can be obtained from \(E_{n}^{(0)}\), using the following integral:
$$\tilde{E}_{n}^{(1)} (t) = \int_{0}^{t} {d\tau \,e^{{ - \beta_{0} \frac{\tau }{2}}} \sum\limits_{i,j,k} {E_{i}^{(0)} (t - \tau )\,R_{j} \,E_{k}^{(0)} (\tau )} \,\delta (n - i - j - k)} .$$
where \(R_{j}\) are the expansion coefficients of R(x) in terms of the Laguerre polynomials, so that
$$R(x) = \sum\limits_{n = 0}^{\infty } {R_{n} \,} L_{n} \left( {\ln \frac{1}{x}} \right).$$
The sum in Eq. (69) runs over all indices satisfying the constraint imposed by the Dirac delta function, namely n = i + j + k.
As before, by also accessing the non-singlet and \(\chi_i\) densities and using Eqs. (19, 50, 59, 68), the sea quark densities at the NLO approximation can be extracted.
Now, equipped with the required theoretical framework based on the Laguerre polynomial expansion, we are able to obtain numerical results for the parton densities at any energy scale.
Basic tools, Monte Carlo solutions
Here, we describe in full the numerical solution of the DGLAP evolution equations, using the FORTRAN programming language. At first, we introduce the general structure of the programs, functions and subroutines which we use in all our FORTRAN codes. The programs are divided into two parts, covering the LO and NLO approximations, and each part contains a non-singlet and a singlet section. We then present the numerical results by depicting the related parton densities at \(Q^2 = 4, 50\) and \(200\ \text{GeV}^2\). We also compare our results with the corresponding results of the CTEQ and GRSV phenomenological groups. Comparisons with other numerical results have also been carried out.
Functions, subroutines and main programs
We write the required codes in the FORTRAN 90 language to solve the DGLAP evolution equations numerically. The basic method to evaluate the integrals is Monte Carlo simulation. The only generic subroutine used is Ran3, which generates random numbers; all the other subroutines and functions are written by us. We first introduce these functions and subroutines and then illustrate the compiled programs in the different sections.
Ran3(idum) function
This function generates random numbers with uniform distribution between 0 and 1, based on the Park–Miller method with the corrections suggested by Knuth. Any negative integer can be used as the input idum, and this input should not be changed between subsequent calls. The period of the generator is of order \(10^{8}\) (Press et al. 1996). Random numbers in the interval [a, b] are generated from the Ran3 function according to the following formula
$$y = a + (b - a)\,{\text{Ran3}}(idum).$$
Using Eq. (70) repeatedly, we can see that the generated numbers have uniform distribution and therefore we can use them to perform Monte Carlo integrations.
Monte Carlo integration
This method of integration obtains definite integrals by generating random numbers. There are generally two different ways to perform this kind of integration (Press et al. 1996).
Averaged Monte Carlo integration
For a function f(x) on the interval [a, b], with average value \(\bar{f}\), the integral can be obtained as follows:
$$\int_{a}^{b} {f(x)dx = } (b - a)\overline{f} .$$
To calculate the average, we first generate N random numbers with uniform distribution in the [a, b] interval and then compute the averaged function \(\bar{f}_N\) according to
$$\bar{f}_{N} = \frac{{\sum\nolimits_{i = 1}^{N} {f(x_{i} )} }}{N}.$$
Therefore, Eq. (71) can now be written as (see Fig. 1I)
$$\int_{a}^{b} {f(x)dx \approx } (b - a)\bar{f}_{N} .$$
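As an aside, the following minimal FORTRAN 90 sketch illustrates Eq. (73) on the test integrand f(x) = x², whose exact integral over [0, 1] is 1/3; the intrinsic random_number is used here as a stand-in for the Ran3 generator, and the program name avg_mc is ours, chosen for illustration.

program avg_mc
  implicit none
  integer, parameter :: N = 1000000
  double precision :: a, b, x, u, fsum
  integer :: i
  a = 0.0d0; b = 1.0d0
  fsum = 0.0d0
  do i = 1, N
     call random_number(u)          ! uniform deviate in [0,1)
     x = a + (b - a)*u              ! mapped to [a,b], cf. Eq. (70)
     fsum = fsum + f(x)
  end do
  print *, 'estimate =', (b - a)*fsum/dble(N)   ! Eq. (73); exact value: 1/3
contains
  double precision function f(x)   ! test integrand f(x) = x**2
    double precision, intent(in) :: x
    f = x*x
  end function f
end program avg_mc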
I An arbitrary function f and its average value. II The generated pairs of numbers
Monte Carlo integration, based on the pair-point method
In the first step, the random number \(x_i\) is generated uniformly in the [a, b] interval [using Eq. (70)]. In the second step, the random number \(y_i\) is generated in the [0, c] interval, where \(c \ge f_{\max }\) and \(f_{\max }\) is the maximum value of the function f on [a, b]. We therefore obtain pairs of numbers \((x_i , y_i )\) in a rectangle of dimensions b − a and c. Repeating the generation processes of the first and second steps N times produces N pairs of numbers in the rectangle. According to Fig. 1II, if we count the m pairs of numbers which lie below f(x), we then have:
$$\int_{a}^{b} f(x)dx \approx \frac{m}{N}(b - a)c$$
Valence u quark densities in the LO approximation at energy scales \(Q^2 = 4, 50, 200\ \text{GeV}^2\). Comparison with the CTEQ4L and GRSV98LO parameterization groups has also been done
We should note that for negative values of \(y_i\), we decrease m by one unit and consider the absolute value of the function f when producing the related pair. This method is more appropriate for large, complicated integrals, and by increasing the number of repetitions N we get a better solution. After a sufficient number of iterations the solution converges, and the number of repetitions need not be increased further.
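A corresponding FORTRAN 90 sketch of the pair-point method of Eq. (74), again on the nonnegative test integrand f(x) = x² with c = 1 ≥ f_max, with random_number standing in for Ran3 (the program name pair_mc is ours):

program pair_mc
  implicit none
  integer, parameter :: N = 1000000
  double precision :: a, b, c, x, y, u, v
  integer :: i, m
  a = 0.0d0; b = 1.0d0
  c = 1.0d0                         ! c >= max of f on [a,b]
  m = 0
  do i = 1, N
     call random_number(u)
     call random_number(v)
     x = a + (b - a)*u              ! first member of the pair, in [a,b]
     y = c*v                        ! second member, in [0,c]
     if (y <= x*x) m = m + 1        ! count pairs lying below f(x)
  end do
  print *, 'estimate =', dble(m)/dble(N)*(b - a)*c   ! Eq. (74); exact: 1/3
end program pair_mc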
Laguerre function, xlag(N,y,nmax)
The function xlag(N,y,nmax) gives the numerical values of the Laguerre function at each order. Given the Laguerre functions at two successive orders n and n − 1, the Laguerre function at order n + 1 is obtained from the recurrence relation of Eq. (26) (Arfken and Weber 2005).
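A possible two-argument variant of the xlag idea (a sketch under our own naming, not the exact routine of the program) evaluates \(L_n(y)\) with the recurrence of Eq. (26):

double precision function xlag(n, y)
  ! Laguerre polynomial L_n(y) from the recurrence of Eq. (26),
  ! starting from L_0(y) = 1 and L_1(y) = 1 - y.
  implicit none
  integer, intent(in) :: n
  double precision, intent(in) :: y
  double precision :: lm1, l0, lp1
  integer :: k
  lm1 = 1.0d0                       ! L_0(y)
  l0  = 1.0d0 - y                   ! L_1(y)
  if (n == 0) then
     xlag = lm1
     return
  end if
  do k = 1, n - 1                   ! builds L_{k+1} from L_k and L_{k-1}
     lp1 = 2.0d0*l0 - lm1 - ((1.0d0 + y)*l0 - lm1)/dble(k + 1)
     lm1 = l0
     l0  = lp1
  end do
  xlag = l0
end function xlag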
Subroutine intp0(p0,ymin,ymax,ndat,nmax)
This subroutine produces the Laguerre expansion coefficients of the splitting function, using Monte Carlo integration. The input of this subroutine is the splitting function, and the output is an array, denoted p0, which contains the differences of subsequent expansion coefficients.
Subroutine intR(R,ymin,ymax,ndat,nmax)
This subroutine produces the expansion coefficients of a combined splitting function at the NLO approximation in each order, using Monte Carlo integration. The input of this subroutine is the splitting function, and the output is an array, denoted rn, which contains the differences of subsequent expansion coefficients.
Splitting functions, FPn0(n,x,nmax),…
The outputs of these functions are the numerical values of the splitting functions multiplied by the Laguerre polynomials, in which the existing singularities are removed by the plus-prescription method. The plus prescription takes advantage of the following relation (Greiner et al. 1996):
$$\int_{0}^{1} {dx\frac{f(x)}{{\left( {1 - x} \right)_{ + } }}} = \int_{0}^{1} {dx\frac{f(x) - f(1)}{1 - x}}$$
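As a quick check of Eq. (76) (our own example), take f(x) = x: the subtracted integrand becomes (x − 1)/(1 − x) = −1, so
$$\int_{0}^{1} {dx\frac{x}{{\left( {1 - x} \right)_{ + } }}} = \int_{0}^{1} {dx\frac{x - 1}{1 - x}} = - 1,$$
which is finite even though the unsubtracted integrand diverges at x = 1.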
Function E0Lag(y0,ELO,nmax)
Considering the Laguerre expansion of the evolution operator introduced in the subsection "Laguerre expansion", the numerical values of this function can be obtained at each order of the Laguerre expansion in terms of the t and x variables.
Functions qinq(z), …
These functions give the parton densities at the initial energy scale \(Q_{0}^{2} = \,2.56\,{\text{GeV}}^{2}\) and their combinations, which can be found in Lai et al. (1997). These functions are evaluated at the random numbers generated by the Monte Carlo program. The results of the CTEQ4L and CTEQ4M fitting groups are used for the initial parton densities at the LO and NLO approximations, respectively.
Function Zeta(is)
This function is defined by Ellis et al. (1996)
$$\zeta (s) = \sum\limits_{k = 1}^{\infty } {\frac{1}{{k^{s} }}} .$$
Function S2(Y)
The S2(Y) function is defined by Ellis et al. (1996)
$$S_{2} (x) = - 2Li_{2} ( - x) + \frac{1}{2}\ln^{2} (x) - 2\ln (x)\ln (1 + x) - \frac{{\pi^{2} }}{6}.$$
where the dilogarithm function \(Li_2(x)\) can be computed from:
$$Li_{2} (x) = - \int_{0}^{x} {dy\frac{\ln (1 - y)}{y}} = \frac{x}{{1^{2} }} + \frac{{x^{2} }}{{2^{2} }} + \frac{{x^{3} }}{{3^{2} }} + \cdots \quad {\text{for}}\;\left| x \right| \le 1.$$
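A minimal sketch of a truncated-series evaluation of this function for |x| ≤ 1 (the function name li2_series is ours; the actual routine used inside S2 may differ):

double precision function li2_series(x)
  ! Truncated dilogarithm series: Li2(x) = sum_{k>=1} x**k / k**2,
  ! which converges quickly for |x| < 1.
  implicit none
  double precision, intent(in) :: x
  double precision :: term, s
  integer :: k
  s = 0.0d0
  term = x                          ! holds x**k
  do k = 1, 2000
     s = s + term/dble(k)**2
     term = term*x
     if (abs(term) < 1.0d-16) exit  ! series has converged
  end do
  li2_series = s
end function li2_series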
The other required functions and subroutines will be introduced in the respective sections.
The programs and the results
All programs are written in four parts:
Non-singlet sector at the LO approximation
In this case, we are concerned with the evolution of the valence quarks. According to the notation of the section "DGLAP evolution equations", we just need to consider the DGLAP equation for \(q_{i}^{( - )}\). To obtain the numerical solution, we proceed as follows:
First, we calculate numerically the expansion coefficients of the splitting functions, using Eq. (37), where we set the upper limit of the summation in Eq. (36) to 30. To avoid the singularities which exist in the splitting functions, we work with the splitting functions multiplied by x rather than with the functions themselves, and in the end we display x times the parton densities. Therefore, the expansion coefficients are obtained via the following relation:
$$P_{n}^{(0)} = \int_{0}^{1} d x{\mkern 1mu} L_{n} (\ln \left(\frac{1}{x}\right))\,xP_{V}^{(0)} (x),$$
in which (Furmanski and Petronzio 1980):
$$P_{V}^{(0)} (x) = P_{qq}^{(0)} (x) = C_{F} \left( {\frac{{1 + x^{2} }}{{\left( {1 - x} \right)_{ + } }} + \frac{3}{2}\delta (1 - x)} \right),$$
The contribution of Dirac delta function in Eq. (81) is given by:
$$\delta P_{n}^{(0)} = \int_{0}^{1} d x{\mkern 1mu} L_{n} (\ln \left(\frac{1}{x}\right))x\frac{3}{2}C_{F} \delta (1 - x) = \frac{3}{2}C_{F} = 2.$$
Considering the rest of \(P_{V}^{(0)} (x)\), the final result for Eq. (80) would be:
$$P_{n}^{(0)} = C_{F} \int_{0}^{1} d x{\mkern 1mu} L_{n} \left( {\ln \left( {\frac{1}{x}} \right)} \right)x\frac{{1 + x^{2} }}{{\left( {1 - x} \right)_{ + } }} + \delta P_{n}^{(0)} .$$
By applying the plus prescription, Eq. (76), the final result for \(P_{n}^{(0)}\) is given by:
$$P_{n}^{(0)} = C_{F} \int_{0}^{1} d x{\mkern 1mu} \left( {\frac{{L_{n} \left( {\ln \left( {\frac{1}{x}} \right)} \right)x(1 + x^{2} ) - 2}}{{\left( {1 - x} \right)}}} \right) + 2.$$
The integral in Eq. (84) is evaluated numerically, using the pair-point method [see Eq. (74)]; the result is provided by the intp0 subroutine. The outputs of the program, which contain the differences between two subsequent expansion coefficients, are saved in an array called p0(0:nmax) [see Eq. (39)].
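For concreteness, the integrand of Eq. (84) can be coded as follows (a sketch with our own naming, assuming the two-argument xlag evaluator sketched earlier; the constant +2 of Eq. (84) is added after the Monte Carlo integration):

double precision function p0_integrand(n, x)
  ! Plus-prescription-subtracted integrand of Eq. (84); the
  ! subtraction of 2 in the numerator keeps it finite as x -> 1.
  implicit none
  integer, intent(in) :: n
  double precision, intent(in) :: x
  double precision, parameter :: CF = 4.0d0/3.0d0
  double precision, external :: xlag
  p0_integrand = CF*(xlag(n, log(1.0d0/x))*x*(1.0d0 + x*x) - 2.0d0) &
                 /(1.0d0 - x)
end function p0_integrand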
Now, using Eq. (42), it is possible to calculate the matrix A numerically. The sum relating to the A matrix in Eq. (42) is calculated in the sumA(A,p0,nmax) subroutine, where p0 is the output of the intp0 subroutine. Therefore A in the sumA(A,p0,nmax) subroutine is a two-dimensional array, presented as \(A(0:k\hbox{max} \,,\,0:n\hbox{max} )\) in our program.
The sum in Eq. (42) is calculated with three loops over the i, k and n indices; the last index, i, contains an additional sum by itself. All these steps are performed in the sumA(A, p0, nmax) subroutine. The general form of the A matrix is as follows:
$$A = \left( {\begin{array}{*{20}c} {} &\quad {n = 0} &\quad {n = 1} &\quad {n = 2} &\quad . &\quad . &\quad . \\ {k = 0} &\quad 1 &\quad 1 &\quad 1 &\quad . &\quad . &\quad . \\ {k = 1} &\quad 0 &\quad {p_{1}^{(0)} } &\quad {p_{2}^{(0)} + p_{1}^{(0)} } &\quad . &\quad . &\quad . \\ {k = 2} &\quad 0 &\quad 0 &\quad {(p_{1}^{(0)} )^{2} } &\quad . &\quad . &\quad . \\ . &\quad . &\quad . &\quad . &\quad . &\quad . &\quad . \\ . &\quad . &\quad . &\quad . &\quad . &\quad . &\quad . \\ . &\quad . &\quad . &\quad . &\quad . &\quad . &\quad . \\ \end{array} } \right).$$
Now, considering the A matrix from step 2 and p0 from the first step, it is possible to calculate the evolution operator at the LO approximation numerically, using Eq. (41).
The related program is given by the ELOn(ELO,A,p0,nmax) subroutine, in which the variable t at the LO approximation has been defined:
$$t_{LO} = - \frac{2}{{\beta_{0} }}\ln \frac{{\alpha_{LO} (Q^{2} )}}{{\alpha_{LO} (Q_{0}^{2} )}},$$
$$\alpha_{LO} (Q^{2} ) = \frac{4\pi }{{\beta_{0} }}{\mkern 1mu} \frac{1}{{\ln \left( {\frac{{Q^{2} }}{{\varLambda^{2} }}} \right)}}.$$
The required quantities in the definition of t can be found in Gluck et al. (1998) [see Eq. (3)].
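As a small illustration (the helper name t_lo is ours, not from the program), the t variable of Eq. (86) can be computed directly; note that the 4π/β₀ prefactor of Eq. (87) cancels in the ratio of couplings:

double precision function t_lo(Q2, Q02, beta0, lambda)
  ! Eq. (86) with the LO coupling of Eq. (87); Q2 and Q02 in GeV^2,
  ! lambda is the QCD cutoff in GeV.
  implicit none
  double precision, intent(in) :: Q2, Q02, beta0, lambda
  double precision :: ratio
  ratio = log(Q02/lambda**2)/log(Q2/lambda**2)   ! alpha(Q2)/alpha(Q02)
  t_lo = -(2.0d0/beta0)*log(ratio)
end function t_lo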
The convolution integral in Eq. (17) would now take the form:
$$q_{i}^{( - )} (t,x) = E_{ - } (t,x{\mkern 1mu} ) \otimes {\mkern 1mu} \tilde{q}_{i}^{( - )} (x) = \int_{x}^{1} {E_{ - } \left( {t,\frac{x}{y}} \right)} \,{\mkern 1mu} \tilde{q}_{i}^{( - )} (y)\frac{dy}{y},$$
or alternatively:
$$q_{i}^{( - )} (t,x) = \int_{x}^{1} {E_{ - } \left( {t,\frac{x}{y}} \right)} {\mkern 1mu} \left[ {y\tilde{q}_{i}^{( - )} (y)} \right]\frac{dy}{{y^{2} }},$$
since in the CTEQ4 parameterization (Lai et al. 1997) the initial densities are presented as x times the parton densities. Considering the Laguerre expansion of the evolution operator, Eq. (89) can be written as:
$$q_{i}^{( - )} (t,x) = \int_{x}^{1} {\sum\limits_{n = 0}^{n\max } {E_{n}^{(0)} (t)L_{n} \left( {\ln \frac{y}{x}} \right)} \left[ {y\tilde{q}_{i}^{( - )} (y)} \right]\frac{dy}{{y^{2} }}} .$$
where \(E_{n}^{(0)} (t)\) is given by Eq. (41); its use in the numerical calculation was described in the previous step. In Eq. (90), \(y\tilde{q}_{i}^{( - )} (y)\) represents the initial parton densities at the energy scale \(Q_{0}^{2} = \,2.56\,{\text{GeV}}^{2}\). The numerical solution of Eq. (90) is carried out in the main part of the program. For this purpose we first perform a loop over x in the interval, say, [0.001, 1] with step 0.001, and then do the integration over y for each value of x.
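The loop just described might look as follows (a sketch under our own naming, assuming the xlag evaluator above, an array En0 of coefficients from Eq. (41), and a user-supplied function yqtilde returning \(y\tilde{q}_{i}^{(-)}(y)\)):

double precision function q_evolved(x, En0, nmax, nsamp)
  ! Averaged Monte Carlo estimate of the y integral in Eq. (90)
  ! for one value of x.
  implicit none
  integer, intent(in) :: nmax, nsamp
  double precision, intent(in) :: x, En0(0:nmax)
  double precision, external :: xlag, yqtilde
  double precision :: y, u, esum, fsum
  integer :: i, n
  fsum = 0.0d0
  do i = 1, nsamp
     call random_number(u)
     y = x + (1.0d0 - x)*u                  ! uniform in [x,1]
     esum = 0.0d0
     do n = 0, nmax                         ! Laguerre sum for E_-(t, x/y)
        esum = esum + En0(n)*xlag(n, log(y/x))
     end do
     fsum = fsum + esum*yqtilde(y)/(y*y)    ! integrand of Eq. (90)
  end do
  q_evolved = (1.0d0 - x)*fsum/dble(nsamp)  ! averaged method, Eq. (73)
end function q_evolved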
We perform the integration using the pair-point method [Eq. (74)], which is specified in the program by the index pair. This program is run at the three energy scales \(Q^2 = 4, 50, 200\ \text{GeV}^2\), where the wall-clock time depends on the number of produced random numbers. The output of the program is labeled "Our result". The results are depicted in Figs. 2, 3 and compared with the CTEQ (Lai et al. 1997) and GRV (Gluck et al. 1995) parameterization groups.
Valence d quark densities in the LO approximation at energy scales \(Q^2 = 4, 50, 200\ \text{GeV}^2\). Comparison with the CTEQ4L and GRSV98LO parameterization groups has also been done
Non-singlet sector at NLO approximation
In this case we should numerically solve Eq. (12) at the NLO approximation. We first need to calculate \(R_{-}\), which was defined by Eq. (15). This section, like the previous one, is divided into four parts.
The required splitting functions at the NLO approximation can be found in Furmanski and Petronzio (1980), Herrod and Wada (1980). Whenever needed, we use the plus-prescription technique to remove the singularities in the calculations [see Eq. (76)]. To obtain the Laguerre expansion coefficients, all the splitting functions should be multiplied by \(xL_{n} \left( {\ln 1/x} \right)\) and then integrated numerically as before [Eq. (80)]. The integration resulting from the Dirac delta function in the splitting functions must be performed separately. These integrals, together with the integrals whose singularities have been removed, as well as the rest of the results, produce the following functions which we need to run the program:
$$xRnglag, \, xP0qqlag, \, xP1nglag, \, PF, \, PA, \, xPGlag, \, fPG, \, xPNFlag, \, fPNF.$$
The subroutine which provides us with the Laguerre expansion coefficients of \(P_{ - } \,,\,\,R_{ - }\) is called intp0(p0,rn,xmin,xmax,ndat,mmax). The contribution from the Dirac delta function in the splitting function can be expressed by:
$$\begin{aligned} \delta P_{n - }^{(1)} &= \int_{0}^{1} d x{\mkern 1mu} L_{n} \left( {\ln \left( {\frac{1}{x}} \right)} \right)x\left[ {C_{F}^{2} \delta (1 - x)\left( {\frac{3}{8} - \frac{1}{2}\pi^{2} + 6\zeta (3)} \right)} \right. \hfill \\ &\quad \left. + \frac{1}{2}C_{F} C_{A} \delta (1 - x)\left( {\frac{17}{12} + \frac{11}{9}\pi^{2} - 6\zeta (3)} \right) - C_{F} T_{R} n_{f} \delta (1 - x)\left( {\frac{1}{6} + \frac{2}{9}\pi^{2} } \right) \right] \hfill \\ &= C_{F}^{2} \left( {\frac{3}{8} - \frac{1}{2}\pi^{2} + 6\zeta (3)} \right) + \frac{1}{2}C_{F} C_{A} \left( {\frac{17}{12} + \frac{11}{9}\pi^{2} - 6\zeta (3)} \right) - C_{F} T_{R} n_{f} \left({\frac{1}{6} + \frac{2}{9}\pi^{2} } \right), \hfill \\ \end{aligned}$$
$$\begin{aligned} \delta R_{n - } & = \delta P_{n - }^{1} (x) - \frac{{\beta_{1} }}{{2\beta_{0} }}\delta P_{n}^{(0)} (x) = C_{F}^{2} \left( {\frac{3}{8} - \frac{1}{2}\pi^{2} + 6\zeta (3)} \right) \\ & \quad + \frac{1}{2}C_{F} C_{A} \left( {\frac{17}{12} + \frac{11}{9}\pi^{2} - 6\zeta (3)} \right) - C_{F} T_{R} n_{f} \left( {\frac{1}{6} + \frac{2}{9}\pi^{2} } \right) - \frac{{\beta_{1} }}{{2\beta_{0} }}(2). \\ \end{aligned}$$
The numerical values resulting from Eqs. (91, 92) are calculated in the intp0 subroutine. Subsequently, by adding all the contributions from the splitting function, the related Laguerre expansion coefficients can be calculated as:
$$R_{n - } = \int_{0}^{1} d x{\mkern 1mu} L_{n} \left( {\ln \left( {\frac{1}{x}} \right)} \right)xR_{ - } (x),$$
$$R_{ - } (x) = \left( {P_{NS}^{(1)\, - } - \frac{{\beta_{1} }}{{2\beta_{0} }}P_{qq}^{(0)} (x)} \right),$$
where \(P_{NS}^{(1)\, - }\) has been introduced in Furmanski and Petronzio (1980), Herrod and Wada (1980).
The numerical solutions of Eq. (93) are calculated in the intp0 subroutine. The outputs of the subroutine are the differences between two subsequent coefficients of the expansion, which are put in p0(0:nmax) and rn(0:nmax) as two one-dimensional arrays.
Now, by accessing the two arrays p0(0:nmax) and rn(0:nmax), it is possible to calculate the matrix A as before. The program does not need to be changed: we use sumA(A,p0,nmax) again and simply change its inputs.
In this step, considering the matrix A made in the second step and the values of p0 and rn calculated in the first step, it is possible to calculate the expansion coefficients of the evolution operators, \(E_{n}^{(0)} (t),\,\,E_{n}^{(1)} (t),\,\,E_{n} (t)\) [see Eqs. (50, 51)]. The related program is presented in the ENLOn(ENLO,A,p0,rn,nmax) subroutine, as follows. At first the variable
$$t_{NLO} = - \frac{2}{{\beta_{0} }}\ln \frac{{\alpha_{NLO} (Q^{2} )}}{{\alpha_{NLO} (Q_{0}^{2} )}},$$
is used, where the coupling constant at the NLO approximation is given by:
$$\alpha_{NLO} (Q^{2} ) = \frac{4\pi }{{\beta_{0} }}{\mkern 1mu} \frac{1}{{\ln (\frac{{Q^{2} }}{{\varLambda^{2} }})}}\left( {1 - \frac{{\beta_{1} }}{{\beta_{0}^{2} }}\frac{{\ln \left( {\ln (\frac{{Q^{2} }}{{\varLambda^{2} }})} \right)}}{{\ln (\frac{{Q^{2} }}{{\varLambda^{2} }})}}} \right).$$
The evolution operator at the NLO approximation in terms of the t variable is written as [see Eq. (50)]:
$$E_{n} (t) = E_{n}^{(0)} (t) - \frac{{\beta_{0} }}{2}\frac{{\alpha (Q^{2} ) - \alpha (Q_{0}^{2} )}}{2\pi }E_{n}^{(1)} (t).$$
After performing all the numerical calculations in the mentioned subroutine, the output will be saved in a one dimensional array called ENLO.
This step is like step 4 of the previous part. The valence quark densities are obtained, using the following convolution integral:
$$q_{i}^{( - )} (t,x) = \int_{x}^{1} {\sum\limits_{n = 0}^{n\max } {E_{n} (t)L_{n} \left( {\ln \frac{1}{{{x \mathord{\left/ {\vphantom {x y}} \right. \kern-0pt} y}}}} \right)} \left[ {y\tilde{q}_{i}^{( - )} (y)} \right]\frac{dy}{{y^{2} }}} .$$
The evolution operator is reconstructed from the expansion coefficients \(E_n(t)\) via:
$$E_{ - } (t,\frac{x}{y}) = \sum\limits_{n = 0}^{n\max } {E_{n} (t)} \,L_{n} \left( {\ln \frac{1}{{{x \mathord{\left/ {\vphantom {x y}} \right. \kern-0pt} y}}}} \right).$$
Equation (99) is computed with the E0lag subroutine (see subsection "Function E0Lag(y0,ELO,nmax)"), where \(E_{ - }\) is governed by Eq. (43); since \(R_{ - }\) and \(P_{ - }\) are known, the expansion coefficients \(E_n(t)\) are calculable.
To perform the integration in Eq. (98), we follow the previous method. The only difference is in the initial parton densities, which are the ones given in Lai et al. (1997) but at the NLO approximation. The results of running the programs for the valence u and d quark densities are depicted in Figs. 4, 5 and compared with the results of the CTEQ (Lai et al. 1997) and GRV (Gluck et al. 1995) parameterization groups.
Valence u quark densities in the NLO approximation at energy scales \(Q^2 = 4, 50, 200\ \text{GeV}^2\). Comparison with the CTEQ4M and GRSV98NLO parameterization groups has also been done
Valence d quark densities in the NLO approximation at energy scales \(Q^2 = 4, 50, 200\ \text{GeV}^2\). Comparison with the CTEQ4M and GRSV98NLO parameterization groups has also been done
To indicate the reliability of our numerical calculations, we should provide their statistical errors. This can be done using the following relation:
$$\int {f\,dx} \approx (x_{2} - x_{1} )\left\langle f \right\rangle \pm (x_{2} - x_{1} )\sqrt {\frac{{\left\langle {f^{2} } \right\rangle - \left\langle f \right\rangle^{2} }}{N}}$$
(99-a)
where f refers to the related parton density. The last term in this relation indicates the error of the calculation, where \(\left\langle {f^{2} } \right\rangle\) and \(\left\langle f \right\rangle^{2}\) are given respectively by:
$$\left\langle f \right\rangle^{2} = \left( {\frac{1}{N}\sum\limits_{i = 1}^{N} {f(x_{i} )} } \right)^{2} \quad {\text{and}} \quad \left\langle {f^{2} } \right\rangle = \frac{1}{N}\sum\limits_{i = 1}^{N} {f^{2} (x_{i} )} .$$
In these relations, N denotes the number of points used in the Monte Carlo numerical integration.
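A compact sketch of this estimate (the subroutine name mc_error is ours) accumulates the sampled values and forms the mean and standard error of Eq. (99-a):

subroutine mc_error(fvals, n, x1, x2, estimate, err)
  ! Averaged Monte Carlo estimate with its statistical error,
  ! Eq. (99-a): (x2-x1)*<f> +/- (x2-x1)*sqrt((<f^2>-<f>^2)/N).
  implicit none
  integer, intent(in) :: n
  double precision, intent(in) :: fvals(n), x1, x2
  double precision, intent(out) :: estimate, err
  double precision :: mean, msq
  mean = sum(fvals)/dble(n)          ! <f>
  msq  = sum(fvals**2)/dble(n)       ! <f^2>
  estimate = (x2 - x1)*mean
  err      = (x2 - x1)*sqrt((msq - mean**2)/dble(n))
end subroutine mc_error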
As a result, the statistical errors we obtained for the different valence densities at the typical value \(Q^2 = 50\ \text{GeV}^2\) and at the NLO approximation are as follows:
$$xU_{v} \to 0.76\% ,\qquad xD_{v} \to 0.33\% .$$
As can be seen, these errors are below 1%, which indicates sufficient reliability of our calculations for the valence densities. Complete information on the other statistical errors at different energy scales, for both the LO and NLO approximations, can be found in the appendix.
Singlet sector at the leading order
In the singlet sector, we encounter functions which possess a matrix form. According to the subsection "DGLAP evolution equations", we should solve the matrix evolution equation, Eq. (14), and also the equation for \(\chi_i\) in order to obtain the sea quark densities. First, the gluon and \(q^{(+)}\) densities are obtained. Then, using \(q^{(+)}\) and \(\chi_i\), it is possible to get the \(q_i^{(+)}\) densities. Finally, having the valence and \(q_{i}^{( + )}\) densities, the sea quark densities for any separate flavor are extractable.
For this section, two separate programs have been written; we first take into account the solution for the gluon density.
Gluon distribution at leading order
To get the solution for the gluon densities, we should note that in almost all parts of the calculation we encounter matrix forms. This section involves three steps.
First, we need to define the splitting function (Furmanski and Petronzio 1980; Herrod and Wada 1980)
$$P^{(0)} = \left( {\begin{array}{*{20}c} {P_{qq}^{(0)} } &\quad {P_{qg}^{(0)} } \\ {P_{gq}^{(0)} } &\quad {P_{gg}^{(0)} } \\ \end{array} } \right) \equiv \left( {\begin{array}{*{20}c} {P_{11}^{(0)} } &\quad {P_{12}^{(0)} } \\ {P_{21}^{(0)} } &\quad {P_{22}^{(0)} } \\ \end{array} } \right) \equiv P^{(0)} (IE,JE),\,\,\,\,\,\,\,\,IE,JE = 1,2.$$
As can be seen, the splitting function is defined as a two-dimensional array. As before, the expansion coefficients of the splitting function should be determined, and they form a three-dimensional array:
$$\begin{aligned} P_{n}^{(0)} &= \int_{0}^{1} d x{\mkern 1mu} L_{n} (\ln \left(\frac{1}{x}\right))xP^{(0)} (x), \hfill \\ P_{n}^{(0)}& \equiv P^{(0)} (n,IE,JE). \hfill \\ \end{aligned}$$
Equation (101) is in fact a matrix relation which should be applied to all the elements in Eq. (100). Obviously, we need to resort to the plus-prescription technique to overcome the singularities of the integration in Eq. (101). The integration has been carried out with the first method of numerical integration, the "averaged method" [see Eq. (73)]. The numerical solution of the related integral is inserted in the intp0(p0,e1,e2,xmin,xmax,ndat,nmax) subroutine. The output of this subroutine is the difference between two subsequent expansion coefficients, which is put in a one-dimensional array named p0(0:nmax).
The projection operators \(e_1\) and \(e_2\) are finally given by Eq. (62):
$$e_{1} = \frac{1}{\lambda }P^{(0)} (0,IE,JE) \equiv e1(IE,JE),\,\,\,\,\,\,e_{2} = - e_{1} + I\, \equiv e2(IE,JE).$$
Secondly, we should compute the A and B matrices, given by Eq. (61). To define the related arrays, we first consider the upper index (k), then the lower index (n), and then the indices of the 2 × 2 projection matrices, which are represented by the IE, JE symbols. Thus A and B constitute four-dimensional arrays which can be represented by:
$$\begin{aligned} A(0:k\hbox{max} \,,\,0:n\hbox{max} ,2,2) \to A(k,\,n,IE,JE), \hfill \\ B(0:k\hbox{max} \,,\,0:n\hbox{max} ,2,2) \to B(k,\,n,IE,JE). \hfill \\ \end{aligned}$$
The limits of the arrays are specified on the left-hand side of Eq. (103); the IE, JE indices label the 2 × 2 matrices.
Equations (61, 63, 64) are three recurrence relations. To get the related solutions, we first need to evaluate them for n = 0. Therefore we will have:
$$\left\{ \begin{aligned} a_{n}^{(0)} = 0\,\,\, \Rightarrow \,\,\,a_{0}^{(0)} = 0, \hfill \\ b_{n}^{(0)} = 0\,\,\, \Rightarrow \,\,\,b_{0}^{(0)} = 0, \hfill \\ \end{aligned} \right.$$
$$\left\{ \begin{aligned} a_{n}^{(k)} = b_{n}^{(k)} = 0,\,\,\,k \ge n \hfill \\ A_{n}^{(k)} = B_{n}^{(k)} = 0,\,\,\,k > n \hfill \\ \end{aligned} \right.$$
$$A_{0}^{(0)} = e_{2} ,\,\,\,\,\,B_{0}^{(0)} = e_{1} ,$$
These are the required initial constants of our calculation. To proceed, we consider n = 1; therefore we have
$$\mathop{\longrightarrow}\limits^{n = 1}\left\{ \begin{aligned} a_{1}^{(k + 1)} = \lambda \,e_{1} \,a_{1}^{(k)} + \sum\limits_{i = k}^{0} {p_{1 - i}^{(0)} \,A_{i}^{(k)} \mathop{\longrightarrow}\limits^{k = 0}\,a_{1}^{(1)} = p_{1}^{(0)} \,A_{0}^{(0)} = p_{1}^{(0)} e_{2} } \hfill \\ b_{1}^{(k + 1)} = - \lambda \,e_{2} \,b_{1}^{(k)} + \sum\limits_{i = k}^{0} {p_{1 - i}^{(0)} \,B_{i}^{(k)} \mathop{\longrightarrow}\limits^{k = 0}\,b_{1}^{(1)} = p_{1}^{(0)} \,B_{0}^{(0)} = p_{1}^{(0)} e_{1} } \hfill \\ \end{aligned} \right.,$$
$$\mathop{\longrightarrow}\limits^{n = 1}\left\{ \begin{aligned} A_{1}^{(0)} = e_{2} - \frac{1}{{\lambda^{1} }}\left( {e_{1} \,a_{1}^{(1)} - ( - 1)^{1} e_{2} \,b_{1}^{(1)} } \right) \Rightarrow A_{1}^{(0)} = e_{2} - \frac{1}{\lambda }\left( {e_{1} p_{1}^{(0)} e_{2} \, + e_{2} p_{1}^{(0)} e_{1} } \right) \hfill \\ B_{1}^{(0)} = e_{1} + \frac{1}{{\lambda^{1} }}\left( {e_{1} \,a_{1}^{(1)} - ( - 1)^{1} e_{2} \,b_{1}^{(1)} } \right) \Rightarrow B_{1}^{(0)} = e_{1} + \frac{1}{\lambda }\left( {e_{1} p_{1}^{(0)} e_{2} \, + e_{2} p_{1}^{(0)} e_{1} } \right) \hfill \\ \end{aligned} \right.,$$
$$\mathop{\longrightarrow}\limits^{n = 1}\left\{ \begin{aligned} A_{1}^{(k + 1)} = \lambda \,e_{1} \,A_{1}^{(k)} + \sum\limits_{i = k}^{0} {p_{1 - i}^{(0)} \,A_{i}^{(k)} \mathop{\longrightarrow}\limits^{k = 0}A_{1}^{(1)} = \lambda \,e_{1} \,A_{1}^{(0)} + p_{1}^{(0)} e_{2} } \hfill \\ B_{1}^{(k + 1)} = - \lambda \,e_{2} \,B_{1}^{(k)} + \sum\limits_{i = k}^{0} {p_{1 - i}^{(0)} \,B_{i}^{(k)} \mathop{\longrightarrow}\limits^{k = 0}B_{1}^{(1)} = - \lambda \,e_{2} \,B_{1}^{(0)} + p_{1}^{(0)} \,e_{1} } \hfill \\ \end{aligned} \right..$$
The above relations can be extended to n > 1, so we can acquire all the required values of the A and B matrices. There are four A and B matrices, one for each of the index pairs IE, JE (qq, qg, gq, gg). The general form of the matrices satisfies:
$$A,B\left( {k,n} \right) = 0\,,\,\,\,\,\,for\,\,\,\,\,\,k > n.$$
All the calculations are performed in the ABELO(E0,p0,e1,e2,nmax) subroutine. The inputs of this subroutine are the \(e_1\), \(e_2\) and p0 matrices, and the outputs are the Laguerre expansion coefficients of the evolution operator, Eq. (59), at the LO approximation; they are put in the three-dimensional array E0(n,IE,JE).
Having the Laguerre expansion coefficients, the evolution matrix operator in Bjorken x space is calculable. Equation (55) is implemented by the E0Lag function, which is called in the main program whenever needed. We should note that at the LO approximation, \(E_{\pm}\) are equal to \(E^{(0)}\). The combinations which give us the required densities are as follows:
$$q_{i}^{( + )} = q_{i} + \bar{q}_{i},\quad q_{i}^{( - )} = q_{i}^{V} = q_{i} - \bar{q}_{i} \,\,\,\,\, \Rightarrow \,\,\,\,\,q_{i}^{( + )} = q_{i}^{V} + 2\,\bar{q}_{i} ,$$
$$q^{( + )} = \Sigma {\mkern 1mu} = \sum\limits_{i = 1}^{{n_{f} }} {q_{i}^{( + )} } = \sum\limits_{i = 1}^{{n_{f} }} {q_{i}^{V} + 2\,\bar{q}_{i} } = q_{u}^{V} + 2\,\bar{q}_{u} + q_{d}^{V} + 2\,\bar{q}_{d} + 2\,\bar{q}_{s} .$$
As before, the parton densities at the initial energy scale \(Q_{0}^{2} = \,2.56\,\hbox{GeV}^{2}\) are taken from Lai et al. (1997). The density functions used in the program are given by:
$$qinq, \, qinU, \, qind, \, qinqUb, \, qindb, \, qinSb, \, qinG$$
The first function is related to \(q^{(+)}\); the second and third are related to the valence quarks; the rest are related to the sea quarks, except the last one, which gives the gluon density. Since the parameterization provides the sea densities through the combinations \(x(\tilde{\bar{d}} + \tilde{\bar{u}})\) and \(x(\tilde{\bar{d}} - \tilde{\bar{u}})\), the individual flavors are reconstructed as:
$$x\tilde{\bar{d}} = \frac{1}{2}\left( {x(\tilde{\bar{d}} + \tilde{\bar{u}})\, + x(\tilde{\bar{d}} - \tilde{\bar{u}})} \right),$$
$$x\tilde{\bar{u}} = \frac{1}{2}\left( {x(\tilde{\bar{d}} + \tilde{\bar{u}})\, - x(\tilde{\bar{d}} - \tilde{\bar{u}})} \right),$$
$$\tilde{q}^{( + )} = \tilde{u}_{v} + \tilde{d}_{v} + 2\,(\tilde{\bar{u}} + \tilde{\bar{d}} + \tilde{\bar{s}}).$$
The inputs of these functions are the random numbers generated by the Monte Carlo method, and their outputs are numerical values of the related parton densities at the initial energy scale. Since the density functions appear as xq, for \(\tilde{q}^{( + )}\) we will have:
$$x\tilde{q}^{( + )} = x\tilde{q}_{u}^{V} + x\tilde{q}_{d}^{V} + 2\,(x\tilde{\bar{q}}_{u} + x\tilde{\bar{q}}_{d} + x\tilde{\bar{q}}_{s} )$$
Now, using the input density functions, the convolution integrals of Eq. (65) and the E0Lag function related to Eq. (55), it is possible to get the \(q^{( + )}\) and gluon densities as in the following:
$$q^{( + )} (t,x) = \int_{x}^{1} {\left\{ {\sum\limits_{n = 0}^{n{\text{max}} } {E_{n,qq}^{(0)} (t)L_{n} \left( {\ln \frac{y}{x}} \right)} \left[ {y\tilde{q}^{( + )} (y)} \right] + \sum\limits_{n = 0}^{n{\text{max}} } {E_{n,qg}^{(0)} (t)L_{n} \left( {\ln \frac{y}{x}} \right)} \left[ {y\tilde{G}(y)} \right]} \right\}} \frac{dy}{{y^{2} }},$$
$$G(t,x) = \int_{x}^{1} {\left\{ {\sum\limits_{n = 0}^{n{\text{max}}} {E_{n,gq}^{(0)} (t)L_{n} \left( {\ln \frac{y}{x}} \right)} \left[ {y\tilde{q}^{( + )} (y)} \right] + \sum\limits_{n = 0}^{n{\text{max}} } {E_{n,gg}^{(0)} (t)L_{n} \left( {\ln \frac{y}{x}} \right)} \left[ {y\tilde{G}(y)} \right]} \right\}} \frac{dy}{{y^{2} }}.$$
The integrals in Eqs. (117, 118) are obtained numerically with the "average method" [see Eq. (73)] and are used in the main part of the program. The results include the \(q^{(+)}\) and gluon densities; we will require \(q^{(+)}\) in the following sections to obtain the sea quark densities. The results of running the programs for the gluon densities at different energy scales are depicted in Fig. 6 and compared with the CTEQ (Lai et al. 1997) and GRV (Gluck et al. 1995) parameterization groups.
Gluon densities in the LO approximation at energy scales \(Q^2 = 4, 50, 200\ \text{GeV}^2\). Comparison with the CTEQ4L and GRSV98LO parameterization groups has also been done
Sea quark densities at LO approximation
This section contains two parts, and for each part we require a separate program. First, the equation for the χ distribution is solved. Then, using the valence densities (obtained in the non-singlet sector above) and the \(q^{(+)}\) densities obtained in subsection "Gluon distribution at leading order", we are able to extract the sea quark densities. This subsection is divided into two steps:
In this step we should first solve Eq. (9) for the \(\chi_i\) distribution. The solution is like the one for the non-singlet distribution (subsection "Non-singlet sector at NLO approximation"), except that the replacements given by Eq. (53) should be carried out.
Therefore we will have [see Eq. (13)]
$$\frac{d}{dt}\chi_{i} (t,x) = \left( {P_{qq}^{(0)} (x) + \frac{\alpha }{2\pi }R_{\, + } (x)} \right) \otimes \chi_{i} (t,x),$$
$$R_{\, + } (x) = \left( {P_{NS}^{(1) + } - \frac{{\beta_{1} }}{{2\beta_{0} }}P_{qq}^{(0)} (x)} \right).$$
The analytical form of \(P_{NS}^{(1) + }\) can be found in Furmanski and Petronzio (1980), Herrod and Wada (1980). In the program we need to define two functions (xRpolag, xP1polag) which replace the previous ones of subsection "Non-singlet sector at NLO approximation" (xRnglag, xP1nglag). The other stages are just the same as those of subsection "Non-singlet sector at NLO approximation". Since we have used \(R_+\) and \(P_+\), the obtained coefficients are \(E_+\); it is then possible to obtain the \(\chi_i(x, t)\) distribution in Bjorken x space from Eq. (18).
The initial \(\tilde{\chi }_{i}\) densities for the separate quark flavors are obtained by rearranging Eq. (66) as follows:
$$\left\{ {\begin{array}{*{20}l} {x\tilde{\chi }_{u} = x\tilde{u}_{v} + 2x\tilde{\bar{u}} - \frac{1}{{n_{f} }}x\tilde{q}^{( + )} ,} \hfill \\ {x\tilde{\chi }_{d} = x\tilde{d}_{v} + 2x\tilde{\bar{d}} - \frac{1}{{n_{f} }}x\tilde{q}^{( + )} ,} \hfill \\ {x\tilde{\chi }_{s} = 2x\tilde{\bar{s}} - \frac{1}{{n_{f} }}x\tilde{q}^{( + )} .} \hfill \\ \end{array} } \right.$$
The above functions are added to the program by the following names:
$$xkhiU, \, xkhid, \, xkhiS, \, qinq, \, qinU, \, qind, \, qinU, \, qinUb, \, qindb, \, qinSb, \, qinG \, .$$
Having access to the initial parton densities from Lai et al. (1997), which are denoted by the tilde symbol, we get the \(\chi_i\) densities for the separate flavors as in the following:
$$\left\{ {\begin{array}{*{20}l} {\chi_{u} (t,x) = \int_{x}^{1} {E_{ + } (t,\frac{x}{y})\left[ {y\tilde{u}_{v} + 2y\tilde{\bar{u}} - \frac{1}{{n_{f} }}y\tilde{q}^{( + )} } \right]} \frac{dy}{{y^{2} }},} \hfill \\ {\chi_{d} (t,x) = \int_{x}^{1} {E_{ + } (t,\frac{x}{y})\left[ {y\tilde{d}_{v} + 2y\tilde{\bar{d}} - \frac{1}{{n_{f} }}y\tilde{q}^{( + )} } \right]} \frac{dy}{{y^{2} }},} \hfill \\ {\chi_{s} (t,x) = \int_{x}^{1} {E_{ + } (t,\frac{x}{y})\left[ {2y\tilde{\bar{s}} - \frac{1}{{n_{f} }}y\tilde{q}^{( + )} } \right]} \frac{dy}{{y^{2} }}.} \hfill \\ \end{array} } \right.$$
The integrals in Eq. (122) are solved numerically, using the "average method" [see Eq. (73)]. The results are the \(\chi_i\) distributions at different energy scales.
It is now possible to get the sea quark densities at different energy scales, using Eq. (66):
$$x\bar{u} = \frac{1}{2}\left( {x\chi_{u} + \frac{1}{{n_{f} }}xq^{( + )} - xu_{v} } \right),$$
$$x\bar{d} = \frac{1}{2}\left( {x\chi_{d} + \frac{1}{{n_{f} }}xq^{( + )} - xd_{v} } \right),$$
$$x\bar{s} = \frac{1}{2}\left( {x\chi_{s} + \frac{1}{{n_{f} }}xq^{( + )} } \right).$$
The required densities in Eqs. (123–125), including the valence (subsection "Non-singlet sector at NLO approximation"), \(\chi_i\) [Eq. (122)] and \(q^{(+)}\) (subsection "Gluon distribution at leading order") distributions, have been obtained before. The results of running the programs for the sea quark densities at the energy scales \(Q^2 = 4, 50, 200\ \text{GeV}^2\) are depicted in Figs. 7, 8 and 9 and compared with the CTEQ (Lai et al. 1997) and GRV (Gluck et al. 1995) parameterization groups.
Sea u quark densities in the LO approximation at energy scales \(Q^2 = 4, 50, 200\ \text{GeV}^2\). Comparison with the CTEQ4L and GRSV98LO parameterization groups has also been done
Sea d quark densities in the LO approximation at energy scales \(Q^2 = 4, 50, 200\ \text{GeV}^2\). Comparison with the CTEQ4L and GRSV98LO parameterization groups has also been done
Sea s quark densities in the LO approximation at energy scales \(Q^2 = 4, 50, 200\ \text{GeV}^2\). Comparison with the CTEQ4L and GRSV98LO parameterization groups has also been done
Singlet sector at the next-to-leading order
As before, this section includes two subsections:
Gluon densities at NLO approximation
The required Laguerre expansion coefficients for the evolution operator are given by Eq. (97). The \(E_{n}^{(1)} (t)\) term in Eq. (97) is defined by Eqs. (68, 69). The only quantity in Eq. (69) which remains to be determined is \(R_j\). In the following, the gluon density at the NLO approximation is calculated in these steps.
The calculation of \(R_j\) is done by a subroutine called intR(R,xmin,xmax,ndat,nmax). The evolution of the gluon densities is governed by Eq. (14), where \(P^{(1)}(x)\) and R(x) in Eq. (16) are defined by:
$$P^{(1)} = \left( {\begin{array}{*{20}c} {P_{qq}^{(1)} } & {P_{qg}^{(1)} } \\ {P_{gq}^{(1)} } & {P_{gg}^{(1)} } \\ \end{array} } \right) \equiv \left( {\begin{array}{*{20}c} {P_{11}^{(1)} } & {P_{12}^{(1)} } \\ {P_{21}^{(1)} } & {P_{22}^{(1)} } \\ \end{array} } \right) \equiv P^{(1)} (IE,JE),\,\,\,\,\,\,\,\,IE,JE = 1,2$$
$$\left( {\begin{array}{*{20}c} {R_{qq} } &\quad {R_{qg} } \\ {R_{gq} } &\quad {R_{gg} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {P_{qq}^{(1)} } &\quad {P_{qg}^{(1)} } \\ {P_{gq}^{(1)} } &\quad {P_{gg}^{(1)} } \\ \end{array} } \right) - \frac{{\beta_{1} }}{{2\beta_{0} }}\left( {\begin{array}{*{20}c} {P_{qq}^{(0)} } &\quad {P_{qg}^{(0)} } \\ {P_{gq}^{(0)} } &\quad {P_{gg}^{(0)} } \\ \end{array} } \right),$$
while the matrix form of \(P^{(0)}(x)\) is given by Eq. (100). As before, these matrices are defined as two-dimensional arrays. The analytical expressions for the splitting functions at the NLO approximation can be found in Furmanski and Petronzio (1980), Herrod and Wada (1980). The functions used in the program, whose singularities have been removed by the plus-prescription technique [Eq. (76)], are:
$$\begin{aligned} & xRqqlag, \, xRqglag, \, xRgqlag, \, xRgglag, \, xP0qqlag, \, xP0qglag, \, xP0gqlag, \, xP0gglag, \, xP1qqlag, \hfill \\ & xP1qglag, \, xP1gqlag, \, xP1gglag, \, xFqqlag, \, xF1qglag, \, xF2qglag, \, xF1gqlag, \, xF2gqlag, \, xF3gqlag, \hfill \\ &xF1gglag, \, xF2gglag, \, xF3gglag, \, fF3gg, \, xP1polag, \, PF, \, PA, \, xPGlag, \, fPG, \, xPNFlag, \, fPNF. \hfill \\ \end{aligned}$$
The required integrals are evaluated numerically in the intR subroutine. The output of the program is put in the three-dimensional array R(0:nmax,2,2).
In this step the evolution operator \(E^{(0)}\) should be calculated; this was done in subsection "Gluon distribution at leading order". The only difference is in the definition of the t parameter, which should be redefined at the NLO approximation. The A and B matrices used to build the evolution operator are as before, so in this step we can use a program similar to the one of subsection "Gluon distribution at leading order". This program contains the following functions and subroutines:
$$intp0, \, P0qq, \, P0qg, \, P0gq, \, P0gg, \, ABELO, \, NFAC, \, E0Lag$$
After computing A and B matrices, as before we will have
$$E_{n}^{(0)} (t_{NLO} ) = \sum\limits_{k = 0}^{n} {\frac{{t_{NLO}^{k} }}{{k{\mkern 1mu} !}}} \left( {A_{n}^{(k)} + B_{n}^{(k)} e^{{\lambda t_{NLO} }} } \right)$$
In this equation we use different values of t, and we denote Eq. (128) by the function E0t(nP,IP,JP,t,A,B,nmax). The inputs of this function are the indices of the 2 × 2 matrix elements, the A and B matrices, and the t variable. The output of the function is the numerical value of the Laguerre expansion coefficient of the evolution operator.
To get \(\tilde{E}_{n}^{(1)}\) in Eq. (68) we should solve the integral in Eq. (69). For this purpose we first evaluate the sum in the integrand, so that Eq. (69) can be written as:
$$\tilde{E}_{n}^{(1)} (t) = \int_{0}^{t} {d\tau \,e^{{ - \beta_{0} \frac{\tau }{2}}} (ERE)_{n} (t,\tau )} ,$$
$$(ERE)_{n} (t,\tau ) = \sum\limits_{i,j,k} {E_{i}^{(0)} (t - \tau )\,R_{j} \,E_{k}^{(0)} (\tau )} \,\delta (n - i - j - k).$$
The \(E_{n}^{(0)} (t)\) terms can be calculated using the E0t function [see Eq. (128)]. \(R_j\) was calculated before as the R(0:nmax,2,2) array (step 1) and is taken as an input of the program. So we should write a program which calculates the sum in Eq. (130), taking into account the Dirac delta function, and which also performs the matrix multiplication. The details of the program can be found in the following subroutine:
$$SUMERE\left( {n,to,tNLO,A,B,R,SEREqq,SEREqg,SEREgq,SEREgg,nmax} \right).$$
The outputs are the four elements of the matrix which are related to the sum in Eq. (130).
In this step we first calculate the integral in Eq. (129) to obtain the related Laguerre expansion. We then loop over the order of the Laguerre expansion, since the integral in Eq. (129) must be calculated at each order n. The τ variable is obtained by generating a random number, RAN3, in the interval \([0\,,\,\,t_{NLO} ]\), so that (Press et al. 1996)
$$\tau = t_{NLO} \,RAN3\left( {idum} \right)$$
Now we pass the generated random number to the SUMERE subroutine; calling this subroutine for each generated random number allows us to calculate the sum in Eq. (130). To increase the precision of the calculation we need to generate more random numbers, which consequently increases the wall-clock time. The integrals are calculated using the "average method" [see Eq. (73)]. We should note that four integrals are calculated, which are in fact the elements of the matrix evolution operator \(\tilde{E}_{n}^{(1)} (t)\). The solutions of these integrals are put in the three-dimensional array Et1(−2:nmax,2,2), where the first two terms of this array, for all elements of the 2 × 2 matrices, are zero [see Eq. (68)].
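Schematically, the τ integration of Eq. (129) for a single order n and a single matrix element could be organized as follows (the names et1_element and sumere are ours, standing in for the SUMERE machinery of the main text):

double precision function et1_element(n, t, beta0, nsamp)
  ! Averaged Monte Carlo estimate of Eq. (129): tau is sampled
  ! uniformly in [0, t] as in Eq. (131), the damped integrand is
  ! averaged, and the interval length t multiplies the mean.
  implicit none
  integer, intent(in) :: n, nsamp
  double precision, intent(in) :: t, beta0
  double precision :: tau, u, s
  integer :: i
  interface
     double precision function sumere(n, tau, t)   ! one element of Eq. (130)
       integer, intent(in) :: n
       double precision, intent(in) :: tau, t
     end function sumere
  end interface
  s = 0.0d0
  do i = 1, nsamp
     call random_number(u)
     tau = t*u
     s = s + exp(-beta0*tau/2.0d0)*sumere(n, tau, t)
  end do
  et1_element = t*s/dble(nsamp)
end function et1_element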
We continue by calculating the required expression in Eq. (68). The \(E_{n}^{(1)} (t)\) term appears as an array named E1(0:nmax,2,2), which is filled inside the loop over n. The term for n = 0 is computed using the first two terms introduced in Eq. (68); the other terms of the Laguerre expansion at the NLO approximation are obtained by iterating the loop over the n, IE, JE indices.
In the end, using this subroutine and the function E0t obtained before, we can calculate the related Laguerre expansion coefficients at the NLO approximation, based on Eq. (97). All the required tasks mentioned above are gathered in a subroutine called intE(En,A,B,R,ndat,nmax); this is the most important part of the program. The outputs are the expansion coefficients of the evolution operator as 2 × 2 matrices.
Now, having the Laguerre expansion of the evolution operators and the parton densities at the initial energy scale \(Q_0^2 = 2.56\ \text{GeV}^2\), we can obtain the evolved parton densities at any desired energy scale. The process is like the one for the LO approximation. In this case we will have
$$q^{( + )} (t,x) = \int_{x}^{1} {\left\{ {\sum\limits_{n = 0}^{n{\text{max}} } {E_{n,qq} (t)L_{n} \left( {\ln \frac{y}{x}} \right)} \left[ {y\tilde{q}^{( + )} (y)} \right] + \sum\limits_{n = 0}^{n{\text{max}}} {E_{n,qg} (t)L_{n} \left( {\ln \frac{y}{x}} \right)} \left[ {y\tilde{G}(y)} \right]} \right\}} \frac{dy}{{y^{2} }},$$
$$G(t,x) = \int_{x}^{1} {\left\{ {\sum\limits_{n = 0}^{n{\text{max}} } {E_{n,gq} (t)L_{n} \left( {\ln \frac{y}{x}} \right)} \left[ {y\tilde{q}^{( + )} (y)} \right] + \sum\limits_{n = 0}^{n{\text{max}}} {E_{n,gg} (t)L_{n} \left( {\ln \frac{y}{x}} \right)} \left[ {y\tilde{G}(y)} \right]} \right\}} \frac{dy}{{y^{2} }}.$$
The initial parton densities at the NLO approximation are taken from Lai et al. (1997) and are designated in Eqs. (132, 133) by a tilde. The list of the required functions is:
The integrals in Eqs. (132, 133) are calculated numerically using the "average method" [see Eq. (73)]. The solution of the integrals is brought into the main part of the program. The results are the gluon and q⁺ densities (the singlet sector); the singlet densities will be used in the next section to obtain the sea densities. The outputs of this program are the evolved gluon densities at different energy scales, which are depicted in Fig. 10 and compared with the CTEQ (Lai et al. 1997) and GRV (Gluck et al. 1995) parameterization groups.
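As a sketch of how the evolved density is reconstructed from the expansion coefficients, the following Python fragment evaluates the convolution in Eqs. (132, 133) by Monte Carlo over y, using the standard Laguerre recurrence \((n+1)L_{n+1}(u) = (2n+1-u)L_{n}(u) - nL_{n-1}(u)\). The coefficient lists E_qq, E_qg and the functions qtilde, Gtilde are hypothetical inputs standing for the computed arrays and the CTEQ initial densities; this is not the authors' FORTRAN program.

```python
import math
import random

def laguerre_values(u, nmax):
    """Laguerre polynomials L_0..L_nmax at u via the standard recurrence."""
    L = [1.0, 1.0 - u][: nmax + 1]
    for n in range(1, nmax):
        L.append(((2 * n + 1 - u) * L[n] - n * L[n - 1]) / (n + 1))
    return L

def evolved_density(x, E_qq, E_qg, qtilde, Gtilde, n_samples=20_000, seed=7):
    """Monte Carlo estimate of
    q+(t,x) = int_x^1 dy/y^2 { sum_n E_qq[n] L_n(ln(y/x)) [y qtilde(y)]
                             + sum_n E_qg[n] L_n(ln(y/x)) [y Gtilde(y)] }."""
    rng = random.Random(seed)
    nmax = len(E_qq) - 1
    acc = 0.0
    for _ in range(n_samples):
        y = x + (1.0 - x) * rng.random()      # uniform y in [x, 1]
        L = laguerre_values(math.log(y / x), nmax)
        s = sum(E_qq[n] * L[n] for n in range(nmax + 1)) * y * qtilde(y)
        s += sum(E_qg[n] * L[n] for n in range(nmax + 1)) * y * Gtilde(y)
        acc += s / y**2
    return (1.0 - x) * acc / n_samples        # (1 - x) = length of [x, 1]
```

The gluon density of Eq. (133) is obtained in the same way with the (gq, gg) coefficients.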
Gluon densities in the NLO approximation at energy scales \(Q^{2}\) = 4, 50, 200 GeV². Comparison is made with the CTEQ4M and GRSV98NLO parameterization groups and with the results of Coriano and Savkli (1999). Our results for the gluon densities agree better with CTEQ4M and GRSV98NLO than with the results of Coriano and Savkli (1999)
To calculate the statistical error of the gluon density we resort to Eq. (99-a). The error obtained at the typical energy scale \(Q^{2}\) = 50 GeV² in the NLO approximation is 1.78 %, which indicates good precision in our calculation of the gluon densities (for more information, see the "Appendix").
Sea quark densities at the NLO approximation
The objective of this subsection is to obtain the sea quark densities at the NLO approximation. The procedure parallels the one used at the LO approximation (see subsection "Sea quark densities at LO approximation"). The required relations are given by Eqs. (123–125). Here two programs should be run. The first program gives the gluon and q⁽⁺⁾ densities [see Eqs. (132, 133)]. The second program combines these results with the χᵢ densities [see Eq. (122)] and the valence densities [\(q_{i}^{( - )}\), see Eq. (98)] to yield the sea densities [see Eqs. (123–125)] at different energy scales in the NLO approximation. The results and the comparisons with the CTEQ (Lai et al. 1997) and GRV (Gluck et al. 1995) parameterization groups are presented in Figs. 11, 12 and 13 for the different quark flavors.
Sea u quark densities in the NLO approximation at energy scales \(Q^{2}\) = 4, 50, 200 GeV². Comparison is made with the CTEQ4M and GRSV98NLO parameterization groups
Sea d quark densities in the NLO approximation at energy scales \(Q^{2}\) = 4, 50, 200 GeV². Comparison is made with the CTEQ4M and GRSV98NLO parameterization groups
Sea s quark densities in the NLO approximation at energy scales \(Q^{2}\) = 4, 50, 200 GeV². Comparison is made with the CTEQ4M and GRSV98NLO parameterization groups and with the results of Coriano and Savkli (1999)
As can be seen from Fig. 13, our results for the strange sea quark densities are in good agreement with the CTEQ4M and GRSV98NLO parameterization groups. The results of Coriano and Savkli (1999) show completely different behavior, both with respect to the fitted parameterization models and with respect to our results. This agreement with the fits supports the validity of our numerical calculation of the evolved parton densities.
To provide the statistical error of the sea parton densities we again resort to Eq. (99-a). The errors obtained at the typical energy scale \(Q^{2}\) = 50 GeV² in the NLO approximation are as follows (see also the Appendix):
$$x\bar{U} \to 0.17\,\%, \qquad x\bar{D} \to 0.2\,\%, \qquad x\bar{S} \to 0.076\,\%.$$
Once again, the small statistical errors indicate that the numerical integration employed to evolve the parton densities is sufficiently precise.
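The statistical errors quoted here are the usual Monte Carlo one-sigma uncertainties of the sample mean. As a generic illustration (the paper's Eq. (99-a) itself is not reproduced here), the estimate and its percent error can be computed as follows:

```python
import math
import random

def mc_mean_with_error(integrand, t, n_samples=100_000, seed=11):
    """Monte Carlo estimate of integral_0^t f(tau) d tau together with
    its one-sigma statistical error: sigma_I = t * sqrt((<f^2> - <f>^2)/N)."""
    rng = random.Random(seed)
    s1 = s2 = 0.0
    for _ in range(n_samples):
        f = integrand(t * rng.random())
        s1 += f
        s2 += f * f
    mean = s1 / n_samples
    var = max(s2 / n_samples - mean * mean, 0.0)   # guard against round-off
    integral = t * mean
    error = t * math.sqrt(var / n_samples)
    return integral, error, 100.0 * error / abs(integral)  # value, error, % error
```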
In this paper, we have presented numerical solutions of the DGLAP evolution equations based on the Laguerre polynomial expansion (Furmanski and Petronzio 1982a, b). Although other methods can be used, in particular in Mellin moment space, the method used in this article has the specific feature that we do not need to change the space of calculations: all computations are done in Bjorken x-space. We have tried to explain all the steps of the FORTRAN codes that produce the parton densities at high energy scales; all calculations are performed numerically within FORTRAN alone. The main program can be requested from the authors via the e-mail address [email protected]. The results are in good agreement with the CTEQ (Lai et al. 1997) and GRV (Gluck et al. 1995) parameterization groups, which confirms the validity of our numerical solutions of the DGLAP evolution equations. Our results for the parton densities improve considerably on those presented in Coriano and Savkli (1999), especially for the sea strange and gluon densities at the NLO approximation. The results are also comparable with those of Kobayashi et al. (1995) and Schoffel (1999). A very precise technique for numerically solving the DGLAP evolution equations can be found in Botje (2011). In addition, Kumano and Nagai (2004) compared different methods, including the Laguerre polynomial expansion, and showed how reliable the Laguerre polynomials are for obtaining such solutions.
This method can be extended to evolve the polarized parton densities numerically, which we hope to report in the future. Furthermore, we can evolve the nucleon structure functions with two methods: one based on the Jacobi polynomial expansion, and the other based on evolving the nucleon structure function from the parton densities evolved via the Laguerre polynomial expansion, as done in this article. Comparing these two methods provides the opportunity to obtain the QCD cut-off parameter (Ghasempour Nesheli et al. 2015).
Abbott LF et al (1979) A QCD analysis of e N deep inelastic scattering data. SLAC-PUB 2400:1
Altarelli G, Parisi G (1977) Asymptotic freedom in parton language. Nucl Phys B 126:298
Arfken GB, Weber HJ (2005) Mathematical methods for physicists. Elsevier Academic Press, Amsterdam
Bjorken JD (1969) Asymptotic sum rules at infinite momentum. Phys Rev 179:1547
Bloom ED et al (1969) High-energy inelastic e-p scattering at 6° and 10°. Phys Rev Lett 23:930
Botje M (2011) QCDNUM: fast QCD evolution and convolution. Comput Phys Commun 182:490
Breidenbach M et al (1969) Observed behavior of highly inelastic electron-proton scattering. Phys Rev Lett 23:935
Coriano C, Savkli C (1999) QCD evolution equations: numerical algorithms from the Laguerre expansion. Comput Phys Commun 118:236
Dokshitzer YL (1977) Calculation of the structure functions for deep inelastic scattering and e + e- annihilation by perturbation theory in quantum chromodynamics. Sov Phys JETP 46:641
Ellis RK, Stirling WJ, Webber BR (1996) QCD and collider physics, 108. Cambridge University Press, UK
Furmanski W, Petronzio R (1980) Singlet parton densities beyond leading order. Phys Lett B 97:437
Furmanski W, Petronzio R (1982) Lepton-hadron processes beyond leading order in quantum chromodynamics. Z Phys C 11:293
Furmanski W, Petronzio R (1982) A method of analyzing the scaling violation of inclusive spectra in hard processes. Nucl Phys B 195:237
Ghasempour Nesheli A, Mirjalili A, Yazdanpanah MM (2015) Analyzing the parton densities and constructing the xF3 structure function using the Laguerre polynomials expansion and Monte Carlo calculations. Eur Phys J Plus 130:82
Gluck M, Reya E, Vogt A (1995) Dynamical parton distributions of the proton and small-x physics. Z Phys C 67:433
Gluck M, Reya E, Vogt A (1998) Dynamical parton distributions revisited. Eur Phys J C 5:462
Greiner W, Schramm S, Stein E (1996) Quantum chromodynamics. Springer, Berlin, p 239
Gribov VN, Lipatov LN (1972) Deep inelastic e p scattering in perturbation theory. Sov J Nucl Phys 15:438
Herrod RT, Wada S (1980) Altarelli–Parisi equation in the next-to-leading order. Phys Lett B 96:195
Kobayashi R, Konuma M, Kumano S (1995) FORTRAN program for a numerical solution of the nonsinglet Altarelli–Parisi equation. Comput Phys Commun 86:264
Kumano S, Nagai T-H (2004) Comparison of numerical solutions for Q2 evolution equations. J Comput Phys 201:651
Lai HL et al (1997) Improved parton distributions from global analysis of recent deep inelastic scattering and inclusive jet data. Phys Rev D 55:1280
Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1996) Numerical recipes in FORTRAN 90. Cambridge University Press, Cambridge
Schoffel L (1999) An Elegant and fast method to solve QCD evolution equations, application to the determination of the gluon content of the pomeron. Nucl Instrum Methods 423:439
A.M. provided the theoretical framework of the paper. The subroutines and FORTRAN files were written by A.G.N. M.M.Y. contributed studies and ideas to complete the paper. All authors read and approved the final manuscript.
We would like to thank L. Schoeffel for reading the manuscript and giving us his constructive comments. The authors acknowledge Yazd University for the facilities provided while this research was performed there. A.G.N. is indebted to Shiraz Azad University for financial support of this project. A.M. is grateful to the theory division of CERN for the opportunity to visit and settle his scientific affairs there.
Department of Physics, Shiraz Branch, Islamic Azad University, Shiraz, Iran
A. Ghasempour Nesheli
Physics Department, Yazd University, 89195-741, Yazd, Iran
A. Mirjalili
Faculty of Physics, Shahid Bahonar University of Kerman, Kerman, Iran
M. M. Yazdanpanah
Correspondence to A. Ghasempour Nesheli.
In the following tables, we list the numerical values of the statistical errors of all parton densities at different energy scales. Tables 1 and 2 refer to the LO and NLO approximations, respectively.
Table 1 Numerical values for the statistical error of parton densities at the LO approximation
Table 2 Numerical values for the statistical error of parton densities at the NLO approximation
Keywords: Evolved parton densities · Monte Carlo method
Numerical computation of the complex error function
The error function can be expressed as a confluent hypergeometric function (Kummer's function): \(\operatorname{erf}(x) = \frac{2x}{\sqrt{\pi}}\,M\!\left(\tfrac{1}{2},\tfrac{3}{2},-x^{2}\right)\). If you're going the Taylor series route, the best series to use is formula 7.1.6 in Abramowitz and Stegun; it is not as prone to subtractive cancellation as the series derived from integrating the power series for \(\exp(-x^2)\). The imaginary error function has a very similar Maclaurin series, which is \(\operatorname{erfi}(z) = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{z^{2n+1}}{n!\,(2n+1)}\). For |z| < 1, we have \(\operatorname{erf}\left(\operatorname{erf}^{-1}(z)\right) = z\). Another form of \(\operatorname{erfc}(x)\) for non-negative x is known as Craig's formula: \(\operatorname{erfc}(x) = \frac{2}{\pi} \int_{0}^{\pi/2} \exp\!\left(-\frac{x^{2}}{\sin^{2}\theta}\right) d\theta\), for \(x \ge 0\).

Applications: when the results of a series of measurements are described by a normal distribution with standard deviation \(\sigma\) and expected value 0, \(\operatorname{erf}\left(\frac{a}{\sigma\sqrt{2}}\right)\) is the probability that the error of a single measurement lies between −a and +a, for positive a.

Many environments provide implementations. Maple implements both erf and erfc for real and complex arguments, and some systems also provide erfi for calculating \(i \operatorname{erf}(ix)\). Matlab provides both erf and erfc for real arguments. Fortran 77 implementations are available in SLATEC (see also W. Snyder, Algorithm 723), and the Fortran 2008 standard provides the ERF, ERFC and ERFC_SCALED intrinsics. Microsoft Excel provides the erf and erfc functions, although the inverse functions are not in the current library. A Haskell erf package exists that provides a typeclass for the error function and implementations for the native (real) floating-point types. Most languages have a way to link in C functions, and open-source implementations such as the Cephes library are available; see also Cody's "Rational Chebyshev Approximations for the Error Function" (1969) and his performance evaluation of programs for the error and complementary error functions.
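As a small illustration of the series advice above, the sketch below implements the Abramowitz–Stegun Eq. 7.1.6 form of erf, whose terms are all positive for real x (hence no subtractive cancellation), and checks it against the standard library. This is a didactic sketch, not a production implementation; for large |x| a continued-fraction or rational approximation for erfc is preferable.

```python
import math

def erf_series(x, tol=1e-16):
    """erf(x) via Abramowitz & Stegun Eq. 7.1.6:
    erf(x) = (2/sqrt(pi)) * exp(-x^2) * sum_{n>=0} 2^n x^(2n+1) / (1*3*...*(2n+1)).
    Consecutive terms satisfy t_n = t_{n-1} * 2 x^2 / (2n+1)."""
    term = x                      # n = 0 term
    total = term
    n = 0
    while abs(term) > tol * abs(total):
        n += 1
        term *= 2.0 * x * x / (2.0 * n + 1.0)
        total += term
    return 2.0 / math.sqrt(math.pi) * math.exp(-x * x) * total

for x in (0.1, 0.5, 1.0, 2.0, 3.0):
    print(x, erf_series(x), math.erf(x))   # the two columns should agree
```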
Botanical Studies
Anti-hepatitis, antioxidant activities and bioactive compounds of Dracocephalum heterophyllum extracts
Qiang-Qiang Shi1,2,
Jun Dang1,
Huai-Xiu Wen1,
Xiang Yuan1,2,
Yan-Duo Tao1 &
Qi-Lan Wang1
Botanical Studies volume 57, Article number: 16 (2016)
Dracocephalum heterophyllum is a traditional Tibetan medicine that possesses various pharmacological effects, including anti-inflammatory and antibacterial activities. However, its anti-hepatitis activity, antioxidant activity and bioactive compounds have not been reported; the objective of this work was to investigate the pharmacological activity and bioactive compounds of D. heterophyllum extracts.
In the present study, the anti-hepatitis and antioxidant activities of four D. heterophyllum extracts (i.e. petroleum ether, ethyl acetate, n-BuOH and water extracts) were assessed. The main chemical constituents of the petroleum ether and ethyl acetate extracts were also isolated using chromatographic techniques and identified by NMR spectroscopic methods. The anti-hepatitis assay showed that the petroleum ether and ethyl acetate extracts of D. heterophyllum significantly prolonged the mean survival times and reduced the mortality of a mouse hepatitis model induced by concanavalin A (ConA). The levels of alanine transaminase and aspartate transaminase in blood serum were markedly decreased by the ethyl acetate extract compared with the ConA group (P < 0.01). Histological analysis demonstrated that the ethyl acetate extract inhibited the apoptosis and necrosis caused by ConA. In addition, the antioxidant activities of the four extracts of D. heterophyllum were measured by the DPPH assay, ABTS assay, anti-lipidperoxidation assay, ferric reducing antioxidant power assay, ferrous metal ion chelating assay and determination of total phenolic contents. The results showed that the ethyl acetate extract had the highest antioxidant activity, followed by the petroleum ether extract. Finally, nine main compounds were isolated from the petroleum ether and ethyl acetate extracts, including four triterpenes: oleanolic acid (1), ursolic acid (2), pomolic acid (3) and 2α-hydroxyursolic acid (4); three flavonoids: apigenin-7-O-rutinoside (5), luteolin (8) and diosmetin (9); and two phenolic acids: rosmarinic acid (6) and methyl rosmarinate (7).
The ethyl acetate extract of D. heterophyllum had the highest anti-hepatitis and antioxidant activities, followed by the petroleum ether extract. The bioactive substances may be triterpenes, flavonoids and phenolic acids, and the ethyl acetate extract of D. heterophyllum may be a possible candidate for developing anti-hepatitis medicines.
Hepatitis is an inflammation of the liver. The condition can progress to fibrosis (scarring), cirrhosis or liver cancer. Hepatitis viruses are the most common cause of hepatitis in the world. There are five main hepatitis viruses, referred to as types A, B, C, D and E. In particular, types B and C lead to chronic disease in hundreds of millions of people and, together, are the most common cause of liver cirrhosis and cancer. More than 20 million people worldwide are infected with hepatitis virus, and infection from these viruses results in approximately 1.45 million deaths each year. Effective prophylactic vaccines have been available since the 1980s (Hollinger and Liang 2001). Nonetheless, for many developing countries, large-scale vaccination programs are hardly affordable, and an enormous number of chronic hepatitis virus carriers will be in need of better medication for decades to come. Current therapies are based on the systemic administration of high doses of interferon-α (IFN-α) or on nucleoside analogs. However, both therapies have a sustained response rate of only about 30 %, combinations exert no clear synergism, and lamivudine therapy leads to the rapid emergence of resistant virus variants (Pumpens et al. 2002; Zoulim 2001). Hepatitis B virus (HBV) is a hepadnavirus, a DNA virus with species and tissue specificity that normally infects only humans and chimpanzees. There is no feasible small-animal infection model, and it is also very difficult to infect cells cultured in vitro. Although many animal models of hepatitis exist, such as the duck hepatitis B model (Schultz et al. 2004), the woodchuck model (Wang et al. 2004), the chimpanzee model (Wieland et al. 2004) and the HBV transgenic mouse model (Chisari 1995), each still has flaws at different levels.
Dracocephalum heterophyllum is a traditional Tibetan medicine growing on the Qinghai-Tibet Plateau, a special living environment of high elevation and strong sunlight irradiation. The plant is distributed widely in the Sitsang, Qinghai, Sinkiang and Gansu provinces of China. In traditional Tibetan medicine, D. heterophyllum is known as Ao-Ga or Ji-Mei-Qing-Bao and has been used as an ethnomedicine to treat various ailments such as jaundice, hepatopathy, cough, lymphangitis, mouth ulcers and tooth diseases. Previous reports indicate that the herb has antiviral activity (Zhang et al. 2009), an antianoxic effect (Peng 1984), and antiasthmatic, anticoughing and disinfectant actions (Mahmood et al. 2005), and that its essential oil also has antimicrobial and antioxidant activities (Zhang et al. 2008).
The aim of this paper was to evaluate the anti-hepatitis activity of D. heterophyllum in the mouse fulminant hepatitis model induced by concanavalin A (ConA), and to measure the antioxidant activities of this herb in a series of in vitro assays: free radical scavenging experiments (DPPH and ABTS assays), anti-lipidperoxidation experiments (FTC assay), the ferric reducing antioxidant power assay (FRAP), the metal chelating assay and determination of total phenolic contents (TPC). Finally, the bioactive substances were separated and purified using chromatographic techniques.
Chemical reagents
Female Balb/C mice were bought from Beijing Vitalriver Experimental Animals Ltd. (Beijing, China). Animal experiments were performed according to guidelines laid down by the Animal Care and Use Committee of Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (Approval ID: SCXY2012-0119), which follow internationally acceptable standards on laboratory animal care and use. Concanavalin A (ConA) was obtained from Sigma-Aldrich Co. (Shanghai, China); the ALT, AST, TUNEL and DAPI kits were purchased from Nanjing Jiancheng Bioengineering Institute (Nanjing, China). 1,1-Diphenyl-2-picryl-hydrazyl (DPPH), 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), 3-(2-pyridyl)-5,6-bis(4-phenyl-sulfonic acid)-1,2,4-triazine (ferrozine), 2,4,6-tris(2-pyridyl)-s-triazine (TPTZ), linoleic acid, α-tocopherol, butylated hydroxytoluene (BHT), butylated hydroxyanisole (BHA), potassium persulfate (K2S2O8), ammonium thiocyanate (NH4SCN), ferrous chloride tetrahydrate (FeCl2·4H2O), ferric chloride hexahydrate (FeCl3·6H2O), sodium tungstate dihydrate (Na2WO4·2H2O), sodium molybdate dihydrate (Na2MoO4·2H2O) and anhydrous sodium carbonate (Na2CO3) were all purchased from Aladdin Industrial Corporation (Shanghai, China). Gallic acid was obtained from the National Institute for Food and Drug Control (Beijing, China). All other chemical reagents and buffers used were of analytical grade and obtained from Beijing Chemical Co. (Beijing, China).
The whole grass of D. heterophyllum was harvested in August 2014 from North Mountain in Huzhu, Qinghai province, China. The sample was identified by MEI Li-juan (Northwest Institute of Plateau Biology, Qinghai, China). The fresh samples were air-dried in the shade and then ground into a homogeneous powder in a mill.
8.86 kg of air-dried and powdered D. heterophyllum was extracted with 95 % ethanol at 70 °C by heating under reflux. The samples were filtered with filter paper, and the residue was further extracted under the same conditions 3 times. The filtrates were collected, and the ethanol was removed by a rotary evaporator (EYELA, Japan) at 50 °C to give the crude extract of D. heterophyllum.
The crude ethanol extract (1026 g) of D. heterophyllum was suspended into 500 mL water. The suspension was successively extracted 3 times by the same volume of petroleum ether, ethyl acetate and n-butanol at room temperature to get four fractions. Then the four fractions were dried by a rotary evaporator (EYELA, Japan), respectively. The four extracts were stored at 4 °C until used.
Anti-hepatitis activity assay
The survival experiment of mice with lethal doses of ConA
Concanavalin A (ConA) is a plant lectin known for its ability to stimulate mouse T cell subsets, giving rise to four functionally distinct T-cell populations, including precursors of suppressor T-cells; one subset of human suppressor T-cells is likewise sensitive to ConA (Dwyer and Johnson 1981). Female BALB/c mice develop severe liver injury, as assessed by transaminase release, within 8 h of an intravenous dose of Con A. Histopathologically, only the liver is affected. Con A-induced liver injury depends on the activation of T lymphocytes by macrophages in the presence of ConA (Tiegs et al. 1992). The model may allow the study of the pathophysiology of autoimmune hepatitis and viral hepatitis.
Female Balb/C mice, 9–10 weeks old and weighing about 25–29 g, were housed for 1 day to acclimatize. The mice were randomly divided into nine groups of eight mice each. The first group (ConA group) was injected with 1 mg/mL ConA into the caudal vein at the lethal dose of 20 mg/kg. The next four groups (drug groups) were injected intraperitoneally with 10 mg/mL of one of the four extracts of D. heterophyllum at a dose of 50 mg/kg. The last four groups (ConA + drug groups) were injected intraperitoneally with 10 mg/mL of one of the four extracts at 50 mg/kg and, 2 h later, were injected with 1 mg/mL ConA into the caudal vein at 20 mg/kg to induce hepatitis. The mice were fed and observed for 24 h to determine their mortality.
The level of transaminase in mice serum
According to Table 1, 32 female Balb/C mice were randomly divided into four groups of eight mice each. The first group (control group) was injected intraperitoneally with DMSO and PBS only. The second group (ConA group) was injected with 1 mg/mL ConA into the caudal vein at a dose of 12.5 mg/kg. The third group (drug group) was injected intraperitoneally with 10 mg/mL ethyl acetate extract of D. heterophyllum at a dose of 50 mg/kg. The last group (ConA + drug group) was injected intraperitoneally with the ethyl acetate extract at 50 mg/kg and, 2 h later, with 1 mg/mL ConA into the caudal vein at 12.5 mg/kg. Blood (200–300 μL) was collected from the orbital venous plexus at 8, 16 and 24 h after the ConA injection and centrifuged for 5 min at 3000 rpm. The upper serum was transferred to a new Eppendorf tube and stored temporarily at −20 °C. The ALT and AST kits were used for quantitation of the transaminases in the mouse serum of the different groups.
Table 1 Treatments of the hepatitis model induced by ConA
Morphological, histological and hepatocyte apoptosis analysis
Mice were pretreated as described in Table 1 and sacrificed by cervical dislocation 24 h after ConA induction; the livers were removed and gross pathological changes were observed morphologically. The livers were then fixed in 10 % formalin, embedded in paraffin and cut into slices. HE staining was performed and the result was analyzed by microscopic examination. TdT-mediated dUTP nick end labeling (TUNEL) and 4′,6-diamidino-2-phenylindole (DAPI) staining were used to detect apoptosis in mouse liver.
DPPH free radical scavenging activity
The free radical scavenging activity of the four D. heterophyllum extracts was estimated using the in vitro 1,1-diphenyl-2-picryl-hydrazyl (DPPH) free radical assay following the methodology described by Blois (1958). The DPPH reagent solution was prepared by dissolving DPPH in ethanol to a concentration of 0.1 mM. Test solutions were prepared by dissolving the four extracts at different concentrations (10, 50, 100, 150, 200, 300, 500 μg/mL) in ethanol. To 2 mL of test solution, 2 mL of freshly prepared DPPH reagent was added. The reaction mixture was incubated in the dark at room temperature. Thirty minutes later, the absorbance was measured at 517 nm using a 752 N UV–Vis spectrophotometer (Shanghai, China). Aliquots of α-tocopherol, BHT and BHA served as standards for the assay. A mixture of DPPH and ethanol served as the control, and a mixture of D. heterophyllum extract solution and ethanol served as the blank for the sample. All experiments were repeated three times and the mean was taken to eliminate any discrepancies. The % inhibition of DPPH was calculated by the following formula:
$$\text{Inhibition of DPPH}\ (\%) = \left[ \left( A_{\text{Control}} - A_{\text{Sample}} \right) / A_{\text{Control}} \right] \times 100$$
where \(A_{\text{Control}}\) is the absorbance of the control reaction, while \(A_{\text{Sample}}\) is the absorbance at 517 nm with D. heterophyllum extracts.
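The same percent-inhibition formula recurs in the ABTS, FTC and metal chelating assays below; a trivial Python helper makes the computation explicit. The absorbance values in the example are made up for illustration only.

```python
def percent_inhibition(a_control, a_sample):
    """Percent inhibition: (A_Control - A_Sample) / A_Control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# e.g. a control absorbance of 0.80 and a sample absorbance of 0.09 at 517 nm:
print(percent_inhibition(0.80, 0.09))   # -> 88.75 (% inhibition)
```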
ABTS free radical scavenging activity
The ABTS·+ free radical scavenging activity was measured following the methodology described by Li et al. (2012) with some modifications. The ABTS·+ was produced by mixing 0.2 mL of ABTS diammonium salt (7.4 mM) with 0.2 mL of potassium persulfate (2.6 mM). The mixture was kept in the dark at room temperature for 12 h to allow complete radical generation, then diluted with 95 % ethanol (about 40–50 times) so that its absorbance at 734 nm was 0.70 ± 0.02. To determine the scavenging activity, a 4 mL aliquot of ABTS·+ reagent was mixed with 1 mL of sample alcoholic solution (10–500 μg/mL). After incubation for 6 min, the absorbance at 734 nm was read on a 752 N UV–Vis spectrophotometer (Shanghai, China). Aliquots of α-tocopherol, BHT and BHA served as standards for the assay. A mixture of ABTS·+ reagent and ethanol served as the control, and a mixture of D. heterophyllum extract solution and ethanol served as the blank for the sample. All experiments were repeated three times and the mean was taken to eliminate any discrepancies. The percentage inhibition of the samples was calculated as:
$$\text{Inhibition of ABTS}^{\cdot +}\ (\%) = \left[ \left( A_{\text{Control}} - A_{\text{Sample}} \right) / A_{\text{Control}} \right] \times 100$$
Anti-lipidperoxidation activity by FTC
The anti-lipidperoxidation activity of four D. heterophyllum extracts and standards was measured by ferric thiocyanate method (Kikuzaki and Nakatani 1993), which was slightly modified. A mixture of 1 mL of sample alcoholic solutions (25 and 50 μg/mL), 1 mL of 2.5 % linoleic acid solution in ethanol, 2 mL of a 0.05 M phosphate buffer (pH 7.0) and 1 mL of distilled water was placed in a centrifuge tube (15 mL) with a screw cap. The mixed solution (5 mL) was incubated in an oven at 40 °C in the dark. On the other hand, the 5 mL control was composed of 1 mL of ethanol, 1 mL of 2.5 % linoleic acid solution in ethanol, 2 mL of a 0.05 M phosphate buffer (pH 7.0) and 1 mL of distilled water. To 0.1 mL of this solution was added 9.5 mL of 75 % ethanol and 0.1 mL of 30 % ammonium thiocyanate. Exactly 3 min after adding 0.1 mL of 0.02 M FeCl2 in 3.5 % HCl, the absorbance of red color was measured at 500 nm. This step was repeated every 24 h until 1 day after absorbance of the control reached maximum. The percentage inhibition of lipid peroxidation in linoleic acid emulsion was calculated by following equation.
$$\text{Inhibition of lipid peroxidation}\ (\%) = \left[ \left( A_{\text{Control}} - A_{\text{Sample}} \right) / A_{\text{Control}} \right] \times 100$$
where \(A_{\text{Control}}\) is the absorbance of the control reaction and \(A_{\text{Sample}}\) is the absorbance at 500 nm with D. heterophyllum extracts or standard compounds.
Ferric reducing antioxidant power assay (FRAP)
The ability to reduce ferric ions was measured according to a modified version of the method developed by Benzie and Strain (1996). To prepare the FRAP reagent, 0.1 mM acetate buffer (pH 3.6), 10 mM 2,4,6-tripyridyl-s-triazine (TPTZ) in 40 mM HCl, and 20 mM FeCl3·6H2O were mixed in a ratio of 10:1:1 (v/v/v). A 100 μL aliquot of sample solution (100–500 μg/mL) was added to 3 mL of freshly prepared FRAP reagent. The absorbance of the reaction mixture was measured at 593 nm after 10 min of incubation at 37 °C. Experiments were performed in triplicate. Aliquots of α-tocopherol, BHT and BHA served as standards for the assay. FeSO4·7H2O solutions (0.2–2 mM/L) were used to construct the calibration curve. The ferric reducing antioxidant power was calculated from the linear calibration curve and expressed as a FRAP value: 1 FRAP value = 1 mmol/L FeSO4.
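A minimal sketch of this calibration step, assuming the fitted line reported in the Results (Y = 0.68006X + 0.00916); the calibration points below are generated from that line and are illustrative, not measured data.

```python
import numpy as np

# Hypothetical calibration points: FeSO4 standards (mmol/L) and absorbance
# at 593 nm, generated noise-free from the paper's reported regression line.
conc = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
a593 = 0.68006 * conc + 0.00916

slope, intercept = np.polyfit(conc, a593, 1)   # linear calibration fit

def frap_value(absorbance):
    """Convert a sample absorbance at 593 nm to a FRAP value
    (1 FRAP value = 1 mmol/L FeSO4) via the calibration line."""
    return (absorbance - intercept) / slope

print(frap_value(1.10))   # about 1.6, comparable to the value reported for EtOAc
```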
Ferrous metal ions chelating activity
The ferrous metal chelating activity of the four D. heterophyllum extracts and standards was estimated by the method of Dinis et al. (1994), in which the ferrous ion chelating activity is measured by the absorbance of the ferrous iron–ferrozine complex at 562 nm. Briefly, 0.4 mL of D. heterophyllum extract (100–500 μg/mL) was added to 2 mM FeCl2 solution (0.05 mL), followed by 5 mM ferrozine (0.2 mL). The mixed solution was adjusted to 4 mL with ethanol. The reaction mixture was incubated in the dark at room temperature for 10 min. The absorbance of the mixture was then measured at 562 nm. All tests and analyses were run in triplicate and averaged. The percentage inhibition of ferrozine–Fe2+ complex formation was calculated by the following formula (Gulcin 2006):
$$\text{Ferrous ions chelating effect}\ (\%) = \left[ \left( A_{\text{Control}} - A_{\text{Sample}} \right) / A_{\text{Control}} \right] \times 100$$
where \(A_{\text{Control}}\) is the absorbance of the control reaction, while \(A_{\text{Sample}}\) is the absorbance at 562 nm with D. heterophyllum extracts. The control contains FeCl2 and ferrozine, the complex-forming molecules.
Determination of total phenolic contents (TPC)
The total phenolic contents of the four D. heterophyllum extracts were determined using the Folin-Ciocalteu method (Ragazzi and Veronese 1973) with slight modification. To prepare the Folin-Ciocalteu reagent, 25 g sodium tungstate, 6.25 g sodium molybdate, 175 mL distilled water, 8.5 % phosphoric acid solution, and 25 mL concentrated hydrochloric acid were mixed together. The mixture was refluxed slowly in a water bath for 10 h. After cooling, 37.5 g lithium sulfate, 12.5 mL distilled water and 50 mL hydrogen peroxide were added, and heating was continued in boiling water for 15 min without a cap. Finally, the solution was diluted with water to 250 mL and stored at 4 °C until used. A 100 μL sample solution (1 mg/mL) was mixed with 500 μL Folin-Ciocalteu reagent and diluted with 1000 μL distilled water, then 1.5 mL of Na2CO3 solution (20 %, w/v) was added. The absorbance of the mixture was measured at 765 nm after 2 h of incubation in the dark at room temperature. The determinations were performed in triplicate, and the calculations were based on a calibration curve obtained with gallic acid (100–800 μg/mL).
Separation and purification of the bioactive compounds
The petroleum ether extract (500 g) was subjected to a silica gel column (petroleum ether–acetone, 95:5–5:95) to afford fractions (I–VI). Fraction III (150 g), a mixture of two compounds, was purified on a preparative C18 HPLC column with an isocratic elution of MeOH–H2O (87:13) to yield compound 1 (3 mg) and compound 2 (33 mg); the two compounds were difficult to separate. Fraction IV was purified on a preparative C18 HPLC column with a gradient of C2H3N–H2O (65:35–95:5) to yield compound 3 (130 mg) and compound 4 (75 mg). Fraction VI was purified on a preparative C18 HPLC column with a gradient of C2H3N–H2O (20:80–30:70) to yield compound 5 (200 mg).
The ethyl acetate extract (19 g) was subjected to medium-pressure liquid chromatography with a gradient of MeOH–H2O (3:7–8:2) to afford fractions (I–VII). Fraction III (3 g) was purified on a preparative C18 HPLC column with a gradient of C2H3N–H2O (10:90–30:70) to yield compound 6 (1.5 g). Fraction IV (2.3 g) was purified on a preparative C18 HPLC column with an isocratic elution of C2H3N–H2O (25:75) to yield compound 7 (853 mg). Fraction V (1.1 g) was purified on a preparative C18 HPLC column with an isocratic elution of C2H3N–H2O (28:72) to yield compound 8 (58 mg). Fraction VI (1.1 g) was purified on a preparative C18 HPLC column with an isocratic elution of C2H3N–H2O (27:73) to yield compound 9 (87.9 mg).
Extraction and fractionation of Dracocephalum heterophyllum
From 8.86 kg of D. heterophyllum, 1026 g of 95 % ethanol extract was obtained, a yield of 11.6 %. The crude extract (1026 g) was then suspended in water and extracted sequentially with petroleum ether, ethyl acetate and n-butanol to give 500 g of petroleum ether fraction (48.7 %), 19 g of ethyl acetate fraction (2 %), 41.8 g of n-butanol fraction (4.1 %) and 115.3 g of water fraction (11.2 %).
Anti-hepatitis activity
As shown in Fig. 1, the survival rate of mice in the first group (ConA group) fell below 50 % within 8 h, and all mice died within 16 h. The mice in the second group (drug group) were all still living at 24 h, which means that the four extracts of D. heterophyllum have no toxic effect on mice. The survival times of mice in the third group (ConA + drug group) were all longer than in the ConA group, to different degrees. In particular, the survival rate of mice injected with the petroleum ether extract was close to 50 % at 24 h, and the survival rate of mice injected with the ethyl acetate extract exceeded 65 % at 24 h. These results suggest that the petroleum ether and ethyl acetate extracts of D. heterophyllum significantly prolonged the mean survival times and reduced the mortality of ConA-induced mice.
Survival rate experiments. Female Balb/C mice were injected with the four extracts of D. heterophyllum (50 mg/kg) 2 h before the injection of a lethal dose of Con A (20 mg/kg). The survival rate was monitored at different times after Con A administration
In biochemistry, a transaminase or aminotransferase is an essential enzyme in normal cellular metabolism and mainly resides in hepatocytes. In medicine, elevated transaminases in serum can be an indicator of liver damage. Two important transaminase enzymes are alanine transaminase (ALT), also called alanine aminotransferase (ALAT) or serum glutamate-pyruvate transaminase (SGPT), and aspartate transaminase (AST), also known as serum glutamic oxaloacetic transaminase (SGOT). As shown in Fig. 2, the levels of ALT and AST in the ConA group were significantly higher than in the control group (P < 0.01), indicating that ConA induces liver damage and increases the contents of ALT and AST in serum. However, the levels of ALT and AST in the ConA + drug group were markedly lower than in the ConA group (P < 0.01), suggesting that the ethyl acetate extract of D. heterophyllum can ameliorate the liver injury induced by ConA.
The levels of ALT and AST in mouse serum. Female Balb/C mice were injected with the ethyl acetate extract of D. heterophyllum (50 mg/kg) 2 h before the injection of a dose of Con A (12.5 mg/kg). Serum transaminase ALT and AST levels were determined 8, 16 and 24 h after the Con A injection. Data expressed as mean ± SD (n = 8; **P < 0.01 and ***P < 0.001)
As shown in Fig. 3, in contrast with the control group, the livers of the ConA group showed serious pathological changes: the liver tissue became hardened, its color deepened, and the external surface of the liver was studded with many regenerated nodules. The pathological histology showed inflammatory cellular infiltration and necrosis in the ConA group. However, compared with the ConA group, the liver lesions of the ConA + drug group were dramatically improved, and the inflammatory cells and necrosis of hepatic cells were significantly reduced.
Morphological changes and histological pathologic qualitative evaluation of the livers of the experimental and control groups. Mice were sacrificed 24 h after the Con A injection. The livers were harvested from control (1), drug (2), Con A (3) and Con A + drug (4) mice, respectively, and the liver tissues of the four groups were fixed and stained with hematoxylin and eosin (H and E). The arrow indicates massive cell death in the liver section. Original magnification ×200
The TUNEL assay indicated many apoptotic cells in the ConA group compared with the control group, and DAPI staining showed nuclear condensation and fragmentation in the ConA group. In the ConA + drug group, however, only a few apoptotic cells were detected. These results suggest that ConA induces a large number of apoptotic cells and that the ethyl acetate extract of D. heterophyllum can reduce apoptosis (Fig. 4).
Analysis of liver cell apoptosis. Mice were sacrificed at 24 h after Con A injection. The livers were harvested from control (A), Con A (B), drug (C) and ConA+ drug (D) groups respectively. Liver tissues from the four groups were fixed and stained with TUNEL (1), DAPI (2) and overlap (3). Original magnification ×200
Liver damage caused by hepatitis is associated with excessive activation of the immune responses. It has been reported that TNF-α participates in various forms of liver damage, such as viral, toxic and autoimmune hepatitis, and plays an important role in ConA-induced hepatitis. It has also been reported that Kupffer cells secrete large amounts of TNF-α, aggravating the liver damage, and that Kupffer cells play an important role in T cell activation-induced liver injury. However, the mechanism may not be that simple, and more research is needed.
In this study, the antioxidant activities of the four D. heterophyllum extracts and of standards such as α-tocopherol, BHT and BHA were determined using the DPPH coloring method at different concentrations (10–500 μg/mL). Figure 5 illustrates the radical-scavenging activity of the different fractions of D. heterophyllum and of the three standard antioxidants. The scavenging effects on the DPPH radical decreased in the order EtOAc > α-tocopherol > BHT > BHA > n-BuOH > petroleum ether > water, with values of 88.8, 81.6, 80.4, 57.3, 33.7, 12 and 9.1 % at a concentration of 50 µg/mL, respectively. Figure 5 also shows that the free radical scavenging activity of these samples clearly increased with increasing concentration.
Free radical scavenging activity of various fractions of D. heterophyllum. 0, 10, 30, 50, 100, 200 μg/mL of four D. heterophyllum extracts were measured by the DPPH method at 517 nm. BHT, BHA and α-tocopherol were used as standard antioxidants
ABTS·+ free radical scavenging activity
In this experiment, BHT, BHA and α-tocopherol served as standard antioxidants. As seen in Fig. 6, the extracts of D. heterophyllum had effective ABTS·+ free radical scavenging activity, and the scavenging activity of the four fractions of D. heterophyllum and the standard antioxidants on ABTS·+ decreased in the order α-tocopherol = BHA > BHT > EtOAc > n-BuOH > petroleum ether > water, with values of 99.56, 99.56, 75.2, 66.2, 27.1, 12.2 and 8.0 % at a concentration of 50 μg/mL, respectively.
Free radical scavenging activity of various fractions of D. heterophyllum. 0, 10, 30, 50, 100, 200 μg/mL of four D. heterophyllum extracts were measured by the ABTS method at 734 nm. BHT, BHA and α-tocopherol were used as standard antioxidants
Anti-lipidperoxidation activity
The petroleum ether, EtOAc, n-BuOH and water fractions of D. heterophyllum and the standard compounds exhibited effective anti-lipidperoxidation activity. The effects of the four fractions and of α-tocopherol, BHT and BHA on lipid peroxidation of the linoleic acid emulsion at a concentration of 50 μg/mL are shown in Fig. 7. The percentage inhibition of peroxidation in the linoleic acid system at 50 µg/mL was 78.2, 98.7, 83.0 and 67.5 % for the petroleum ether, EtOAc, n-BuOH and water fractions, respectively, and 91.5, 97.5 and 93.75 % for the standards α-tocopherol, BHT and BHA, respectively. The results show that the EtOAc fraction of D. heterophyllum had the highest anti-lipidperoxidation activity.
Anti-lipidperoxidation activity of various fractions of D. heterophyllum (50 μg/mL) was measured by the FTC method at 500 nm. BHT, BHA and α-tocopherol were used as standard antioxidants
Ferric reducing ability of plasma (FRAP)
FeSO4·7H2O solutions (0.2–2 mM/L) were used to construct the calibration curve, whose regression equation was Y = 0.68006X + 0.00916, R² = 0.999. The ferric reducing antioxidant power of the four D. heterophyllum extracts and of the standard antioxidants was expressed as FRAP values: 1 FRAP value = 1 mmol/L FeSO4.
α-Tocopherol, BHA and BHT had reducing antioxidant powers of 2.1 ± 0.01, 1.8 ± 0.01 and 1.3 ± 0.02 FRAP values. The EtOAc fraction had the highest reducing antioxidant power, with a FRAP value of 1.6 ± 0.01, followed by the petroleum ether, n-BuOH and water fractions. Figure 8 shows that the dose–response of each individual antioxidant tested was linear, indicating that the reducing antioxidant power per unit concentration did not vary, at least over the concentration ranges tested in this study.
Linearity of FRAP: dose–response lines for solutions of four D. heterophyllum extracts. BHT, BHA and α-tocopherol were used as standard antioxidants
As shown in Fig. 9, the metal chelating activities of the petroleum ether, EtOAc, n-BuOH and water fractions and of the standards were concentration-dependent. The difference between the four D. heterophyllum fractions and the control was statistically significant (P < 0.01). In addition, the percentages of metal scavenging capacity at a concentration of 0.5 mg/mL for the petroleum ether fraction, EtOAc fraction, n-BuOH fraction, water fraction, α-tocopherol, BHT and BHA were 59.2, 78.2, 29.0, 21.8, 42.0, 58.8 and 86.6 %, respectively. These results show that the ferrous ion chelating effect of the EtOAc fraction was statistically similar to BHA (P > 0.05) but higher than BHT (P < 0.05) and α-tocopherol (P < 0.05), and that the petroleum ether fraction was similar to BHT (P > 0.05) but higher than α-tocopherol (P < 0.05); the n-BuOH and water fractions had the lowest ferrous metal ion chelating activity.
Ferrous ions chelating effect of different concentrations of Petroleum ether fraction, EtOAc fraction, n-BuOH fraction, water fraction α-tocopherol, BHT and BHA
Total phenolic contents of fractions of Dracocephalum heterophyllum
A literature review (Li et al. 2012) indicates that total phenolics correlate significantly and positively with antioxidant levels. In this paper, the total phenolic contents of the petroleum ether, EtOAc, n-BuOH and water fractions were measured with the Folin-Ciocalteu reagent. Gallic acid (100–800 μg/mL) was used to construct the calibration curve, whose regression equation was Y = 0.001X + 0.1693, R² = 0.9957, and the total phenolic content was expressed as mg gallic acid/g of dry material. The EtOAc fraction had the highest phenolic content, 433.7 mg gallic acid/g; the n-BuOH and petroleum ether fractions had phenolic contents of 110.7 and 70.7 mg/g; and the water fraction had the lowest phenolic content, 31.7 mg gallic acid/g of dry material. These results indirectly reflect the antioxidant activities of the Dracocephalum heterophyllum extracts.
Total phenolic content (TPC) correlated with the DPPH assay (R² = 0.9637, P < 0.05), the ABTS assay (R² = 0.9638, P < 0.05), the FTC assay (R² = 0.8203, P < 0.05) and the FRAP assay (R² = 0.9991, P < 0.05). These results demonstrate that the antioxidant activities of the D. heterophyllum extracts are highly correlated with their total phenolic contents (TPC).
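The correlation itself is a straightforward Pearson computation; the sketch below illustrates it using the per-fraction values quoted in the text (TPC in mg gallic acid/g and DPPH % inhibition at 50 μg/mL). The R² obtained this way is only illustrative and need not reproduce the paper's values, which come from the authors' full data.

```python
import numpy as np

# Values quoted in the text, ordered water, petroleum ether, n-BuOH, EtOAc:
tpc  = np.array([31.7, 70.7, 110.7, 433.7])   # total phenolic content
dpph = np.array([9.1, 12.0, 33.7, 88.8])      # DPPH % inhibition at 50 ug/mL

r = np.corrcoef(tpc, dpph)[0, 1]              # Pearson correlation coefficient
print("r =", r, " R^2 =", r * r)
```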
The chemical constituents of Dracocephalum heterophyllum
The main chemical constituents of the petroleum ether and ethyl acetate extracts were isolated using chromatographic techniques and identified by NMR spectroscopic methods: 5 compounds were obtained from the petroleum ether extract and 4 from the ethyl acetate extract. From the petroleum ether extract these were identified as oleanolic acid (1) (Seebacher et al. 2003), ursolic acid (2) (Seebacher et al. 2003), pomolic acid (3) (Cheng and Cao 1992), 2α-hydroxyursolic acid (4) (Kuang et al. 1989) and apigenin-7-O-rutinoside (5) (Baris et al. 2011; Wang et al. 2003); from the ethyl acetate extract, rosmarinic acid (6) (Kuhnt et al. 1994), methyl rosmarinate (7) (Kohda et al. 1989), luteolin (8) (Geiger et al. 1995; Xu et al. 2014) and diosmetin (9) (Sahu et al. 1998). Compounds 1–4 are triterpenes; compounds 5, 8 and 9 are flavonoids; and compounds 6 and 7 are phenolic acids (Fig. 10). The 1H and 13C NMR data are given in Additional file 1. The main chemical constituents were thus isolated and identified, but the anti-hepatitis and antioxidant activities of these individual compounds require further study.
The compounds of D. heterophyllum extracts
In this paper, the mouse fulminant hepatitis model induced by ConA was used for the first time to study the anti-hepatitis activity of the petroleum ether, ethyl acetate, n-butyl alcohol and water extracts of D. heterophyllum. The antioxidant activity was also studied in a series of in vitro experiments, and the bioactive substances were isolated using chromatographic techniques and identified by NMR spectroscopic methods. Our results indicate that the ethyl acetate extract of D. heterophyllum had the highest anti-hepatitis and antioxidant activities, followed by the petroleum ether extract. The bioactive substances may be triterpenes, flavonoids and phenolic acids, and the ethyl acetate extract of D. heterophyllum may be a possible candidate for developing anti-hepatitis medicines.
Baris O, Karadayi M, Yanmis D, Guvenalp Z, Bal T, Gulluce M (2011) Isolation of 3 flavonoids from Mentha longifolia (L.) Hudson subsp. longifolia and determination of their genotoxic potentials by using the E. coli WP2 test system. J Food Sci 76:T212–217
Benzie IFF, Strain JJ (1996) The ferric reducing ability of plasma (FRAP) as a measure of "antioxidant power": the FRAP assay. Anal Biochem 239:70–76
Blois MS (1958) Antioxidant determinations by the use of a stable free radical. Nature 181:1199–1200
Cheng DL, Cao XP (1992) Pomolic acid derivatives from the root of Sanguisorba officinalis. Phytochemistry 31:1317–1320
Chisari FV (1995) Hepatitis B virus transgenic mice: insights into the virus and the disease. Hepatology 22:1316–1325
Dinis TCP, Madeira VMC, Almeida LM (1994) Action of phenolic derivatives (acetaminophen, salicylate, and 5-aminosalicylate) as inhibitors of membrane lipid-peroxidation and as peroxyl radical scavengers. Arch Biochem Biophys 315:161–169
Dwyer J, Johnson C (1981) The use of concanavalin A to study the immunoregulation of human T cells. Clin Exp Immunol 46:237
Geiger H, Voigt A, Seeger T, Zinsmeister HD, Lopezsaez JA, Perezalonso MJ, Velasconegeruela A (1995) Cyclobartramiatriluteolin, a unique triflavonoid from Bartramia stricta. Phytochemistry 39:465–467
Gulcin I (2006) Antioxidant activity of caffeic acid (3,4-dihydroxycinnamic acid). Toxicology 217:213–220
Hollinger FB, Liang TJ (2001) Hepatitis B virus. Fields Virol 4:2971–3036
Kikuzaki H, Nakatani N (1993) Antioxidant effects of some ginger constituents. J Food Sci 58:1407–1410
Kohda H, Takeda O, Tanaka S, Yamasaki K, Yamashita A, Kurokawa T, Ishibashi S (1989) Isolation of inhibitors of adenylate-cyclase from dan-shen, the root of Salvia-Miltiorrhiza. Chem Pharm Bull 37:1287–1290
Kuang HX, Kasai R, Ohtani K, Liu ZS, Yuan CS, Tanaka O (1989) Chemical constituents of pericarps of Rosa davurica Pall., a traditional Chinese medicine. Chem Pharm Bull 37:2232–2233
Kuhnt M, Rimpler H, Heinrich M (1994) Lignans and other compounds from the mixe Indian medicinal plant Hyptis verticillata. Phytochemistry 36:485–489
Li XC, Lin J, Gao YX, Han WJ, Chen DF (2012) Antioxidant activity and mechanism of Rhizoma Cimicifugae. Chem Cent J 6:10
Mahmood U, Kaul VK, Singh V, Lal B, Negi HR, Ahuja PS (2005) Volatile constituents of the cold desert plant Dracocephalum heterophyllum Benth. Flavour Frag J 20:173–175
Peng HF (1984) The effect of Dracocephalum Heterophyllum benth on the tolerance of acute hypoxia in the living organism. Med J Chin People's Lib Army
Pumpens P, Grens E, Nassal M (2002) Molecular epidemiology and immunology of hepatitis B virus infection-An update. Intervirology 45:218–232
Ragazzi E, Veronese G (1973) Quantitative-analysis of phenolic compounds after thin-layer chromatographic separation. J Chrom 77:369–375
Sahu NP, Achari B, Banerjee S (1998) 7,3′-Dihydroxy-4′-methoxyflavone from seeds of Acacia farnesiana. Phytochemistry 49:1425–1426
Schultz U, Grgacic E, Nassal M (2004) Duck hepatitis B virus: an invaluable model system for HBV infection. Adv Virus Res 63:1–70
Seebacher W, Simic N, Weis R, Saf R, Kunert O (2003) Complete assignments of 1H and13C NMR resonances of oleanolic acid, 18α-oleanolic acid, ursolic acid and their 11-oxo derivatives. Magn Reson Chem 41:636–638
Tiegs G, Hentschel J, Wendel A (1992) A T cell-dependent experimental liver injury in mice inducible by concanavalin A. J Clin Invest 90:196–203
Wang MF, Simon JE, Aviles IF, He K, Zheng QY, Tadmor Y (2003) Analysis of antioxidative phenolic compounds in artichoke (Cynara scolymus L.). J Agric Food Chem 51:601–608
Wang Y, Menne S, Baldwin BH, Tennant BC, Gerin JL, Cote PJ (2004) Kinetics of viremia and acute liver injury in relation to outcome of neonatal woodchuck hepatitis virus infection. J Med Virol 72:406–415
Wieland SF, Spangenberg HC, Thimme R, Purcell RH, Chisari FV (2004) Expansion and contraction of the hepatitis B virus transcriptional template in infected chimpanzees. P Natl Acad Sci USA 101:2129–2134
Xu H, Hu G, Dong J, Wei Q, Shao H, Lei M (2014) Antioxidative activities and active compounds of extracts from Catalpa plant leaves. Sci World J 2014:857982
Zhang CJ, Li HY, Yun T, Fu YH, Liu CM, Gong B, Neng BJ (2008) Chemical composition, antimicrobial and antioxidant activities of the essential oil of Tibetan herbal medicine Dracocephalum heterophyllum Benth. Nat Prod Res 22:1–11
Zhang CJ, Li W, Li HY, Wang YL, Yun T, Song ZP, Song Y, Zhao XW (2009) In vivo and in vitro antiviral activity of five Tibetan medicinal plant extracts against herpes simplex virus type 2 infection. Pharm Biol 47:598–607
Zoulim F (2001) Detection of hepatitis B virus resistance to antivirals. J Clin Virol 21:243–253
Q-QS conducted the experiments and wrote the manuscript. Associate professor Q-LW helped to revise the manuscript. All authors participated in drafting the manuscript. All authors read and approved the final manuscript.
The authors gratefully acknowledge the support of senior engineer Li-Juan Mei, from the Northwest Institute of Plateau Biology, CAS, who helped identify the authenticity of D. heterophyllum. The authors would also like to thank the grant support from the Advanced Technology Research and Development Program of Qinghai Province (2014-GX-220).
Key Laboratory of Tibetan Medicine Research, Northwest Institute of Plateau Biology, CAS, Xining, China
Qiang-Qiang Shi, Jun Dang, Huai-Xiu Wen, Xiang Yuan, Yan-Duo Tao & Qi-Lan Wang
University of Chinese Academy of Science, Beijing, China
Qiang-Qiang Shi & Xiang Yuan
Correspondence to Qi-Lan Wang.
Shi, QQ., Dang, J., Wen, HX. et al. Anti-hepatitis, antioxidant activities and bioactive compounds of Dracocephalum heterophyllum extracts. Bot Stud 57, 16 (2016). https://doi.org/10.1186/s40529-016-0133-y
|
CommonCrawl
|
Research | Open | Published: 30 January 2016
Vaccination control programs for multiple livestock host species: an age-stratified, seasonal transmission model for brucellosis control in endemic settings
Wendy Beauvais ORCID: orcid.org/0000-0001-7634-3331 1,2,
Imadidden Musallam1,2 &
Javier Guitian1,2
Brucella melitensis causes production losses in ruminants and febrile disease in humans in Africa, Central Asia, the Middle East and elsewhere. Although traditionally understood to affect primarily sheep and goats, it is also the predominant Brucella species that affects cows in some endemic areas. Despite this, no licensed vaccine is available specifically for use against B. melitensis in cows. The mainstay of most control programs is vaccination of sheep and goats with a live vaccine, Rev-1. The aim of this study was to investigate how critical vaccination of cows might be, in order to control B. melitensis on a mixed sheep-and-cattle farm.
A dynamic, differential-equation, age-structured, seasonal model with births and deaths, was used to investigate whether vaccination of both sheep and cattle had an impact on time to elimination of brucellosis on an individual mixed species farm, when compared to vaccination of sheep only. The model was a Susceptible-Exposed-Infectious-Recovered-Susceptible (SEIRS) model with an additional compartment for Persistently Infected (PI) individuals. Transmission parameters were fit based on a nation-wide probabilistic seroprevalence survey in Jordan.
The model predicted that it would take 3.5 years to eliminate brucellosis (to less than 0.5 % of adult sheep seropositive as a result of infection) on a mixed-species B. melitensis-endemic farm with the median field-study seroprevalence, following vaccination of both sheep and cattle, assuming a vaccine effectiveness of 80 %. Limiting the vaccination to sheep only increased the time to 16.8 years. Sensitivity analysis showed that the finding that vaccination of cattle is of significant importance was robust. Vaccine effectiveness had a strong influence on time to elimination.
In the absence of further data, vaccination of cattle should be considered essential in Brucella-endemic settings where mixed small ruminant and cattle flocks predominate. Further evidence that Brucella melitensis predominates in cattle in Jordan, as opposed to Brucella abortus, is needed in order to validate this model. The results may be applicable to other mixed-species settings with similar livestock management practices. These methods may be applied to other pathogens affecting multiple livestock species or with seasonal transmission.
Brucellosis, a bacterial zoonosis, is the cause of febrile disease in humans and livestock production losses in many countries worldwide. Control has proved elusive in many areas, particularly where Brucella melitensis, the more pathogenic species for humans and small ruminants (sheep and goats), predominates, as in the Middle East [1, 2].
Brucella spp. are predominantly transmitted via direct contact with abortion and birth fluids of infected animals, and via consumption of unpasteurised milk or dairy products. Infection via the oral route, or via contact with conjunctiva or cuts in the skin, is possible. In livestock, offspring of infected mothers can become persistently infected and remain seronegative until they abort, at which point they are reported to seroconvert. Transmission via semen is also possible [3, 4].
Survival of Brucella spp. in the environment depends critically on humidity, temperature and exposure to UV light. Survival under ideal conditions is reported to last up to 135 days, although a field study in spring in Montana, USA, found that Brucella abortus survived for only 21-81 days, depending on the local environment [5].
Human infection is almost invariably associated with an animal (mostly ruminant) source. In highly endemic areas, vaccination of the ruminant reservoir is the mainstay of brucellosis control programs, alongside general biosecurity and hygiene practices. However, in many countries where vaccination has been practised for many years, a high incidence in humans and livestock persists [6, 7]. This is a result of poor compliance, driven by legitimate concerns over the safety of the available live vaccines for humans and livestock (which commonly abort if vaccinated while pregnant); the limited efficacy of the vaccine; the need for careful storage and handling of the live vaccine, both for safety and to preserve its efficacy; and the cost and availability of the vaccine, particularly for smallholders, who face the additional burden of low-biosecurity farming systems and often poor access to medical and veterinary care.
Although B. melitensis has been traditionally thought of as a pathogen adapted to sheep and goats, and B. abortus adapted to cattle, cows are known to be susceptible to B. melitensis. In the Middle East and Central Asia, for example, high seroprevalence estimates are reported in cows and B. melitensis has been frequently isolated from cows [8–10]. It is common for cattle and small ruminants to co-graze or share pasture areas, and to be housed in the same building at night, in these regions.
Despite this situation, no vaccine has been licensed for use against B. melitensis in cattle: neither the safety nor the efficacy of the small ruminant vaccine (Rev-1) in cattle has been thoroughly evaluated, nor has the B. abortus cattle vaccine been evaluated for use against B. melitensis. This has been a stumbling block for policy-makers. In Jordan, for example, the official brucellosis program involves vaccination of small ruminants with the Rev-1 vaccine, but no vaccination of cattle at all. However, in practice very little vaccination is carried out at all [8]. There is no clear evidence to support recommendations on vaccination of cattle in mixed-species settings endemic for B. melitensis.
Therefore, the aims of this study were, firstly, to simulate Brucella melitensis transmission on a single mixed sheep-and-cattle farm, incorporating (a) heterogeneity in transmission parameters according to livestock species, age and season, and (b) seasonality in births of lambs; and, secondly, to use the model to compare vaccination of sheep only with vaccination of both sheep and cattle, in terms of time to elimination of B. melitensis.
Previous transmission models for brucellosis have included compartmental models with Susceptible-Infected (SI) [11] or Susceptible-Infected-Recovered (SIR) structures [12, 13]. However, they have not explicitly quantified levels of transmission between different ruminant species. Furthermore, transmission, which depends heavily on abortion/birth events and therefore has periodicity and seasonality, has not been linked in the model to the reproductive cycle of livestock. The age-structure of the herd has also not been taken into account; young animals are generally ignored in such models as they cannot become infectious, yet they can become persistently infected, or infected while still juvenile, leading to infectiousness in adulthood. In this study we explore these three issues in a revised model, parameterized with field data from a nationwide seroprevalence study in Jordan, in which data on herd structures and reproductive parameters were also collected.
Seroprevalence survey
Data from a previously published nationwide seroprevalence study of randomly-sampled flocks and herds in Jordan was used to estimate median within-farm seroprevalence on Brucella-positive farms in Jordan [8]. It was assumed that the impact of B. abortus and B. ovis on median seroprevalence was negligible, and that B. melitensis was the cause of all seropositives, after adjusting for the sensitivity and specificity of the tests used. It was also assumed that the farm(s) with the median seroprevalence value exhibited endemic stability.
From the original dataset, all sheep-only farms (n = 203), cattle-only (n = 171) and mixed sheep-cattle farms (n = 27) were selected. On each farm, seven to nine female animals of each species had been milk-sampled (cows) or blood-sampled (sheep). The estimated true seroprevalence values at flock and herd levels were 22.2 % (95 % CI: 16.5–28.8) (sheep flocks), 18.1 % (95 % CI: 11–25.3) (cattle-only herds), and 38.5 % (95 % CI: 24.3–51.8) (mixed herds of cattle and small ruminants).
The tests used, and estimated sensitivity and specificity, are shown in Table 1. The number of female animals and the number of pregnancies in the previous year, by species were also recorded for each farm.
Table 1 Assumed sensitivity and specificity of tests used in the nationwide seroprevalence study of brucellosis in ruminants in Jordan [14]
The median within-farm seroprevalence was estimated separately for sheep-only, cattle-only and mixed sheep-cattle farms. Only female animals were included.
To account for uncertainty in the true within-farm seroprevalence resulting from both (1) imperfect tests and (2) sampling a variable fraction of animals on each farm in the study, a previously-developed Bayesian model (Beauvais et al. 2016, manuscript under review) was adapted to produce an uncertainty distribution of true within-herd seroprevalence on each farm. These distributions for each farm were repeatedly sampled, and the median seroprevalence amongst seropositive farms was calculated for each iteration, to produce an uncertainty distribution for the true median within-herd seroprevalence on seropositive farms in the study. The model was run for 1000 iterations (for sheep-only farms, cattle-only farms and mixed sheep-cattle farms). The sensitivity and specificity values were taken from a published meta-analysis [14]. There were multiple different ELISAs included in the meta-analysis, which were not identified by product name, and the mean of their sensitivity and specificity values was used, after excluding an ELISA with exceptionally poor performance. The performance of the tests used in sheep and cattle is shown in Table 1.
To account for possible correlation between within-herd seroprevalence and two other parameters in the model, (1) the number of pregnancies per animal per year and (2) the ratio of cows:sheep on mixed farms, these parameter values were identified for the farm with the median seroprevalence in each iteration of the model, to produce an uncertainty distribution for the true value of each parameter on the median-seroprevalence farm(s).
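For illustration, the flavour of this uncertainty propagation can be sketched in R as a simple Monte Carlo with a Rogan-Gladen correction. This is not the authors' Bayesian model (which is under review); the sample sizes, positive counts and test characteristics below are hypothetical placeholders.

set.seed(1)
n_iter   <- 1000
n_tested <- 8            # 7-9 females were sampled per farm in the field study
pos      <- c(2, 3, 5)   # hypothetical test-positive counts on three farms
se <- 0.92; sp <- 0.98   # assumed ELISA sensitivity and specificity

true_prev_draws <- function(n_pos) {
  ap <- rbeta(n_iter, n_pos + 1, n_tested - n_pos + 1)  # apparent prevalence
  pmin(pmax((ap + sp - 1) / (se + sp - 1), 0), 1)       # Rogan-Gladen, truncated to [0, 1]
}

draws       <- sapply(pos, true_prev_draws)   # n_iter x n_farms matrix
median_prev <- apply(draws, 1, median)        # median farm in each iteration
quantile(median_prev, c(0.025, 0.5, 0.975))   # uncertainty for the median seroprevalence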
Dynamic Brucellosis models
A dynamic, differential-equation, age-structured, seasonal model with births and deaths, was used to investigate whether vaccination of both sheep and cattle had an impact on time to elimination of brucellosis on an individual mixed-species farm, when compared to vaccination of sheep only. The model was a Susceptible-Exposed-Infectious-Recovered-Susceptible (SEIRS) model with an additional compartment for Persistently Infected (PI) individuals (Fig. 1). The model was run in R [15].
Diagram showing compartments in the transmission model (boxes) and transitions between compartments (solid arrows). Dashed arrows show births and deaths. PI = persistently infected, Non-PI = not persistently infected, V = vaccinated, S = susceptible, E = exposed (pre-infectious), I = Infectious and R = recovered. Dotted lines show animals in one compartment infecting animals in another compartment. See section "Dynamic Brucellosis models" for more details
Only females are included in the model. The age categories are: young (0-9 months), when the animals cannot become infected; juvenile (9-12 months), when the animals can become infected but not infectious (Exposed or pre-infectious); and adult (>12 months), when the animals can have a late abortion or give birth, at which time they can become Infectious (I). Aging and death occur at a constant rate. Only adult animals die.
Replacement animals are assumed to be sourced only from amongst the youngstock born on the farm, at an annual rate equal to the death rate (m). Newborns that are sold off the farm are assumed to be sold before the age of 12 months, and so cannot become Infectious whilst on the farm. These animals are therefore not included in the model.
The population size remains constant from year to year, and replacement newborns enter the herd at a constant rate; however, sheep births are limited to only 6 months of the year, the lambing season. The sheep replacement rate was therefore adjusted so that the mean annual replacement rate equals the annual death rate. Animals can be born either Susceptible (S) or Persistently Infected (PI) (1 % of newborns born to adults in the Exposed compartment). It is assumed that the birth rate is the same amongst all adult females (except in the Infectious compartment, in which case they have aborted/given birth in the last 4 months and cannot therefore abort/give birth). The proportion of newborn replacement animals that are PIs is therefore equal to 0.01 multiplied by the proportion of adult females that are in the Exposed compartment (excluding Infectious animals). PIs are seronegative but seroconvert when they have a late abortion/give birth, at which point they also become Infectious (I).
Susceptible (S) animals, once they enter the Juvenile age-group, can become Exposed (E) at a rate (r) proportional to the fraction of animals on the farm that are infectious (frequency-dependent transmission). Separate values of beta (the transmission coefficient) are used for sheep-to-sheep, cow-to-cow, sheep-to-cow and cow-to-sheep transmission. Homogeneous mixing amongst different species and age-groups is assumed, because close contact between animals is not necessary for transmission; rather, the majority of transmission is assumed to occur through contact with placental fluid, which may remain infectious for up to 4 months on pasture or bedding shared between species and age-groups [5].
r for a given age-group (x), season (s) and species (i) is therefore given by:
$$ \left(\beta_{i,i}\, I^{a}_{i,s} + \beta_{i,j}\, I^{a}_{j,s}\right) S^{x}_{i,s}/N $$
$\beta_{i,i}$ represents the within-species transmission coefficient.
$\beta_{i,j}$ represents the between-species transmission coefficient.
$I^{a}_{i,s}$ represents the number of infectious adults of species $i$ in season $s$.
$I^{a}_{j,s}$ represents the number of infectious adults of species $j$ in season $s$.
$S^{x}_{i,s}$ represents the number of susceptible animals of a given age-group of a given species in a given season.
N represents the total number of animals of all ages and species on the farm.
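As a concrete reading of this expression, the rate can be written as a small R helper; the coefficient values in the example call below are purely illustrative, not the fitted ones.

new_exposures <- function(beta_ii, beta_ij, I_own, I_other, S_x, N) {
  (beta_ii * I_own + beta_ij * I_other) * S_x / N
}
# e.g. exposures per unit time among 30 susceptible sheep juveniles
# on a 250-animal farm with 4 infectious sheep and 1 infectious cow:
new_exposures(beta_ii = 2.0, beta_ij = 0.5, I_own = 4, I_other = 1,
              S_x = 30, N = 250)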
Once in the Exposed (E) compartment, animals move into the Infectious (I) compartment at a rate v, the rate at which they have a late abortion/give birth, which was assumed to equal the pregnancy rate reported by the farmers. (It was assumed that pregnancies ending in early abortion, which are assumed not to be infectious as they result in reabsorption, are unlikely to have been detected by the farmer.)
Sheep can only have a late abortion/give birth during the 3 months preceding the lambing season, and the 6 months of the lambing season, and can therefore only move from the Exposed (E) to Infectious (I) compartments during these periods.
Animals move from the Infectious (I) to Recovered (R) compartment at a rate g, and remain seropositive until they re-join the Susceptible compartment at a rate z, when they become seronegative.
Mass vaccination occurs at a single time point, once endemic stability has been reached. A proportion (VE) of animals from each compartment, except the Infectious (I) and Persistently Infected (PI) compartments, moves into the Vaccinated compartments, where they remain until they die. (It is assumed that vaccination of Infectious and PI animals is ineffective.) Following this time-point, a proportion (VE) of newborn replacement animals enters directly into the Vaccinated (V) compartment, instead of the Susceptible (S) compartment. (It is assumed that replacement animals are vaccinated at some point between 0 and 9 months of age.)
The model is described by a set of differential equations shown in the Appendix. The parameters and values are described in Table 2.
Table 2 Parameters used in the SEIRS + PI transmission model for B. melitensis
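To make the structure concrete, the following is a heavily simplified, single-species, adults-only analogue of the model in R with the deSolve package. It ignores the age classes, the seasonal lambing window and the PI compartment of the full Appendix system, and all parameter values are illustrative, not the fitted ones.

library(deSolve)

seirs <- function(t, y, p) {
  with(as.list(c(y, p)), {
    N   <- S + E + I + R
    foi <- beta * I / N              # frequency-dependent transmission
    dS  <- m * N - m * S - foi * S + z * R
    dE  <- foi * S - v * E - m * E
    dI  <- v * E - g * I - m * I
    dR  <- g * I - z * R - m * R
    list(c(dS, dE, dI, dR))
  })
}

y0   <- c(S = 249, E = 0, I = 1, R = 0)
pars <- c(beta = 2.5,   # transmission coefficient (illustrative)
          v    = 1.0,   # E -> I: annual late-abortion/birth rate
          g    = 3.0,   # I -> R: 1 / (4-month infectious period), per year
          z    = 1.5,   # R -> S: 1 / (8-month immune period), per year
          m    = 0.25)  # annual death (= replacement) rate
out <- ode(y = y0, times = seq(0, 30, by = 1/12), func = seirs, parms = pars)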
Fitting of transmission parameters
Transmission parameters were fit in three steps:
1. The model was set assuming there were zero cows on the farm, and fit to the most likely median within-farm seroprevalence estimated from the sheep-only farms in the seroprevalence study, assuming endemic stability. The best-fit transmission parameter was found by the least sum of squares (LSS) method (the squared difference between the endemic seroprevalence predicted by the model and that estimated from the seroprevalence study), using the Brent method implemented in the Optim function in R. In this way, the sheep-to-sheep transmission parameter was obtained. (A minimal sketch of this step is given after the list.)
2. Step 1 was repeated for cattle-only farms, to obtain the cattle-to-cattle transmission parameter.
3. The model was set using the cattle:sheep ratio obtained from the seroprevalence study and the sheep-to-sheep and cattle-to-cattle transmission parameters. Cattle-to-sheep and sheep-to-cattle transmission parameters were obtained simultaneously using equal-weighted LSS, by the Nelder-Mead method implemented in the Optim function in R.
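As referenced in step 1, the following is a minimal R sketch of a Brent-method fit, reusing the simplified seirs() model above; the target seroprevalence, the initial state and the search bounds are illustrative placeholders, not the study's values.

target_prev <- 0.12   # placeholder for the estimated median seroprevalence

endemic_state <- function(beta) {
  p <- c(beta = beta, v = 1.0, g = 3.0, z = 1.5, m = 0.25)  # other rates fixed as above
  out <- ode(y = c(S = 249, E = 0, I = 1, R = 0),
             times = seq(0, 200, by = 0.5), func = seirs, parms = p)
  out[nrow(out), ]                    # approximate endemic equilibrium
}

sse <- function(beta) {
  eq   <- endemic_state(beta)
  prev <- (eq["I"] + eq["R"]) / sum(eq[c("S", "E", "I", "R")])
  (prev - target_prev)^2              # I and R are the seropositive classes here
}

fit <- optim(par = 2, fn = sse, method = "Brent", lower = 0.01, upper = 20)
fit$par                               # fitted within-species coefficient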
Model output
The model was run using the fitted transmission parameters, once with vaccination of sheep and cattle, and once with vaccination of sheep only. In each case, we recorded the time to elimination of brucellosis on the farm, defined as reducing the proportion of seropositives (due to infection rather than vaccination) to <0.5 %. The threshold was chosen as a reasonable target for a brucellosis vaccination program on an individual farm of more than 200 animals. (On a smaller farm, a proportion of less than 0.5 % seropositives would imply eradication).
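On a simulated trajectory, this criterion can be read off directly; a sketch of such a helper, applied to the output matrix of the deSolve run above:

time_to_elimination <- function(out, threshold = 0.005) {
  prev <- (out[, "I"] + out[, "R"]) / rowSums(out[, c("S", "E", "I", "R")])
  out[which(prev < threshold)[1], "time"]   # NA if the threshold is never reached
}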
The transmission parameter fitting process and model outputs were repeated for a range of values for parameters that we considered to be the most uncertain: duration of infectious period, duration of immune period, ratio of cows: sheep on the farm and vaccination effectiveness. Parameter values were selected based on biological plausibility or results of the field study (cows: sheep ratio). The results were plotted.
In order to investigate the impact of model structure assumptions on the conclusions, the transmission parameter fitting process and model outputs were also repeated using a simple Susceptible-Infected compartmental structure with no account for age or season.
In addition, the fitted model was used to investigate how a change in the ratio of cows : sheep on the farm would affect the time to elimination, and the impact of cattle vaccination.
Parameters estimated from seroprevalence study
The uncertainty distributions for median within-farm seroprevalence and associated cows: sheep ratio and pregnancy rates in the seroprevalence study are described in Table 3.
Table 3 Parameters estimated from the nationwide seroprevalence study of brucellosis in ruminants in Jordan
Model fit
Good-fit transmission parameters were obtained for all models, with model sums of squares ranging from $1.26\times10^{-17}$ to $1.48\times10^{-15}$ for sheep-only models; $1.43\times10^{-20}$ to $1.56\times10^{-17}$ for cow-only models; and $3.49\times10^{-13}$ to $1.42\times10^{-10}$ for mixed sheep-cattle models.
Model predictions
On a single mixed-species farm, assuming an infectious period of 4 months, a recovery period of 8 months, and a vaccine effectiveness of 80 %, sheep-only vaccination resulted in elimination of sheep brucellosis (to <0.5 % of adults seropositive due to infection as opposed to vaccination) in 16.8 years, whereas it took only 3.5 years with vaccination of both sheep and cattle (Fig. 2a).
Time to elimination of sheep brucellosis (to <0.5 % seropositive due to infection as opposed to vaccination) after mass vaccination followed by vaccination of replacements on a mixed sheep-cattle farm. The transmission model was fit to median seroprevalence on endemic farms in a randomly sampled survey in Jordan, using the SEIRS+PI structure, seasonality in the lambing period, sheep-sheep and sheep-cow transmission, and an age-structure. a. A mean immune (Recovered) period of 8 months, and an infectious period of 4 months was assumed. b. A mean immune (Recovered) period of 20 months, and an infectious period of 4 months was assumed. c. A mean immune (Recovered) period of 11.5 months, and an infectious period of 0.5 months was assumed. d. The transmission model was fit to median seroprevalence on endemic farms in a randomly sampled survey in Jordan, using the SI structure, with no seasonality or age-structure
Assuming a recovery period of 20 months, or an infectious period of 0.5 months with a recovery period of 11.5 months, had a relatively small impact on the results (Fig. 2b and c). Vaccine effectiveness had an impact on the overall times to elimination, but the difference between sheep-only and sheep-and-cattle vaccination remained large.
A simple SI model with no seasonality or age-structure was fit to the same data. The time to elimination increased greatly, however the finding that vaccination of cattle would be necessary to eliminate infection in sheep appeared to be robust (Fig. 2d).
Assuming different ratios of cows : sheep on the farm had a minor impact on the time to elimination with vaccination of both cattle and sheep (Fig. 3). Vaccination of sheep only resulted in an increase of 2.1 years to elimination, even when the ratio of cows: sheep on the farm was 0.2, and the difference increased with an increasing ratio of cows: sheep.
Time to elimination of sheep brucellosis (to <0.5 % seropositive due to infection as opposed to vaccination) after mass vaccination followed by vaccination of replacements on a mixed sheep-cattle farm assuming various ratios of cows: sheep on the farm
The fitted model was used to investigate potential changes to the production system that may affect the transmission dynamics. When the ratio of cows to sheep was reduced to 1:10, the time to elimination decreased, and the effect of vaccinating cattle was minor (Fig. 4). When the lambing season was reduced from 6 months to 1 month, transmission ceased altogether after a single epidemic peak.
Time to elimination of sheep brucellosis (to <0.5 % seropositive due to infection as opposed to vaccination) after mass vaccination followed by vaccination of replacements on a mixed sheep-cattle farm, using the fitted transmission model, assuming the ratio of cows: sheep was changed to 1:10
The model predicted that it would take several years to eliminate brucellosis on a typical mixed-species B. melitensis-endemic farm following vaccination, even for the most optimistic scenarios. Limiting the vaccination to sheep-only increased the time to elimination greatly, such that the economic payback for vaccination may be so delayed as to make the program impractical and unjustifiable.
Elimination was defined as reducing the number of seropositives due to infection to <0.5 %, as it was considered a meaningful target for a brucellosis vaccination program on an individual farm. For the final stages of eradication programs, test and slaughter is often used. One reason for this is that in practice there is currently no good way of monitoring the effectiveness of Brucella Rev-1 or S19 vaccination programs – apart from ensuring a high percentage are seropositive following vaccination. Seropositives as a result of infection cannot yet be reliably distinguished from animals vaccinated with Rev-1 or S19 vaccines [3].
According to the model assumptions, new infections would cease several months to a year before the final seropositive either died or became seronegative. Stochastic fade-out could prove to be an important factor in time to elimination of brucellosis, particularly on small farms. This could be investigated further using a stochastic model. Further, on farms of fewer than 200 animals, transmission would cease sooner than predicted, because a threshold of <0.5 % seropositives was used. Nevertheless, the key finding, that vaccination of cattle in addition to sheep can be expected to have a significant impact, would still hold on smaller farms.
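As an indication of what such a stochastic investigation could look like, the following is a minimal Gillespie-type sketch in R of a frequency-dependent susceptible-infectious caricature on a small farm; the rates are illustrative, not the fitted brucellosis parameters.

gillespie_sis <- function(N = 50, I0 = 3, beta = 2, g = 3, t_end = 20) {
  t <- 0; I <- I0
  while (t < t_end && I > 0) {
    rate_inf <- beta * I * (N - I) / N        # one susceptible becomes infectious
    rate_rec <- g * I                         # one infectious animal recovers
    total    <- rate_inf + rate_rec
    t <- t + rexp(1, rate = total)            # exponential waiting time to next event
    I <- I + if (runif(1) < rate_inf / total) 1 else -1
  }
  I == 0                                      # TRUE if the infection faded out
}
mean(replicate(500, gillespie_sis()))         # estimated fade-out probability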
There are some important assumptions that could have led to an over- or under-estimation of the transmission parameters. It was assumed that the median-seroprevalence farms exhibited endemic stability; however, if they were in fact at the beginning of an epidemic, we could have over-estimated transmission (or potentially under-estimated it, if at the end of an epidemic). However, B. melitensis has been endemic in Jordan for decades, and in the seroprevalence study seropositives were found even on farms that reported purchasing no new animals in the preceding year. There is commonly limited contact between farms in Jordan (particularly cattle-only and mixed farms; sheep-only farms to a lesser extent). These factors make it more likely that there was a state of endemic stability on the median-seroprevalence farm(s).
There is uncertainty, and probably variation, in the true infectious period and immune period for brucellosis, however when a variety of different assumptions about these parameters were made, there was a limited impact on the overall conclusions. Furthermore, considering that infectiousness is primarily related to abortion/birth events, the time from one infectious period to the next in a single animal is limited by the reproductive cycle, making the exact immune period less important in the model. In addition, the life span of the animals is relatively short, making the length of the immune period even less important.
In order to investigate the importance of the assumptions about the model structure, the work was repeated using a simple Susceptible-Infected (SI) structure, ignoring seasonal and age dependence. This resulted in very long predicted times to elimination, but the finding that vaccination of cattle was important was found to be robust.
When simulating the effectiveness of the vaccination program, it was assumed that there was no re-introduction of infection via contact with other herds or introductions of new animals. This may be an important factor in considering applying these results to a national program. The results should be interpreted as a "best-case scenario" for a single farm. Eliminating brucellosis from a region can be expected to be an even more lengthy process.
Accepting these limitations to the model, the findings suggest that the role of cattle in transmission of B. melitensis in mixed-species endemic settings cannot be ignored, and vaccination of small ruminants alone is likely to be futile in many cases, in terms of eliminating the infection. A caveat is that vaccination of small ruminants may significantly reduce infectiousness to humans, and vaccination could be justifiable as a public health measure, even if it is not effective in eliminating infection from farms [16]. However, vaccinating sheep and cows is likely to be a much more effective public health measure, in many cases.
Vaccine effectiveness was found to be critical to time to elimination. Vaccine effectiveness is a result of vaccine efficacy, the thoroughness of the vaccination program, and the degree to which the live vaccine is stored and handled properly to preserve its efficacy. The Rev-1 vaccine has been shown to have an efficacy of approximately 80 % in eliminating infectiousness, in experimental conditions. However, the efficacy decreased in subsequent pregnancies. In practice the vaccine effectiveness may therefore be much lower, although vaccine immunity could in theory be boosted by exposure to natural infection, or to the Rev-1 vaccine strain shed by replacement animals vaccinated in subsequent years. The scenarios modelled using a vaccine effectiveness of 90 % are therefore highly optimistic, particularly if applied to a regional vaccination program, which entails more logistical difficulties. The vaccine effectiveness values applied to cattle are theoretical values, as there was no validation data available on vaccines against B. melitensis in cattle.
The study suggests that the length of the lambing season may have an important impact on the dynamics of B. melitensis. Lambing seasons vary in length according to geographical location and breed, and can also be deliberately managed to produce a shorter or longer lambing season. This could have implications for the transmission of several infectious diseases of small ruminants. Transmission models of brucellosis and other diseases for which transmission is related to reproductive status should take this into consideration.
In conclusion, until now a relatively simplistic approach to brucellosis control has been taken, based on the underlying assumption that B. abortus infects cattle and B. melitensis infects small ruminants, and largely ignoring transmission between species. Although this simplification is probably justified in many settings, it may be inappropriate where mixed-species herds are common. In the absence of further data, vaccination of cattle should be considered potentially essential for control of B. melitensis in settings where mixed small ruminant and cattle flocks exist, particularly where the ratio of cows to sheep is high. Maximising vaccine coverage and vaccine efficacy is critical to the success of B. melitensis control programs. Given the long predicted time to elimination with vaccination alone, other biosecurity practices such as disinfection of calving and lambing areas may have a critical impact on the success of control. Further evidence that Brucella melitensis predominates in cattle in Jordan, as opposed to Brucella abortus, is needed in order to validate these results. The results may be applicable to other mixed-species settings with similar livestock management practices.
Ethical approval
Ethical approval for the seroprevalence study was granted by the Ethics and Welfare Committee of the Royal Veterinary College.
Dean AS, Crump L, Greter H, Schelling E, Zinsstag J. Global burden of human brucellosis: a systematic review of disease frequency. PLoS Negl Trop Dis. 2012;6:e1865.
Pappas G, Papadimitriou P, Akritidis N, Christou L, Tsianos EV. The new global map of human brucellosis. Lancet Infect Dis. 2006;6:91–9.
Corbel MJ. Brucellosis in Humans and Animals. World Health Organization: Geneva, Switzerland; 2006.
Scientific Committee on Animal Health and Animal Welfare. Brucellosis in Sheep and Goats - Report of the Scientific Committee on Animal Health and Animal Welfare. 2001. http://ec.europa.eu/food/fs/sc/scah/out59_en.pdf. Accessed 25 Jan 2016.
Aune K, Rhyan JC, Russell R, Roffe TJ, Corso B. Environmental persistence of Brucella abortus in the Greater Yellowstone Area. J Wildl Manage. 2012;76:253–61.
Hegazy YM, Moawad A, Osman S, Ridler A, Guitian J. Ruminant brucellosis in the Kafr El Sheikh Governorate of the Nile Delta, Egypt: prevalence of a neglected zoonosis. PLoS Negl Trop Dis. 2011;5:e944.
Jennings GJ, Hajjeh RA, Girgis FY, Fadeel MA, Maksoud MA, Wasfy MO, et al. Brucellosis as a cause of acute febrile illness in Egypt. Trans R Soc Trop Med Hyg. 2007;101:707–13.
Musallam II, Abo-Shehada M, Omar M, Guitian J. Cross-sectional study of brucellosis in Jordan: Prevalence, risk factors and spatial distribution in small ruminants and cattle. Prev Vet Med. 2015;118:387–396.
Verger JM, Garin-Bastuji B, Grayon M, Mahé AM. Bovine brucellosis caused by Brucella melitensis in France. Ann Rech Vet. 1989;20:93–102.
Kasymbekov J, Imanseitov J, Ballif M, Schürch N, Paniga S, Pilo P, et al. Molecular epidemiology and antibiotic susceptibility of livestock Brucella melitensis isolates from Naryn Oblast, Kyrgyzstan. PLoS Negl Trop Dis. 2013;7:e2047.
Zinsstag J, Roth F, Orkhon D, Chimed-Ochir G, Nansalmaa M, Kolar J, et al. A model of animal-human brucellosis transmission in Mongolia. Prev Vet Med. 2005;69:77–95.
Hegazy YM, Ridler AL, Guitian FJ. Assessment and simulation of the implementation of brucellosis control programme in an endemic area of the Middle East. Epidemiol Infect. 2009;137:1436–48.
Oseguera Montiel D, Bruce M, Frankena K, Udo H, van der Zijpp A, Rushton J. Financial analysis of brucellosis control for small-scale goat farming in the Bajío region, Mexico. Prev Vet Med. 2015;118:247–59.
Anon. Scientific Report on Performances of Brucellosis Diagnostic Methods for Bovines, Sheep, and Goats. EFSA J. 2006;432(December):1–44.
R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2015.
Minas A, Minas M, Stournara A, Tselepidis S. The "effects" of Rev-1 vaccination of sheep and goats on human brucellosis in Greece. Prev Vet Med. 2004;64:41–7.
Talafha AQ, Ababneh MM. Awassi sheep reproduction and milk production: Review. Trop Anim Health Prod. 2011;43:1319–26.
Lafi SQ, Al-Rawashdeh OF, Hailat NQ, Fathalla MAR. Reproductive and production performance of Friesian dairy cattle in Jordan. Prev Vet Med. 1995;22:227–34.
Ovine and caprine brucellosis: Brucella melitensis. [http://www.cfsph.iastate.edu/Factsheets/pdfs/brucellosis_melitensis.pdf]. Accessed 25 Jan 2016.
Veterinary Epidemiology, Economics and Public Health Group, The Royal Veterinary College, Hawkshead Lane, North Mymms, Hatfield, AL9 7TA, UK
Wendy Beauvais, Imadidden Musallam & Javier Guitian
London Centre for Neglected Tropical Disease Research, London, UK
Correspondence to Wendy Beauvais.
WB and JG conceived of the study. WB designed and implemented the model, and drafted the manuscript. IM carried out the fieldwork, contributed to discussions about the model structure and model parameters and assisted with editing of the manuscript. JG advised on the study design and implementation, and edited the manuscript. All authors read and approved the final manuscript.
The differential equations used for the model are as follows:
Young animals:
$$
\begin{aligned}
dS^{y}_{i,s}/dt &= (1-\mathrm{VE})\,m_{i,s}\left[1-\frac{p\,E^{a}_{i,s}}{S^{a}_{i,s}+E^{a}_{i,s}+PI^{a}_{i,s}+V^{a}_{i,s}}\right] - m1_{i,s}\,S^{y}_{i,s}\\
dPI^{y}_{i,s}/dt &= m_{i,s}\,\frac{p\,E^{a}_{i,s}}{S^{a}_{i,s}+E^{a}_{i,s}+PI^{a}_{i,s}+V^{a}_{i,s}} - m1_{i,s}\,PI^{y}_{i,s}\\
dV^{y}_{i,s}/dt &= \mathrm{VE}\,m_{i,s}\left[1-\frac{p\,E^{a}_{i,s}}{S^{a}_{i,s}+E^{a}_{i,s}+PI^{a}_{i,s}+V^{a}_{i,s}}\right] - m1_{i,s}\,V^{y}_{i,s}
\end{aligned}
$$
Juvenile animals:
$$
\begin{aligned}
dS^{juv}_{i,s}/dt &= m1_{i,s}\,S^{y}_{i,s} - m2_{i,s}\,S^{juv}_{i,s} - \left(\beta_{i,i}\,I^{a}_{i,s} + \beta_{i,j}\,I^{a}_{j,s}\right)S^{juv}_{i,s}/N\\
dE^{juv}_{i,s}/dt &= \left(\beta_{i,i}\,I^{a}_{i,s} + \beta_{i,j}\,I^{a}_{j,s}\right)S^{juv}_{i,s}/N - m2_{i,s}\,E^{juv}_{i,s}\\
dPI^{juv}_{i,s}/dt &= m1_{i,s}\,PI^{y}_{i,s} - m2_{i,s}\,PI^{juv}_{i,s}\\
dV^{juv}_{i,s}/dt &= m1_{i,s}\,V^{y}_{i,s} - m2_{i,s}\,V^{juv}_{i,s}
\end{aligned}
$$
Adult animals:
$$
\begin{aligned}
dS^{a}_{i,s}/dt &= m2_{i,s}\,S^{juv}_{i,s} - m_{i,s}\,S^{a}_{i,s} - \left(\beta_{i,i}\,I^{a}_{i,s} + \beta_{i,j}\,I^{a}_{j,s}\right)S^{a}_{i,s}/N + z\,R^{a}_{i,s}\\
dE^{a}_{i,s}/dt &= m2_{i,s}\,E^{juv}_{i,s} - m_{i,s}\,E^{a}_{i,s} + \left(\beta_{i,i}\,I^{a}_{i,s} + \beta_{i,j}\,I^{a}_{j,s}\right)S^{a}_{i,s}/N - v\,E^{a}_{i,s}\\
dI^{a}_{i,s}/dt &= v\,E^{a}_{i,s} + v\,PI^{a}_{i,s} - g\,I^{a}_{i,s} - m_{i,s}\,I^{a}_{i,s}\\
dR^{a}_{i,s}/dt &= g\,I^{a}_{i,s} - m_{i,s}\,R^{a}_{i,s} - z\,R^{a}_{i,s}\\
dPI^{a}_{i,s}/dt &= m2_{i,s}\,PI^{juv}_{i,s} - m_{i,s}\,PI^{a}_{i,s} - v\,PI^{a}_{i,s}\\
dV^{a}_{i,s}/dt &= m2_{i,s}\,V^{juv}_{i,s} - m_{i,s}\,V^{a}_{i,s}
\end{aligned}
$$
$$ N = \sum \left(S^{y}_{s} + PI^{y}_{s} + V^{y}_{s} + S^{juv}_{s} + E^{juv}_{s} + PI^{juv}_{s} + V^{juv}_{s} + S^{a}_{s} + E^{a}_{s} + I^{a}_{s} + R^{a}_{s} + PI^{a}_{s} + V^{a}_{s}\right) $$
s denotes season
i denotes species
y denotes young
juv denotes juvenile
a denotes adult
NI represents young animals that are not persistently infected
VE represents vaccination effectiveness
m represents the death and birth rate
p represents the probability that a newborn is persistently infected given that the mother is seropositive
E represents animals in the exposed (pre-infectious) compartment
S represents susceptibles
PI represents persistently infected animals
V represents vaccinated animals
m1 represents the annual rate at which animals mature from "young" to "juvenile"
m2 represents the annual rate at which animals mature from "juvenile" to "adult"
betai,i represents within-species transmission and betai,j represents between-species transmission coefficient.
I represents infectious animals
N represents the total population
z represents the rate at which animals lose protective immunity
R represents immune animals
v represents the annual rate at which Exposed or Persistently Infected adults have a late abortion/give birth and become Infectious
g represents the annual rate of loss of infectiousness (the rate of moving from Infectious to Recovered)
|
CommonCrawl
|
February 2016, 36(2): 917-939. doi: 10.3934/dcds.2016.36.917
Infinitely many positive and sign-changing solutions for nonlinear fractional scalar field equations
Wei Long 1, Shuangjie Peng 1 and Jing Yang 2
School of Mathematics and Statistics, Central China Normal University, Wuhan, 430079, China
Department of Mathematics, Huazhong Normal University, Wuhan, 430079
Received March 2014 Revised January 2015 Published August 2015
We consider the following nonlinear fractional scalar field equation $$ (-\Delta)^s u + u = K(|x|)u^p,\ \ u > 0 \ \ \hbox{in}\ \ \mathbb{R}^N, $$ where $K(|x|)$ is a positive radial function, $N\ge 2$, $0 < s < 1$, and $1 < p < \frac{N+2s}{N-2s}$. Under various asymptotic assumptions on $K(x)$ at infinity, we show that this problem has infinitely many non-radial positive solutions and sign-changing solutions, whose energy can be made arbitrarily large.
Keywords: Fractional Laplacian, reduction method, nonlinear scalar field equation.
Mathematics Subject Classification: 35J20, 35J6.
Citation: Wei Long, Shuangjie Peng, Jing Yang. Infinitely many positive and sign-changing solutions for nonlinear fractional scalar field equations. Discrete & Continuous Dynamical Systems, 2016, 36 (2) : 917-939. doi: 10.3934/dcds.2016.36.917
|
CommonCrawl
|
Modeling the transmission dynamics and the impact of the control interventions for the COVID-19 epidemic outbreak
Fernando Saldaña 1,
Hugo Flores-Arguedas 1,
José Ariel Camacho-Gutiérrez 2,
Ignacio Barradas 1
1 Centro de Investigación en Matemáticas, 36023 Guanajuato, Guanajuato, Mexico
2 Facultad de Ciencias, Universidad Autónoma de Baja California, 22860 Baja California, Mexico
Received: 21 April 2020 Accepted: 11 June 2020 Published: 15 June 2020
In this paper we develop a compartmental epidemic model to study the transmission dynamics of the COVID-19 epidemic outbreak, with Mexico as a practical example. In particular, we evaluate the theoretical impact of plausible control interventions such as home quarantine, social distancing, cautious behavior and other self-imposed measures. We also investigate the impact of environmental cleaning and disinfection, and of government-imposed isolation of infected individuals. We use a Bayesian approach and officially published data to estimate some of the model parameters, including the basic reproduction number. Our findings suggest that social distancing and quarantine are the winning strategies to reduce the impact of the outbreak. Environmental cleaning can also be relevant, but the cost and effort required to lower the peak of the outbreak suggest that its cost-effectiveness is low.
epidemic model,
control strategies,
parameter estimation
Citation: Fernando Saldaña, Hugo Flores-Arguedas, José Ariel Camacho-Gutiérrez, Ignacio Barradas. Modeling the transmission dynamics and the impact of the control interventions for the COVID-19 epidemic outbreak[J]. Mathematical Biosciences and Engineering, 2020, 17(4): 4165-4183. doi: 10.3934/mbe.2020231
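As a complement to the abstract above, the following is a minimal sketch, not the authors' exact compartmental model: an SEIR-type system in which a hypothetical parameter theta scales down transmission to mimic cautious behavior. All parameter values and names are illustrative assumptions.

## Minimal SEIR-type sketch with a cautious-behavior factor theta
## (illustrative only; not the model of Saldaña et al.)
library(deSolve)

seir <- function(t, y, p) {
  with(as.list(c(y, p)), {
    lambda <- (1 - theta) * beta * I / N  # force of infection, damped by theta
    dS <- -lambda * S
    dE <-  lambda * S - sigma * E         # sigma: 1 / incubation period
    dI <-  sigma * E - gamma * I          # gamma: 1 / infectious period
    dR <-  gamma * I
    list(c(dS, dE, dI, dR))
  })
}

p   <- c(beta = 0.5, sigma = 1/5, gamma = 1/7, theta = 0.3, N = 1e6)
y0  <- c(S = 1e6 - 10, E = 0, I = 10, R = 0)
out <- ode(y = y0, times = 0:180, func = seir, parms = p)  # daily trajectories

Setting theta = 0 recovers the uncontrolled epidemic, so increasing theta qualitatively reproduces the flattening of the epidemic curve explored for the control parameter $ \theta $ in Figure 4 below.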
Figure 1. (a) Data per date and fitted curves for the cumulative infected individuals for the MAP estimate and posterior mean. (b) Estimation of $ \mathcal{R}_{0} $ for the samples of the MCMC. The value of $ \mathcal{R}_{0} $ for the MAP estimate is $ 2.5 $ and for the posterior mean estimate is $ 2.7 $
Figure 2. (a) Infectious symptomatic individuals $ I(t) $ corresponding to the MAP (red) and the posterior mean estimates (blue). (b) Red dots show the data of cumulative confirmed cases of COVID-19 in Mexico from March 11, 2020, to March 25, 2020. The gray area shows the uncertainty with the last 25,000 samples of the chain. The green dots represent data from March 26 to March 31 that were not used in the inference
Figure 3. (a) Posterior predictive marginal for the total cumulative infections on March 31. (b) Red dots show the data of cumulative confirmed cases of COVID-19 in Mexico from March 11, 2020, to March 25, 2020, used for the inference. In black we present our predicted values, in green the data from March 26 to April 9 not used in the inference, and the dashed lines show the interval with 98 percent of the mass for the predictive marginal
Figure 4. Dynamics of the symptomatic infected and diagnosed classes $ I+D $ under the control measure which represents cautious behavior of susceptible individuals. Dashed lines represent hypothetical health-care system capacity. (a) We investigate three possible initial times for the application of the control intervention: $ t = 1 $ (blue), $ t = 15 $ (orange), and $ t = 30 $ (green) with $ \alpha = 0.01 $, and $ \theta = 0.3 $. (b) We explore different values for the control parameter $ \theta $, for all values the initial application time is $ t = 1 $ and $ \alpha = 0.01 $
Figure 5. Dynamics of the symptomatic infected and diagnosed classes $ I+D $ under the control measure which represents isolation of infected individuals. Dashed lines represent hypothetical health-care system capacity. (a) We investigate three possible initial times for the application of the control intervention: $ t = 1 $ (blue), $ t = 15 $ (orange), and $ t = 30 $ (green) with $ d_{2} = 0.2 $, and $ d_{1} = 0.02 $. (b) We explore different values for the control parameters $ d_1 $ and $ d_2 $ with $ 10 d_{1} = d_{2} $, for all values the initial application time is $ t = 1 $
Figure 6. Dynamics of the symptomatic infected and diagnosed classes $ I+D $ under the control measure which represents environmental cleaning and disinfection. Dashed lines represent hypothetical health-care system capacity. (a) We investigate three possible initial times for the application of the control intervention: $ t = 1 $ (blue), $ t = 15 $ (orange), and $ t = 30 $ (green) with $ m = 15 $. (b) We explore different values for the control parameter $ m $; for all values the initial application time is $ t = 1 $
Figure 7. Dynamics of the symptomatic infected and diagnosed classes $ I+D $ under the application of the three control interventions. Dashed lines represent hypothetical health-care system capacity. (a) We investigate three possible quarantine durations: one month (blue), two months (orange), and three months (green). (b) We explore how the periodic application of the control interventions affects the epidemic curve
Figure 8. First ($ S1 $) and total order ($ ST $) Sobol indices of the cumulative cases $ C(t_i; {\bf{x}}) $ built from the solution $ I $ of model (4.1) with respect to the parameters $ {\bf{x}} = (\beta_{A}, \beta_{I}, \beta_{V}, c_1, c_2) $. We performed this analysis for several time values $ t_i $ and found that the results do not depend on $ t_i $. The indices for the variables $ c_{i} $ ($ i = 1, 2 $) are null
Figure 9. Posterior distributions for the parameters: (a) $ \beta_A $, (b) $ \beta_I $, (c) $ \beta_V $, (d) $ c_1 $, (e) $ c_2 $. The parameters $ \beta_A $ and $ c_1 $ are not well informed by the data; their posterior distributions coincide with their prior distributions
Figure 10. Posterior predictive marginals for the total cumulative infections for several dates
The association between ambient temperature and mortality of the coronavirus disease 2019 (COVID-19) in Wuhan, China: a time-series analysis
Gaopei Zhu ORCID: orcid.org/0000-0002-2085-9336 1,
Yuhang Zhu ORCID: orcid.org/0000-0002-9236-2812 1,2,
Zhongli Wang 3,
Weijing Meng 4,
Xiaoxuan Wang 1,
Jianing Feng 1,
Juan Li ORCID: orcid.org/0000-0001-8710-4854 1,
Yufei Xiao 1,
Fuyan Shi 1 &
Suzhen Wang ORCID: orcid.org/0000-0003-2076-529X 1
BMC Public Health volume 21, Article number: 117 (2021) Cite this article
COVID-19 has caused a sizeable global outbreak and has been declared a public health emergency of international concern. Substantial evidence links ambient temperature to respiratory infectious diseases. The objective of this study was to describe the exposure-response relationship between ambient temperature, including extreme temperatures, and COVID-19 mortality.
A Poisson distributed lag non-linear model (DLNM) was constructed to evaluate the non-linear delayed effects of ambient temperature on death, using daily counts of new COVID-19 deaths and ambient temperature data from January 10 to March 31, 2020, in Wuhan, China.
During this period, the average daily number of COVID-19 deaths was approximately 45.2. The Poisson distributed lag non-linear model showed a non-linear (U-shaped) relationship between ambient temperature and mortality. With confounding factors controlled, the daily cumulative relative death risk decreased by 12.3% (95% CI [3.4, 20.4%]) for every 1.0 °C increase in temperature. Moreover, the delayed effects of low temperature were acute and short-term, with the largest risk occurring 5–7 days after exposure. The delayed effects of high temperature appeared quickly, decreased rapidly, and then increased sharply about 15 days after exposure, manifesting as acute and long-term effects. Sensitivity analysis demonstrated that the results were robust.
The relationship between ambient temperature and COVID-19 mortality was non-linear. There was a negative correlation between the cumulative relative risk of death and temperature. Additionally, exposure to high and low temperatures had divergent impacts on mortality.
Infectious disease is an old and heavy term. From smallpox, plague, cholera and malaria in the early days of human civilization to Ebola hemorrhagic fever (EHF) and the Acquired Immune Deficiency Syndrome (AIDS) in the 1970s and 1980s, infectious diseases have caused a large number of deaths, disabilities and economic losses [1, 2]. It can be said that the history of human development is a history of humans fighting infectious diseases [3]. Since the twenty-first century, viral respiratory infections, especially coronavirus-associated pneumonia, have become a severe public health crisis [4]. The Severe Acute Respiratory Syndrome outbreak in 2002 [5, 6] and the Middle East Respiratory Syndrome outbreak in 2012 are typical of these diseases [7, 8]. More recently, COVID-19 has shown that the emergence of a new and dangerous infectious disease can monopolize governmental activities, cause fear and hysteria, and have a significant impact on the daily life of people throughout the world [9].
COVID-19 attracted attention after reports of unexplained pneumonia in Wuhan, China [10, 11]. It is caused by SARS-CoV-2 infection [12] and subsequently spread to many other parts of the world through global travel. Outbreaks have since occurred in South Korea, Italy, the United States of America and other countries, and COVID-19 has been declared a global pandemic [13]. According to incomplete statistics, as of April 30, approximately 3.2 million cases had been confirmed worldwide, with approximately 224,000 deaths, and both numbers continue to increase. The intermittent emergence and outbreaks of coronaviruses remind us that they pose a severe threat to global health [14].
This epidemic recalls the public health crisis that was also caused by a coronavirus seventeen years ago, and there is clear evidence that the characteristics of this outbreak are similar to those of the 2002 SARS epidemic [15]. According to previous research, age, underlying health conditions and environment were the major factors determining the spread and fatality rate of SARS [5, 16], so we can hypothesize that the same factors are closely related to COVID-19. Encouragingly, some recent prospective studies [17,18,19] have examined the association between such factors (age, baseline health status, and transmission speed) and COVID-19 mortality. However, to our best knowledge, the relationship between environmental factors, including meteorological factors, and the death risk of COVID-19 patients remains largely unknown and needs further investigation. Recently, one study described the relationship between meteorological factors and the COVID-19 death toll, but it modeled only a linear lag of temperature [20]. This linear delayed-effect assumption seems to contradict earlier findings, since many previous studies have confirmed a non-linear delayed effect of temperature on death [21,22,23,24,25,26]. Furthermore, there is methodological evidence that using a generalized additive model (GAM) with a linear lag structure while ignoring non-linear delayed effects is unwise, because it can conceal the real relationship between environmental factors and death [27]. In other words, if the linear assumption is not met, a linear model may not reliably estimate the genuine relationship between temperature and death.
In fact, relevant studies have adopted a more appropriate model, the distributed lag non-linear model (DLNM), to deal with this situation [28]. This model is therefore well suited to analyzing the non-linear delayed effects of temperature on COVID-19 mortality. Besides, there is robust evidence that extreme temperatures need to be taken into account when studying the relationship between average temperature and death, as they may have unexpected effects on mortality [29, 30]. Analyzing the temperature-specific effects of extreme temperatures on COVID-19 mortality is therefore a reasonable choice.
Therefore, a time-series study based on the distributed lag non-linear model was conducted to examine the influence of ambient temperature on mortality in COVID-19 patients, capturing the delayed effects of temperature and identifying extreme-temperature mortality risks. Additionally, the overall cumulative exposure-response relationship between ambient temperature and COVID-19 death, including delayed effects, was analyzed.
Figure 1 shows the geographic position of Wuhan in the east of Hubei Province, where the Yangtze River joins its largest tributary, the Han River. Wuhan covers an area of about 8569.15 km2, and its registered population was 11.212 million in 2019. The city lies between latitude 29°58′–31°22′N and longitude 113°41′–115°05′E and has a subtropical monsoon humid climate, with an annual average temperature of 15.8–17.5 °C and an average yearly rainfall of 1150–1450 mm. It has four distinct seasons, with cold, wet winters and hot, humid summers. Wuhan is also an important science and education base and a transportation hub (http://www.wh.gov.cn/zjwh/).
Location of Wuhan in Hubei Province, China. The green area indicates the location of Wuhan City, situated in the east of Hubei Province, People's Republic of China. The map in Fig. 1 was drawn with the open-access maps, maptools, and mapproj packages in R 3.5.3
We collected the data on the number of daily new COVID-19 deaths, ambient temperature, humidity, air quality index (AQI), migration scale index (MSI) and urban travel index (UTI) from January 10 to March 31, 2020, in Wuhan.
The urban MSI indicates the status of population mobility and reflects the scale of migration from a city per unit time [31]. The UTI is another travel index, which measures the population density of inner-city travel. The MSI and UTI data were obtained from the Baidu map migration platform of the People's Republic of China (https://qianxi.baidu.com/).
The daily COVID-19 death toll was obtained from the websites of the National Health Commission of the People's Republic of China. The daily average temperature and humidity data were obtained from the Meteorological science data sharing service of the People's Republic of China (http://data.cma.cn/site/index.html) and AQI data from air quality monitoring analysis online platform of the People's Republic of China (https://www.aqistudy.cn/historydata/).
The semi-parametric generalized additive model has been widely used to assess the relationship between environmental exposures and death [32,33,34]. The latency period of COVID-19 and the time to admission were also taken into account in the model: the average incubation period is 5.2 days (range: 2–7 days) [10, 35], and the median time to admission is about 10 days [36]. Because these delays jointly shape the lagged relationship between temperature and death [37], the temperature lag period in this study was set to 15 days.
Relative to the total population, daily COVID-19 deaths are a small-probability event and can be assumed to follow a Poisson distribution [38]. The influence of air temperature on health usually has a delayed and non-linear effect [26, 39, 40]. In this study, a Poisson distribution with a log link was used, with the generalized additive model (GAM) as the core model, and the distributed lag non-linear model (DLNM) was applied to the time-series data to estimate the influence of temperature on COVID-19 deaths and its delayed effect. Temperature was included as a cross-basis to estimate its impact along both the exposure level and the lag dimension. Meanwhile, to balance the influence of other factors, relative humidity and AQI were incorporated into the model through natural cubic spline functions. The final model was:
$$ \log\left[E(y_t)\right] = \alpha + \beta\,\mathrm{Temperature}_{t,l} + \mathrm{NS}(\mathrm{Humidity}_t, df) + \mathrm{NS}(\mathrm{AQI}_t, df) + \mathrm{NS}(\mathrm{time}, df) + \mathrm{NS}(\mathrm{MSI}_t, df) + \mathrm{NS}(\mathrm{UTI}_t, df) $$
Here $ y_t $ is the number of COVID-19 deaths on day $ t $, assumed to follow a Poisson distribution with mean $ E(y_t) $; $ \alpha $ is the intercept; $ \mathrm{Temperature}_{t,l} $ is the cross-basis of temperature and lag, with coefficient $ \beta $; and NS is the natural spline function. $ \mathrm{Humidity}_t $, $ \mathrm{AQI}_t $, $ \mathrm{MSI}_t $ and $ \mathrm{UTI}_t $ are the relative humidity, air quality index, migration scale index and urban travel index on day $ t $, included as adjustment variables. $ l $ and $ df $ denote the lag days and degrees of freedom, respectively, and time is the date of day $ t $.
Sensitivity analyses were performed to assess the robustness of the model. First, we assessed cumulative exposure using the mean temperatures over lags of 0–14, 0–15, and 0–16 days, respectively. Second, we applied different degrees of freedom (6–8) to time to adjust for unmeasured time-varying confounding. Finally, the robustness of the results was evaluated by removing the daily average AQI or the daily average relative humidity, respectively.
All tests were two-sided, and values of p < 0.05 were considered statistically significant. All statistical analyses and plotting were conducted in the free software environment R (version 3.5.3, R Development Core Team, March 2020). Specifically, we used the packages 'mgcv' and 'dlnm' to estimate the non-linear delayed effects; the 'dlnm' package was also used to construct the cross-basis matrix for mortality and temperature. All packages used are publicly available on the Comprehensive R Archive Network (CRAN) (https://cran.r-project.org/).
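As a concrete illustration, a minimal sketch of this specification follows, using the 'dlnm', 'mgcv' and 'splines' packages named above. The data frame 'dat', its column names (temp, deaths, humidity, aqi, msi, uti, and a numeric day index 'day'), and the spline degrees of freedom for the exposure and adjustment terms are assumptions, not the authors' released code; only the 7 df for the time trend matches the value reported in the Results.

## Sketch of the DLNM specification above (hypothetical data frame 'dat')
library(dlnm)    # crossbasis(), crosspred()
library(mgcv)    # gam()
library(splines) # ns()

# Cross-basis for temperature: non-linear exposure-response over lags 0-15
cb.temp <- crossbasis(dat$temp, lag = 15,
                      argvar = list(fun = "ns", df = 4),
                      arglag = list(fun = "ns", df = 4))

# Poisson GAM mirroring the model formula (adjustment df values are assumed)
fit <- gam(deaths ~ cb.temp + ns(humidity, df = 3) + ns(aqi, df = 3) +
             ns(day, df = 7) + ns(msi, df = 3) + ns(uti, df = 3),
           family = poisson(), data = dat)

# Lag-cumulative predictions centered at the mean temperature (9.0 C above)
pred <- crosspred(cb.temp, fit, cen = 9.0, by = 1)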
Descriptive analysis
By March 31, 2020, a total of 50,007 cases and 2553 deaths had been reported in Wuhan, accounting for 73.75% of the cumulative COVID-19 deaths in China; the case fatality rate was 5.10%. Table 1 summarizes the daily number of deaths, AQI, and relative humidity in Wuhan from January 10 to March 31, 2020. The maximum daily number of COVID-19 deaths was 216 and the minimum was 0. The daily average temperature was 9.0 °C, and the maximum temperature was 20.6 °C. Figure 2 shows the daily distribution of the number of deaths and the mean temperature. Over this period the temperature gradually increased, while the daily number of COVID-19 deaths first increased and then decreased.
Table 1 Statistics of daily death cases and mean temperature in Wuhan
The daily distribution of daily death count and mean temperature in Wuhan from 10 January 2020 to 31 March 2020
Association of temperature lag and COVID-19 mortality
Using the Poisson generalized additive model for time-series analysis, the association between temperature and the log of daily COVID-19 mortality over lags of 0–15 days was visualized (Fig. 3a). The relationship is U-shaped, and the delayed effect is non-linear. Compared with the average temperature, mortality is higher at lower temperatures. As the ambient temperature increases, the log-mortality attributable to temperature initially decreases rapidly and then slowly increases.
Temperature-mortality relationships (a) and cumulative death RR for daily mean temperature at lags 0–15 days (b)
Figure 3b displays the overall association between the cumulative relative risk of COVID-19 death and temperature, which is L-shaped. Temperature was significantly and negatively associated with the daily risk of COVID-19 death: a 1.0 °C increase in temperature was associated with a 12.3% (95% CI [3.4, 20.4%]) reduction in the daily cumulative relative risk of death, and the relative risk approached its minimum as the temperature approached 20.0 °C. Overall, the cumulative relative death risk of COVID-19 decreased with increasing temperature.
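For readers unfamiliar with log-linear risk models, the reported percentage maps onto the model coefficient as follows; this is illustrative arithmetic only, not the authors' code.

## From a 12.3% risk reduction per 1.0 C to the log-scale coefficient
beta <- log(1 - 0.123)  # approx -0.1313 per 1.0 C
exp(beta)               # 0.877: cumulative RR for a 1.0 C rise
exp(5 * beta)           # approx 0.519: implied RR for a 5.0 C rise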
Figure 4 shows the general pattern of the relative death risk as a function of temperature and lag, as a three-dimensional plot of relative risk over temperature and 15 lag days. Overall, the effect of temperature on the daily mortality risk of COVID-19 was non-linear, with higher temperatures leading to lower relative risk. Figure 5 shows the relative mortality risk at specific lags (0, 5, 10, 15 days) and specific temperatures (− 5.0, 2.0, 10.0, 20.0 °C). The death risk at low temperature presented acute and short-term effects, first strong and then weak, with the greatest risk occurring 5–7 days after exposure. The delayed effects of high temperature appeared quickly, decreased rapidly, and then increased sharply about 15 days after exposure, so the mortality risk at high temperature presented as acute and longer-term effects. Low temperatures also had a shorter-lasting impact on the mortality risk of COVID-19 than high temperatures.
Relative risks of mortality by daily mean temperature along 15 lag days
The relative risk of mortality by daily mean temperature at a specific lag day (0, 5, 10, 15 days) and temperatures (− 5.0, 2.0, 10.0, 25.0 °C)
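Figures of this kind can be produced directly from a fitted cross-basis. A minimal sketch, reusing the hypothetical 'pred' object from the model sketch in the Methods:

## Surface and slice plots of the temperature-lag-RR relationship
plot(pred, ptype = "3d")                              # RR surface, cf. Figure 4
plot(pred, ptype = "slices", var = c(-5, 2, 10, 20))  # lag curves at fixed temperatures
plot(pred, ptype = "slices", lag = c(0, 5, 10, 15))   # temperature curves at fixed lags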
Poisson generalized additive model
Temperature, humidity, AQI, MSI, and UTI were incorporated into the final model. For temperature, a distributed lag non-linear structure was used, with the non-linear lag period set to 15 days and 7 degrees of freedom for the long-term time trend. After adjusting for humidity, AQI, MSI, and UTI, the relative risk of COVID-19 death decreased by 5.4% (95% CI [3.4, 6.9%]) for every 1 °C rise in average temperature (Table 2). Humidity, AQI, and MSI had no significant effect on COVID-19 deaths. In addition, for each 1-unit increase in UTI, the relative risk of COVID-19 death nearly doubled: 1.959 (95% CI [1.009, 3.804]).
Table 2 The effect of a one-unit increase in average temperature, relative humility, AQI, MSI and UTI on daily death cases of COVID-19
Sensitivity analyses
Changing the time degrees of freedom (6–8) controlled for long-term trends and seasonality, and adjusted models were obtained by excluding relative humidity or AQI in turn. Table 3 shows that under temperature lags of 0–14, 0–15, and 0–16 days, the average cumulative death effects (RR [95% CI]) of COVID-19 did not change significantly; the same held for the single-lag effects at lag 14, 15, and 16 days. In conclusion, the model applied in this study was robust.
Table 3 Sensitivity analysis death RR [95% CI] of COVID-19 caused by temperature in Wuhan
There is no doubt that the outbreak of COVID-19 has caused enormous economic losses and health burden around the world. Under such circumstances, independent and robust scientific evidence provides a powerful weapon to deal with this crisis. We believe it is important to clarify the relationship between ambient temperature and COVID-19 mortality, not only in Wuhan but also in other epidemic areas of the world. In this study, we used a rigorous mathematical model to reveal the relationship between temperature and death caused by COVID-19, building on the association between temperature and death already established for non-communicable diseases [40]. We hope the results can provide methodological guidance for the response to this crisis.
The DLNM proved to be a useful tool in this study for assessing the non-linear relationship between daily ambient temperature and COVID-19 mortality, including the non-linear associations and the cumulative relative death risks over lag days. The model revealed a non-linear, negative correlation between ambient temperature and COVID-19 mortality [26, 40, 41]: increasing temperature reduced the death risk of patients, and the temperature-mortality relationship was U-shaped.
Our study found that the effect of low temperature on the death risk of COVID-19 differed from that of high temperature. The low-temperature effect first strengthens and then weakens, and with increasing outdoor temperature the death risk of COVID-19 decreases. Higher temperature may reduce the lethal intensity of COVID-19, consistent with increased virus inactivation at high temperature [36, 42]. When the ambient temperature rose to around 10.0 °C and continued to rise, the death risk of COVID-19 gradually decreased and then increased, in line with findings for non-communicable diseases [39]: beyond this inflection point, the death risk from comorbidities such as AIDS, diabetes, and hypertension may also increase [23], potentially increasing the death risk of patients with COVID-19. Besides, the low-temperature effects are acute and short-term [43], with the largest risk occurring 5–7 days after exposure, while high temperature mainly shows an acute effect, with the maximum effect on the day of exposure, similar to other studies [44, 45].
The finding that low temperature has a greater impact on the death risk of COVID-19 than high temperature is consistent with a meta-analysis [46]. At low temperatures, deaths from respiratory illnesses are strongly affected: exposure to low temperature can cause cardiovascular stress, mediated by factors such as peripheral vasoconstriction, plasma cholesterol, plasma fibrinogen, red blood cell count, blood viscosity, and inflammatory responses [47, 48]. Together these factors lead to respiratory distress and thus contribute to the deterioration of COVID-19 patients. At high temperatures, the number of patients dying from chronic non-communicable diseases increases, which creates a potential competing risk and leads to a gentler change in the number of COVID-19 deaths directly attributable to temperature [40, 49].
Overall, temperature was negatively correlated with the cumulative effect on COVID-19 deaths [24]. At low temperatures, the cumulative death risk of COVID-19 was higher; as the daily average temperature increased, the delayed effects of temperature exposure decreased rapidly and became protective. This indicates that the death risk of COVID-19 patients gradually decreases as the ambient temperature rises, and with the advent of summer the COVID-19 patient population may benefit from the high-temperature effect.
Sensitivity analysis showed that the results of this study were robust. First, the distributed lag non-linear method can flexibly capture the possible relationship between temperature changes and daily mortality, as well as cumulative delayed effects; although the model involves many parameters, our sensitivity analysis shows the results are robust [49]. Second, we adjusted for a group of potential confounding factors, including relative humidity and AQI, and compared the model results after excluding relative humidity or AQI. Overall, the results were relatively robust.
Some limitations should be considered in interpreting our findings. First, this is an ecological study, and environmental monitoring data may not accurately reflect actual personal exposure. Second, COVID-19 patients generally received isolation treatment in designated hospitals and lived in enclosed spaces, so the relationship between ambient temperature and death may differ from that with indoor temperature. Third, this study did not adjust for social and demographic factors such as age and economic status, which may affect population structure and mortality [9]. Fourth, information on underlying diseases such as diabetes, hypertension, and AIDS is not available on the websites of the National Health Commission of the People's Republic of China, which may bias our results. Finally, the clinical diagnosis and treatment guidelines for COVID-19 were continuously updated during the study period, and the impact of these changes was not considered.
Despite these limitations, this study found a non-linear negative correlation between ambient temperature and death in COVID-19 patients, and made clear that low temperature can increase the risk of death while high temperature acts in the opposite direction. However, high temperatures may increase the risk of death from other complications, which is worthy of further study. Altogether, this study may provide a useful reference for setting the clinical isolation and treatment environment for COVID-19.
The datasets analyzed in this study are publicly available. The daily COVID-19 death toll was obtained from the websites of the National Health Commission of the People's Republic of China (http://www.nhc.gov.cn/). The daily average temperature and humidity data were obtained from the Meteorological science data sharing service of China (http://data.cma.cn/site/index.html), and AQI data from the air quality monitoring analysis online platform of China (https://www.aqistudy.cn/historydata/). The MSI and UTI data were obtained from the Baidu map migration platform of the People's Republic of China (https://qianxi.baidu.com/).
DLNM: Distributed lag non-linear model
EHF: Ebola hemorrhagic fever
AIDS: Acquired Immune Deficiency Syndrome
AQI: Air quality index
SD: Standard deviation
Min: The minimum value
Max: The maximum value
P(25): Lower quartile
P(75): Upper quartile
RR: Relative risk
Jones KE, Patel NG, Levy MA, Storeygard A, Balk D, Gittleman JL, et al. Global trends in emerging infectious diseases. Nature. 2008;451(7181):990–3. https://doi.org/10.1038/nature06536.
Malvy D, McElroy AK, de Clerck H, Günther S, van Griensven J. Ebola virus disease. Lancet. 2019;393(10174):936–48. https://doi.org/10.1016/S0140-6736(18)33132-5.
Brachman PS. Infectious diseases — past, present, and future. Int J Epidemiol. 2003;32(5):684–6. https://doi.org/10.1093/ije/dyg282.
Farrar J. Science, innovation and society: what we need to prepare for the health challenges of the twenty-first century? Int Health. 2019;11(5):317–20. https://doi.org/10.1093/inthealth/ihz047.
WHO. Update 49—SARS case fatality ratio, incubation period. 2003. https://www.who.int/csr/sarsarchive/2003_05_07a/en/. Accessed 5 April 2020.
Sajadi MM, Habibzadeh P, Vintzileos A, Shokouhi S, Miralles-Wilhelm F, Amoroso A. Temperature, humidity, and latitude analysis to predict potential spread and seasonality for COVID-19. SSRN Electron J. 2020. https://doi.org/10.2139/ssrn.3550308.
AlRuthia Y, Somily AM, Alkhamali AS, Bahari OH, AlJuhani RJ, Alsenaidy M, et al. Estimation Of Direct Medical Costs Of Middle East Respiratory Syndrome Coronavirus Infection: A Single-Center Retrospective Chart Review Study. Infect Drug Resist. 2019;12:3463–73. https://doi.org/10.2147/idr.S231087.
Emerson JA, Dunsiger S, Williams DM. Reciprocal within-day associations between incidental affect and exercise: An EMA study. Psychol Health. 2018;33(1):130–43. https://doi.org/10.1080/08870446.2017.1341515.
Kraemer MUG, Yang CH, Gutierrez B, Wu CH, Klein B, Pigott DM, et al. The effect of human mobility and control measures on the COVID-19 epidemic in China. Science. 2020;368(6490):493–7. https://doi.org/10.1126/science.abb4218.
Guan WJ, Ni ZY, Hu Y, Liang WH, Ou CQ, He JX, et al. Clinical Characteristics of Coronavirus Disease 2019 in China. NEJM. 2020;382(18):1708–20. https://doi.org/10.1056/NEJMoa2002032.
Zhu N, Zhang D, Wang W, Li X, Yang B, Song J, et al. A Novel Coronavirus from Patients with Pneumonia in China, 2019. NEJM. 2020;382(8):727–33. https://doi.org/10.1056/NEJMoa2001017.
Huang C, Wang Y, Li X, Ren L, Zhao J, Hu Y, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497–506. https://doi.org/10.1016/S0140-6736(20)30183-5.
WHO. WHO Director—General's opening remarks at the media briefing on COVID-19. 2020. https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---11-march-2020. Accessed 5 April 2020.
Chen Y, Liu Q, Guo D. Emerging coronaviruses: Genome structure, replication, and pathogenesis. J Med Virol. 2020;92(4):418–23. https://doi.org/10.1002/jmv.25681.
Jiang F, Deng L, Zhang L, Cai Y, Cheung CW, Xia Z. Review of the Clinical Characteristics of Coronavirus Disease 2019 (COVID-19). J Gen Intern Med. 2020;35(5):1545–9. https://doi.org/10.1007/s11606-020-05762-w.
Cui Y, Zhang ZF, Froines J, Zhao J, Wang H, Yu SZ, et al. Air pollution and case fatality of SARS in the People's Republic of China: an ecologic study. Environ Health. 2003;2(1):1–5. https://doi.org/10.1186/1476-069X-2-15.
Butler MJ, Barrientos RM. The impact of nutrition on COVID-19 susceptibility and long-term consequences. Brain Behav Immun. 2020. https://doi.org/10.1016/j.bbi.2020.04.040.
Du RH, Liang LR, Yang CQ, Wang W, Cao TZ, Li M, et al. Predictors of Mortality for Patients with COVID-19 Pneumonia Caused by SARS-CoV-2: A Prospective Cohort Study. Eur Respir J. 2020;55(5). https://doi.org/10.1183/13993003.00524-2020.
Grasselli G, Zangrillo A, Zanella A, Antonelli M, Cabrini L, Castelli A, et al. Baseline Characteristics and Outcomes of 1591 Patients Infected With SARS-CoV-2 Admitted to ICUs of the Lombardy Region, Italy. JAMA. 2020;323(16):1574–81. https://doi.org/10.1001/jama.2020.5394.
Ma Y, Zhao Y, Liu J, He X, Wang B, Fu S, et al. Effects of temperature variation and humidity on the death of COVID-19 in Wuhan, China. Sci Total Environ. 2020;724. https://doi.org/10.1016/j.scitotenv.2020.138226.
Ballester F, Corella D, Pérez-Hoyos S, Sáez M, Hervás A. Mortality as a Function of Temperature. A Study in Valencia, Spain, 1991–1993. Int J Epidemiol. 1997;26(3):551–61. https://doi.org/10.1093/ije/26.3.551.
Qiao Z, Guo Y, Yu W, Tong S. Assessment of Short- and Long-Term Mortality Displacement in Heat-Related Deaths in Brisbane, Australia, 1996–2004. Environ Health Perspect. 2015;123(8):766–72. https://doi.org/10.1289/ehp.1307606.
Bunker A, Wildenhain J, Vandenbergh A, Henschke N, Rocklöv J, Hajat S, et al. Effects of Air Temperature on Climate-Sensitive Mortality and Morbidity Outcomes in the Elderly; a Systematic Review and Meta-analysis of Epidemiological Evidence. EBioMedicine. 2016;6:258–68. https://doi.org/10.1016/j.ebiom.2016.02.034.
Dadbakhsh M, Khanjani N, Bahrampour A, Haghighi PS. Death from respiratory diseases and temperature in Shiraz, Iran (2006–2011). Int J Biometeorol. 2017;61(2):239–46. https://doi.org/10.1007/s00484-016-1206-z.
Lytras T, Pantavou K, Mouratidou E, Tsiodras S. Mortality attributable to seasonal influenza in Greece, 2013 to 2017: variation by type/subtype and age, and a possible harvesting effect. Euro Surveill. 2019;24(14). https://doi.org/10.2807/1560-7917.ES.2019.24.14.1800118.
Chen R, Yin P, Wang L, Liu C, Niu Y, Wang W, et al. Association between ambient temperature and mortality risk and burden: time series study in 272 main Chinese cities. BMJ. 2018;363:k4306. https://doi.org/10.1136/bmj.k4306.
Gasparrini A, Armstrong B, Kenward MG. Distributed lag non-linear models. Stat Med. 2010;29(21):2224–34. https://doi.org/10.1002/sim.3940.
Armstrong B. Models for the Relationship Between Ambient Temperature and Daily Mortality. Epidemiology. 2006;17(6):624–31. https://doi.org/10.1097/01.ede.0000239732.50999.8f.
Ding Z, Li L, Wei R, Dong W, Guo P, Yang S, et al. Association of cold temperature and mortality and effect modification in the subtropical plateau monsoon climate of Yuxi, China. Environ Res. 2016;150:431–7. https://doi.org/10.1016/j.envres.2016.06.029.
Ban J, Xu D, He MZ, Sun Q, Chen C, Wang W, et al. The effect of high temperature on cause-specific mortality: A multi-county analysis in China. Environ Int. 2017;106:19–26. https://doi.org/10.1016/j.envint.2017.05.019.
Baidu migration. Baidu map insight. http://qianxi.baidu.com/. Accessed 5 April 2020.
Borge R, Requia WJ, Yagüe C, Jhun I, Koutrakis P. Impact of weather changes on air quality and related mortality in Spain over a 25year period [1993–2017]. Environ Int. 2019;133(Pt B):105272. https://doi.org/10.1016/j.envint.2019.105272.
Liu C, Chen R, Sera F, Vicedo-Cabrera AM, Guo Y, Tong S, et al. Ambient Particulate Air Pollution and Daily Mortality in 652 Cities. NEJM. 2019;381(8):705–15. https://doi.org/10.1056/NEJMoa1817364.
Wu R, Song X, Chen D, Zhong L, Huang X, Bai Y, et al. Health benefit of air quality improvement in Guangzhou, China: Results from a long time-series analysis (2006–2016). Environ Int. 2019;126:552–9. https://doi.org/10.1016/j.envint.2019.02.064.
Lauer SA, Grantz KH, Bi Q, Jones FK, Zheng Q, Meredith HR, et al. The Incubation Period of Coronavirus Disease 2019 (COVID-19) From Publicly Reported Confirmed Cases: Estimation and Application. Ann Intern Med. 2020. https://doi.org/10.7326/M20-0504.
Chan KH, Malik Peiris JS, Lam SY, Poon LLM, Yuen KY, Seto WH. The Effects of Temperature and Relative Humidity on the Viability of the SARS Coronavirus. Adv Virol. 2011:1–7. https://doi.org/10.1155/2011/734690.
Tan J, Mu L, Huang J, Yu S, Chen B, Yin J. An initial investigation of the association between the SARS outbreak and weather: with the view of the environmental temperature and its variation. J Epidemiol Community Health. 2005;59(3):186–92. https://doi.org/10.1136/jech.2004.020180.
Warner P. Poisson regression. J Fam Plann Reprod Health Care. 2015;41(3):223–4. https://doi.org/10.1136/jfprhc-2015-101262.
Yang J, Ou CQ, Ding Y, Zhou YX, Chen PY. Daily temperature and mortality: a study of distributed lag non-linear effect and effect modification in Guangzhou. Environ Health. 2012;11(63):1–9. https://doi.org/10.1186/1476-069X-11-63.
Ma W, Wang L, Lin H, Liu T, Zhang Y, Rutherford S, et al. The temperature–mortality relationship in China: An analysis from 66 Chinese communities. Environ Res. 2015;137:72–7. https://doi.org/10.1016/j.envres.2014.11.016.
Lee CC, Sheridan SC. A new approach to modeling temperature-related mortality. Non-linear autoregressive models with exogenous input. Environ Res. 2018;164:53–64. https://doi.org/10.1016/j.envres.2018.02.020.
Casanova LM, Jeon S, Rutala WA, Weber DJ, Sobsey MD. Effects of Air Temperature and Relative Humidity on Coronavirus Survival on Surfaces. Appl Environ Microbiol. 2010;76(9):2712–7. https://doi.org/10.1128/AEM.02291-09.
Zhang Y, Li C, Feng R, Zhu Y, Wu K, Tan X, et al. The Short-Term Effect of Ambient Temperature on Mortality in Wuhan, China: A Time-Series Study Using a Distributed Lag Non-Linear Model. Int J Environ Res Public Health. 2016;13(7):722. https://doi.org/10.3390/ijerph13070722.
Guo Y, Barnett AG, Pan X, Yu W, Tong S. The Impact of Temperature on Mortality in Tianjin, China: A Case-Crossover Design with a Distributed Lag Nonlinear Model. Environ Health Perspect. 2011;119(12):1719–25. https://doi.org/10.1289/ehp.1103598.
Guo Y, Gasparrini A, Armstrong B, Li S, Tawatsupa B, Tobias A, et al. Global Variation in the Effects of Ambient Temperature on Mortality: A Systematic Evaluation. Epidemiology. 2014;25(6):781–9. https://doi.org/10.1097/EDE.0000000000000165.
Luo Q, Li S, Guo Y, Han X, Jaakkola JJK. A systematic review and meta-analysis of the association between daily mean temperature and mortality in China. Environ Res. 2019;173:281–99. https://doi.org/10.1016/j.envres.2019.03.044.
Carder M, McNamee R, Beverland I, Elton R, Cohen GR, Boyd J, et al. The lagged effect of cold temperature and wind chill on cardiorespiratory mortality in Scotland. Occup Environ Med. 2005;62(10):702–10. https://doi.org/10.1136/oem.2004.016394.
Gasparrini A, Guo Y, Hashizume M, Lavigne E, Zanobetti A, Schwartz J, et al. Mortality risk attributable to high and low ambient temperature: a multicountry observational study. Lancet. 2015;386(9991):369–75. https://doi.org/10.1016/S0140-6736(14)62114-0.
Lin H, Zhang Y, Xu Y, Xu X, Liu T, Luo Y, et al. Temperature Changes between Neighboring Days and Mortality in Summer: A Distributed Lag Non-Linear Time Series Analysis. PLoS One. 2013;8(6):e66403. https://doi.org/10.1371/journal.pone.0066403.
We are grateful for the data provided by the National Health Commission of the People's Republic of China, China's Air Quality Monitoring Analysis Online Platform, and China Meteorological Data Sharing Service System.
This study was partially supported by the National Natural Science Foundation of China (Project approval No.: 81872719, 81803337), the National Bureau of Statistics Foundation Project (Project approval No.: 2018LY79), the Natural Science Foundation of Shandong Province (Project approval No.: ZR2019MH034). The funders did not play any role in the study design, data collection and interpretation of data, or in writing the manuscript.
Department of Health Statistics, School of Public Health, Weifang Medical University, No. 7166 Baotong West Street, Weifang, 261053, People's Republic of China
Gaopei Zhu, Yuhang Zhu, Xiaoxuan Wang, Jianing Feng, Juan Li, Yufei Xiao, Fuyan Shi & Suzhen Wang
Department of Child and Adolescent Psychiatry, Psychotherapy, and Psychosomatics, Center for Psychosocial Medicine, University Medical Center Hamburg-Eppendorf, Martinistraße 52, W 29, 20246, Hamburg, Germany
Yuhang Zhu
School of Public Health, Cheeloo College of Medicine, Shandong University, Jinan, 250012, People's Republic of China
Zhongli Wang
School of Life Sciences and Technology, Weifang Medical University, No. 7166 Baotong West Street, Weifang, 261053, People's Republic of China
Weijing Meng
G.Z. contributed to conceptualization, methodology, data curation, software, and original manuscript writing; Y.Z. contributed to data curation, methodology, and review and editing of the manuscript; Z.W. contributed to review and editing; W.M. contributed to data curation and review and editing; X.W. contributed to supervision, software, and validation; J.L., J.F., and Y.X. contributed to supervision and formal analysis; S.W. and F.S. contributed to methodology and review and editing. All authors gave final approval and agreed to be accountable for all aspects of the work.
S.W. is a professor at the department of Health Statistics, School of Public Health, Weifang Medical University, People's Republic of China. She specializes in causal inference in real-world studies, health risk assessment, and infectious disease prediction and early warning.
Corresponding authors
Correspondence to Fuyan Shi or Suzhen Wang.
This was an observational study using national registry data from public platforms; ethical approval was exempted during the epidemic.
The authors declare that they have no competing interests.
Zhu, G., Zhu, Y., Wang, Z. et al. The association between ambient temperature and mortality of the coronavirus disease 2019 (COVID-19) in Wuhan, China: a time-series analysis. BMC Public Health 21, 117 (2021). https://doi.org/10.1186/s12889-020-10131-7
Distributed lag non-linear model
Negative correlation